
A Friendly Introduction To Cross-Entropy For Machine Learning


MACHINE LEARNING:-

-In recent years, machine learning has been an area of intense research in both industry and academia, and it has shown its potential in large-scale applications such as data exploration, prediction, pattern recognition and analysis, and drawing outcomes from collected data.

-In this field of study and research, resources are important in active learning tasks; they help create a deeper understanding and provide different forms of data.

-For small-scale datasets, expert knowledge is acceptable to some extent for precise annotation and interpretation.

-For large-scale datasets, data analysis techniques are used instead (data analysis is the process of cleansing, transforming, and modeling data to extract useful information and draw informative conclusions). These techniques solve problems by grasping the posterior knowledge learned from big data.

-As datasets grow in volume, analyses built on them tend to generalize better, but annotation costs money and time; as a result, more and different mathematical and statistical techniques are deployed for successful annotation. One such device is cross-entropy.


Now, let’s learn about the main topic (cross-entropy), its add-ons (the loss function, since machines learn by way of a loss function, and KL divergence), and their roles in machine learning.

* Cross-entropy is often used in machine learning as a loss function.

 

UNDERSTANDING THE CONCEPT OF CROSS-ENTROPY

 

So, in order to understand the concept and definition of cross-entropy, let’s first understand the definition of entropy:-

INTRODUCTION TO ENTROPY

We live in a world where we are curious to know about outcomes in advance. We’d like to know if it is going to rain today, whether we will need that umbrella, or whether a particular sports team is going to win its next game. This isn’t usually possible: so much data is generated that arriving at a conclusion is virtually impossible for the average person. But if we could, we could dramatically change how we live.

General Definition:- Entropy is a measure of uncertainty. Much like the concept of infinity, entropy is basically used to help model and represent the degree of uncertainty of a random variable.

Mathematically, entropy is defined in terms of a probability distribution.
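Concretely, for a discrete distribution $p$, entropy is $H(X) = -\sum_x p(x)\,\log p(x)$. Here is a minimal sketch in Python (assuming NumPy; the `entropy` helper is illustrative, not a library function):

```python
# A minimal sketch (assumes NumPy): entropy of a discrete distribution,
# H(X) = -sum_x p(x) * log(p(x)). Higher entropy means more uncertainty.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p))   # in bits (log base 2)

print(entropy([0.5, 0.5]))   # 1.0   -> a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))   # ~0.47 -> a biased coin is more predictable
```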

Look at the graph given below:

[Figure: entropy plotted against probability]

Now, let’s concentrate on the definition of cross-entropy.

GENERAL DEFINITION:-

In general terms, cross-entropy is defined as a measure of the difference between two probability distributions for a given random variable or set of events. In terms of machine learning, in connection with supervised learning, one of the two probability distributions represents the “true” label (“1”) for the given training samples, so correct results are indicated with a probability of one hundred percent.

Now, look at the graph of probability distribution vs. cross-entropy given below:

[Figure: probability distribution vs. cross-entropy]

So, cross-entropy is expressed by the equation given below:-

$$H(p, q) = -\sum_{x} p(x)\,\log q(x)$$

Here, in the above equation:

-x represents the predicted results shown by the machine learning algorithm,

-p(x) is the probability distribution of the “true” (“1”) label from the given training samples, and

-q(x) depicts the estimate produced by the machine learning algorithm (see the sketch after this list).
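As a concrete illustration of this equation, here is a minimal sketch (assuming NumPy and natural logarithms; the `cross_entropy` helper is illustrative):

```python
# A minimal sketch (assumes NumPy): cross-entropy H(p, q) = -sum_x p(x) * log(q(x)),
# where p is the true distribution and q is the model's estimate.
import numpy as np

def cross_entropy(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q))

p      = [1.0, 0.0, 0.0]     # true label: class 0 with 100% probability
q_good = [0.8, 0.1, 0.1]     # confident, correct prediction
q_bad  = [0.1, 0.6, 0.3]     # confident, wrong prediction

print(cross_entropy(p, q_good))  # ~0.22 (low loss)
print(cross_entropy(p, q_bad))   # ~2.30 (high loss)
```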

The term cross-entropy builds on the concept of entropy and measures the average number of bits needed to represent an event from one distribution when using a code optimized for another distribution.

Cross-entropy compares the predictions of a model with the true probability distribution. It goes down as predictions get more accurate and becomes zero as predictions tend to perfect.

NOTE:- Cross-entropy is also a good loss function for classification problems, because it minimizes the distance between two probability distributions: predicted outcomes and actual outcomes.

Some discussion related to Relative Entropy

Let’s start with an introduction to relative entropy:-

-Relative entropy is a measure of the “distance” between two distributions.

-In statistics, relative entropy, i.e. D(p||q), is a measure of the inefficiency of assuming that the distribution is q when the true distribution is p. Here, p denotes the actual distribution.

-It is not symmetric, i.e. D(p||q) ≠ D(q||p), and it does not satisfy the triangle inequality; the triangle inequality would require D(p||q) <= D(p||u) + D(u||q), which in general does not hold.

-An important property of relative entropy is its non-negativity, i.e. D(p||q) >= 0; the sketch after this list checks both properties numerically.
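Here is a minimal sketch (assuming NumPy; the `kl_divergence` helper is illustrative) verifying asymmetry and non-negativity on a small example:

```python
# A minimal sketch (assumes NumPy): relative entropy (KL divergence)
# D(p||q) = sum_x p(x) * log(p(x) / q(x)).
import numpy as np

def kl_divergence(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = [0.4, 0.6]
q = [0.9, 0.1]

print(kl_divergence(p, q))   # ~0.75 -> non-negative
print(kl_divergence(q, p))   # ~0.55 -> differs from D(p||q): not symmetric
```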

GO THROUGH THE EXPLANATION BELOW:-

$$
\begin{aligned}
D(p\,\|\,q) &= \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} && (1)\\
&= -\sum_{x} p(x)\,\log\frac{q(x)}{p(x)} && (2)\\
&\ge -\log\sum_{x} p(x)\,\frac{q(x)}{p(x)} && (3)\\
&= -\log\sum_{x} q(x) = -\log 1 = 0
\end{aligned}
$$

In the derivation given above, the step from Equation (2) to Equation (3) follows from Jensen’s inequality (the logarithm is concave).

Now, let’s discuss cross-entropy as a loss function.

CROSS-ENTROPY AS A LOSS FUNCTION

-It is also known as log loss or logistic loss.

-Each predicted class probability is compared with the actual class, whose desired output is one of two values (0 or 1), and a score/loss is calculated that penalizes the probability based on how far it is from the expected actual value.

-Cross-entropy loss is used when adjusting the model during the training process. The main aim is to decrease the loss; that is, the smaller the loss, the better the model.

-A perfect model always has a cross-entropy loss of 0, as the sketch after this list illustrates.
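Here is a minimal sketch (assuming NumPy; `log_loss_single` is an illustrative helper) of binary cross-entropy, the log loss used for two-class (0/1) problems:

```python
# A minimal sketch (assumes NumPy): binary cross-entropy (log loss) for a
# single example: -[y*log(p) + (1-y)*log(1-p)], with y in {0, 1}.
import numpy as np

def log_loss_single(y_true, p_pred):
    return -(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

# The true class is 1; the loss grows as the predicted probability moves away.
for p in [0.99, 0.9, 0.5, 0.1]:
    print(p, log_loss_single(1, p))
# 0.99 -> ~0.01, 0.9 -> ~0.11, 0.5 -> ~0.69, 0.1 -> ~2.30
```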

Go through the graph of log loss vs. predicted probability shown below:

[Figure: log loss vs. predicted probability]

-Cross-entropy is also used as a loss function when optimizing classification models, e.g. logistic regression or ANN algorithms used for classification purposes and tasks.
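In practice, libraries handle this directly. Below is a minimal sketch, assuming scikit-learn is available (using its standard `make_classification`, `LogisticRegression`, and `log_loss` utilities), of fitting a logistic regression classifier and reporting its cross-entropy loss:

```python
# A minimal sketch (assumes scikit-learn): logistic regression is fit by
# minimizing cross-entropy; log_loss reports the average cross-entropy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

probs = model.predict_proba(X)     # predicted distribution q per sample
print(log_loss(y, probs))          # average cross-entropy (log loss)
```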


FORMULAS:-

For reference, the quantities discussed above are (for a discrete true distribution p and estimated distribution q):

$$H(p) = -\sum_{x} p(x)\,\log p(x) \quad \text{(entropy)}$$

$$H(p, q) = -\sum_{x} p(x)\,\log q(x) \quad \text{(cross-entropy)}$$

$$D(p\,\|\,q) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} \quad \text{(relative entropy / KL divergence)}$$

These are related by $H(p, q) = H(p) + D(p\,\|\,q)$.
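A quick numerical check of the last identity, as a minimal sketch assuming NumPy:

```python
# A minimal sketch (assumes NumPy): verify H(p, q) = H(p) + D(p||q).
import numpy as np

p = np.array([0.1, 0.4, 0.5])   # true distribution
q = np.array([0.8, 0.1, 0.1])   # estimated distribution

H_p  = -np.sum(p * np.log(p))        # entropy of p
H_pq = -np.sum(p * np.log(q))        # cross-entropy of p and q
D_pq =  np.sum(p * np.log(p / q))    # KL divergence

print(np.isclose(H_pq, H_p + D_pq))  # True
```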

CONCLUSION

This blog covered all the important concepts and topics related to cross-entropy, listed below:-

-Entropy

-Cross-Entropy and its extensions (as a loss function, and KL Divergence)

-Cross-Entropy for Machine Learning

-You have also learned how cross-entropy can be applied as a loss function when optimizing classification models, and how it differs from KL divergence.

 

I hope this blog gave you a meaningful and clear understanding of these commonly used terms and their roles in machine learning and neural networks. HAPPY LEARNING :-)

Article by: Shivangi Pandey

 

If you are interested in machine learning, you can check the Machine Learning Internship Program.

Also check other technical and non-technical internship programs.
