classification - Naive Bayes classifier calculation -


I am trying to use a Naive Bayes classifier to classify my datasets. My questions are:

1 - Generally, when we want to calculate the posterior probability, we use the formula:

P(c | x) = P(c | x1) * P(c | x2) * ... * P(c | xn) * P(c)

but in some places it is said that, to avoid getting very small results, we can instead use:

P(c | x) = exp(log P(c | x1) + log P(c | x2) + ... + log P(c | xn) + log P(c))

Can someone tell me the difference between these two formulas? Are they both used to calculate the "likelihood", or is the second one what is called "information gain"?
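For concreteness, here is a small sketch (with made-up factor values, just for illustration) of how I understand the two ways of computing the same score in Python:

    import math

    # Made-up per-feature factors and prior, just for illustration.
    factors = [0.2, 0.05, 0.6, 0.1]   # the P(c | xi) terms from the formula
    prior = 0.3                        # P(c)

    # First formula: multiply the probabilities directly.
    direct = prior
    for p in factors:
        direct *= p

    # Second formula: exponentiate the sum of the logarithms.
    via_logs = math.exp(math.log(prior) + sum(math.log(p) for p in factors))

    print(direct, via_logs)  # both print 0.00018 (up to rounding)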

2 - In some cases, when we try to classify our datasets, some of the estimated probabilities are zero. To keep these zeros from wiping out the whole product, some people use the "Laplace smoothing" technique. Doesn't this technique affect the accuracy of our classification?
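For reference, what I understand by Laplace (add-one) smoothing is something like the following sketch; the word counts, the test word "zebra", and the vocabulary size are all made up:

    from collections import Counter

    def smoothed_prob(word, class_counts, vocab_size, alpha=1.0):
        # Add-alpha estimate of P(word | class); alpha = 1 is Laplace smoothing.
        total = sum(class_counts.values())
        return (class_counts[word] + alpha) / (total + alpha * vocab_size)

    counts = Counter({"ball": 3, "goal": 2})   # made-up counts for one class
    vocab_size = 1000                           # assumed vocabulary size

    print(smoothed_prob("zebra", counts, vocab_size))  # unseen word: small but nonzero
    print(smoothed_prob("ball", counts, vocab_size))   # seen word: slightly discounted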

Thank you in advance for your time. I am new to this algorithm and am trying to learn more about it. Is there a recommended paper I should read? Thanks a lot.

I will take a stab at your first question. I think the equation you are ultimately driving at is:

log P(c | x) = log P(c | x1) + log P(c | x2) + ... + log P(c | xn) + log P(c)

If so, the answer is that working with the logarithm of a distribution function is often easier than working with the distribution function itself in many statistical calculations. Practically speaking, this comes from the fact that many statistical distributions involve an exponential function. For example, if you go through the full formal process of taking derivatives and finding the roots of equations, locating the maximum of the Gaussian K * exp(-s_0 * (x - x_0)^2) is a mathematically messier problem than locating the maximum of its logarithm, log K - s_0 * (x - x_0)^2, which is just a downward parabola.
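To see that taking the logarithm does not move the maximum (the logarithm is strictly increasing), here is a quick numerical check; the parameter values are arbitrary:

    import numpy as np

    K, s0, x0 = 2.0, 0.5, 1.5                  # arbitrary parameters
    xs = np.linspace(-5, 5, 10001)

    f = K * np.exp(-s0 * (xs - x0) ** 2)       # the Gaussian-shaped function
    log_f = np.log(K) - s0 * (xs - x0) ** 2    # its logarithm, a simple parabola

    # Both are maximized at the same x (namely x0), since log is monotone.
    print(xs[np.argmax(f)], xs[np.argmax(log_f)])   # both print 1.5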

This is part of why "take the logarithm of both sides" is a standard step in so many optimization calculations.

In addition, computationally, when you optimize likelihood functions that involve products of many terms, adding the logarithms of small floating point numbers is much less likely to cause numerical problems (underflow) than multiplying those small floating point numbers together.
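Here is a toy demonstration of that underflow, assuming a thousand features that each contribute a small probability:

    import math

    probs = [1e-5] * 1000   # a thousand small per-feature probabilities

    product = 1.0
    for p in probs:
        product *= p
    print(product)          # 0.0 -- the true value, 1e-5000, underflows a double

    log_sum = sum(math.log(p) for p in probs)
    print(log_sum)          # about -11512.9, perfectly representable
    # Comparing classes by log-score sidesteps the underflowed product entirely.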

