Naive Bayes Classifier
Naive Bayes is a classification technique based on Bayes' theorem with an independence
assumption among predictors. In simple terms, a Naive Bayes classifier assumes that
the presence of a particular feature in a class is unrelated to the presence of any other
feature.
The Naive Bayes classifier is a popular supervised machine learning algorithm used for
classification tasks such as text classification. It belongs to the family of generative
learning algorithms, which means that it models the distribution of inputs for a given
class or category. This approach is based on the assumption that the features of the
input data are conditionally independent given the class, allowing the algorithm to make
predictions quickly and accurately. In statistics, naive Bayes classifiers are considered
simple probabilistic classifiers that apply Bayes' theorem. This theorem is based on
the probability of a hypothesis, given the data and some prior knowledge. The naive
Bayes classifier assumes that all features in the input data are independent of each
other, which is often not true in real-world scenarios. However, despite this simplifying
assumption, the naive Bayes classifier is widely used because of its efficiency and good
performance. As a result, the naive Bayes classifier is a powerful tool in machine learning,
particularly for text classification problems.
For example, a fruit may be considered to be an apple if it is red, round, and about 3
inches in diameter. Even if these features depend on each other or upon the existence
of the other features, all of these properties independently contribute to the probability
that this fruit is an apple, which is why the method is called "naive". The model is easy
to build and particularly useful for very large data sets, and despite its simplicity it is
known to outperform even highly sophisticated classification methods.
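As a minimal plain-Python sketch of how those independent contributions combine (all
probabilities below are made-up numbers, purely for illustration):

    # Score for the class "apple": prior times per-feature likelihoods,
    # multiplied as if the features were independent. Numbers are hypothetical.
    p_apple = 0.30                                   # prior P(apple)
    likelihoods = {"red": 0.80, "round": 0.90, "about 3 inches": 0.70}

    score = p_apple
    for feature, p in likelihoods.items():
        score *= p                                   # naive independence assumption

    print(score)  # 0.30 * 0.80 * 0.90 * 0.70 = 0.1512 (unnormalised)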
Bayes' theorem provides a way of computing the posterior probability P(c|x) from P(c),
P(x) and P(x|c):

P(c|x) = P(x|c) * P(c) / P(x)

where:
● P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
● P(c) is the prior probability of the class.
● P(x|c) is the likelihood, which is the probability of the predictor given the class.
● P(x) is the prior probability of the predictor.
Let's understand this with an example. Suppose we have a training data set of weather
conditions and a corresponding target variable Play, and we need to classify whether
players will play or not based on the weather condition. Let's follow the steps below:

Step 1: Convert the data set into a frequency table.
Step 2: Create a Likelihood table by finding the probabilities, like Overcast probability =
0.29 and probability of playing = 0.64.
Step 3: Now, use the Naive Bayesian equation to calculate the posterior probability for
each class. The class with the highest posterior probability is the outcome of the
prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using the posterior probability P(Yes | Sunny) = P(Sunny | Yes) * P(Yes)
/ P(Sunny). Here P(Sunny | Yes) * P(Yes) is the numerator, and P(Sunny) is the
denominator.
We have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 = 0.64.
Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is higher than P(No | Sunny) =
0.40, so the statement is correct: on a sunny day, the model predicts that players will play.
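The same computation can be reproduced with a few lines of plain Python. The 14
(weather, play) pairs below are an assumption reconstructed from the counts quoted
above (3/9, 5/14, 9/14), not data given in this article:

    # Reproducing the worked example; rows reconstructed from the quoted counts.
    data = (
        [("Sunny", "Yes")] * 3 + [("Sunny", "No")] * 2 +
        [("Overcast", "Yes")] * 4 +
        [("Rainy", "Yes")] * 2 + [("Rainy", "No")] * 3
    )
    n = len(data)                                                  # 14 observations
    p_yes = sum(1 for _, play in data if play == "Yes") / n        # P(Yes) = 9/14
    p_sunny = sum(1 for w, _ in data if w == "Sunny") / n          # P(Sunny) = 5/14
    p_sunny_given_yes = (
        sum(1 for w, play in data if w == "Sunny" and play == "Yes")
        / sum(1 for _, play in data if play == "Yes")              # P(Sunny|Yes) = 3/9
    )
    print(p_sunny_given_yes * p_yes / p_sunny)                     # P(Yes|Sunny) = 0.60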
Pros and Cons of Naive Bayes
Pros:
● It is easy and fast to predict the class of a test data set. It also performs well in
multi-class prediction.
● When the assumption of independence holds, the classifier performs better
compared to other machine learning models, like logistic regression or decision
trees, and requires less training data.
● It performs well in the case of categorical input variables compared to numerical
variable(s). For numerical variables, a normal distribution is assumed (a bell curve,
which is a strong assumption); a sketch of this Gaussian variant follows this list.
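For the numerical case, the sketch below uses scikit-learn's GaussianNB, which fits one
normal distribution per feature and per class; the two-cluster data set is synthetic, made
up only for the demonstration:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # Two synthetic classes drawn from normals with different means.
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                   rng.normal(3.0, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    model = GaussianNB().fit(X, y)
    print(model.predict([[2.5, 2.5]]))        # predicted class
    print(model.predict_proba([[2.5, 2.5]]))  # per-class probabilities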
Cons:
● If a categorical variable has a category in the test data set which was not observed
in the training data set, then the model will assign it a zero probability and will be
unable to make a prediction. This is often known as "Zero Frequency". To solve
this, we can use a smoothing technique; one of the simplest is Laplace estimation
(sketched after this list).
● On the other hand, Naive Bayes is also known to be a bad estimator, so the
probability outputs from predict_proba are not to be taken too seriously.
● Another limitation of this algorithm is the assumption of independent predictors.
In real life, it is almost impossible to get a set of predictors that are completely
independent.
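As a minimal sketch of Laplace estimation (the "Foggy" category and the counts are
hypothetical, following the weather example): adding a pseudo-count alpha to every
category keeps unseen categories from getting probability zero.

    def smoothed_likelihood(count, class_total, n_categories, alpha=1.0):
        # P(feature = value | class) with Laplace (add-alpha) smoothing
        return (count + alpha) / (class_total + alpha * n_categories)

    # Hypothetical: "Foggy" never co-occurred with class Yes in training.
    print(smoothed_likelihood(0, 9, 4))  # 1/13 ~= 0.077 instead of 0.0
    print(smoothed_likelihood(3, 9, 4))  # (3 + 1) / (9 + 4) ~= 0.308

scikit-learn's MultinomialNB and CategoricalNB expose this as the alpha parameter,
which defaults to 1.0 (Laplace smoothing).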
Applications of Naive Bayes Algorithms
● Real-time Prediction: Naive Bayes is an eager learning classifier and it is super
fast. Thus, it could be used for making predictions in real time.
● Multi-class Prediction: This algorithm is also well known for its multi-class
prediction feature. Here we can predict the probability of multiple classes of the
target variable.
● Text Classification / Spam Filtering / Sentiment Analysis: Naive Bayesian
classifiers are mostly used in text classification (due to better results in multi-class
problems and the independence rule), in spam filtering (to identify spam e-mail),
and in sentiment analysis (in social media analysis, to identify positive and
negative customer sentiments); a text-classification sketch follows this list.
● Recommendation System: A Naive Bayes classifier and collaborative filtering
together build a Recommendation System that uses machine learning and data
mining techniques to filter unseen information and predict whether a user would
like a given resource or not.
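As a sketch of the text-classification use case (the four toy messages and their labels
below are assumptions, chosen only to show the pipeline):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["win cash now", "limited offer win prize",
             "meeting at noon", "see you at lunch"]
    labels = ["spam", "spam", "ham", "ham"]

    # Bag-of-words counts feed a multinomial Naive Bayes model.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["win a free prize"]))  # expected: ['spam']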
The Naive Bayes algorithm uses a similar method to predict the probability of different
classes based on various attributes. It is mostly used in text classification (NLP) and
with problems having multiple classes. Several variants of the model exist:
● Multinomial Naive Bayes: It is used for discrete counts. For example, say we
have a text classification problem. Here we go one step further than Bernoulli
trials: instead of "word occurring in the document", we count how often the word
occurs in the document; you can think of it as the number of times an outcome is
observed over n trials.
● Bernoulli Naive Bayes: The binomial model is useful if your feature vectors are
boolean (i.e. zeros and ones). One application would be text classification with a
"bag of words" model where the 1s & 0s are "word occurs in the document" and
"word does not occur in the document", respectively.
● Complement Naive Bayes: It is an adaptation of Multinomial Naive Bayes in
which the complement of each class is used to calculate the model weights. So,
this is suitable for imbalanced data sets and often outperforms MNB on text
classification tasks.
● Categorical Naive Bayes: Categorical Naive Bayes is useful if the features are
categorically distributed. We have to encode the categorical variables into numeric
format using the ordinal encoder before using this algorithm; the sketch below
shows this alongside the other variants.
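To make the input formats concrete, here is a minimal scikit-learn sketch; the toy
documents, labels, and weather rows are assumptions (the weather rows are
reconstructed from the counts quoted earlier in this article):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import (MultinomialNB, BernoulliNB,
                                     ComplementNB, CategoricalNB)
    from sklearn.preprocessing import OrdinalEncoder

    # Text variants: word counts vs. boolean occurrence flags.
    docs = ["spam spam offer", "offer now", "project meeting", "meeting notes notes"]
    y = [1, 1, 0, 0]
    X_counts = CountVectorizer().fit_transform(docs)             # counts -> Multinomial / Complement
    X_binary = CountVectorizer(binary=True).fit_transform(docs)  # 1/0 flags -> Bernoulli
    print(MultinomialNB().fit(X_counts, y).predict(X_counts))
    print(BernoulliNB().fit(X_binary, y).predict(X_binary))
    print(ComplementNB().fit(X_counts, y).predict(X_counts))     # complement-class statistics

    # Categorical variant: ordinal-encode string categories first.
    weather = [["Sunny"]] * 5 + [["Overcast"]] * 4 + [["Rainy"]] * 5
    play = ["Yes"] * 3 + ["No"] * 2 + ["Yes"] * 4 + ["Yes"] * 2 + ["No"] * 3
    enc = OrdinalEncoder()
    X_cat = enc.fit_transform(weather)
    model = CategoricalNB().fit(X_cat, play)                     # alpha=1.0 (Laplace) by default
    print(model.predict_proba(enc.transform([["Sunny"]])))       # [P(No), P(Yes)] for Sunny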