Classification is the task of assigning input data samples to discrete categories. There are diverse algorithms for solving a classification problem, such as the k-nearest neighbours classifier, the decision tree classifier, and the gradient boosting classifier, as well as neural networks, which have shown great success.
2.1 Random Forest Classifier
The random forest (Cutler, 2012) is a supervised learning algorithm. The idea is to build a "forest", i.e. an ensemble of decision trees, generally trained on bootstrap samples of the data and combined with the "bagging" method, which helps to increase the overall accuracy.
This algorithm can be used for both regression and classification problems, which makes it useful and powerful.
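As an illustrative sketch (the dataset, parameters, and library choice are ours, not from the paper), a random forest can be trained with scikit-learn as follows:

```python
# Hypothetical sketch: random forest on the Iris dataset
# (our choice of data and parameters, for illustration only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A "forest" of 100 trees, each fitted on a bootstrap sample (bagging);
# the final prediction aggregates the votes of the individual trees.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

Averaging many decorrelated trees is what gives the ensemble its improved accuracy over any single tree.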
2.2 Logistic Regression Classifier
Logistic regression (Wright, 1995) is a predictive technique. It aims to build a model that predicts or explains the values taken by a qualitative target variable (most often binary, in which case we speak of binary logistic regression; if it has more than two modalities, we speak of polytomous logistic regression) from a set of quantitative or qualitative explanatory variables (coding of the qualitative variables is necessary in the latter case).
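A minimal sketch of binary logistic regression, assuming scikit-learn and a standard binary dataset (our choices, for illustration only):

```python
# Hypothetical sketch: binary logistic regression on the breast cancer
# dataset (our choice of data, for illustration only).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardising the quantitative explanatory variables helps the
# solver converge; the target variable here is binary (0 or 1).
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```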
2.3 Decision Tree Classifier
This decision support and data mining tool represents a set of choices in the graphical form of a tree. It is one of the most popular supervised learning methods for data classification problems. Concretely, a decision tree models a hierarchy of tests used to predict a result.
The possible decisions are located at the ends of 
the branches (the "leaves" of the tree) and are reached 
based on decisions made at each stage.  
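The hierarchy of tests described above can be made visible directly; in this sketch (dataset and depth limit are our choices) scikit-learn prints the fitted tree as a set of nested tests ending in leaf decisions:

```python
# Hypothetical sketch: a shallow decision tree on the Iris dataset,
# printed as its hierarchy of tests (our choice, for illustration only).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(data.data, data.target)

# Each internal node is a test on one feature; each leaf at the end
# of a branch is a possible decision (a predicted class).
rules = export_text(clf, feature_names=data.feature_names)
print(rules)
```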
2.4 Naïve Bayes Classifier
Naïve Bayesian classification (Leung, 2007) is a supervised machine learning method that classifies a set of observations according to rules determined by the algorithm itself. It is based on Bayes' theorem, from which the algorithm takes its name.
This classification tool must first be trained on a training dataset, which gives the expected class for each input. During the learning phase, the algorithm derives its classification rules from this data set, in order to then apply them to the classification of a prediction data set.
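The two phases above (learning the rules, then predicting) can be sketched as follows, assuming a Gaussian naïve Bayes variant and the Iris dataset (our choices, for illustration only):

```python
# Hypothetical sketch: Gaussian naive Bayes on the Iris dataset
# (our choice of data and variant, for illustration only).
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Learning phase: the classifier estimates per-class feature
# distributions; Bayes' theorem then gives the most probable class
# for each new input in the prediction set.
clf = GaussianNB().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```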
2.5 Support Vector Machine Classifier
The main idea behind the support vector classifier (Hearst, 1998) is to find the decision boundary with maximum margin separating the two classes. Maximum margin classifiers, however, are extremely sensitive to outliers in the training data, which limits their usefulness. Choosing a threshold that allows some classification errors (a "soft margin") is an example of the bias-variance tradeoff that affects all machine learning algorithms.
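A minimal soft-margin sketch, assuming scikit-learn's `SVC` and the Iris dataset (our choices, for illustration only); the regularisation parameter `C` is where the bias-variance tradeoff appears:

```python
# Hypothetical sketch: a soft-margin linear SVM on the Iris dataset
# (our choice of data and parameters, for illustration only).
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# C controls the bias-variance tradeoff: a smaller C tolerates more
# margin violations (softer margin, less sensitive to outliers),
# a larger C tolerates fewer.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```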
2.6 K-Nearest Neighbours Classifier
In artificial intelligence, and more precisely in machine learning, the k-nearest neighbours method (Peterson, 2009), abbreviated k-NN or KNN, is a supervised learning method.
In this context, we have a training database made up of N "input-output" pairs. To estimate the output associated with a new input x, the k-nearest neighbours method consists of taking into account (with equal weight) the k training samples whose inputs are closest to the new input x according to a defined distance.
For example, in a classification problem, we will 
retain the most represented class among the k outputs 
associated with the k inputs closest to the new input 
x. 
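The procedure above is simple enough to sketch from scratch; the function name, toy data, and choice of Euclidean distance below are ours, for illustration only:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Majority vote among the k training inputs closest to x_new."""
    # Defined distance: Euclidean, from x_new to every training input.
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k nearest training samples.
    nearest = np.argsort(dists)[:k]
    # Retain the most represented class among the k associated outputs.
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy training database of N "input-output" pairs.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
# Two of the three nearest neighbours of [0.95, 1.0] belong to class 1.
pred = knn_predict(X_train, y_train, np.array([0.95, 1.0]), k=3)
```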
2.7 Gradient Boosting Classifier
Gradient boosting (Friedman, 2002) classifiers are a category of machine learning algorithms that combine multiple learning models to create a stronger one. Decision trees are generally used as the base learners in gradient boosting. Gradient boosting models are popular due to their efficiency in classifying complex data sets, and have recently been used to win many Kaggle data science competitions.
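A minimal sketch with scikit-learn's implementation (dataset and hyperparameters are our choices, for illustration only):

```python
# Hypothetical sketch: gradient boosting with decision-tree base
# learners on the Iris dataset (our choice, for illustration only).
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# 100 shallow trees are added sequentially, each one fitted to correct
# the errors of the current ensemble; learning_rate scales each step.
clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```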
2.8 Artificial Neural Network Classifier
A neural network is, first of all, a concept rather than a physical object. The concept of artificial neural networks (ANN) (Wang, 2003) was inspired by biological neurons. In a biological neural network, several neurons work together: they receive input signals, process the information, and trigger an output signal. The artificial neural network is based on the same model as the biological neural network.
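As a sketch of this idea (the architecture, dataset, and library choice are ours, not from the paper), a small multilayer perceptron can serve as a classifier:

```python
# Hypothetical sketch: a small multilayer perceptron on the Iris
# dataset (our choice of data and architecture, for illustration only).
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# One hidden layer of 16 artificial neurons: each receives input
# signals, processes them, and passes an output signal onward.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                  random_state=0))
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```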