analysis, principal component analysis (PCA) is recommended, together with the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test.
3.2  Suitability of Data for Factor Analysis
Two conditions must be checked to establish that the observed data are suitable for exploratory factor analysis: sampling adequacy, tested with the Kaiser-Meyer-Olkin (KMO) measure, and the relationship among variables, assessed through Bartlett's test of sphericity (Moumen, 2019).
3.2.1  Kaiser-Meyer-Olkin (KMO)
The KMO method measures sampling adequacy; if the KMO value is greater than 0.5, the sampling is sufficient (Kaiser, 1974). A high KMO indicates that a statistically acceptable factor solution exists.
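As an illustration, the overall KMO can be computed from the correlation matrix and the matrix of partial correlations (obtained from the inverse correlation matrix). The following is a minimal NumPy sketch, not an implementation from the cited works; the variable names and the simulated data are illustrative:

```python
import numpy as np

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy.

    data: 2-D array, rows = observations, columns = variables.
    Returns the overall KMO (between 0 and 1; > 0.5 is usually deemed sufficient).
    """
    corr = np.corrcoef(data, rowvar=False)   # correlation matrix R
    inv = np.linalg.inv(corr)                # R^-1
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)          # partial correlations
    np.fill_diagonal(corr, 0)                # keep off-diagonal terms only
    np.fill_diagonal(partial, 0)
    r2 = (corr ** 2).sum()                   # sum of squared correlations
    p2 = (partial ** 2).sum()                # sum of squared partial correlations
    return r2 / (r2 + p2)

# Illustrative data: four variables driven by one common factor,
# so the KMO should come out well above the 0.5 threshold.
rng = np.random.default_rng(42)
factor = rng.normal(size=500)
data = np.column_stack([factor + 0.5 * rng.normal(size=500) for _ in range(4)])
print(round(kmo(data), 3))
```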
3.2.2  Bartlett Test of Sphericity 
The researcher uses Bartlett's test of sphericity to check whether there is redundancy among variables that could be summarized by a few factors, in other words, to verify that the data can be compressed in a meaningful way. This test precedes data-reduction techniques such as principal component analysis (PCA) (Gorsuch, 1973).
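Bartlett's test of sphericity compares the sample correlation matrix R against an identity matrix; its statistic, -(n - 1 - (2p + 5)/6) ln|R|, follows a chi-square distribution with p(p - 1)/2 degrees of freedom under the null hypothesis. A minimal sketch (illustrative names and simulated data; SciPy is used only for the chi-square tail probability):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity.

    H0: the correlation matrix is an identity matrix
    (variables are uncorrelated, so factoring would be pointless).
    Returns (chi-square statistic, p-value).
    """
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    p_value = chi2.sf(statistic, dof)
    return statistic, p_value

# Illustrative correlated data: H0 should be rejected (p-value below 0.05),
# meaning the variables share enough correlation for PCA to be meaningful.
rng = np.random.default_rng(0)
base = rng.normal(size=300)
data = np.column_stack([base + 0.7 * rng.normal(size=300) for _ in range(3)])
stat, p_value = bartlett_sphericity(data)
print(p_value < 0.05)
```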
4  CONFIRMATORY FACTOR ANALYSIS
EFA explores whether the data fit a model that makes sense based on a conceptual or theoretical framework. It does not confirm hypotheses or test competing models, as confirmatory factor analysis (CFA) does (Costello and Osborne, 2005).
According to Hoyle (2012), CFA is a multivariate statistical procedure for testing hypotheses about the commonality among variables.
Confirmatory factor analysis requires a large sample, one that exceeds 30 observations according to the Gaussian (normal) approximation; this analysis aims to confirm or refute the research hypotheses (Moumen, 2021).
4.1  Hypothesis Testing 
Hypothesis testing evaluates the evidence that the data provide against a hypothesis. The researcher begins a test with two opposing hypotheses: the null hypothesis H0 and the alternative hypothesis H1 (Moumen, 2021).
If the data provide enough evidence against the null hypothesis, it is rejected. To decide whether to reject or accept H0, the test's p-value is compared with a significance level (alpha), beyond which the null hypothesis cannot be rejected. Alpha is the probability that the researcher makes the mistake of rejecting a null hypothesis that is, in fact, true (Moumen, 2021).
Three options are commonly used for the significance level: 5%, 1% and 0.1%; the choice is conventional and depends on the field of application (Moumen, 2021).
A golden rule for a significance level of 5% (Moumen, 2021): if the p-value > 5%, H0 is accepted and H1 is rejected; if the p-value <= 5%, then H0 is rejected and H1 is accepted.
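This golden rule can be illustrated with a one-sample t-test; the sketch below is a hypothetical example using SciPy, with simulated data and the conventional 5% level:

```python
import numpy as np
from scipy import stats

# H0: the population mean is 0; H1: it is not.
rng = np.random.default_rng(7)
sample = rng.normal(loc=0.5, scale=1.0, size=100)  # true mean is actually 0.5

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
alpha = 0.05  # conventional 5% significance level

# Golden rule: compare the p-value with the 5% significance level
if p_value <= alpha:
    decision = "reject H0, accept H1"
else:
    decision = "accept H0, reject H1"
print(decision)
```

Because the simulated sample is drawn from a population whose mean is 0.5, the test yields a small p-value and H0 is rejected.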
Examples of statistical hypotheses: 
 - Normal distribution hypothesis  
 - Representativeness test  
 - Test of association 
There are two categories of hypothesis tests: parametric and non-parametric (Verma, 2019).
4.1.1  Parametric Hypothesis Test 
According to Verma (2019), parametric tests assess how well the observed distribution of the random variables in the sample matches a known, pre-established (assumed) statistical distribution of the population.
The goal is to compare the observed parameters with the theoretical parameters in order to generalize from the sample to the population, within a margin of error. A parametric hypothesis test assumes a normal distribution of values (Verma, 2019).
Examples of parametric tests:
 - Chi-square
 - One-way ANOVA
 - Simple t-test
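For instance, a one-way ANOVA tests whether several group means are equal, assuming normally distributed values within each group. A hypothetical sketch using SciPy, with simulated group data:

```python
import numpy as np
from scipy import stats

# Three normally distributed groups with different population means
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=1.0, size=50)
group_b = rng.normal(loc=1.0, scale=1.0, size=50)
group_c = rng.normal(loc=2.0, scale=1.0, size=50)

# H0: all group means are equal
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(p_value <= 0.05)
```

Since the three groups are simulated with clearly different means, the p-value falls below 5% and H0 is rejected.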
4.1.2  Non-parametric Hypothesis Test 
The researcher can use non-parametric tests when parametric tests are not appropriate. These tests require no assumption about the statistical distribution of the data and do not involve population parameters (Datta, 2018).
The purpose remains the same as for parametric tests: to verify the hypothesis against a significance level (alpha).
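As an illustration, the Mann-Whitney U test is a non-parametric alternative to the two-sample t-test: it compares two independent samples without assuming normality, and the decision still rests on comparing the p-value with alpha. A hypothetical SciPy sketch with simulated skewed (non-normal) data:

```python
import numpy as np
from scipy import stats

# Two skewed, non-normal samples with clearly different locations
rng = np.random.default_rng(3)
sample_a = rng.exponential(scale=1.0, size=80)
sample_b = rng.exponential(scale=3.0, size=80)

# H0: the two samples come from the same distribution
u_stat, p_value = stats.mannwhitneyu(sample_a, sample_b)
alpha = 0.05
print(p_value <= alpha)
```

No normality assumption is needed here, which is exactly the situation where a parametric t-test would be inappropriate.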