
hand the ease of producing the MCQs and, on the other hand, the specific requirements and variants that the MCQs examination method presents. In this context, the only requirement concerning the construction of the question bank that goes beyond the 'basic' compilation requirements is that, for the adaptive MCQs, three sets of questions must be constructed, clearly differing in the level of difficulty of the questions they contain.
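To make this requirement concrete, the following minimal Python sketch organises a question bank into three sets by difficulty level; the class and field names are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum
import random


class Difficulty(Enum):
    """The three difficulty levels required by the adaptive method."""
    A = "easy"
    B = "intermediate"
    C = "hard"


@dataclass
class Question:
    text: str
    options: list[str]
    correct_index: int
    difficulty: Difficulty


class QuestionBank:
    """Question bank split into three sets, one per difficulty level."""

    def __init__(self, questions: list[Question]):
        self.sets = {level: [] for level in Difficulty}
        for q in questions:
            self.sets[q.difficulty].append(q)

    def draw(self, level: Difficulty, n: int) -> list[Question]:
        """Randomly draw n questions of the requested difficulty."""
        return random.sample(self.sets[level], n)
```

Such a structure keeps the compilation effort close to that of a conventional question bank: the only additional step is tagging each question with one of the three difficulty levels.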
The statistical agreement between the scores obtained with the adaptive MCQs examination method and those obtained with the CRQ examination method provides a degree of assurance about the soundness of the choices made in designing the adaptive method, namely the number of questions included in each of the three phases of the examination procedure and the range of the weighting factors.
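Such agreement between the two score sets can be checked, for example, with a paired comparison of per-student scores. The sketch below uses SciPy for this purpose; the exact test applied in the study is not restated here, and the numerical values are invented placeholders rather than data from the study:

```python
import numpy as np
from scipy import stats

# Illustrative per-student scores under the two examination methods.
# These numbers are placeholders; they are NOT the data of the study.
adaptive_mcq_scores = np.array([6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0, 8.5])
crq_scores = np.array([6.0, 7.5, 5.0, 8.0, 6.5, 7.0, 5.5, 8.0])

# Paired t-test: a large p-value is consistent with the two methods
# producing statistically indistinguishable mean scores.
t_stat, p_value = stats.ttest_rel(adaptive_mcq_scores, crq_scores)

# Pearson correlation: agreement at the level of individual students.
r, r_p = stats.pearsonr(adaptive_mcq_scores, crq_scores)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"correlation:   r = {r:.2f}, p = {r_p:.3f}")
```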
Nevertheless, future investigations, in addition to extending the application to larger groups of students and to other topics, should include variations of the previously mentioned parameters, especially in order to examine in greater depth the effects of including type C questions in the 3rd phase of the examination procedure. The attention paid to these questions rests on the assumption that their presence is the most effective way to alleviate the positive-grade bias related to the positive-grades-only rule used. The optimum compromise between the levels of difficulty of the three types of questions should also be investigated, so as, on the one hand, to restrict less-well-prepared students from acquiring partial scores through guessing and, on the other hand, to avoid the discouraging effect that the increased difficulty of type B and, perhaps more importantly, type C questions might have even on well-prepared students.
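As a rough sketch of these scoring principles, the fragment below combines weighted phase scores under a positive-grades-only rule; the weighting factors, question counts, and grade scale are invented placeholders, not the values used in the study:

```python
# Hypothetical weighting factors and question counts for the three
# phases / question types (NOT the values used in the paper).
WEIGHTS = {"A": 1.0, "B": 1.5, "C": 2.0}
QUESTIONS_PER_PHASE = {"A": 10, "B": 5, "C": 5}


def phase_score(correct: int, total: int, weight: float) -> float:
    """Weighted score of one phase: positive credit only.

    A wrong answer simply earns nothing; no marks are ever subtracted,
    so the scheme involves no explicit or implicit negative marking.
    """
    return weight * correct / total


def adaptive_exam_score(correct_per_phase: dict[str, int]) -> float:
    """Total score over the three phases, normalised to a 0-10 scale."""
    raw = sum(
        phase_score(correct_per_phase[t], QUESTIONS_PER_PHASE[t], WEIGHTS[t])
        for t in ("A", "B", "C")
    )
    max_raw = sum(WEIGHTS.values())  # score if every answer were correct
    return 10.0 * raw / max_raw


# Example: all type A questions correct, most of type B, few of type C.
print(adaptive_exam_score({"A": 10, "B": 3, "C": 1}))
```

Under such a scheme, raising the weights of type B and type C questions rewards well-prepared students, while the absence of any subtraction keeps the rule strictly positive; the trade-off discussed above concerns how far the difficulty of those questions can be raised before it becomes discouraging.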
In conclusion, keeping in mind the limitations of the present investigation concerning the number of students that participated in the study, as well as the fact that it was applied to a single course, the present work provides indications that the principles used in designing the adaptive MCQs examination method achieved their aim to a satisfactory degree: the positive-grade bias was avoided, no negative marking scheme was used, either explicitly or implicitly, and the requirements for constructing the question bank were not substantially increased in comparison with the standard requirements imposed by the implementation of any generic MCQs examination method.
 
 