
processing, including EEG processing, audio speech recognition, and video image or face tracking. The idea is that successive stages nudge an instance up or down in likelihood, rather than repeatedly training on a mislabelled instance with ever-increasing weights until it is labelled "correctly" (Long and Servedio, 2005, 2008, 2010).
We advocate the use of chance-corrected evaluation in all circumstances, and recommend that learning algorithms be modified to optimize a chance-corrected cost. Uncorrected measures are deprecated and should never be used to compare across datasets with different prevalences or algorithms with different biases.
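The chance-corrected statistic advocated here (Powers' Bookmaker Informedness, equivalent to Youden's J in the binary case) can be contrasted with raw accuracy in a minimal sketch; the function names and the toy confusion matrix below are illustrative assumptions, not taken from the paper:

```python
# Sketch: chance-corrected evaluation (Informedness / Bookmaker,
# i.e. Youden's J for binary classification) versus raw accuracy.
# The toy counts are hypothetical, chosen to show prevalence bias.

def informedness(tp, fn, fp, tn):
    """Recall + inverse recall - 1: 0 at chance level, 1 for perfect."""
    recall = tp / (tp + fn)       # true positive rate
    inv_recall = tn / (tn + fp)   # true negative rate
    return recall + inv_recall - 1.0

def accuracy(tp, fn, fp, tn):
    return (tp + tn) / (tp + fn + fp + tn)

# A biased classifier on a 90%-negative dataset: always predicts negative.
tp, fn, fp, tn = 0, 10, 0, 90
print(accuracy(tp, fn, fp, tn))      # 0.9 -- looks good
print(informedness(tp, fn, fp, tn))  # 0.0 -- chance level
```

The degenerate classifier scores 90% accuracy purely from prevalence, while informedness correctly reports chance-level performance, which is why comparisons across datasets with different prevalences need the corrected measure.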
ACKNOWLEDGEMENTS 
This work was supported in part by the Chinese Natural Science Foundation under Grant No. 61070117, the Beijing Natural Science Foundation under Grant No. 4122004, the Australian Research Council under ARC Thinking Systems Grant No. TS0689874, and the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions.
REFERENCES 
Atyabi, Adham, Luerssen, Martin H. & Powers, David M. 
W. (2013), PSO-Based Dimension Reduction of EEG 
Recordings: Implications for Subject Transfer in BCI, 
Neurocomputing. 
Cumming,  G. (2012).  Understanding  The  New Statistics: 
Effect Sizes, Confidence Intervals, and Meta-Analysis. 
New York: Routledge  
Entwisle, Jim & Powers, David MW (1998). The present 
use of statistics in the evaluation of NLP parsers, Joint 
Conferences on New Methods in Language Processing 
& Computational Natural Language Learning, 215-224. 
Fitzgibbon, S. P., Lewis, T. W., Powers, D. M. W., Whitham, E. M., Willoughby, J. O. & Pope, K. J. (2013). Surface Laplacian of central scalp electrical signals is insensitive to muscle contamination. IEEE Transactions on Biomedical Engineering.
Fitzgibbon, S. P., Powers D. M. W., Pope K. J. & Clark C. 
R.  (2007). Removal  of  EEG noise and  artifact  using 
blind  source  separation,  Journal  of  Clinical 
Neurophysiology 24 (3), 232-243 
Freund, Y. (1995). Boosting a weak learning algorithm by 
majority. Information and Computation, 121(2), 256–285 
Freund,  Y.  &  Schapire,  R.  (1997).  A  decision-theoretic 
generalization of on-line learning and an application to 
boosting.  Journal  of  Computer  and  System  Sciences, 
55(1), 119–139 
Huang, J. H.  & Powers  D. M. W. (2001).  Large scale 
experiments on correction of confused words, Australian  
  Computer Science Communications 23:77-82 
Jia,  Xibin,  Han,  Yanfang,  Powers,  D.  and  Bao,  Xiyuan 
(2012). Spatial and temporal visual speech feature for 
Chinese  phonemes.  Journal  of  Information  & 
Computational Science 9(14):4177-4185. 
Jia, Xibin, Bao, Xiyuan, Powers, David M. W. & Li, Yujian (2013). Facial expression recognition based on block Gabor wavelet fusion feature. Journal of Information and Computational Science.
Kearns, M. and Valiant, L. G. (1989). Cryptographic limitations on learning Boolean formulae and finite automata. Proceedings of the 21st ACM Symposium on Theory of Computing (pp. 433-444). New York, NY: ACM Press.
Lewis, Trent W. & Powers, David M. W. (2004). Sensor 
fusion  weighting  measures  in  audio-visual  speech 
recognition,  27th  Australasian  Conference  on 
Computer Science 26:305-314. 
Long, Philip M. & Servedio, Rocco A. (2005). Martingale 
Boosting. Learning Theory/COLT 40-57. 
Long, Philip M. & Servedio, Rocco A. (2008). Adaptive Martingale Boosting. Neural Information Processing Systems (NIPS).
Long, Philip  M.  &  Servedio, Rocco  A.  (2010). Random 
Classification  Noise  defeats  all  Convex  Potential 
Boosters. Machine Learning 78:287-304 
Powers, David M. W. (1983). Neurolinguistics and Psycholinguistics as a Basis for Computer Acquisition of Natural Language. SIGART 84:29-34.
Powers, David M. W. (1991). How far can self-organization go? Results in unsupervised language learning. AAAI Spring Symposium on Machine Learning of Natural Language & Ontology:131-136.
Powers, D. M. W. (2003). Recall and Precision versus the Bookmaker. International Conference on Cognitive Science, 529-534.
Powers, D. M. W. (2011). Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation. Journal of Machine Learning Technology, 2(1), 37-63.
Powers, D. M. W. (2012). The Problem with Kappa. European Chapter of the Association for Computational Linguistics (EACL), 345-355.
Schapire, R. E., & Freund, Y. (2012). Boosting, MIT Press, 
Cambridge MA 
Schapire, R. E., & Singer, Y. (1999). Improved boosting 
algorithms  using  confidence-rated  predictions. 
Machine Learning, 37, 297–336 
Schapire, R. E. (1990). The strength of weak learnability. 
Machine Learning 5:197-227 
Valiant,  L.  G.  (1984).  A  theory  of  the  learnable. 
Communications of the ACM, 27(11):1134-1142 
Viola,  Paul  &  Jones,  Michael  (2001).  Rapid  Object 
Detection using a Boosted Cascade of Simple Features, 
Conference  on  Computer  Vision  and  Pattern 
Recognition. 
Webb, Geoffrey I. (2000). MultiBoosting: A Technique for Combining Boosting and Wagging. Machine Learning, 40(2), 159-196.
Witten, I. H., Frank, E., & Hall, M. (2011). Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edn. Amsterdam: Morgan Kaufmann.
ICINCO 2013 - 10th International Conference on Informatics in Control, Automation and Robotics