Authors: Hananel Hazan and Larry Manevitz
Affiliation: University of Haifa, Israel
Keywords: Liquid State Machine, Small World Topology, Robustness, Machine Learning
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Complex Artificial Neural Network Based Systems and Dynamics; Computational Intelligence; Computational Neuroscience; Data Manipulation; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
The Liquid State Machine (LSM) is a method of computing with temporal neurons; unlike standard artificial neural networks, it can, among other things, classify intrinsically temporal data directly. It has also been put forward as a natural model of certain kinds of brain function. This paper presents two results: (1) We show that the LSM as normally defined cannot serve as a natural model of brain function, because it is very vulnerable to failures in parts of the model. This result contrasts with work by Maass et al., which showed that these models are robust to noise in the input data. (2) We show that imposing certain topological constraints (such as the "small world assumption"), which have been claimed to be reasonably plausible biologically, can restore robustness in this sense to LSMs.
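To make the "small world assumption" mentioned above concrete: a small-world topology is commonly illustrated by the Watts-Strogatz construction, where a ring lattice (mostly local connections) has a small fraction of its edges randomly rewired to create long-range shortcuts. The sketch below is not from the paper; it is a minimal pure-Python illustration of that construction, with all function names and parameters chosen for this example.

```python
import random

def small_world(n, k, p, seed=0):
    """Sketch of a Watts-Strogatz-style small-world topology:
    a ring lattice where each node links to its k nearest
    neighbours, with each lattice edge rewired to a random
    target with probability p (the "shortcut" probability)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    # Ring lattice: connect each node to k/2 neighbours on each side.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    # Rewire each lattice edge with probability p, avoiding
    # self-loops and duplicate edges.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

# Mostly-local connectivity with a few long-range shortcuts.
net = small_world(n=100, k=4, p=0.1)
```

With small p the network keeps the high local clustering of the lattice while the few shortcuts sharply reduce average path length, which is the property claimed to be biologically plausible.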