Impact of Hidden Weights Choice on Accuracy of MLP with Randomly Fixed Hidden Neurons for Regression Problems

Authors: Marie-Christine Suhner and Philippe Thomas

Affiliation: Université de Lorraine and CNRS, France

ISBN: 978-989-758-274-5

Keyword(s): Neural Network, Extreme Learning Machine, Multilayer Perceptron, Parameters Initialization, Randomly Fixed Hidden Neurons.

Related Ontology Subjects/Areas/Topics: Artificial Intelligence ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Learning Paradigms and Algorithms ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Theory and Methods

Abstract: Neural networks are well-known tools able to learn models from data with good accuracy. However, they suffer from a computational time that may be too expensive. One alternative is to fix the weights and biases connecting the input to the hidden layer. This approach, recently denoted the extreme learning machine (ELM), is able to learn a model quickly. The multilayer perceptron and the ELM have identical structures; the main difference is that in an ELM only the parameters linking the hidden layer to the output layer are learned. The weights and biases connecting the input to the hidden layer are chosen randomly and do not evolve during learning. The impact of the choice of these random parameters on model accuracy has not been studied in the literature. This paper draws on the extensive literature concerning the feedforward neural network initialization problem. Different feedforward neural network initialization algorithms are recalled and used to determine the ELM parameters connecting the input to the hidden layer. These algorithms are tested and compared on several regression benchmark problems.
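To make the idea in the abstract concrete — random, fixed input-to-hidden weights and a least-squares solve for the output weights only — here is a minimal ELM sketch for regression. This is an illustration of the general technique, not the authors' implementation; the uniform weight range and `tanh` activation are assumptions, and the paper's point is precisely that this random initialization choice affects accuracy:

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, rng=None):
    """Train a single-hidden-layer ELM: the input-to-hidden weights and
    biases are drawn at random and kept fixed; only the hidden-to-output
    weights are learned, via ordinary least squares."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                                   # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # learned output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: same structure as a one-hidden-layer MLP."""
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: approximate y = sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=30, rng=0)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because the hidden layer is fixed, training reduces to one linear least-squares solve, which is why ELM learning is fast compared with gradient-based MLP training.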

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Suhner, M. and Thomas, P. (2017). Impact of Hidden Weights Choice on Accuracy of MLP with Randomly Fixed Hidden Neurons for Regression Problems. In Proceedings of the 9th International Joint Conference on Computational Intelligence - Volume 1: IJCCI, ISBN 978-989-758-274-5, pages 223-230. DOI: 10.5220/0006495702230230

@conference{ijcci17,
author={Marie{-}Christine Suhner and Philippe Thomas},
title={Impact of Hidden Weights Choice on Accuracy of MLP with Randomly Fixed Hidden Neurons for Regression Problems},
booktitle={Proceedings of the 9th International Joint Conference on Computational Intelligence - Volume 1: IJCCI},
year={2017},
pages={223-230},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006495702230230},
isbn={978-989-758-274-5},
}

TY - CONF

JO - Proceedings of the 9th International Joint Conference on Computational Intelligence - Volume 1: IJCCI
TI - Impact of Hidden Weights Choice on Accuracy of MLP with Randomly Fixed Hidden Neurons for Regression Problems
SN - 978-989-758-274-5
AU - Suhner, M.
AU - Thomas, P.
PY - 2017
SP - 223
EP - 230
DO - 10.5220/0006495702230230
ER -
