Enlarging Training Sets for Neural Networks
Authors: R. Gil-Pita; P. Jarabo-Amores; M. Rosa-Zurera and F. López-Ferreras

Affiliation: Universidad de Alcalá, Spain

Keyword(s): PNN, MLP, RBFN, neural network, synthetic training set.

Abstract: A study is presented comparing the classification performance of multilayer perceptrons, radial basis function networks, and probabilistic neural networks. In many classification problems, probabilistic neural networks have outperformed other neural classifiers. Unfortunately, with this kind of network, the number of operations required to classify one pattern depends directly on the number of training patterns. This excessive computational cost makes the method difficult to implement in many real-time applications. In contrast, multilayer perceptrons have a low computational cost after training, but the training set size required to achieve low error rates is generally large. In this paper we propose an alternative method for training multilayer perceptrons, using data knowledge derived from probabilistic neural network theory. Once the probability density functions have been estimated by the probabilistic neural network, a new training set can be generated by sampling from these estimated densities. Results demonstrate that a multilayer perceptron trained with this enlarged training set achieves results as good as those obtained with a probabilistic neural network, but at a lower computational cost.

CC BY-NC-ND 4.0

Paper citation in several formats:
Gil-Pita, R.; Jarabo-Amores, P.; Rosa-Zurera, M. and López-Ferreras, F. (2004). Enlarging Training Sets for Neural Networks. In Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development (ICINCO 2004) - ANNs; ISBN 972-8865-14-7, SciTePress, pages 23-31. DOI: 10.5220/0001149800230031

@conference{anns04,
author={R. Gil{-}Pita and P. Jarabo{-}Amores and M. Rosa{-}Zurera and F. López{-}Ferreras},
title={Enlarging Training Sets for Neural Networks},
booktitle={Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development (ICINCO 2004) - ANNs},
year={2004},
pages={23-31},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001149800230031},
isbn={972-8865-14-7},
}

TY - CONF

JO - Proceedings of the First International Workshop on Artificial Neural Networks: Data Preparation Techniques and Application Development (ICINCO 2004) - ANNs
TI - Enlarging Training Sets for Neural Networks
SN - 972-8865-14-7
AU - Gil-Pita, R.
AU - Jarabo-Amores, P.
AU - Rosa-Zurera, M.
AU - López-Ferreras, F.
PY - 2004
SP - 23
EP - 31
DO - 10.5220/0001149800230031
PB - SciTePress
ER -