Minimal Architecture and Training Parameters of Multilayer Perceptron for its Efficient Parallelization

Volodymyr Turchenko, Lucio Grandinetti

2009

Abstract

This paper presents the development of a parallel algorithm for batch pattern training of a multilayer perceptron with the back-propagation algorithm, together with a study of its efficiency on a general-purpose parallel computer. The multilayer perceptron model and the standard sequential batch pattern training algorithm are described theoretically. An algorithmic description of the parallel version of the batch pattern training method is introduced. The efficiency of the developed parallel algorithm is investigated by progressively increasing the dimension of the parallelized problem on a general-purpose parallel computer, the NEC TX-7. A minimal multilayer perceptron architecture and the training parameters required for its efficient parallelization are given.
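
To make the batch pattern scheme concrete, the following is a minimal sketch (not the authors' code) of batch training for a one-hidden-layer perceptron and of the data-parallel idea the abstract describes: each worker accumulates gradients over its own subset of the training patterns, the partial gradients are summed (an MPI all-reduce in a real implementation), and every worker applies the same weight update. All names, problem sizes, and the learning rate below are illustrative assumptions; the workers are simulated by a loop in a single process.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: N patterns, D inputs, H hidden neurons, 1 output.
N, D, H = 64, 3, 5
X = rng.normal(size=(N, D))
t = np.sin(X.sum(axis=1, keepdims=True))          # targets

W1 = rng.normal(scale=0.1, size=(D, H))           # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(H, 1))           # hidden -> output weights
lr = 0.05                                         # learning rate (assumed)

def batch_gradients(Xb, tb, W1, W2):
    """Forward and backward pass over one batch of patterns; returns the
    gradients of the summed squared error w.r.t. W1 and W2."""
    h = np.tanh(Xb @ W1)                          # hidden activations
    y = h @ W2                                    # linear output
    e = y - tb                                    # output error
    gW2 = h.T @ e                                 # backprop to W2
    gh = (e @ W2.T) * (1.0 - h ** 2)              # backprop through tanh
    gW1 = Xb.T @ gh
    return gW1, gW2

P = 4                                             # number of "workers"
for epoch in range(200):
    # Each worker processes its own slice of the batch; here the slices are
    # handled one after another, where a parallel version would compute them
    # concurrently and combine the results with MPI_Allreduce.
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for Xp, tp in zip(np.array_split(X, P), np.array_split(t, P)):
        pW1, pW2 = batch_gradients(Xp, tp, W1, W2)
        gW1 += pW1                                # sum of partial gradients
        gW2 += pW2
    W1 -= lr * gW1 / N                            # identical update applied
    W2 -= lr * gW2 / N                            # by every worker

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - t) ** 2)))

Because the batch gradient is a plain sum over patterns, the P partial sums reproduce the sequential update exactly; the parallel efficiency therefore hinges on how the per-slice computation compares with the cost of the gradient reduction, which is the trade-off the paper examines as the perceptron's dimension grows.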



Paper Citation


in Harvard Style

Turchenko, V. and Grandinetti, L. (2009). Minimal Architecture and Training Parameters of Multilayer Perceptron for its Efficient Parallelization. In Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009), ISBN 978-989-674-002-3, pages 79-87. DOI: 10.5220/0002265800790087


in Bibtex Style

@conference{anniip09,
author={Volodymyr Turchenko and Lucio Grandinetti},
title={Minimal Architecture and Training Parameters of Multilayer Perceptron for its Efficient Parallelization},
booktitle={Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009)},
year={2009},
pages={79-87},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002265800790087},
isbn={978-989-674-002-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Workshop on Artificial Neural Networks and Intelligent Information Processing - Volume 1: Workshop ANNIIP, (ICINCO 2009)
TI - Minimal Architecture and Training Parameters of Multilayer Perceptron for its Efficient Parallelization
SN - 978-989-674-002-3
AU - Turchenko V.
AU - Grandinetti L.
PY - 2009
SP - 79
EP - 87
DO - 10.5220/0002265800790087
ER -