
Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices

Authors: Jorge López ¹, Andrey Laputenko ², Natalia Kushik ¹, Nina Yevtushenko ³ and Stanislav N. Torgaev ²

Affiliations: ¹ SAMOVAR, CNRS, Télécom SudParis, Université Paris-Saclay, 9 rue Charles Fourier, 91000 Évry, France; ² Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia; ³ Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia, and Ivannikov Institute for System Programming of the Russian Academy of Sciences, 25 Alexander Solzhenitsyn street, 109004 Moscow, Russia

ISBN: 978-989-758-320-9

Keyword(s): Supervised Machine Learning, Digital Circuits, Constrained Devices, Deep Learning.

Related Ontology Subjects/Areas/Topics: Data Communication Networking ; Distributed and Mobile Software Systems ; Enterprise Information Systems ; Internet of Things ; Parallel and High Performance Computing ; Sensor Networks ; Software Agents and Internet Computing ; Software and Architectures ; Software Engineering ; Telecommunications

Abstract: Computationally constrained devices are devices with typically low resources and computational power, built for specific tasks. At the same time, recent advances in machine learning, e.g., deep learning or hierarchical/cascade compositions of machines, which allow accurate prediction or classification of values of interest such as quality, trust, etc., require high computational power. Such complex machine learning configurations are often made possible by advances in processing units, e.g., Graphical Processing Units (GPUs). Computationally constrained devices can also benefit from such advances, and an immediate question arises: how? This paper is devoted to answering that question. Our approach proposes to use scalable representations of ‘trained’ models through the synthesis of logic circuits. Furthermore, we showcase how a cascade machine learning composition can be achieved using ‘traditional’ digital electronic devices. To validate our approach, we present a set of preliminary experimental studies that show how different circuit apparatus clearly outperform (in terms of processing speed and resource consumption) current machine learning software implementations.
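To illustrate the core idea of the abstract — representing a trained model as a logic circuit — the following is a minimal sketch, not taken from the paper: a toy "trained" classifier over binary inputs (here a perceptron with illustrative, hypothetical weights) is compiled into a sum-of-products Boolean expression by enumerating its truth table. Such an expression maps directly onto AND/OR/NOT gates.

```python
# Minimal sketch (not from the paper): compile a small trained model with
# binary inputs into a logic-circuit-style representation by enumerating
# its truth table and emitting a sum-of-products Boolean expression.
from itertools import product

def predict(bits):
    # Toy "trained" model: a perceptron with hypothetical weights and bias.
    weights = [2, -1, 1]
    bias = -1
    return int(sum(w * b for w, b in zip(weights, bits)) + bias > 0)

def to_sum_of_products(n_inputs):
    """Collect the minterms (input combinations) where the model outputs 1."""
    minterms = []
    for bits in product([0, 1], repeat=n_inputs):
        if predict(bits):
            term = " & ".join(
                f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)
            )
            minterms.append(f"({term})")
    return " | ".join(minterms) if minterms else "0"

expr = to_sum_of_products(3)
print(expr)
# → (x0 & ~x1 & ~x2) | (x0 & ~x1 & x2) | (x0 & x1 & x2)
```

Once in this form, standard logic minimization (e.g., Karnaugh maps or Quine–McCluskey) can shrink the expression before synthesis, which is what makes the circuit representation scalable; the enumeration itself is only feasible for models with few binary inputs, so this is a conceptual sketch rather than the paper's method.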

License: CC BY-NC-ND 4.0


Paper citation in several formats:
López, J.; Laputenko, A.; Kushik, N.; Yevtushenko, N. and Torgaev, S. (2018). Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices. In Proceedings of the 13th International Conference on Software Technologies - Volume 1: ICSOFT, ISBN 978-989-758-320-9, pages 518-528. DOI: 10.5220/0006908905520562

@conference{icsoft18,
author={Jorge López and Andrey Laputenko and Natalia Kushik and Nina Yevtushenko and Stanislav N. Torgaev},
title={Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices},
booktitle={Proceedings of the 13th International Conference on Software Technologies - Volume 1: ICSOFT},
year={2018},
pages={518-528},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006908905520562},
isbn={978-989-758-320-9},
}

TY - CONF

JO - Proceedings of the 13th International Conference on Software Technologies - Volume 1: ICSOFT
TI - Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices
SN - 978-989-758-320-9
AU - López, J.
AU - Laputenko, A.
AU - Kushik, N.
AU - Yevtushenko, N.
AU - Torgaev, S.
PY - 2018
SP - 518
EP - 528
DO - 10.5220/0006908905520562
ER -
