Using Reinforcement Learning for Optimization of a Workpiece Clamping Position in a Machine Tool

Authors: Vladimir Samsonov 1 ; Chrismarie Enslin 1 ; Hans-Georg Köpken 2 ; Schirin Baer 2 and Daniel Lütticke 1

Affiliations: 1 Institute of Information Management in Mechanical Engineering, RWTH Aachen University, Aachen, Germany ; 2 Siemens AG, Digital Factory Division, Nuernberg, Germany

Keyword(s): Reinforcement Learning, Soft Actor-Critic, Supervised Learning, Industrial Manufacturing, Process Optimisation, Machine Tool Optimisation.

Abstract: Modern manufacturing is increasingly data-driven. Yet a number of applications are traditionally performed by humans because of their ability to think analytically, learn from previous experience and adapt. With the advent of Deep Reinforcement Learning (RL), many of these applications can be partly or completely automated. In this paper we aim at finding an optimal clamping position for a workpiece (WP) with the help of deep RL. Traditionally, a human expert chooses a clamping position that leads to efficient, high-quality machining without axis limit violations or collisions. This decision is hard to automate because of the variety of WP geometries and the possible ways to manufacture them. We investigate whether RL can aid in finding a near-optimal WP clamping position, even for WPs unseen during training. We develop a use case representing a simplified clamping position optimisation problem, formalise it as a Markov Decision Process (MDP) and conduct a number of RL experiments to demonstrate the applicability of the approach in terms of training stability and solution quality. First evaluations of the concept demonstrate the capability of a trained RL agent to find a near-optimal clamping position for an unseen WP within a small number of iterations.
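The abstract describes formalising the clamping-position problem as an MDP and solving it with deep RL (the keywords name Soft Actor-Critic). The sketch below is a hypothetical, minimal illustration of that setup, not the authors' implementation: ClampingPositionEnv, its observation and reward definitions and all numeric bounds are invented placeholders, and it assumes the gymnasium and stable-baselines3 libraries.

# Minimal sketch: a toy MDP for clamping-position optimisation trained with SAC.
# All environment details below are assumptions for illustration only.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class ClampingPositionEnv(gym.Env):
    """Toy MDP: the agent shifts the WP clamping position and is rewarded for
    approaching a (hidden) optimal position without violating axis limits."""

    def __init__(self):
        super().__init__()
        # Observation: encoded WP geometry (4 dummy features) + current clamping offset.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(5,), dtype=np.float32)
        # Action: continuous adjustment of the clamping offset.
        self.action_space = spaces.Box(low=-0.1, high=0.1, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.geometry = self.np_random.uniform(-1.0, 1.0, size=4).astype(np.float32)
        self.optimal = float(self.geometry.mean())  # stand-in for the true optimum
        self.position = 0.0
        self.steps = 0
        return self._obs(), {}

    def _obs(self):
        return np.append(self.geometry, np.float32(self.position))

    def step(self, action):
        self.position = float(np.clip(self.position + action[0], -1.0, 1.0))
        self.steps += 1
        # Reward: negative distance to the optimum, with a penalty at the axis limit.
        reward = -abs(self.position - self.optimal)
        if abs(self.position) >= 1.0:
            reward -= 1.0
        terminated = abs(self.position - self.optimal) < 0.01
        truncated = self.steps >= 50
        return self._obs(), reward, terminated, truncated, {}

if __name__ == "__main__":
    env = ClampingPositionEnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short demo run
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    print("proposed clamping adjustment:", action)

In this toy setting the trained agent proposes a clamping adjustment for a newly sampled geometry in a handful of steps, which mirrors the kind of few-iteration inference behaviour the abstract reports, but the reward shaping and geometry encoding here are purely illustrative.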

CC BY-NC-ND 4.0

Paper citation in several formats:
Samsonov, V.; Enslin, C.; Köpken, H.; Baer, S. and Lütticke, D. (2020). Using Reinforcement Learning for Optimization of a Workpiece Clamping Position in a Machine Tool. In Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 1: ICEIS; ISBN 978-989-758-423-7; ISSN 2184-4992, SciTePress, pages 506-514. DOI: 10.5220/0009354105060514

@conference{iceis20,
author={Vladimir Samsonov and Chrismarie Enslin and Hans{-}Georg Köpken and Schirin Baer and Daniel Lütticke},
title={Using Reinforcement Learning for Optimization of a Workpiece Clamping Position in a Machine Tool},
booktitle={Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 1: ICEIS},
year={2020},
pages={506-514},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009354105060514},
isbn={978-989-758-423-7},
issn={2184-4992},
}

TY - CONF
JO - Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 1: ICEIS
TI - Using Reinforcement Learning for Optimization of a Workpiece Clamping Position in a Machine Tool
SN - 978-989-758-423-7
IS - 2184-4992
AU - Samsonov, V.
AU - Enslin, C.
AU - Köpken, H.
AU - Baer, S.
AU - Lütticke, D.
PY - 2020
SP - 506
EP - 514
DO - 10.5220/0009354105060514
PB - SciTePress