
rithms foreseeing robot-environment interaction trajectories (Petrone et al., 2022).
REFERENCES
Agarap, A. F. (2019). Deep Learning using Rectified Linear Units (ReLU). arXiv preprint arXiv:1803.08375.
Caccavale, F., Natale, C., Siciliano, B., and Villani, L. (1999). Six-DOF impedance control based on angle/axis representations. IEEE Trans. Robot. Automat., 15(2):289–300.
Chua, K., Calandra, R., McAllister, R., and Levine, S. (2018). Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. In Adv. Neural Inform. Process. Syst., volume 31, pages 4754–4765.
Duan, J., Gan, Y., Chen, M., and Dai, X. (2018). Adaptive variable impedance control for dynamic contact force tracking in uncertain environment. Robot. Auton. Syst., 102:54–65.
Featherstone, R. and Orin, D. E. (2016). Dynamics. In Siciliano, B. and Khatib, O., editors, Springer Handbook of Robotics, chapter 3, pages 195–211. Springer, 2nd edition.
Formenti, A., Bucca, G., Shahid, A. A., Piga, D., and Roveda, L. (2022). Improved impedance/admittance switching controller for the interaction with a variable stiffness environment. Compl. Eng. Syst., 2(3). Art. no. 12.
Haddadin, S., Parusel, S., Johannsmeier, L., Golz, S., Gabl, S., Walch, F., Sabaghian, M., Jähne, C., Hausperger, L., and Haddadin, S. (2022). The Franka Emika Robot: A Reference Platform for Robotics Research and Education. IEEE Robot. Automat. Mag., 29(2):46–64.
Huang, H., Guo, Y., Yang, G., Chu, J., Chen, X., Li, Z., and Yang, C. (2022). Robust Passivity-Based Dynamical Systems for Compliant Motion Adaptation. IEEE/ASME Trans. Mechatron., 27(6):4819–4828.
Iskandar, M., Ott, C., Albu-Schäffer, A., Siciliano, B., and Dietrich, A. (2023). Hybrid Force-Impedance Control for Fast End-Effector Motions. IEEE Robot. Automat. Lett., 8(7):3931–3938.
Jung, S., Hsia, T. C. S., and Bonitz, R. G. (2004). Force Tracking Impedance Control of Robot Manipulators Under Unknown Environment. IEEE Trans. Contr. Syst. Technol., 12(3):474–483.
Khatib, O. (1987). A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE J. Robot. Automat., 3(1):43–53.
Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. In Int. Conf. Learn. Represent.
Koenig, N. and Howard, A. (2004). Design and Use Paradigms for Gazebo, an Open-Source Multi-Robot Simulator. In IEEE Int. Conf. Intell. Robots Syst., volume 3, pages 2149–2154.
Li, K., He, Y., Li, K., and Liu, C. (2023). Adaptive fractional-order admittance control for force tracking in highly dynamic unknown environments. Int. J. Robot. Res. Applic., 50(3):530–541.
Matschek, J., Bethge, J., and Findeisen, R. (2023). Safe Machine-Learning-Supported Model Predictive Force and Motion Control in Robotics. IEEE Trans. Contr. Syst. Technol., 31(6):2380–2392.
Nagabandi, A., Kahn, G., Fearing, R. S., and Levine, S. (2018). Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. In IEEE Int. Conf. Robot. Automat., pages 7559–7566.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Adv. Neural Inform. Process. Syst., volume 32, pages 8026–8037.
Petrone, V., Ferrentino, E., and Chiacchio, P. (2022). Time-Optimal Trajectory Planning With Interaction With the Environment. IEEE Robot. Automat. Lett., 7(4):10399–10405.
Petrone, V., Puricelli, L., Pozzi, A., Ferrentino, E., Chiacchio, P., Braghin, F., and Roveda, L. (2025). Optimized Residual Action for Interaction Control with Learned Environments. IEEE Trans. Contr. Syst. Technol. Accepted for publication.
Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A. (2009). ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3.
Roveda, L., Castaman, N., Franceschi, P., Ghidoni, S., and Pedrocchi, N. (2020). A Control Framework Definition to Overcome Position/Interaction Dynamics Uncertainties in Force-Controlled Tasks. In IEEE Int. Conf. Robot. Automat., pages 6819–6825.
Roveda, L. and Piga, D. (2021). Sensorless environment stiffness and interaction force estimation for impedance control tuning in robotized interaction tasks. Auton. Robots, 45(3):371–388.
Shen, Y., Lu, Y., and Zhuang, C. (2022). A fuzzy-based impedance control for force tracking in unknown environment. J. Mech. Sci. Technol., 36(10):5231–5242.
Shu, X., Ni, F., Min, K., Liu, Y., and Liu, H. (2021). An Adaptive Force Control Architecture with Fast-Response and Robustness in Uncertain Environment. In Int. Conf. Robot. Biom., pages 1040–1045.
Siciliano, B. and Villani, L. (1999). Indirect Force Control. In Robot Force Control, pages 31–64. Springer US.
Yu, X., Liu, S., Zhang, S., He, W., and Huang, H. (2024). Adaptive Neural Network Force Tracking Control of Flexible Joint Robot With an Uncertain Environment. IEEE Trans. Ind. Electron., 71(6):5941–5949.
Augmenting Neural Networks-Based Model Approximators in Robotic Force-Tracking Tasks