[Figure 7 plot omitted: error in m (·10⁻⁵) over the number of splits, with panels for λ = 10³ and λ = 0 and train, val, and test curves.]
Figure 7: RMSE during LOLIMOT training. The best
model according to the validation error termination is
marked in red.
model for the unregularized case (λ = 0) has four
splits because the validation error increases at the
seventh split. This is caused by the splitting procedure
of LOLIMOT (see Sec. 2.1). The addition of an LM,
n_LM,x → n_LM,x + 1, requires updating the centers and
standard deviations of the Gaussians. Due to the nor-
malization, the validity functions Φ_j^[x] are modified,
which in turn influences the state trajectory and thus
the output error. This can result in a poor initialization
of the subsequent optimization after the split, which
cannot be compensated for. Consequently, the training
error of the seven-split model is higher than that of the
six-split model.
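The coupling through normalization can be illustrated with a minimal one-dimensional sketch (function and variable names, centers, and standard deviations are illustrative, not taken from the paper): because each Gaussian is divided by the sum over all Gaussians, adding a single local model changes every validity function Φ_j, not just the new one.

```python
import numpy as np

def validity_functions(x, centers, sigmas):
    """Normalized Gaussian validity functions Phi_j(x) of a local
    model network: each Gaussian is divided by the sum over all
    Gaussians, so the Phi_j form a partition of unity in x."""
    # Unnormalized Gaussian memberships, shape (n_points, n_LM)
    mu = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigmas[None, :]) ** 2)
    # Normalization couples all local models: adding one Gaussian
    # modifies every column of Phi, not only the new one.
    return mu / mu.sum(axis=1, keepdims=True)

x = np.linspace(0.0, 1.0, 5)
# Six local models ...
phi_6 = validity_functions(x, np.linspace(0.1, 0.9, 6), np.full(6, 0.2))
# ... versus seven: all validity functions shift, which perturbs the
# state trajectory and hence the output error.
phi_7 = validity_functions(x, np.linspace(0.1, 0.9, 7), np.full(7, 0.2))
```

In both cases the validity functions still sum to one at every point, which is exactly why the change from n_LM,x to n_LM,x + 1 propagates to all existing local models.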
When evaluating the unregularized models on test
data, the models obtained after four and seven splits
show unstable behavior. In those cases, no sensible
RMSE can be computed, which is why their values
are omitted from the figure. Within the regularized
model ensemble, no instability was detected on test
data. Consequently, regularization improves the ro-
bustness of LOLIMOT.
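The omission of unstable models can be mimicked with a simple divergence guard around the simulation error (a sketch; the function name and the plausibility bound are hypothetical, not from the paper): if the free-run output is non-finite or leaves a plausibility bound, no RMSE is reported.

```python
import numpy as np

def simulation_rmse(y_sim, y_true, bound=1e3):
    """RMSE of a free-run simulation with a divergence guard:
    non-finite or implausibly large outputs indicate an unstable
    model, for which no sensible RMSE exists (returns None)."""
    y_sim = np.asarray(y_sim, dtype=float)
    if not np.all(np.isfinite(y_sim)) or np.max(np.abs(y_sim)) > bound:
        return None  # unstable run: omit from the evaluation
    return float(np.sqrt(np.mean((y_sim - np.asarray(y_true)) ** 2)))

y_true = np.zeros(4)
stable = simulation_rmse([1e-5, -1e-5, 1e-5, -1e-5], y_true)  # small error
unstable = simulation_rmse([1.0, 1e2, 1e4, 1e8], y_true)      # diverging
```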
Furthermore, it can be observed that regulariza-
tion allows LOLIMOT to perform seven splits for
the best model, compared to only four splits in the
unregularized case. Regularization enables the al-
gorithm to represent the information in the training
dataset with more LMs. Evaluation on the bench-
mark's test dataset yields an RMSE of 0.981 · 10⁻⁵ m
for the regularized best model, whereas the unregu-
larized one became unstable. In addition to yielding
stable splits, space-filling regularization thus enables
better models. In comparison to other approaches,
the achieved result is among the best for this bench-
mark (Nonlinear Benchmark Working Group, 2025).
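The validation-based model selection underlying Figure 7 can be sketched in a few lines (the function name and all RMSE values are hypothetical, not benchmark results): among the stable candidates, the split count with the lowest validation error is chosen.

```python
def best_split(val_rmse):
    """Validation-based model selection: choose the number of splits
    with the lowest validation RMSE, skipping models that were
    flagged unstable (marked here as None)."""
    stable = {k: v for k, v in val_rmse.items() if v is not None}
    return min(stable, key=stable.get)

# Hypothetical validation RMSE per number of splits (None = unstable)
chosen = best_split({1: 3.0, 2: 2.1, 3: 1.5, 4: 1.2, 5: 1.4, 6: 1.3, 7: None})
```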
6 CONCLUSIONS
In this paper, we investigated the effect of nonlinear
optimization on the extended input/state space data
point distribution of nonlinear state space models.
Two regularization approaches for penalizing poor
space-filling quality were derived and tested, resulting
in models with more meaningful and accurate local
models. This is achieved by more local estimation,
less interpolation, and stable local behavior. Further-
more, higher modeling performance was achieved and
the number of iterations per optimization was reduced.
Future lines of research will focus on the stable extrap-
olation behavior resulting from space-filling enforce-
ment in the local model state space network. More-
over, the applicability of space-filling metrics in
higher-dimensional settings will be addressed.
REFERENCES
Belz, J., Münker, T., Heinz, T. O., Kampmann, G., and
Nelles, O. (2017). Automatic modeling with local
model networks for benchmark processes. IFAC-
PapersOnLine, 50(1):470–475. 20th IFAC World
Congress.
Bemporad, A. (2024). An L-BFGS-B approach for linear
and nonlinear system identification under ℓ1 and
group-Lasso regularization.
Boyd, S. and Vandenberghe, L. (2004). Convex Optimiza-
tion. Cambridge University Press.
Forgione, M. and Piga, D. (2021). Model structures and fit-
ting criteria for system identification with neural net-
works.
Garulli, A., Paoletti, S., and Vicino, A. (2012). A sur-
vey on switched and piecewise affine system identifi-
cation. IFAC Proceedings Volumes, 45(16):344–355.
16th IFAC Symposium on System Identification.
Herkersdorf, M. and Nelles, O. (2025). Online and offline
space-filling input design for nonlinear system identi-
fication: A receding horizon control-based approach.
arXiv preprint arXiv:2504.02653.
Kullback, S. and Leibler, R. A. (1951). On Information and
Sufficiency. The Annals of Mathematical Statistics,
22(1):79–86.
Liu, Y., Tóth, R., and Schoukens, M. (2024). Physics-
guided state-space model augmentation using
weighted regularized neural networks.
Ljung, L. (1999). System identification (2nd ed.): theory for
the user. Prentice Hall PTR, USA.
Ljung, L., Andersson, C., Tiels, K., and Schön, T. B.
(2020). Deep learning and system identification.
IFAC-PapersOnLine, 53(2):1175–1181. 21st IFAC
World Congress.
Luenberger, D. (1967). Canonical forms for linear multi-
variable systems. IEEE Transactions on Automatic
Control, AC-12:290–293.
McKelvey, T., Akcay, H., and Ljung, L. (1996). Subspace-
based multivariable system identification from fre-
quency response data. IEEE Transactions on Auto-
matic Control, 41(7):960–979.
Nelles, O. (2020). Nonlinear System Identification From
Classical Approaches to Neural Networks, Fuzzy