A Visionary Way to Novel Process Optimization Techniques
The Transfer of a Process Modeling Language to the Neuronal Level
Norbert Gronau and Marcus Grum
Department of Business Informatics, esp. Processes and Systems, University of Potsdam,
August-Bebel-Strasse 89, 14482 Potsdam, Germany
ngronau@lswi.de, mgrum@lswi.de
Keywords:
Process Modeling, Artificial Intelligence, Machine Learning, Neuronal Networks, Knowledge Modeling
Description Language (KMDL), Process Simulation, Simulation Process Building, Process Optimization.
Abstract:
Modern process optimization approaches build on various qualitative and quantitative tools, but they are mainly limited to simple relations within different process perspectives such as cost, time or stock. In this paper, a new approach is presented that draws on techniques from the field of Artificial Intelligence to capture complex relations within processes. Hence, a fundamental increase in value is intended to be gained. Existing modeling techniques and languages serve as basic concepts and realize the junction of apparently contradictory approaches. This paper therefore draws a vision of promising future process optimization techniques and presents an innovative contribution.
1 INTRODUCTION
The great potential of Artificial Neural Networks (short: ANN) has been well known for nearly four decades. In general, these techniques mimic the capabilities and working behavior of the brain by simulating a network of simple nerve cells. Early ANN architectures go back to the 1940s, and numerous improvements can be found from the late 1980s to 2000 (Schmidhuber, 2015). Because of their ability to learn non-linear relations, to generalize correctly and to build biologically motivated, efficiently working structures, ANN have been applied successfully in various contexts such as music composition, banking, medicine, etc. Even simple processes have been modeled by means of ANN (Chambers and Mount-Campbell, 2000).
Nowadays, in times of big data, enormous amounts of data are available, computing power has increased immensely, and with this the possibility to create bigger and more complex networks. Although the collection of process data has become easy, the neuronal modeling and decoding of complex processes has not been realized yet. Hence, the following research focuses on deep learning with ANN with the intention to answer the following research question: "How can the capability of ANN to create efficiently working structures be used for process optimization?" This paper does not intend to draw an all-embracing description of concrete technical realizations of those novel process optimization techniques. It intends to take a first step towards the conjunction of the process modeling and optimization world on the one hand and the ANN world on the other hand, such that a sub research question is: "How can a process modeling language be transferred to a neuronal level?"
In the following, Neuronal Process Modeling refers to the modeling of processes on a neuronal level with a common process modeling language, to the reinterpretation of common process modeling based on that understanding, as well as to their difference quantity. Neuronal Process Simulation refers to the process simulation of common process models considering ANN as knowledge models of process participants (persons and machines), to the simulation of common process models reinterpreted as deep neuronal networks, and to their difference quantity. Neuronal Process Optimization refers to common process optimization techniques realized on a neuronal level (e.g. double-loop learning on a neuronal level), to process optimizations that become possible because of the learning capabilities of ANN in the domain of common process models, as well as to their difference quantity. Within this paper, the focus lies on Neuronal Process Modeling.
The research approach is design-oriented as Peffers proposes (Peffers et al., 2006; Peffers et al., 2007), such that the paper is structured as follows: Section 2 presents underlying concepts; Section 3 derives objectives for a Neuronal Process Modeling; Section 4 provides the design, followed by its demonstration (Section 5) and evaluation (Section 6); finally, Section 7 concludes the paper.
2 UNDERLYING CONCEPTS
Starting with the selection of a modeling approach and the question of how processes can be optimized in the first subsection, the second subsection refers to underlying knowledge generation concepts. A further subsection introduces ANN.
2.1 Process Optimization
Following the fundamental procedure model for simulation studies of Gronau (2017), a model is created after the modeling purpose has been defined and analyzed and corresponding data have been collected. Hence, the following starts with modeling issues. Afterwards, once the model is valid, simulation studies are realized and results are collected, analyzed and interpreted. If changes or optimizations are required, adjustments are defined and simulated until a sufficient solution has been identified. This procedure is followed here.
The following builds on the understanding of a process model as a homomorphous mapping of a system that reduces the complexity of the real world with respect to the modeling objectives (Gronau, 2016). According to Krallmann et al. (2001), a system to be modeled consists of a set of system elements that are connected by a set of system relations. As the system is delimited by a system border, the system and its environment are connected by an interface to exchange system input and system output.
For the modeling of systems, several process modeling languages can be used. Considering organizational, behavior-oriented, informational and knowledge-oriented perspectives, Sultanow et al. (2012) identify the Knowledge Modeling Description Language (short: KMDL) as superior in a comparison of thirteen common modeling approaches.
Because of the analogy with the human brain as a knowledge processing unit, knowledge process modeling in particular is focused here. Remus (2002) gives an overview of existing modeling methods and a comparison of their ability to represent knowledge; ARIS, EULE2, FORWISS, INCOME, PROMOTE and WORKWARE are only some representatives. Again, the KMDL can be identified as superior because of its ability to overcome shortcomings in visualization and analysis through the combination of several views such as the process view, activity view and communication view (Gronau and Maasdorp, 2016).
This language has been developed iteratively over more than ten years. Experience has been collected in numerous projects in application areas such as software engineering, product development, quality assurance and capital goods sales; the evolution of the KMDL can be traced in (Gronau, 2012). Currently, the development of a third version is in progress (Gronau et al., 2016b). In addition to the modeling language, the KMDL provides a fully developed research method, which is described in detail by Gronau (2009).
With its strengths in visualization and its focus on knowledge generation, the KMDL seems attractive for a transfer to the neuronal level. To the best of our knowledge, such a transfer has not been realized for any other process modeling language yet. With its intention to focus on the generation of knowledge following (Nonaka and Takeuchi, 1995), and in order to transfer the learning potential of ANN, the KMDL enables the modeling of tacit knowledge bases and of single or numerous knowledge transfers besides common processing issues. Hence, the KMDL is selected as the modeling language for the demonstration in Section 5. The current paper builds on the widespread KMDL version 2.2 (Gronau and Maasdorp, 2016).
Once a valid process model has been created, the dynamic process can be simulated. The intention is to gain insights within a closed simulation system and to transfer them to reality. For this, the following preconditions have to be fulfilled: Process models have to be complete; this includes the registration of input data such as time, costs, participants, etc. Further, process models have to provide interpretability of decisions; here, values of variables, state change conditions and transfer probabilities are included. Further, meta information has to be considered, for example the number of process realizations within a simulation. Among further objectives, current sequences of operations as well as plans and process alternatives can then be evaluated quickly and at low cost, before expensive adjustments to current process models are carried out (Gronau, 2017).
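These preconditions can be summarized in a compact form. The following minimal sketch (in Python; all class and field names are illustrative assumptions, not part of any cited tool) bundles the inputs a process model would have to register before a simulation run:

```python
from dataclasses import dataclass

@dataclass
class SimulationPreconditions:
    """Inputs a process model has to register before simulation:
    completeness, interpretability of decisions, meta information."""
    # completeness: registered input data of the process
    time: float
    costs: float
    participants: list
    # interpretability of decisions
    variable_values: dict
    state_change_conditions: dict
    transfer_probabilities: dict
    # meta information, e.g. the number of process realizations per simulation
    process_realizations: int = 1

    def is_ready(self) -> bool:
        """A model qualifies for simulation only if all parts are provided."""
        return all([self.participants, self.variable_values,
                    self.state_change_conditions, self.transfer_probabilities])
```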
2.2 Knowledge Representation
Nonaka and Takeuchi distinguish between explicit knowledge and tacit knowledge (Nonaka and Takeuchi, 1995). While the first can be verbalized and externalized easily, the second is hard to detect. Building on this, the following four knowledge conversion types can be distinguished:
An internalization is the process of integrating explicit knowledge into tacit knowledge. Here, experiences and aptitudes are integrated into existing mental models.
A socialization is the process of experience exchange. Here, new tacit knowledge such as common mental models or technical abilities is created.
An externalization is the process of articulating tacit knowledge in explicit concepts. Here, metaphors, analogies or models can serve to verbalize tacit knowledge.
A combination is the process of connecting available explicit knowledge such that new explicit knowledge is created. Here, a reorganization, reconfiguration or restructuring can result in new explicit knowledge.
With the intention to focus on the potentials of the human brain and its generation of knowledge, the knowledge generation concepts of (Nonaka and Takeuchi, 1995) seem attractive for modeling on a neuronal level. Further, the KMDL, which has been selected for demonstration purposes, builds on them.
2.3 Neuronal Networks
Originally, neural networks were designed as mathematical models to copy the functionality of biological brains. First research was done by (Rosenblatt, 1963), (Rumelhart et al., 1986) and (McCulloch and Pitts, 1988). As the brain connects several nerve cells, so-called neurons, by synapses, those mathematical networks are composed of several nodes, which are related by weighted connections. As the real brain typically sends electrical activity as a series of sharp spikes, the mathematical activation of a node represents the average firing rate of these spikes.
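As a minimal sketch (in Python; the logistic nonlinearity is an assumption, one common choice among several), a node's activation can be computed as the weighted sum of its inputs passed through a squashing function:

```python
import numpy as np

def node_activation(inputs, weights, bias):
    """Activation of a single node: the weighted sum of its inputs,
    squashed by a nonlinearity; the result stands in for the average
    firing rate of a biological neuron."""
    net = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-net))  # logistic (sigmoid) activation

# Example: a node with three weighted incoming connections.
x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -1.2, 0.8])
print(node_activation(x, w, bias=0.1))
```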
As human brains show very complex structures and are confronted with different types of learning tasks (unsupervised, supervised and reinforcement learning), various kinds of network structures have been established, each of which has advantages for a certain learning task. There are, for example, Perceptrons (Rosenblatt, 1958), Hopfield Nets (Hopfield, 1982), Multilayer Perceptrons (Rumelhart et al., 1986), (Werbos, 1988), (Bishop, 1995), Radial Basis Function Networks (Broomhead and Lowe, 1988) and Kohonen maps (Kohonen, 1989). Networks containing cyclic connections are called feedback or recurrent networks.
The following focuses on Multilayer Perceptrons and recurrent networks confronted with supervised learning tasks. Here, input and output values are given, and learning can be carried out by minimizing a differentiable error function through adjusting the ANN's weighted connections. For this, numerous gradient descent methods can be used, such as backpropagation (Plaut et al., 1986) and (Bishop, 1995), RPROP (Riedmiller and Braun, 1993), quickprop (Fahlman, 1989), conjugate gradients (Hestenes and Stiefel, 1952), (Shewchuk, 1994), L-BFGS (Byrd et al., 1995), RTRL (Robinson and Fallside, 1987) and BPTT (Williams and Zipser, 1995). As each weight adjustment can be interpreted as a small step in an optimization direction, the fixed step size can be varied to reduce large errors quickly. Learning rate decay can be used to reduce small errors efficiently, and a momentum term can be introduced to avoid local optima. In this stepwise optimization, analogies to continuous process optimization can be found (see Section 2.1).
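This stepwise optimization can be illustrated with a short sketch (in Python; the quadratic toy error and all names are illustrative assumptions, not one of the cited methods): a plain gradient descent step, extended by learning rate decay and a momentum term:

```python
import numpy as np

target = np.array([0.5, -0.3, 0.8, 0.1])  # hypothetical optimum of the toy error

def error_gradient(w):
    """Gradient of a toy differentiable error E(w) = 0.5 * ||w - target||^2."""
    return w - target

def train_step(weights, velocity, gradient, lr, momentum=0.9):
    """One weight adjustment: a small step against the error gradient,
    smoothed by the previous step (momentum) to help avoid local optima."""
    velocity = momentum * velocity - lr * gradient
    return weights + velocity, velocity

w, v = np.zeros(4), np.zeros(4)
lr0, decay = 0.1, 0.01
for epoch in range(100):
    lr = lr0 / (1.0 + decay * epoch)  # decay: large steps first, small steps later
    w, v = train_step(w, v, error_gradient(w), lr)
print(w)  # approaches the target as the error is minimized
```

The same pattern of repeated small, directed adjustments is what the analogy to continuous process optimization rests on.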
Since neuronal networks model human brains and capture the knowledge of a certain learning task, the following refers to neuronal networks as neuronal knowledge models.
3 OBJECTIVES OF A NEURONAL
PROCESS MODELING
Assuming a given process model in whose simulation a neuronal network is to be considered as a process participant's knowledge model, the following objectives have to be considered from the modeling side:
1. Neuronal knowledge models have to be integrated
within existing process models.
2. The same neuronal knowledge models have to be
able to be integrated several times within a pro-
cess model.
3. Neuronal knowledge models have to be integrated
within process simulations.
4. Modeled environmental factors (material as well as non-material objects) have to be integrated with the considered knowledge models.
5. Outcomes (materialized as well as non-materialized) of the considered knowledge models have to be considered within the process model.
Further objectives have to be considered from the side of neuronal techniques:
6. Neuronal tasks have to be considered following the neurons' biological models.
7. Parallel neuronal task realizations have to be con-
sidered within neuronal networks.
8. Time-dependent neuronal behaviors have to be
considered within neuronal networks.
9. Sequential neuronal task realizations have to be considered within neuronal networks.
10. Different levels of neuronal task abstraction have to be considered in the neuronal process modeling and simulation.
11. Sensory information and knowledge flows have to
be considered within the modeled neuronal net-
work.
12. Actuator information and knowledge have to be
considered as outcomes of neuronal networks.
Each identified objective of those domains is rel-
evant for the transfer of a process modeling language
and serves as input for the following sections.
4 DESIGN OF A NEURONAL
PROCESS MODELING
The following gives definitions for the concept of neuronal modeling. Basic definitions are given first; definitions building on them follow.
Neuronal knowledge objects are defined to be neuronal patterns that evolve as currents over a certain period of time and cause a specific behavior of consecutive neurons. Those patterns can range from single time steps to long periods of time.
Neuronal information objects are defined to be neuronal currents that serve as interfaces from and to the environment, such as incoming sensory information and outgoing actuator information. Stored information is included here as well.
Considering those objects, a neuronal conversion is defined to be the transfer of neuronal input objects into neuronal output objects. In accordance with (Nonaka and Takeuchi, 1995), the following neuronal conversion types can be distinguished:
A neuronal internalization is the process of integrating explicit knowledge (neuronal information objects) into tacit knowledge. Here, experiences and aptitudes are integrated into existing mental models.
A neuronal socialization is the process of experience exchange between neurons within a closed ANN. Here, new tacit knowledge such as common mental models or technical abilities is created.
A neuronal externalization is the process of articulating tacit knowledge (neuronal knowledge objects) in explicit concepts (neuronal information objects). Here, patterns can serve to verbalize tacit knowledge.
A neuronal combination is the process of connecting available explicit knowledge (neuronal information objects) such that new explicit knowledge is created. Here, a reorganization, reconfiguration or restructuring can result in new explicit knowledge.
Neuronal input objects are defined to be sensory
information objects and knowledge objects.
Neuronal output objects are defined to be actuator
information objects and knowledge objects.
An atomic neuronal conversion is defined to be a
neuronal conversion considering only one input ob-
ject and only one output object.
Complex neuronal conversions are defined to be neuronal conversions considering at least three neuronal objects of one neuron. Pure complex neuronal conversions consider only one neuronal conversion type, while impure complex neuronal conversions consider several neuronal conversion types such that one is not able to distinguish them.
Abstract neuronal conversions are defined to be neuronal conversions considering neuronal objects of more than one transferring neuron such that one is not able to identify the participating neurons.
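To make these definitions concrete, the following minimal sketch (in Python; all class and method names are illustrative assumptions, not part of the KMDL) encodes the object and conversion types defined above:

```python
from dataclasses import dataclass
from enum import Enum

class ConversionType(Enum):
    INTERNALIZATION = "internalization"
    SOCIALIZATION = "socialization"
    EXTERNALIZATION = "externalization"
    COMBINATION = "combination"

@dataclass
class NeuronalObject:
    """A neuronal pattern (knowledge object) or current (information object)."""
    name: str
    tacit: bool  # True: knowledge object; False: information object

@dataclass
class NeuronalConversion:
    """Transfer of neuronal input objects into neuronal output objects."""
    types: set    # ConversionType members involved
    inputs: list  # NeuronalObject instances
    outputs: list
    neurons: set  # participating (transferring) neurons

    def is_atomic(self) -> bool:
        # exactly one input object and one output object
        return len(self.inputs) == 1 and len(self.outputs) == 1

    def is_complex(self) -> bool:
        # at least three neuronal objects of one neuron
        return len(self.inputs) + len(self.outputs) >= 3 and len(self.neurons) == 1

    def is_pure(self) -> bool:
        return len(self.types) == 1  # impure conversions mix several types

    def is_abstract(self) -> bool:
        # objects of more than one transferring neuron
        return len(self.neurons) > 1
```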
Altogether, those definitions are the basis for the transfer of a process modeling language to the neuronal level.
5 DEMONSTRATION OF THE
NEURONAL PROCESS
MODELING
The following subsections show the realization of the neuronal process modeling by means of the KMDL. For this, theoretical examples and corresponding process models are given that visualize the basic definitions. Then, practical examples follow.
5.1 Theoretical Examples
The definitions given in Section 4 are visualized in the following three theoretical examples. First, atomic knowledge conversions on a neuronal level can be found in Figure 1.
Figure 1: Atomic neuronal conversions.
In this figure, one can see a neuronal socialization on the top left, a neuronal externalization on the top right, a neuronal combination on the bottom right and a neuronal internalization on the bottom left. All of them are visualized in the activity view of the KMDL.
The entity of persons as process participants (yellow) was mapped to neurons that interact on a neuronal level. In consequence, the entity of tacit knowledge objects (purple) is connected to neurons. The entity of the conversion (green) was mapped to the activity of a neuron that generates new knowledge based on the transfer of its input objects. The environment as well as interaction possibilities with the environment are modeled with the entity of a database (white rectangle); further, neuronal information objects are stored within a database. In consequence, the shape of information objects (red) is connected to those databases.
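This reinterpretation of the KMDL entities can be summarized as a simple lookup table; the following sketch (in Python; the dictionary is an illustrative summary, not a KMDL artifact) restates the mapping described above:

```python
# Reinterpretation of KMDL activity view entities on the neuronal level.
KMDL_TO_NEURONAL = {
    "person (yellow)": "neuron interacting on a neuronal level",
    "tacit knowledge object (purple)": "neuronal knowledge object connected to a neuron",
    "conversion (green)": "activity of a neuron transferring its input objects",
    "database (white rectangle)": "interface to and from the environment",
    "information object (red)": "neuronal information object stored in a database",
}
```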
Second, complex neuronal conversions are visualized in Figure 2.
Figure 2: Complex neuronal conversions.
Again, in this figure, one can see a neuronal socialization on the top left, a neuronal externalization on the top right, a neuronal combination on the bottom right and a neuronal internalization on the bottom left. All of them are visualized in the activity view of the KMDL.
Following the KMDL, conversions of the activity view can be repeated without control flow. Hence, each neuron can develop several neuronal knowledge objects or neuronal information objects over time, and the modeled neuronal objects represent the identified current knowledge of a certain neuron. A strict sequence modeling can therefore be realized with the help of the listener concept or the process view.
Third, an abstract neuronal conversion can be found in Figure 3.
Figure 3: Abstract neuronal conversion.
In this figure, one can see several impure complex conversions, which is the reason for the black color of the visualized arrows, as the KMDL demands. Since more than one neuron (B1 and B2) is considered in this process model, an abstract level of neuronal conversions has been visualized.
5.2 Practical Examples
Using the basic definitions of a neuronal process modeling, their transfer to practical examples from industry is intended. The following gives four practical examples. All of them serve as fruitful domains to visualize neuronal modelings, simulations and optimizations.
A first example focuses on the organization of goods depots. Those can follow various strategies: for example, fixed places can be reserved for certain goods; alternatively, goods can be assigned an arbitrary place depending on the currently free spaces. Here, the human brain can serve as a biological inspiration for strategies to store memories and can thus optimize the depot organization of goods.
A second example focuses on production processes. Here, goods are not needed constantly. In the meantime, they can be stored in goods depots or storage areas. Once they are needed, they can be brought to the corresponding process step with the help of transportation elements (Gronau et al., 2016a); while they are not required, a transportation element pauses and buffers the currently unneeded goods. Alternatively, materials can be considered as just-in-time inventory, such that they do not have to be stored in expensive goods depots; here, the velocity of the transportation elements is adjusted depending on the production order. Analogies can be found in the human brain: like the storage of goods, the storage of memories can be organized, or vice versa. A short-term memory (current neuronal currents) deals with neuronal knowledge objects similarly to just-in-time inventory; here, neuronal knowledge objects are used at consecutive neurons as they are needed. Buffered goods are stored within long-term memories, similar to goods depots; here, currents are unlocked as they are needed within the current process.
A third example focuses on specializations of production machines. As production processes can be considered as a single process network, machines are part of them. Since machines can show high specializations, the organization of production processes can be inspired by the organization of the human brain. Here, certain areas are responsible for certain tasks and show high specializations as well: for example, the auditory cortex deals mainly with acoustic information, the visual cortex mainly with optical information, etc.
A fourth example focuses on outsourcing: the best choice for realizing an entire process model is not always to realize all process parts within one's own company. As parts can be outsourced to external parties, analogies can be found in the human brain as well. Here, speed-relevant actions can be initiated by reflexes. This is efficient since a full cognitive task processing would be too slow. As an example, one can imagine the start of a sprinkler system: in case of a fire, it would not be sensible to create action alternatives, but to start fighting the fire immediately, like a reflex.
6 EVALUATION
Faced with the demonstration artifacts of the previous section, the objectives of Section 3 have been considered as follows.
Objective 1 can be fulfilled as neuronal knowledge models are modeled within the activity view characterizing a certain person. Here, a decomposition raises the process model granularity of the selected activity and connects all neuronal process models with common process models. Since the common activity view characterizes a corresponding process task of the process view, neuronal knowledge models are integrated within existing process models. Since a neuronal network characterizes entities of persons, a trained neuronal network can be reused in any activity (objective 2). As neuronal knowledge models can be activated and can evolve over time, they can easily be integrated within discrete process simulations (objective 3). Environmental factors (material as well as non-material objects) modeled in a common activity view serve as the interface for the activity view on a neuronal level; hence, objective 4 and objective 5 are considered as well.
Further objectives have been considered from the side of neuronal techniques as follows: As learning with neuronal networks is not affected by the concepts presented here, neuronal tasks can follow the neurons' biological models (objective 6). A parallel neuronal task realization within neuronal networks has been considered (objective 7), as can be seen in Figure 2 (neuronal socialization and neuronal externalization) and Figure 3; here, at least two neurons realize a parallel task processing. Objective 8 can be met as soon as recurrent connections are considered within the neuronal process models; then, time-dependent neuronal behaviors are considered within neuronal networks. A sequential neuronal task realization within neuronal networks can be considered within the neuronal process modeling (objective 9), as the presented activity views characterize corresponding tasks of the process view. Since logical control-flow operators can be used here, a sequential neuronal task processing can be modeled easily. Further, a time-dependent behavior of a network modeled within the activity view can result in a sequential task processing. Objective 10 has been met, as can be seen in Figure 3: here, the task "Neuronal Perception of Neuron Group B" models the activity of Neuron B1 and Neuron B2 on an abstract level. Further, knowledge objects, information objects, neurons and databases can be grouped and visualized on an abstract level. Sensory information and knowledge flows can be considered within the modeled neuronal network (objective 11), as can be seen, for example, in Figure 1 and Figure 2. In both figures, possible sensory information flows can be seen at the bottom (neuronal internalization and neuronal combination), and possible knowledge flows can be seen at the top (neuronal socialization and neuronal externalization). Objective 12 can be met as follows: Actuator information and knowledge have been considered as outcomes of neuronal networks, as can be seen in Figure 1 and Figure 2. In both figures, possible actuator information flows can be seen on the right (neuronal externalization and neuronal combination), and possible knowledge flows can be seen on the left (neuronal socialization and neuronal internalization).
Considering the evaluation of the given objectives presented here, it becomes clear that an idea for every objective has been identified. This supports the viability of the transfer of the KMDL to the neuronal level, such that a neuronal process modeling, a neuronal process simulation and a neuronal process optimization can be built on that base.
7 CONCLUSIONS
In this paper, a visionary way to novel process optimization techniques has been drawn, and its base has been realized by means of the KMDL. The main contributions and scientific novelties are the following: Definitions of a neuronal process modeling, a neuronal process simulation and a neuronal process optimization have been created. Objectives for the transfer of a common process modeling language have been identified. Further, definitions for those concepts have been created, and a modeling language has been transferred to the neuronal world. This includes the reinterpretation of existing shapes of the KMDL. On that base, theoretical examples have been visualized by means of the KMDL. Further, analogies for the use of the concepts presented here in an industry context have been identified.
With this, the drawn transfer has been applied and demonstrated. Hence, the sub research question has been answered, and the following potentials are suitable next steps: The functioning of the previously presented concepts will be concretized; then, those concepts will be realized as quantitative neuronal process modelings, simulations and optimizations. Further, the comparison of the concepts presented here with traditional results is attractive as well. Still promising is the rebuilding of common process model optimizations by means of the concepts presented here.
The application of the concepts presented here is assumed to cause a fundamental increase in value. As simple and complex relations in different process perspectives like cost, time or stock can be considered, the prediction quality of process simulations is expected to improve strongly. Further, common optimization potentials can be estimated efficiently. Additionally, new optimization approaches and optimization potentials can be identified.
REFERENCES
Bishop, C. (1995). Neural Networks for Pattern Recognition. Oxford University Press, Inc.
Broomhead, D. and Lowe, D. (1988). Multivariate func-
tional interpolation and adaptive networks. Complex
Systems, 2:321–355.
Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. Y. (1995).
A limited memory algorithm for bound constrained
optimization. SIAM Journal on Scientific Computing,
16(6):1190–1208.
Chambers, M. and Mount-Campbell, C. (2000). Process optimization via neural network metamodeling. International Journal of Production Economics, 79:93–100.
Fahlman, S. (1989). Faster learning variations on back-propagation: An empirical study. In D. Touretzky, G. Hinton and T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, San Mateo, Morgan Kaufmann, pages 38–51.
Gronau, N. (2009). Process Oriented Management of Knowledge: Methods and Tools for the Employment of Knowledge as a Competitive Factor in Organizations (Wissen prozessorientiert managen: Methode und Werkzeuge für die Nutzung des Wettbewerbsfaktors Wissen in Unternehmen). Oldenbourg Verlag München.
Gronau, N. (2012). Modeling and Analyzing Knowledge Intensive Business Processes with KMDL - Comprehensive Insights into Theory and Practice. GITO mbH Verlag Berlin.
Gronau, N. (2016). Geschäftsprozessmanagement in Wirtschaft und Verwaltung. Gito.
Gronau, N. (2017). Geschäftsprozessmanagement in Wirtschaft und Verwaltung, volume 2. Gito.
Gronau, N., Grum, M., and Bender, B. (2016a). Determin-
ing the optimal level of autonomy in cyber-physical
production systems. Proceedings of the 14th Interna-
tional Conference on Industrial Informatics (INDIN).
Gronau, N. and Maasdorp, C. (2016). Modeling of organizational knowledge and information: analyzing knowledge-intensive business processes with KMDL. GITO mbH Verlag Berlin.
Gronau, N., Thiem, C., Ullrich, A., Vladova, G., and Weber, E. (2016b). Ein Vorschlag zur Modellierung von Wissen in wissensintensiven Geschäftsprozessen. Technical report, University of Potsdam, Department of Business Informatics, esp. Processes and Systems.
Hestenes, M. R. and Stiefel, E. (1952). Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49(6):409–436.
Hopfield, J. J. (1982). Neural networks and physical sys-
tems with emergent collective computational abilities.
PNAS, 79(8):2554–2558.
Kohonen, T. (1989). Self-Organization and Associative Memory. Springer-Verlag New York, Inc., New York, NY, USA, 3rd edition.
Krallmann, H., Frank, H., and Gronau, N. (2001). Sys-
temanalyse im Unternehmen. Oldenbourg Wis-
senschaftsverlag.
McCulloch, W. S. and Pitts, W. (1988). A logical calculus of the ideas immanent in nervous activity. MIT Press, Cambridge, MA, USA, pages 15–27.
Nonaka, I. and Takeuchi, H. (1995). The knowledge-
creating company: How Japanese companies create
the dynamics of innovation. Oxford university press.
Peffers, K., Tuunanen, T., Gengler, C. E., Rossi, M., Hui, W., Virtanen, V., and Bragge, J. (2006). The design science research process: A model for producing and presenting information systems research. 1st International Conference on Design Science in Information Systems and Technology (DESRIST), 24(3):83–106.
Peffers, K., Tuunanen, T., Rothenberger, M. A., and Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3):45–78.
Plaut, D. C., Nowlan, S. J., and Hinton, G. E. (1986). Experiments on learning by back propagation. Technical Report CMU-CS-86-126, Carnegie-Mellon University, Pittsburgh, PA.
Remus, U. (2002). Process-oriented knowledge manage-
ment. Design and modelling. PhD thesis, University
of Regensburg.
Riedmiller, M. and Braun, H. (1993). A direct adap-
tive method for faster backpropagation learning: The
RPROP algorithm. Proc. of the IEEE Intl. Conf. on
Neural Networks, San Francisco, CA, pages 586–591.
Robinson, A. J. and Fallside, F. (1987). The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department.
Rosenblatt, F. (1958). The perceptron: A probabilistic
model for information storage and organization in the
brain. Psychological Review, 65:386–408.
Rosenblatt, F. (1963). Principles of Neurodynamics. Spartan, New York.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. MIT Press, Cambridge, MA, USA, pages 318–362.
Schmidhuber, J. (2015). Deep learning in neural networks:
An overview. Neural Networks, 61:85 – 117.
Shewchuk, J. R. (1994). An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, Pittsburgh, PA, USA.
Sultanow, E., Zhou, X., Gronau, N., and Cox, S. (2012).
Modeling of processes, systems and knowledge: a
multi-dimensional comparison of 13 chosen meth-
ods. International Review on Computers and Software
(IRECOS), (6):3309–3319.
Werbos, P. J. (1988). Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1(4):339–356.
Williams, R. J. and Zipser, D. (1995). Gradient-based learning algorithms for recurrent networks and their computational complexity. In Y. Chauvin and D. E. Rumelhart, editors, Back-propagation: Theory, Architectures and Applications, pages 433–486. Lawrence Erlbaum Publishers, Hillsdale, NJ.