Traded Control Architecture for Automated Vehicles Enabled by the Scene Complexity Estimation

Juan Felipe Medina-Lee (https://orcid.org/0000-0003-4489-4280), Jorge Villagra (https://orcid.org/0000-0002-3963-7952) and Antonio Artuñedo (https://orcid.org/0000-0003-2161-9876)

Autopia Program, Centre for Automation and Robotics (CSIC), Ctra. M300 Campo Real, km 0.200, Arganda del Rey, Spain
Keywords:
Autonomous Driving System, Traded Control, Situational Awareness.
Abstract: A number of urban driving situations are still too challenging to be handled by an autonomous driving system (ADS), and an intervention from humans inside the vehicle may be necessary. In this work, a novel traded control architecture is proposed to enhance the operational domain of the ADS under the premise that vehicles and humans may need to adapt their cooperation level depending on the context. To that end, a complexity level is defined and computed in real time for each driving scene, and the roles of the ADS and the human operator are assigned accordingly. With this information in hand, the system alerts the human operator when their involvement level is lower than required or when a complex scene is detected.
1 INTRODUCTION
Different autonomous driving systems (ADS) have
been introduced over the last few years. Although these systems have shown benefits in terms of safety and comfort,
they still need intervention from the human operator
to handle all possible situations. The trading of con-
trol is suitable for situations when the actor that has
the authority over the vehicle (ADS or human opera-
tor) is not able to handle the situation anymore (Huang et al., 2019). This responsibility shift
may lead to wrong and even unsafe behaviors if the
situation awareness of the human operator is not high
enough during the handover situation (Drexler et al.,
2020).
Contrarily to shared control, traded control refers
to a scheme where a specific task is entirely per-
formed by a unique agent, either human alone or au-
tomation alone (Inagaki, 2003). For trading of control
to be implemented, it is necessary to decide when the
control must be handed over and to which agent; who
makes the decision on the authority arbitration is also
important (Muslim and Itoh, 2019). This still remains
one of the greatest challenges for assistive technolo-
gies in automobiles (Inagaki and Sheridan, 2018).
Mutual understanding is the essence of traded control systems. As a result, the ADS shall be able to
perceive the driver status in order to perform different
driving tasks and make safer decisions, and the human
operator must easily understand the goals and capa-
bilities of the ADS (Muslim and Itoh, 2019). This
is addressed in (Lindemann et al., 2018) by imple-
menting an augmented-reality windshield display to
increase the situation awareness of the human opera-
tor by showing him/her the ADS status. In (Sonoda
and Wada, 2017) the authors use vibro-tactile devices
that enable the human operator to predict or perceive
actions selected by the ADS, increasing also the sit-
uation awareness and the trust in the automated deci-
sions.
Human-machine interaction has been addressed
with a wider scope in some recent EU-funded re-
search projects. (AutoMate-project, 2019) focuses
on driver-automation interaction and communication
with other vehicles for high levels of driving automa-
tion. In this context, different degrees of coopera-
tion are introduced to achieve a successful human-
machine interaction. In contrast, (Vi-DAS-project,
2019) focuses on the development of intuitive HMI to
warn and assist the driver in anticipating potentially
critical events by applying the latest advances in sen-
sors, data fusion and machine learning. Moreover,
(ADAS&ME-project, 2019) addresses the transition
between different levels of driving automation, con-
sidering the driver state with regard to its attention, vi-
sual/cognitive distraction, stress, workload, emotions,
sleepiness and fainting.
This work, framed in the EU-funded PRYSTINE
project (Druml et al., 2019), explores an alternative
view where vehicles and humans may need to adapt
their cooperation level depending on the context. To
that end, it defines and assigns a Complexity Level
(CL) to each driving scene in real time and defines
the role of the ADS and the human operator accord-
ingly. The CL of the scene depends on the num-
ber and quality of the trajectory candidates generated
by the ADS, which is significantly different when driving into a highly occupied roundabout than when navigating on a highway at off-peak hours. When the
CL decreases, the proposed ADS changes the level
of driving automation accordingly, and can handle
more driving tasks without human intervention. Nev-
ertheless, the human operator must be prepared for
an eventual system-to-human transition of control to
avoid undesirable consequences (Biondi et al., 2019);
for that reason, in this work, a driving monitoring sys-
tem (DMS) is constantly estimating the involvement
level of the human operator. With this information in
hand, the ADS may generate a warning when the involvement of the human is lower than recommended, so that the situation awareness is kept at safe levels.
The remainder of the paper is organized as follows: Section 2 presents a review of the ADS implemented by the Autopia Program. Section 3 describes the traded control architecture proposed in this work. Finally, Section 4 analyses the results in a simulated urban scenario.
2 AUTOMATED DRIVING
ARCHITECTURE
The traded control system is embedded in the Au-
topia program’s autonomous driving architecture (Ar-
tunedo, 2019). This section reviews the main components of that architecture in order to provide the reader with the necessary background for the remainder of
the paper. The functional diagram of Figure 1 depicts
the main components of the autonomous driving ar-
chitecture.
The ADS interacts with both the ego-vehicle and
the human operator. The interaction with the ego-
vehicle consists of reading the sensors and handling
the actuators in order to conduct the driving process
in a safe and comfortable way. The system interacts
with the human operator using an on-board HMI and
a Driving Monitoring System (DMS) that estimates the operator's drowsiness level.
The perception and motion prediction module
merges information from different ego-vehicle sen-
sors like LiDARs, GPS and proprioceptive sensors
into a dynamic occupancy grid that stores occupied
space and objects’ dynamic information. This mod-
Figure 1: Autonomous driving architecture for Autopia Program. (Modules: perception and motion prediction, manoeuvre planner, trajectory generator, supervisor, chauffeur, HMI and DMS, hosted on the on-board computing system.)
ule also applies signal processing techniques to ob-
tain a reliable localization estimation. Finally, it pre-
dicts the traffic agents motion by considering their
inter-dependent behaviours in a probabilistic frame-
work (Villagra et al., 2020).
The trajectory generator module creates a set of
safe and comfort-optimized trajectories for the cur-
rent traffic scene. The path candidates are generated
using Bézier curves and the speed profiles are com-
puted taking into account the reachable states of dy-
namic obstacles, the traffic regulations and comfort
parameters. A complete description of the trajec-
tory generation module is presented in (Medina-Lee
et al., 2020). The quality of each candidate is quanti-
fied using four decision variables: longitudinal com-
fort, lateral comfort, safety and utility. Each deci-
sion variable is obtained by combining a set of Trajec-
tory Performance Indicators (TPI) such as accelerations, jerks, closeness to obstacles and smoothness, among others. The TPI are normalized scalar values from 0 to 1,
summarized in Table 1.
Each decision variable $DV_{(i,k)}$, $k \in [1,4]$, for a candidate $i$ is computed by combining a number $T_k$ of TPI, listed in Table 1, using a weighted geometric mean as follows:

$$DV_{(i,k)} = \sqrt[T_k]{\prod_{j=1}^{T_k} f\left(TPI_{(i,j)}, \omega_j\right)}, \quad k = 1 \ldots 4 \qquad (1)$$

where $\omega_j$ are normalized values between 0 and 1.
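For illustration, Eq. (1) can be sketched in Python as below. The weighting function $f$ is not detailed in this excerpt, so the sketch assumes the common choice f(x, w) = x**w; the sample values are illustrative only:

    def decision_variable(tpi_values, weights):
        # Weighted geometric mean of Eq. (1): the T_k-th root of the
        # product of f(TPI_(i,j), omega_j) over the T_k indicators.
        # f(x, w) = x**w is an assumed form; the paper does not fix f.
        t_k = len(tpi_values)
        product = 1.0
        for x, w in zip(tpi_values, weights):
            product *= x ** w
        return product ** (1.0 / t_k)

    # Illustrative call: longitudinal comfort from its four TPI (Table 1)
    dv_long_comfort = decision_variable([0.8, 0.7, 0.9, 0.6],
                                        [1.0, 1.0, 0.5, 0.5])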
The Manoeuvre planner module selects the
best set of navigable corridors for the ego-vehicle
(Medina-Lee et al., 2020). It uses traffic information,
obstacles on the scene and global route data to decide
which of the available corridors are the most pertinent
for the trajectory generator module. This hierarchical
architecture allows the ADS to execute strategic ma-
noeuvres like overtaking or adjusting the global route
when a lane is blocked.
Table 1: Trajectory Performance Indicators and Decision Variables of trajectory candidates.

Longit. Comfort: Long. Accel. Avg., Long. Accel. Max., Long. Jerk Avg., Long. Jerk Max.
Lateral Comfort: Lat. Accel. Avg., Lat. Accel. Max., Lat. Jerk Avg., Lat. Jerk Max., Smoothness.
Safety: Free Ride, Safe Chase, Closeness, Lane Invasion.
Utility: Avg. Speed, Path Length, Obstacle Free.
The supervisor module is in charge of three main
tasks, as depicted in Figure 2. The selection of the
best trajectory candidate is performed by selecting the
candidate that maximizes a merit function that com-
bines the four decision variables. The traded control
task decides the CL of the current scene and estab-
lishes the Level of Driving Automation (LoDA) for
the ADS and the Required Involvement Level (RIL)
for the human operator in the driving process. The
HMI is used to display the current trajectory, impor-
tant warnings, the required and current involvement
levels and the ego-vehicle status. The human opera-
tor can use it to change the driving mode (manual or
automated), to select the destination point and to per-
form a safe-stop manoeuvre. The traded control task
is described in detail in the next section.
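As a sketch of the best-candidate selection, the snippet below assumes a weighted sum as the merit function; the paper only states that a merit function combining the four decision variables is maximized, so the weights and structure here are placeholders:

    def best_candidate(candidates, w=(0.25, 0.25, 0.25, 0.25)):
        # candidates: mapping candidate id -> (DV_1, DV_2, DV_3, DV_4).
        # The merit function is assumed to be a weighted sum of the four
        # decision variables (an illustrative choice, not the paper's).
        merit = lambda dvs: sum(wi * dvi for wi, dvi in zip(w, dvs))
        return max(candidates, key=lambda c: merit(candidates[c]))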
Figure 2: Supervisor functional diagram. (Tasks: selection of the best trajectory, traded control, and interaction with the human driver; the outputs depend on whether the human involvement is low, medium or high.)
The chauffeur module is the low-level control for
the vehicle. It receives the best trajectory from the su-
pervisor and generates the necessary steering wheel,
throttle and brake commands for the ego-vehicle to
follow that trajectory. This control system has been
already tested in a real car with good performance (Artunedo et al., 2017).
3 TRADED CONTROL
ARCHITECTURE
The traded control module determines the CL of the
current driving scene, the LoDA and, ultimately, the
proper RIL for the human operator. If the traffic
scene is too complex, this module recommends that the human operator take over the wheel and pedals, or it performs a safe-stop manoeuvre if the human is
completely disengaged. The block diagram of Figure
3 depicts the different stages in this process, whose
main components are detailed below.
Figure 3: Traded control architecture. (Stages: suitable candidates → scene complexity → ego-vehicle automation level → human involvement level, with the HMI and the DMS as interfaces to the human operator.)
3.1 Scene Complexity Level
The CL of the scene is estimated based on the qual-
ity of the candidate trajectory set. To that end, the
concept of candidate suitability $S_i$ is proposed in this
work. A candidate is considered suitable if its deci-
sion variables are greater than predetermined thresh-
olds, as shown below:
$$S_i = \begin{cases} 1 & \text{if } DV_{(i,k)} > thd_k \;\, \forall k \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
The percentage of suitable candidates is computed
as the ratio between the number of suitable candi-
dates and the number of valid candidates. A candi-
date is considered valid if it meets three requirements:
the maximum curvature is feasible for the vehicle, it
is collision-free and it fully remains inside the navi-
gable space. This percentage of suitable candidates
is used as an input of a finite state machine (FSM)
to determine the CL of the scene (see Figure 4). In
this work, four complexity levels are proposed: Com-
plex, Medium, Simple and Basic. Once a CL state is
reached, it is not possible to change to another state
for a period of time, which is a configurable parame-
ter of the ADS.
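A minimal sketch of the suitability test of Eq. (2) and of the percentage computation is given below; the threshold values are illustrative, and valid candidates are assumed to be pre-filtered for feasible curvature, collision-freedom and containment in the navigable space:

    def suitability(dvs, thresholds):
        # Eq. (2): suitable when every decision variable exceeds its
        # predetermined threshold.
        return 1 if all(dv > thd for dv, thd in zip(dvs, thresholds)) else 0

    def suitable_percentage(valid_candidates, thresholds=(0.3, 0.3, 0.5, 0.4)):
        # Ratio of suitable to valid candidates, as a percentage (suit(%)).
        # Threshold values are illustrative, not from the paper.
        if not valid_candidates:
            return 0.0
        n_suit = sum(suitability(dvs, thresholds) for dvs in valid_candidates)
        return 100.0 * n_suit / len(valid_candidates)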
Figure 4: FSM for scene complexity level. (States: Basic, Simple, Medium and Complex. Transitions fire when the percentage of suitable candidates suit(%) crosses the configured thresholds: Simple → Medium when suit(%) < thd_medium, Medium → Simple when suit(%) > thd_simple, Medium → Complex when suit(%) < thd_complex, Complex → Simple when suit(%) > thd_simple, Simple → Basic when suit(%) > thd_basic, and Basic → Simple when suit(%) < thd_simple.)
Note that once the Complex state is reached, it is maintained until the scene is considered Simple, so the behaviour of the FSM is more stable in complex scenarios.
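These transitions, together with the dwell-time constraint mentioned above, can be sketched as follows; the threshold and dwell-time values are illustrative configuration parameters, not those of the actual system:

    import time

    THD = {"basic": 90.0, "simple": 70.0, "medium": 40.0, "complex": 15.0}
    DWELL_S = 2.0  # minimum time between transitions (configurable)

    class ComplexityFSM:
        def __init__(self):
            self.state = "Simple"
            self._last_change = float("-inf")

        def update(self, suit_pct, now=None):
            now = time.monotonic() if now is None else now
            if now - self._last_change < DWELL_S:
                return self.state  # hold the state for the dwell period
            nxt = self.state
            if self.state == "Basic" and suit_pct < THD["simple"]:
                nxt = "Simple"
            elif self.state == "Simple":
                if suit_pct > THD["basic"]:
                    nxt = "Basic"
                elif suit_pct < THD["medium"]:
                    nxt = "Medium"
            elif self.state == "Medium":
                if suit_pct > THD["simple"]:
                    nxt = "Simple"
                elif suit_pct < THD["complex"]:
                    nxt = "Complex"
            elif self.state == "Complex" and suit_pct > THD["simple"]:
                nxt = "Simple"  # Complex is left only towards Simple
            if nxt != self.state:
                self.state, self._last_change = nxt, now
            return self.state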
3.2 Levels of Driving Automation
In this work, the Society of Automotive Engineers
(SAE) J3016 standard (SAE International, 2016) is
used as a starting point to define the levels of driving
automation (LoDA). The LoDA 1 of the standard was
not implemented because a driver assistance system
is out of the scope of this project. Table 2 presents
a description of the LoDA implemented in this archi-
tecture.
The proper LoDA for the ego-vehicle is automat-
ically determined by the system using the FSM de-
picted in Figure 5. The transitions between states de-
pend on the CL, on the reactivity of the human opera-
tor to the HMI requests and on the involvement level
measured by the DMS. A state named safe-stop is proposed to handle critical situations when the human operator is not involved at all; this state can only be reached from LoDA 4. Once the safe-
stop state is reached, the speed profiles for the tra-
jectories apply a constant deceleration until the ego-
vehicle gets to 0 km/h. This state is maintained until
the human operator uses the HMI to resume the au-
tonomous driving or take over the wheel.
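The safe-stop speed profile can be sketched as a constant deceleration down to standstill; the deceleration value below is an assumption, as the paper only states that the deceleration is constant:

    def safe_stop_profile(v0_mps, decel_mps2=1.5, dt=0.1):
        # Sample (time, speed) pairs from the current speed down to 0 m/s
        # under a constant deceleration (value assumed for illustration).
        profile, t, v = [], 0.0, v0_mps
        while v > 0.0:
            profile.append((t, v))
            t += dt
            v = max(0.0, v - decel_mps2 * dt)
        profile.append((t, 0.0))
        return profile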
Figure 5: FSM for automation level. (States: LoDA 0, LoDA 2, LoDA 3, LoDA 4 and Safe-stop. Transitions are driven by the scene complexity (SCN = Basic, Simple or Complex), by HMI confirmations from the human operator, and by the involvement level measured by the DMS; in particular, Safe-stop is reached from LoDA 4 when SCN = Complex and the measured involvement is NONE, and it is left upon an HMI confirmation.)
3.3 Required Involvement Level
Once the CL and the LoDA are established, both vari-
ables are combined to propose an involvement level to
the human operator, which can take 3 different values
(none, medium, high). To that end, the correspon-
dence matrix presented in Table 3 is used.
Note that the higher the LoDA, the lower the involvement requested from the human operator. In
LoDA 4, no involvement is required from the human
operator at all.
Finally, if the involvement level estimated by the
DMS is lower than the one requested, an alarm will
be prompted in the HMI.
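The correspondence of Table 3 and the alarm rule can be sketched as a simple lookup (a hypothetical snippet; the involvement levels are ordered none < medium < high):

    RIL_TABLE = {  # (CL, LoDA) -> required involvement level (Table 3)
        ("Basic", 2): "medium",  ("Basic", 3): "medium",  ("Basic", 4): "none",
        ("Simple", 2): "high",   ("Simple", 3): "medium", ("Simple", 4): "none",
        ("Medium", 2): "high",   ("Medium", 3): "medium", ("Medium", 4): "none",
        ("Complex", 2): "high",  ("Complex", 3): "high",  ("Complex", 4): "none",
    }
    LEVEL = {"none": 0, "medium": 1, "high": 2}

    def hmi_alarm(cl, loda, dms_involvement):
        # An alarm is raised when the involvement measured by the DMS is
        # lower than the RIL for the current scene and automation level.
        return LEVEL[dms_involvement] < LEVEL[RIL_TABLE[(cl, loda)]]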
4 EXPERIMENTAL RESULTS
The performance of the ADS was evaluated in an ur-
ban scenario with roundabouts and crossings in a re-
alistic simulation environment using SCANeR Studio
1.9 software (AVSimulation, 2019).
4.1 Experiment Description
In the setup of the experiment, the ego-vehicle will
first encounter a four-way intersection with two in-
coming and two outgoing vehicles (Figure 6(a)); it
will then find a roundabout with two vehicles inside
(Figure 6(b)) and, at the end, it will face a traffic-free
Table 2: LoDA implemented for the traded control task.

LoDA 0: Manual mode. The human operator is in charge of everything; the ADS only displays warnings on the HMI.
LoDA 2: Automated mode with conservative driving parameters: safer candidates are chosen rather than risky ones. The ADS suggests that the human operator take over the wheel if the scene becomes complex. The RIL is high.
LoDA 3: Automated mode with normal driving parameters. The ADS may suggest that the human operator take over the wheel if the scene becomes complex, but it can also change to the highest LoDA if the scene has a basic complexity level. The RIL is medium.
LoDA 4: Automated mode with normal driving parameters. The ADS may suggest that the human operator take over the wheel if the scene becomes complex, but if the driver is unable to do so, a safe-stop manoeuvre is performed. No involvement is required from the human operator to handle any situation.
Table 3: RIL according to the CL and LoDA.

CL        LoDA 2   LoDA 3   LoDA 4
Basic     Medium   Medium   None
Simple    High     Medium   None
Medium    High     Medium   None
Complex   High     High     None
roundabout (Figure 6(c)). All scenarios will be han-
dled using automated modes (LoDA 2-4). In the pro-
posed experiment, the ego-vehicle will face complex
scenarios in a shorter period of time than it would in
real-life driving.
The upper section of Figure 6 depicts a bird’s eye
view of the relevant scenes which includes the mo-
tion prediction of the traffic agents. The lower section
of the figure shows the simulation environment for the
same scenes.
4.2 Automated Driving Results
Figure 7 shows the path followed by the ego-vehicle
after the experiment. The colors on the path indicate
the RIL along the journey. Red is assigned to high RIL, yellow to medium RIL and green to no RIL at all. The figure also high-
lights the location of the stop-lines crossed by the ego-
vehicle.
Figure 7 shows a high RIL when the ego-vehicle was approaching or crossing critical scenarios, and a medium RIL on the road segment between the intersection and the first roundabout. Finally, in the last
section of the journey, the RIL was lower because
there was no traffic. At the end of the experiment,
when the ego-vehicle reached a highway scene, the
RIL was none.
In the case of the four-way intersection, a complete involvement from the human operator was required 13.6 s ($t_{int}$ in Figure 8(a)) before reaching the stop-line; in the case of the first roundabout, this lead time was 12.29 s ($t_{rdt}$ in Figure 8(a)). According to (Naujoks and Neukum, 2013), the traded control had an acceptable performance, since the estimated time for average humans to take over is between 6 s and 10 s.
Figure 8 plots the data of the traded control mod-
ule during the experiment. This data includes: (a)
Level of Driving Automation, (b) scene Complex-
ity Level, (c) Requested Involvement Level and (d)
measured involvement level of the human operator.
The dotted vertical lines represent the stop-lines high-
lighted in Figure 6. Table 4 shows the numeric equiva-
lencies of Figure 8 for each traded control variable.
Table 4: Numeric equivalencies for the values of the traded control variables.

CL:  Basic = 0, Simple = 1, Medium = 2, Complex = 3.
RIL: None = 0, Medium = 1, High = 2.
The ADS determines a LoDA 2 for crossing the
intersection and the first roundabout. The second
roundabout is handled with a LoDA 3 due to the ab-
sence of traffic. LoDA 4 is established in the final
highway. The CL is increased in the difficult scenar-
ios and decreased when no traffic or straight segments
are faced, as expected. The RIL is high when ap-
proaching critical scenarios, so that the response time
from the human operator can be reduced, if needed.
The DMS data was artificially generated during the
experiment in order to evaluate the system perfor-
mance. The red shaded areas in Figure 8(d) represent the moments when the HMI displayed an alarm because the
Figure 6: Experiment setup in the simulation environment: (a) four-way intersection, (b) roundabout with traffic, (c) traffic-free roundabout.
Figure 7: Complete trajectory followed by the autonomous vehicle, with the stop-lines of the intersection and of both roundabouts highlighted.
involvement of the human operator was lower than the
RIL. Figure 9 displays the HMI output at the entrance of the first roundabout of the simulation.
5 CONCLUSIONS
A novel traded control approach is presented, where
the level of automation and the required involvement level of the driver are automatically determined from an estimation of the driving scene complexity level and of the driver drowsiness.
The proposed architecture interacts with the hu-
man operator using an HMI that displays warnings,
RIL, and planning decisions. The human operator can
also use the HMI to change the driving mode or to
Figure 8: Traded control data for the proposed experiment: (a) level of driving automation, (b) scene complexity, (c) requested involvement level and (d) involvement measured by the DMS over the course of the run; $t_{int}$ and $t_{rdt}$ mark the take-over lead times.
Figure 9: HMI warning due to low attention from the human
operator.
perform a safe-stop manoeuvre.
The implemented ADS was validated in a simulated urban scenario, where it was able to require
higher involvement levels from the human when the scenes were more critical (approaching an intersection or a roundabout with traffic), with an average lead time of 13 s.
Future work will focus on the evaluation of the ap-
proach on a real vehicle on open roads. To that end,
the suitable candidates generation will be refined and
adapted to the context, so that a higher number of op-
erational domains can be handled and the user expe-
rience can be enhanced.
ACKNOWLEDGEMENTS
This work has been partially funded by the Spanish
Ministry of Science, Innovation and Universities with
National Project COGDRIVE (DPI2017-86915-C3-1-R), the Community of Madrid through the SEGVAUTO
4.0-CM (S2018-EMT-4362) Programme, and by the
European Commission and ECSEL Joint Undertaking
through the Projects PRYSTINE (No. 783190) and
SECREDAS (No. 783119).
REFERENCES
ADAS&ME-project (2016-2019). Grant agreement id:
688900, h2020-eu.3.4. http://www.adasandme.com/.
Artunedo, A. (2019). Decision-Making Strategies for Automated Driving in Urban Environments. Doctoral thesis, Universidad Politécnica de Madrid.
Artunedo, A., Godoy, J., and Villagra, J. (2017). Smooth
path planning for urban autonomous driving using
OpenStreetMaps. In IEEE Intelligent Vehicles Symposium (IV), pages 837–842.
AutoMate-project (2016-2019). Grant agreement id:
690705, h2020-eu.3.4. http://www.automate-
project.eu/.
AVSimulation (2019). SCANeR Studio.
https://www.avsimulation.fr/solutions/#studio.
Biondi, F., Alvarez, I., and Jeong, K. A. (2019). Hu-
man–Vehicle Cooperation in Automated Driving: A
Multidisciplinary Review and Appraisal. Inter-
national Journal of Human-Computer Interaction,
35(11):932–946.
Huang, C., Naghdy, F., and Du, H. (2019). Review on human-machine shared control system of automated vehicles. 6(1):5–10.
Drexler, D. A., Takacs, A., Nagy, T. D., Galambos, P.,
Rudas, I. J., and Haidegger, T. (2020). Situation
Awareness and System Trust Affecting Handover Pro-
cesses in Self-Driving Cars up to Level 3 Autonomy.
pages 179–184.
Druml, N., Veledar, O., Macher, G., Stettinger, G., Selim,
S., Reckenzaun, J., Diaz, S. E., Marcano, M., Villagra,
J., Beekelaar, R., Jany-Luig, J., Corredoira, M. M.,
Burgio, P., Ballato, C., Debaillie, B., van Meurs, L.,
Terechko, A., Tango, F., Ryabokon, A., Anghel, A.,
Icoglu, O., Kumar, S. S., and Dimitrakopoulos, G.
(2019). Prystine - technical progress after year 1. In
2019 22nd Euromicro Conference on Digital System
Design (DSD), pages 389–398.
Inagaki, T. (2003). Adaptive automation: Sharing and trading of control. In Handbook of Cognitive Task Design, chapter 8, pages 147–169.
Inagaki, T. and Sheridan, T. B. (2018). A critique of the
SAE conditional driving automation definition, and
analyses of options for improvement. Cognition, Tech-
nology and Work.
Lindemann, P., Lee, T. Y., and Rigoll, G. (2018). Support-
ing Driver Situation Awareness for Autonomous Ur-
ban Driving with an Augmented-Reality Windshield
Display. Adjunct Proceedings - 2018 IEEE Interna-
tional Symposium on Mixed and Augmented Reality,
ISMAR-Adjunct 2018, pages 358–363.
Medina-Lee, J. F., Artuñedo, A., Godoy, J., and Villagra, J. (2020). Reachability estimation in dynamic driving scenes for autonomous vehicles. In 2020 IEEE Intelligent Vehicles Symposium (IV).
Muslim, H. and Itoh, M. (2019). A theoretical framework
for designing human-centered automotive automation
systems. Cognition, Technology and Work, 21(4):685–
697.
Naujoks, F. and Neukum, A. (2013). Timing of in-vehicle
advisory warnings based on cooperative perception.
SAE International (2016). Taxonomy and Definitions for
Terms Related to Driving Automation Systems for
On-Road Motor Vehicles (Surface Vehicle Recom-
mended Practice: Superseding J3016 Sep 2016). Tech-
nical report.
Sonoda, K. and Wada, T. (2017). Displaying System Situ-
ation Awareness Increases Driver Trust in Automated
Driving. IEEE Transactions on Intelligent Vehicles,
2(3):185–193.
Vi-DAS-project (2016-2019). Grant agreement id: 690772,
h2020-eu.3.4. http://vi-das.eu/.
Villagra, J., Artunedo, A., Trentin, V., and Godoy, J. (2020).
Interaction-aware risk assessment: focus on the lat-
eral intention. In 2020 IEEE 3rd Connected and Au-
tomated Vehicles Symposium (CAVS). IEEE.