Robots Humanize Care
Moral Concerns Versus Witnessed Benefits for the Elderly

Margo A. M. van Kemenade¹, Elly A. Konijn¹,² and Johan F. Hoorn¹
¹Center for Advanced Media Research Amsterdam, VU University Amsterdam, De Boelelaan 1081, Amsterdam, Netherlands
²Department of Communication Science, Media Psychology, VU University Amsterdam, Amsterdam, Netherlands
Keywords: Healthcare Robots, Moral Concerns, Professional Caregivers, Ageing, Affective Bonding.
Abstract: Europe is ageing more rapidly than many realize: In about 10 years, one fifth of the population will be 65+, with a further increase of 70% in the next 25 years. At the same time, healthcare is under extreme
pressure due to budget cuts, limited resources and personnel together with increased demands. Robots may
fulfill important tasks in this respect. Our research focuses on social robots to support tasks requiring
interpersonal communication. Many moral concerns and objections are raised, however, in particular among
care professionals. To examine the issue, we report on 1) a qualitative study among professional caregivers
and 2) a documentary portraying healthy elderly meeting Hanson’s Robokind “Alice”. Alice is under development in our lab, where we are equipping her with the ability to respond emotionally. The results show that the moral concerns are not in line with the benefits that social robots appear to have for the lonely elderly.
Our conclusion posits that new robot technology may not dehumanize care but rather may bring humanness
back into professional health care.
1 INTRODUCTION
Due to improved healthcare and medication, people
today live longer than ever before with a steep
increase in elderly people as a result. At the same
time, family size in Europe is decreasing resulting in
a skewed age distribution in society. Within 10 years
from now, one fifth of the European population will
be 65 years or older. The number of people aged
over 65 in Europe will show a further increase of
70% by 2050. The number of elderly aged 80 years and older will even grow by 170% (European Commission, n.d.). Such increased demands, together with severe budget cuts and limited personnel and resources, put healthcare under extreme pressure.
One solution is that the elderly stay at home longer, a development endorsed by 91% of European citizens, who consider it positive (European Commission, 2007). However, even if living healthily in one’s own environment is the preferred situation, it is often not attainable without help. At the same time, the admission requirements for nursing homes have become quite strict (Zorg voor Beter, 2014). Consequently,
demands on the caregivers are increasing as well.
Much of the burden is carried by informal caregivers such as family and relatives. This is unequivocally a stressful task, particularly in cases of dementia and Alzheimer’s disease (Andrén and Elmståhl, 2005).
The elderly themselves prefer help from a
professional caregiver rather than their relatives
because they consider professionals more accurate
(MarketingCharts, 2007). Moreover, they do not
want to burden their relatives with the responsibility
of taking care of them (DBMI, 2012). In all, this situation raises the somewhat colloquial but nevertheless pressing question: “Who takes care of grandma?”
Another, more controversial, solution may be to let a care robot ‘take care of grandma.’ Which tasks should a robot be allowed to fulfil to compensate for the limited availability of personnel? To what extent can robots be considered adequate or acceptable when replacing human compassion with simulated empathy? Many moral concerns and objections are raised among the public at large and care professionals in particular, which motivated us to conduct a qualitative study among the latter group. In addition, we report on a documentary that registered the interactions of several old ladies with Hanson’s Robokind “Alice,” a robot under development in our lab,
where we are in the process of equipping her with
emotion-regulation software. This footage
illuminates what a social robot can do – even if its
performance is not impeccable. Before reporting these results and observations, we first provide some background.
2 MORAL CONCERNS ABOUT ROBOTS IN HEALTHCARE
Research thus far has indicated that the implementation of so-called healthcare robots or “Caredroids” could provide part of a solution to assure the quality of healthcare (Asaro, 2006). A healthcare robot is a
robot designed for care purposes to fill the gap
between the need and supply of healthcare,
particularly when human involvement is missing. A
healthcare robot should not be seen as a
conventional machine, but as an agent that can make
independent decisions and execute specialized and
assigned tasks with little or no assistance
(Dautenhahn, 2007). Given the current lack of standards for the use of healthcare technology, it is imperative that the ethical implications of its use be taken into account at an early stage, as acknowledged by the International Organization for Standardization (Klein Wolterink, 2013).
To date, little attention has been paid to the
attitude of caregivers towards healthcare robots
(Vallor, 2013), although they are one of the most
important stakeholder groups. Caregivers often experience an intuitive aversion to handing over care tasks to robots. The initial assumption of our study was that healthcare providers can generally tell the difference between good and bad care and have very detailed knowledge about their relationships with their patients (Leget, 2012, p. 126). We wished to understand the objections and concerns of professional caregivers regarding robotized care to decide whether robots are an option at all and, if so, what should be changed in currently available robot systems to satisfy one of the prime user groups: the care professionals.
To bring some focus to our investigations, we
discussed various aspects of ethical issues in medical
settings and applied them to robotic care. We started from Beauchamp and Childress (2009), who define four ethical principles
in medicine. They are 1) beneficence, where
caregivers should act in the best interest of the
patient, 2) non-maleficence, which is a doctrine
saying that “before all else, do not harm,” in this
case, the patient, 3) autonomy, which is the capacity
of the patient to make an informed, un-coerced
decision about care, and 4) justice, pertaining to a
fair distribution of scarce resources such as medicine
or attention (Beauchamp and Childress, 2009 p.17).
In application, care professionals may feel that robots cannot act according to the principle of beneficence because a robot may not know what is in the best interest of a patient or care receiver. A robot may harm a patient’s autonomy if it executes care tasks without asking permission first.
Yet, different types of robots (Sharkey and
Sharkey, 2012 p.27) may trigger different moral
concerns. Assistive robots support elderly and/or
their caregivers in daily tasks such as lifting patients
up, carrying, cleaning, or feeding. For example, the Japanese Secom “My Spoon” can automatically feed someone, and the Sanyo electric bathtub robot can automatically wash and rinse a person. Here, non-maleficence may take priority because the robot lift should not drop the patient.
Monitoring robots observe and supervise the health
condition of a patient. They may easily transgress
the principle of patient autonomy if the collected
data are not secured and privacy is at stake. Pearl,
for example, is a ‘nursebot’ that reminds seniors
about routine activities such as taking their
medication or guides them through their environment.
Companion robots are designed to establish some
form of affective bonding, ‘interpersonal
communication’, companionship or entertainment. A
well-known example is Paro (by AIST), a fur-covered robotic seal designed for therapeutic use with elderly people suffering from dementia. Companion robots may be seen as fraudulent because they fake a friendship, which may not be in the patient’s best interests.
Sharkey and Sharkey (2012) conclude, however, that
companion robots cannot form adequate
replacements for human love and attention.
As robots can interact (seemingly) independently and make (care) decisions on their own, each of the moral principles may apply to each of the robot types, though perhaps to varying degrees. Therefore, we
conducted a focus group study among healthcare
professionals.
3 RESULTS: QUALITATIVE STUDY AMONG CARE PROFESSIONALS
Data were collected through semi-structured focus groups among 43 professional caregivers of the elderly.
RobotsHumanizeCare-MoralConcernsVersusWitnessedBenefitsfortheElderly
649
Focus groups are best suited for sensitive topics like
ethical points of view (Hermanowicz, 2002 p.480).
Respondents (aged 18-67 years, most between 36 and 55) were recruited in four different nursing homes in
the Netherlands and within two extramural
organisations where home care is provided. Each
focus group was held with six to ten caregivers in
their own working environments (Reed and Payton,
1997). After a general introduction, participants
were shown six brief video clips portraying
prototypes of either assistive (e.g., Riba II Care
Support Robot For Lifting Patients), monitoring
(e.g., Mobiserv) or companion care robots (e.g.,
AIST Paro robot baby seal) and were encouraged to reflect on their possible objections to, or perceived benefits of, a particular type of care technology.
Questions were first asked about general thoughts on the need for care technology in the near future, gradually narrowing down to more specific topics of interest.
We considered it important that respondents could
express their opinions and possible concerns without
too much interference from our part. Participants
were reassured that their opinions were confidential
and answers would be processed anonymously.
Participants provided consent for video recordings
for coding purposes. Three different coders analysed the videotapes independently and, after prior training, coded each opinion according to the moral categories autonomy, beneficence, non-maleficence, and justice (Beauchamp and Childress, 2009), or as an opinion expressing the possible (non-)utility of care robots, each in relation to the three types of care robots. All coders used the software ATLAS.ti and coded straight from the video footage. A Cohen’s kappa of 0.71 indicated sufficient inter-coder reliability.
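To make this procedure concrete, below is a minimal sketch of how such an agreement statistic can be computed. Cohen’s kappa is defined for two raters, and the paper does not state how the three coders’ decisions were combined into one value, so the averaging over coder pairs, like the utterance labels themselves, is our assumption for illustration only.

```python
from collections import Counter
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two coders labelling the same utterances."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Invented labels for five utterances by three coders (categories as in the paper).
coders = [
    ["non-maleficence", "utility", "beneficence", "non-maleficence", "justice"],
    ["non-maleficence", "utility", "beneficence", "autonomy", "justice"],
    ["non-maleficence", "utility", "non-maleficence", "autonomy", "justice"],
]
pairwise = [cohen_kappa(a, b) for a, b in combinations(coders, 2)]
print(f"mean pairwise kappa: {sum(pairwise) / len(pairwise):.2f}")
```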
Results showed that moral concerns regarding justice and autonomy were hardly mentioned. Most moral concerns among professional caregivers were raised in terms of maleficence (i.e., risk of being harmful), mostly so for assistive healthcare robots, followed by monitoring robots, and least for companionship robots. Out of 93 utterances coded as moral, 40 related to maleficence, of which 25 concerned assistive robots (pair-wise chi-square tests showed significant differences: 10.80 < χ² < 17.25, p’s < .05). Caregivers reported concerns such as fear that the assistive healthcare technology might fail, drop a patient, squeeze too hard, or otherwise cause physical harm. Somewhat related,
they also mentioned that their patients might be
afraid of healthcare robots, especially those suffering
from dementia. Comments like the following
express such moral concerns of maleficence: “What
if the robot scares my patients and is not capable of
reassuring specific needs? I would never leave a
patient alone with a robot.”; “I mean, you never know, certain patients tend to react unpredictably. How can a robot understand what they want or
need?”; and “If there is no human around, who can
explain what is going on when my patients are
delusional?” In summary, when talking about
assistive healthcare robots, moral concerns of
potential harmfulness were most expressed, while
beneficence and utility were perceived to a much
lesser extent.
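As an aside, the pair-wise chi-square comparison can be reconstructed along the following lines. The paper reports only the assistive count of 25 and the range of test statistics, not the other per-type counts or the exact test design, so the counts below are invented and the goodness-of-fit form (testing each pair of counts against an even split) is an assumption.

```python
from itertools import combinations
from scipy.stats import chisquare

# Hypothetical maleficence counts per robot type; only the assistive
# count of 25 is reported in the paper.
counts = {"assistive": 25, "monitoring": 10, "companion": 5}

for (t1, n1), (t2, n2) in combinations(counts.items(), 2):
    stat, p = chisquare([n1, n2])  # goodness of fit against an even split
    print(f"{t1} vs {t2}: chi2 = {stat:.2f}, p = {p:.3f}")
```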
In response to monitoring robots, maleficence concerns were mostly expressed in terms of decreased human contact between caregivers and care receivers, which is generally considered undesirable in healthcare relationships. In this respect, moral concerns about privacy were hardly considered important by the participating caregivers. While most caregivers acknowledged that monitoring technology could decrease their workload and enable the elderly to keep living independently on their own, most concerns were expressed about loneliness and diminished human contact, which they considered potentially harmful to the wellbeing of the elderly. Some quotes expressing this concern
are: “Often, I am the only one they see throughout
the whole day.”; “She (the old woman) is dearly
waiting for me to show up, so she could have a
conversation.”; “My patient has no relatives and
cannot go outside on his own anymore. If a robot
would replace my task, he would not see anyone
throughout the day.” In sum, in talking about monitoring healthcare robots, caregivers expressed moral concerns about diminished human contact and loneliness, while at the same time perceiving monitoring robots as having the highest utility of the three robot types.
The caregivers perceived the highest beneficence and the lowest maleficence in companion robots for the elderly. Caregivers expressed feelings or thoughts about the possible reassuring or soothing effect of a companion robot on a patient. Most of them were already acquainted with the Paro seal, which is known for its positive effects on elderly with dementia in particular and is in use in a number of nursing homes in the Netherlands. Accordingly, most quotes express a positive attitude towards the companion robot Paro:
“Look how happy she is, I could look at it all day. If
something makes you that happy it doesn’t matter
anymore that it is not alive.”; “Oh, they are very
cute. I want one of my own, when my time comes”;
“I don’t see any harm in it. I mean, they (her
HEALTHINF2015-InternationalConferenceonHealthInformatics
650
patients) carry a doll with them all the time, we
don’t think that is wrong do we?”; “I have a patient
who could benefit from this, he always wants to
cuddle, but I hardly have time to do so.” Although the caregivers mostly expressed a positive attitude toward companion robots such as Paro, they also expressed moral concerns regarding deception. They discussed the idea that a companion robot could be seen as deceiving the elderly or presenting them with a fake companionship, especially those who suffer from Alzheimer’s disease. In the end, however, the caregivers thought that the positive effects of providing comfort weighed more heavily.
In all, caregivers’ moral concerns as expressed in
the interviews may differ from the benefits a robot
appears to have for the lonely elderly, as will be
discussed in the next section.
4 OBSERVATIONS OF ELDERLY INTERACTING WITH ROBOT ALICE
In our university lab, we further developed a social
companion robot from Hanson’s Robokind
(http://www.robokindrobots.com/) named “Alice”,
for research purposes. While we are still developing
Alice, documentary footage was shot to examine its
potential (Alice Cares, Burger, 2014). Specific
features of Alice are that she looks like a young girl, has hearing devices that allow her to listen, speech software to talk, a soft-skinned face that can express emotions, and camera-eyes that allow her to follow and respond to her interaction partner; she has, however, a plastic Lego-like body. None of these features is yet perfected, though basic conversation is possible. Given our research objectives, we were of course very curious as to how the elderly would respond to our relatively impaired ‘young robot girl’. First, several elderly female participants met Alice in the university’s lab, followed by about five visits in their own home
environment. All ladies were mentally and
physically healthy, despite some relatively minor
deficiencies, and lived independently on their own.
They were fully informed and gave permission to
record their interactions with Alice.
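The paper does not describe Alice’s control software. Purely to illustrate the general technique behind camera-eyes that follow an interaction partner, the sketch below uses OpenCV face detection to drive a hypothetical set_head_pose() servo interface; every name in it is our invention, not part of the Robokind platform.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def set_head_pose(pan, tilt):
    """Hypothetical placeholder for the robot's head-servo controller."""
    print(f"pan={pan:+.2f}, tilt={tilt:+.2f}")

cap = cv2.VideoCapture(0)  # stands in for the robot's camera-eye
for _ in range(300):  # a short run; a real controller would loop indefinitely
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        fh, fw = frame.shape[:2]
        # Offset of the face centre from the image centre, scaled to [-1, 1],
        # used as a proportional steering signal for the head servos.
        set_head_pose((x + w / 2 - fw / 2) / (fw / 2),
                      (y + h / 2 - fh / 2) / (fh / 2))
cap.release()
```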
Due to Alice’s camera-eyes, her conversations with the elderly female participants were recorded as seen from her perspective. This provided a very close view of how the elderly interacted with robot Alice, for example through a variety of non-verbal behaviors commonly known to be important in interpersonal communication, such as searching for eye contact. Later in the process, one of the participants was preoccupied with Alice not looking back correctly. This was due to a technical flaw, and the old lady commented on it to Alice: “What is wrong with you today? You are not looking right into my eyes, well, it is just one eye – it looks a little bit skewed.” In addition, a separate film camera in the room recorded the conversations as well. In the following, we report only on observations that provide additional input to our discussion of moral concerns about robots in healthcare.
In the lab, at their first encounter with robot Alice, the elderly female participants responded quite distantly to Alice. Understandably, they were probably exploring how to converse with a robot. They were left alone in the room with just Alice sitting in front of them. The imperfect speech and sometimes strange intonations also made it difficult to follow Alice’s questions. Interestingly, the women responded as they would in regular human-human interaction – asking Alice to repeat the question, listening very intently, leaning forward, and looking more closely into her eyes, hoping to understand the repeated question. Even in this uncomfortable situation, they addressed her directly. Their awareness of the cameras, and of the caregivers and researchers in other rooms, only became apparent when Alice asked impertinent questions or when their answers touched on social desirability. For example, when Alice asked for a grade to indicate current satisfaction with life, a lady first indicated “well, now, perhaps a 5” [Authors: on a 10-point scale with 10 as the maximum], then looked around the room – from her expression we inferred she was checking whether she could be overheard – and then changed her grade: “hmm, perhaps a 6, well, a 7 then, but not more.”
After the first encounter with Alice in the lab, she was brought to the female participants several times, visiting them in their own home environments. Even before Alice was switched on, they were interested in how she was doing today and started talking about and with her. When the technician said he would switch her on and come back in an hour or so (sometimes a few hours), the lady asked whether she or Alice would start the conversation. Alice started and asked quite adult and
direct questions (e.g., do you have children? Are you
feeling lonely? Whom have you met today?), yet all
questions were answered and apparently all was
fine. Gradually, the elderly women seemed to treat
RobotsHumanizeCare-MoralConcernsVersusWitnessedBenefitsfortheElderly
651
her as they would a grandchild. For example, when one of the elderly women took Alice out for a coffee, she showed a desire to offer Alice food and drinks while being aware that Alice could not eat or drink: “You cannot have cookies, can you?” Throughout the visits, the
participating women established some form of
affective bonding with robot Alice.
The repeat visits to the elderly women showed that Alice became like a family member – she was greeted and treated like a grandchild. As recorded outside the presence of the technicians (i.e., when the lady was on her own with Alice), there were often also periods of long silence, hardly any conversation, or just some incidental remarks, or a lady would read the newspaper aloud to her. More intimate moments developed over time, for example, when one of the old women showed her picture book about her son (who now lives abroad), or when they sang old songs or watched the World Championships together. Alice was also very effective in activating the elderly to do their daily physical exercises, simply by asking the women and making them start. Apparently, Alice’s presence was as effective as social pressure, and the woman was eager to do her exercises for Alice. Another lady was asked by Alice whether she had written back to a friend yet. The old lady admitted (to Alice) that she had not gotten around to it, and, clearly somewhat ashamed, started writing immediately, as she might have done had a real person asked.
In the course of interacting with Alice, the technical impairments became less important. For example, the fact that Alice could not walk was eventually commented on as “many of my generation cannot walk either, that is, not anymore”. Likewise, speech errors were met in similar ways (‘many of my generation...’). Technically solvable issues such as volume or mispronunciation were considered more problematic. In turn, Alice does not care whether the elderly repeat the same story over and over again, respond slowly, fall into long pauses of silence, or do not understand things immediately. Alice is always extremely patient and never bored; she can hear the same story 20 times or more and may even show enjoyment again and again. Likewise, Alice repeats questions or answers without judgment or frustration.
According to our observations, over the course of
interacting with Alice, it became less relevant that
Alice was just a robot with camera eyes and hearing
devices, a nice wrapping around some sophisticated
software. Over the course of repeat visits, the
participants became affectively bonded to Alice in a similar way as they would to an acquaintance, such as a grandchild. The presence of an apparent social entity had become more important and urgent to them than the question of whether she was a real human or not (cf. Hoorn, Konijn, Germans, Burger, & Munneke, 2015). After their initial hesitance, we
observed a clear shift from a cognitive awareness of
encountering a robot to an emotional fulfilment of a
need for company. The pain of loneliness in
mentally healthy women could be compensated for
by a social companion robot with human-like
interpersonal features. The cognitive awareness of
conversing with a robot became irrelevant
background information in view of the daily need
for company and social interaction. Surprisingly, not one moral concern was raised by the elderly, who were sceptical in the beginning and loved Alice in the end. Even when explicitly asked, they had no trouble accepting Alice as a conversation partner, nor did they express any privacy concerns.
5 CONCLUSION: BRINGING HUMANNESS BACK TO CARE – THROUGH ROBOTS!
Indeed, for those unacquainted with social robots and the beneficial effects they have on lonely people, robots in care are a controversial topic. This makes it of great importance that potential users gain experience and do not base their judgments on uninformed expectations. Conversely, a care robot should meet those user demands that are realistic and technically feasible. A thorough
understanding of the wishes and objections of
various stakeholder groups can contribute to a more
sensitive implementation of robots in healthcare.
That is to say, perhaps the rejection of robotic care is not as humane as one would believe at first sight. The debate on healthcare robotics seems to
focus on the notion that care will be made inhumane
when robots are introduced. Caregivers mostly fear
the absence of human contact and possible failures
of the machinery. They also fear for their jobs. We
would like to reverse this situation. It is through
robot technology that a caregiver will be able to
spend quality time with the patient whereas the robot
limits itself to mundane tasks such as lifting a person, keeping an eye on someone, telling the weather, making casual coffee talk, and watching TV together. Instrumental and superficial contact is left
HEALTHINF2015-InternationalConferenceonHealthInformatics
652
to the instrument, quality human contact to the
humans. However, nowadays it is not uncommon that human contact with the patient is nothing more than instrumental: wash, dress, feed, gone. Many patients do not even like to be touched by human hands while being washed. The neutrality of a robot is one of its unique selling points in situations where a patient would otherwise feel embarrassed in front of other humans. A comforting
hug, however, is better left to the human caretaker.
At the start of our explorations, the question was
whether it was morally right to try to fob off a robot
on the lonely elderly, in particular for social
interaction. By now the question has completely flipped around, and we challenge those who object to employing social robots in healthcare: Is it morally just to withhold a social robot from those who are in deep need of contact? Is it fair to withhold an artificial leg from a disabled person because it is not a real leg? Healthcare technology, including
social and companion robots, may enable caregivers
to bring ‘real human’ care back into the equation, if only by saving time.
ACKNOWLEDGEMENTS
This position paper is part of the SELEMCA project
(Services of Electro-Mechanical Care Agencies)
(grant number: NWO 646.000.003), which is funded
within the Creative Industry Scientific Programme
(CRISP). CRISP is supported by the Dutch Ministry
of Education, Culture, and Science.
REFERENCES
Andrén, S. and Elmståhl, S. (2005). Family caregivers’ subjective experiences of satisfaction in dementia care. Scandinavian Journal of Caring Sciences, 19(2), 157-168.
Asaro, P. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6, 9-16.
Beauchamp, T. and Childress, J. F. (2009). Principles of Biomedical Ethics. 6th ed. Oxford: Oxford University Press.
Burger, S. (2014). Alice Cares. KeyDocs – NCRV, Amsterdam.
Dautenhahn, K. (2007). Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362(1480), 679-704. doi: 10.1098/rstb.2006.2004.
DBMI (2012). Delta Marktonderzoek. Nieuwe ouderen eisen nieuwe benadering ouderenzorg. Jan. 13, 2012. Accessed June 2013. Retrieved from http://www.dbmi.nl/blog/item/nieuwe-ouderen-eisen-nieuwe-benadering-ouderenzorg.
European Commission (2007). Health and Long-term Care. Report Special Eurobarometer 283. December 2007.
European Commission (n.d.). DG Health & Consumers, Public health, Ageing, Policy. Retrieved June 2014, from http://ec.europa.eu/health/ageing/policy/index_en.htm.
Hermanowicz, J. (2002). The great interview: 25 strategies for studying people in bed. Qualitative Sociology, 25(4), 479-499.
Hoorn, J. F., Konijn, E. A., Germans, D. M., Burger, S. and Munneke, A. (2015). The in-between machine: The unique value proposition of a robot or why we are modelling the wrong things. International Conference on Agents and Artificial Intelligence, ICAART, Lisbon, Portugal, Jan. 10-12, 2015.
Klein Wolterink, G. (2013). Informatiestandaarden in de Zorg. Den Haag: Nictiz, ID 13013.
Leget, G. O. (2012). Menslievende Zorg in de Praktijk. Den Haag: Boom/Lemma.
MarketingCharts (2007). Senior citizens fear moving into a nursing home and losing their independence more than they fear death. Accessed June 2013. Retrieved from http://www.marketingcharts.com/demographics-and-audiences/boomers-and-older/seniors-fear-loss-of-independence-nursing-homes-more-than-death-2343/.
Reed, J. and Payton, J. R. (1997). Focus groups: Issues of analysis and interpretation. Journal of Advanced Nursing, 26(4), 765-771.
Sharkey, A. and Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27-40.
Vallor, S. (2013). Carebots and caregivers: Sustaining the ethical ideal of care in the twenty-first century. Philosophy & Technology, 24(3), 251-268. doi: 10.1007/s13347-011-0015-x.
Zorg voor Beter (2014). Nieuwe toelatingseisen verzorgingshuis. Jan. 13, 2014. Accessed 22-11-2014. Retrieved from http://www.zorgvoorbeter.nl/ouderenzorg/Nieuwe-toelatingseisen-verzorgingshuis.html.
RobotsHumanizeCare-MoralConcernsVersusWitnessedBenefitsfortheElderly
653