e-Health for Older Adults: Navigating Misinformation
Amira Ghenai (1,2), Xueguang Ma (1), Robin Cohen (1), Karyn Moffatt (2), Andy Yang (1) and Yipeng Ji (1)
(1) David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
(2) School of Information Studies, McGill University, Montreal, Canada
Keywords: Position Paper, Online Misinformation, e-Health, Older Adults, Social Media, Websites.
Abstract: In this position paper, we advocate for the design of more progressive online social networks and web pages for the user base of older adults, arguing that addressing the issue of misinformation requires strategies attuned to this population of users in particular. We discuss challenges that arise with misleading health information and with websites that support questionable positions with fake reviews (often generated by bots). We also discuss how search engine results contribute to the difficulties this population faces when navigating misinformation. We propose an approach that promotes more interaction with users and supports education about the perils of the online world, as an additional tool for reducing misinterpretations that may lead to significant negative outcomes. The algorithms we propose come from the computer science subfields of artificial intelligence, human-computer interaction and information retrieval. Our novel stance is to insist on solutions that fit this demographic especially well, instead of relying on one-size-fits-all approaches that may disadvantage users who are older adults.
1 INTRODUCTION
In this position paper, we introduce some new directions for a series of concerns regarding e-health and older adults, focusing on the specific challenge of assisting with online misinformation. We first consider misinformation as it arises in social networks, examining both how older adults may contribute to providing misleading content and why this base of users
requires special consideration when developing solutions to assist in coping with challenging content. With this important background in hand, we then comment on the specific case of social media posts regarding healthcare, providing insights into why this context in particular demands special care for older adult users. Our
next subtopic explores fake reviews on health-related
products on websites and discusses why older adults
in particular may need specific assistance. We first
discuss a collection of reviews about healthcare, dis-
covered online in April 2020, identifying the char-
acteristics that strongly suggest questionable content.
Based on research regarding older adult attitudes to-
wards online information in general, we then make
some specific recommendations, highlighted in ital-
ics. We also advocate creating automated solutions
for improving online experiences, so that possible
harm from misinformation is mitigated. An impor-
tant thread for our discussion is exploring current
approaches for detecting whether online reviews are
fake, using machine learning. We point the reader to
a later subsection of the paper where we discuss fur-
ther how to attune these solutions to older adults.
A third contributor to misleading content online is
the output of search engines. We discuss concerns that
arise due to these browsing efforts, reflecting as well
on what may be most relevant for older adults, specif-
ically when the topics at hand are those of healthcare.
The strategy that we propose for providing better as-
sistance to this user base for this particular challenge
is to conduct detailed user studies in order to carefully
learn what poses the most difficulty. The engagement
of participants yields further insights into the specific
problems to address with online misinformation and
the strategies that may hold the most promise in order
to assist users in this demographic.
We move forward from here to a section of the pa-
per devoted to examining the use of techniques from
artificial intelligence multiagent trust modeling, out-
lining their promise both in assisting with detection of
misinformation and in delivering help to older adults
in particular. The methods which we outline enable
reasoning both about the specifics of the user and the
particular content at hand, in order to propose strate-
gies for improving online experiences. We preface
this discussion with some insights into the user base
of older adults and what may be most important for
them when addressing untrustworthy health informa-
tion. As part of our final reflection for the future, we
consider more concrete steps forward using computer
science methods to improve the state of the world. We
also highlight the treatment of health information and raise the current concern of COVID-19. In our conclusion,
we advocate for a combination of intelligent process-
ing of online information and a strategy of better ed-
ucation, going forward. It is important to note that
each online user is an individual; we are not claiming
that all older adults will fit the same profile. All the
same, certain challenges that have been shown to be
more prominent for this user base merit consideration.
And while older adult users may enjoy distinct benefits online owing to the wisdom of their age, failing to address healthcare misinformation may have critical outcomes, so we choose to focus on steps to address those concerns.
2 THE CENTRAL CONCERNS FOR OLDER ADULTS WITH ONLINE MISINFORMATION
2.1 Social Networking
When considering misinformation in online social
networks and the specific concern about misleading
healthcare posts, research shows that this may be more prevalent on certain platforms. For example, Ebola misinformation was present in more than 80% of Instagram posts but fewer than 50% of tweets
(Guidry et al., 2017). There is also an issue with the
spread of rumors and how this changes over time. As an example, reports on narcolepsy as a side effect of a flu vaccine in the Nordic countries were first depicted as rumors, but a few cases were later confirmed after years of research, and the link remains contested (Sturkenboom, 2015). Countering health misinformation may also require urgent action from health organizations to stop the spread and share correct information. Detecting health rumors in a timely fashion can help public health officials tackle them before they spread. In the case of the Ebola outbreak, some of the rumors circulating on the Internet, such as the claim that drinking salty water was an effective protective measure, led to several deaths (Oyeyemi et al., 2014).
One promising approach focuses on this important issue of rumor spread in social media, proposing methods for detecting the origins of misinformation using a combination of crowdsourcing and expert advice and examining features of the messages. Algorithms to assist in detecting misinformation and rumor spread have been designed specifically for health concerns on Twitter (Ghenai and Mejova, 2017). While this research examined specific health issues (e.g., Zika fever, cancer), a range of other health-related topics could also be handled by the methods presented in that work.
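To make this concrete, below is a minimal sketch of a feature-based rumor classifier for tweet text. It is an illustration under our own assumptions (toy data, TF-IDF word features, logistic regression), not the actual pipeline of (Ghenai and Mejova, 2017), which combined crowdsourced labels, expert advice and richer message features.

```python
# Minimal sketch of a feature-based rumor classifier for health tweets.
# Illustrative only: the toy data and feature choices are our own
# assumptions, not the pipeline of Ghenai and Mejova (2017).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples; in practice, labels would come from
# crowdsourcing reconciled with expert judgement.
tweets = [
    "Drinking salty water protects you from Ebola!",
    "WHO publishes updated guidance on Ebola prevention.",
    "This herb cures cancer in days, doctors hate it",
    "New peer-reviewed flu vaccine trial results released.",
]
labels = [1, 0, 1, 0]  # 1 = suspected rumor, 0 = not

# Word unigrams/bigrams stand in for the richer message features
# (source credibility, user history, etc.) a full system would use.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(tweets, labels)
print(model.predict(["Gargling vinegar stops the virus, share now!"]))
```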
Since some research has shown that older adults may actually benefit considerably from using social media (Myhre et al., 2016), in terms of improved working memory and cognitive function, it would be valuable to encourage these users to participate in online communities. But two problems arise. The first is that older adults may be responsible for the spread of misinformation. For example, in a study of registered voters on Twitter (admittedly only a small sample of the older adult users), Grinberg et al. (Grinberg et al., 2019) found that people over 65 shared roughly 4.5 times as many false news stories on Twitter as people 18 to 24. Older adults were also found to have shared more fake news and hoaxes on Facebook (Guess et al., 2019): nearly 7 times as many articles from fake news domains as younger users, and also 3 or 4 times as many fake news links. The second problem is that
older adults are more susceptible to misinformation,
compared to younger adults (Wylie et al., 2014). As
to why these users may be more prone to believe in-
correct information, studies such as (Ghosh and Iyen-
gar, 2019) suggest that feelings of isolation and lone-
liness may be important factors in willingness to be-
lieve the misinformation. This research suggests that more specific attention to the user base of older adults is merited when trying to address online misinformation. Doing so becomes all the more essential when considering healthcare posts with fake reviews, as we discuss in the section below.
2.2 Fake Reviews
A recent study (Shao et al., 2019) suggests that credulity is the factor that increases vulnerability to fraud in older adults, where credulity refers to a propensity to believe things that are unproven or unlikely to be true. Slower cognitive processing and increased trust might also be reasons why fraud targeting older adults is effective (Bailey and Leon, 2019). Liao and Fu (Liao and Fu, 2014) conducted a study to understand how age affects credibility judgements of online health information. The results showed that older adults pay less attention to credibility content cues (e.g., highly ranked medicine with research evidence, explanation of treatment mechanisms, comprehensive and accurate information, professional
writing) compared to younger adults. Unlike younger adults, older adults also tend to ignore contextual website features during credibility judgments (such as a clean layout, color scheme and information structure, or the presence of references, contact information and third-party endorsements). When user reviews were consistent with content cues, they helped older adults make better credibility judgments. We are encouraged as well by the work of (Stewart et al., 2014), which suggests how credibility indicators can be promoted for older adults.
From this study, we conclude that older adults
might benefit from explicit, prominent guidelines to
better perform credibility judgements. From a design perspective, website designers might use a pop-up checklist that “reminds” users to ask critical questions relevant to the credibility of information and to pay attention to Web-specific features, which may help older adults naturally adopt more effective credibility judgments when consuming online health information.
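As a minimal sketch of this recommendation, the snippet below shows one way such a checklist could be represented and rendered. The wording of the prompts is our own, drawn from the credibility cues discussed above, and is purely illustrative.

```python
# Sketch of the content a pop-up credibility checklist might surface;
# the prompts are our own illustrative wording, not a validated set.
CREDIBILITY_CHECKLIST = [
    "Who published this page, and are they a recognized health authority?",
    "Does the page cite research evidence for its claims?",
    "Is reference or contact information provided?",
    "Is there third-party endorsement (e.g., a health accreditation seal)?",
    "Do independent sources report the same information?",
]

def render_checklist() -> str:
    """Return the checklist as plain text for display in a pop-up."""
    return "\n".join(f"[ ] {question}" for question in CREDIBILITY_CHECKLIST)

print(render_checklist())
```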
2.2.1 Older Adults and Healthcare
We discovered several examples of reviews of health-
care products on Amazon in April 2020 which we sus-
pected were fake. The first was for an essential min-
eral supplement “Pure” which has few words (“hair
grew back”), and claims huge benefit (“bald cure”).
At times we saw review text with strong titles (“Five
stars” or “Great product”) that was incoherent, pos-
sibly autogenerated, including “I gave this to my son
since I have the mugs that are copper he likes it” and
“This is the very best Melatonin in the market to-
day. Buy up while you can” (where the product was
not even melatonin-based). Generic wording at times
seemed canned, as well. This happened with a re-
view for a fingertip pulse oximeter which claimed “I
need this instrument very much to detect the physical
condition of me and my family”. Stilted writing and
fearmongering were also elements which heightened
our concern about the reviews. This study of possi-
ble fake reviews points to the value of algorithms (ex-
plored in the next subsection) which delve further into
the linguistic style of reviews and which may be able
to detect irrelevant content.
von Helversen et al. (von Helversen et al., 2018)
considered the differences in how younger and older adults assessed products based on three sources of information: product attributes, average ratings, and affect-rich (emotional) user reviews. In the absence of any ratings or
reviews, they found that older adults were more likely
to carefully consider the product attributes. How-
ever, they still struggled to determine the better prod-
uct just by looking at the attributes, which may be
affected by aging-related declines in their decision-
making abilities. They also found that younger adults
were strongly influenced by average consumer rat-
ings and in the absence of user reviews would most
likely choose the product with the higher average rating unless there was a jarring difference between the attributes of the product. In comparison, average ratings
had no such effect on the decisions by older adults,
which they found surprising. They postulate that it is possible that older adults simply do not care about what other people think, or that the user interface for
average ratings may be difficult for older adults to un-
derstand or see. As for user reviews, they found that
younger adults were strongly affected by both affect-
rich negative reviews and affect-rich positive reviews.
In contrast, older adults were not at all affected by
affect-rich positive reviews but strongly affected by
affect-rich negative reviews. They claimed that this is
consistent with the socio-emotional selectivity theory
since older adults want to focus on emotional goals
and thus want to avoid negative emotions that may be
associated with choosing a product.
Below we provide some of our key proposals in italics. The researchers conclude, and we recommend, that rating systems be adapted for older adults through better visibility of all negative reviews and more clarity regarding average ratings. It is of course also important to prevent the injection of fake, vivid, negative reviews. The researchers also observed, and we find it important to emphasize, that the next generation of older adults may be more internet-savvy. It is also important to note that the authors’ baseline reviews were skewed towards positive statements, since these were more typical than neutral reviews. We
therefore advocate that another study should examine
whether these results are repeatable. If they are, then
platform owners such as Amazon should make use
of content moderation to be more circumspect with
strongly positive reviews that may be fake. However,
freedom of speech still needs to be balanced; users
should not feel that there is undue censorship.
2.2.2 Automating the Assistance
It is important to note that fake reviews arise online
in part because they are generated by automated pro-
cedures, commonly referred to as bots. For example, a significant study (Yao et al., 2017) examined how artificial intelligence technology could be used to populate websites with fake reviews, and then how to counter this practice with technology. In fact, several researchers have designed machine learning algorithms to detect fake reviews, and we suggest also using these methods as part of the effort to assist older adults.
The work of Ott et al. (Ott et al., 2011) linguistically analyzes the content of online reviews in order to identify those which may be suspect. Specifically, their findings suggest the importance of considering both the context (e.g., BIGRAMS+) and the motivations underlying a deception, rather than strictly adhering to a universal set of deception cues. Several other approaches for detecting fake reviews have been developed over the years.
The performance of different methods is examined in (Hájek et al., 2020). The work of Sun et al. (Sun et al., 2013) makes us aware that various spamming techniques can still be used to thwart fake review detection algorithms. And the research of Mukherjee et al. (Mukherjee et al., 2013b) critiques just how effective the methods used today by some of the current platform owners turn out to be, examining what is happening in the context of Yelp.
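The sketch below illustrates the flavour of such linguistic classifiers: unigram-plus-bigram counts feeding a linear SVM, in the spirit of the BIGRAMS+ configuration reported by Ott et al. (2011). The toy reviews and labels are invented; a real system would train on a properly labelled corpus.

```python
# Sketch of an n-gram deception classifier in the spirit of Ott et al.
# (2011); the reviews and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = [
    "Five stars. Great product. Buy up while you can!!",
    "Worked as described; shipping took a week, packaging was fine.",
    "This bald cure made my hair grew back, best in the market today",
    "Decent supplement, but no noticeable effect after a month for me.",
]
labels = [1, 0, 1, 0]  # 1 = suspected fake, 0 = genuine

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams ("BIGRAMS+")
    LinearSVC(),
)
clf.fit(reviews, labels)
print(clf.predict(["I need this instrument very much, very best today"]))
```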
We also assembled a set of papers aimed at de-
tecting fake reviews which introduce deeper AI tech-
niques for the analysis. The techniques span supervised learning (supporting binary classification), unsupervised learning (to address the lack of labelled data) and semi-supervised learning; both text and behaviour features were considered. The approach in (Mukherjee et al., 2013a) formulates detection as an unsupervised clustering problem based on user behaviours, using Bayesian methods. The work of (Li et al., 2017) applies hidden Markov models, exploiting bimodal distributions and behaviour features. Some of the methodology used in linguistic processing of text is revealed in (Shuqin and Jing, 2019), which follows a semi-supervised learning approach. Examining the contribution of temporal features, as a specific element of interest, is the core idea of (Liu et al., 2019). We also
analyzed a supervised learning proposal that focuses
on text features, culminating in a machine classifica-
tion process (Jia et al., 2018). A useful literature re-
view in the area is offered in (Wu et al., 2020); this pa-
per makes clear that effective methodology and useful
datasets are both important.
Some of the central issues that need to be ad-
dressed are as follows. First of all, obtaining ground
truth data is a challenge. As explained above, one ap-
proach is to use crowdsourced labelling towards this
end, but the accuracy of these methods to date for
fake reviews has faced various challenges (Fornaciari
et al., 2020). With respect to feature detection, our ob-
servation is that using behaviour features for fake re-
views needs to focus on the review account behaviour,
such as how many reviews were posted per day. This
is distinct from user behaviour analyses that arise in
studies such as (Ghenai and Mejova, 2017). We ex-
plore how some of this feature analysis can be tuned
to older adults in particular, in Section 3.
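As an illustration of what such account-level behaviour features might look like, the sketch below computes a few simple statistics per review account. The specific features (reviews per active day, mean rating, share of extreme ratings) are our own assumptions, not a published feature set.

```python
# Sketch of account-level behaviour features for fake-review detection;
# the feature choices are illustrative assumptions.
from collections import defaultdict
from datetime import date

# (account_id, review_date, star_rating) rows from a review dump.
rows = [
    ("acct1", date(2020, 4, 1), 5), ("acct1", date(2020, 4, 1), 5),
    ("acct1", date(2020, 4, 1), 5), ("acct2", date(2020, 3, 2), 4),
    ("acct2", date(2020, 4, 9), 2),
]

def behaviour_features(rows):
    per_account = defaultdict(list)
    for acct, day, stars in rows:
        per_account[acct].append((day, stars))
    features = {}
    for acct, items in per_account.items():
        days = {d for d, _ in items}
        stars = [s for _, s in items]
        features[acct] = {
            # Bursty accounts post many reviews on few distinct days.
            "reviews_per_active_day": len(items) / len(days),
            "mean_rating": sum(stars) / len(stars),
            # Fake reviews often cluster at the rating extremes.
            "extreme_ratio": sum(s in (1, 5) for s in stars) / len(stars),
        }
    return features

print(behaviour_features(rows))
```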
2.3 Older Adults and Search Engines
Even if search engines return the most relevant doc-
uments, sometimes these relevant documents contain
incorrect information. The presence of incorrect in-
formation might cause harm when making health-
related decisions. In a prior study (Ghenai et al.,
2020), the authors showed that there is a signifi-
cant influence of search results on people’s decisions
when determining the efficacy of a medical treat-
ment. When search results were biased towards cor-
rect information, people tended to answer health re-
lated questions with high accuracy. On the other hand,
when the search results were biased towards incorrect
information, people’s accuracy dropped. The conse-
quences of the presence of incorrect information in search results are dire, especially when dealing with serious illnesses. For example, a user diagnosed with cancer who turns to a web search engine may find documents offering many treatments for sale, yet many of these treatments will have no effect, or even a negative effect, on the patient's health. If a user is provided with incorrect treatment information, the searcher may not seek known effective treatments or may reject such beneficial treatments in favor of unhelpful ones.
The prior study (Ghenai et al., 2020) was biased towards a highly educated, young, computer-savvy segment of the population, due to its recruiting process. How would the effect of search results differ for a different segment of the population, such as older adults, who are at potentially higher risk of being influenced by incorrect information? In 2010, it was reported that 45% of older adults over the age of 65 go online to search for health-related information (Fox, 2010). Older adults often have lower health literacy than younger adults (Czaja and Lee, 2012), and many of them are novices with Internet searching and health-related websites (Becker, 2004). These factors raise the concern of how incorrect online health information affects older adults' health-related decisions.
We advocate conducting a new study in order
to investigate how search results affect older adults
when determining the effectiveness of a medical treat-
ment. The central recommendation is to assist older
adults with challenges resulting from search engine
results by first conducting a hands-on investigation in
order to properly ascertain the preferences and con-
cerns of this user base. We advocate combining sur-
veys, interviews in person, audio-recording, video-
recording and recording eye movements using an eye-
tracking device. Only older adults capable of using a computer to do online searches (independently or with help) would be able to participate. Each participant would also need to confirm that they are 65 years of age or older.
With older adult users, any online questionnaires will need to take into consideration vision, cognitive and memory function; it would also be best to include an array of ages and to survey both males and females. An important study (Seo et al., 2020) has also highlighted that certain segments of this older population are even more vulnerable to online misinformation. This suggests assembling a comprehensive set of participants with different user profiles.
We acknowledge that during the COVID-19 pandemic it is more difficult to gather with participants in person. But we would still like to learn how older adults may be misled about this important subtopic. We recommend first trying to understand which health topics these users tend to search for the most and which platforms they use the most for gathering health information. It would also be useful to determine, for COVID-19, whether features that social media platforms have deployed to mitigate misinformation (e.g., linking explicitly to trusted sources) are helpful to older adults. We propose conducting an online survey in order to get responses from our users. We also feel it would be valuable to conduct a longitudinal study in which we administer the survey multiple times during a given time period, to see whether opinions change over time. We are also aware of important research on how best to design surveys for older adult users and, following these results, have some recommendations (displayed in italics) as well. We recommend that researchers consider the technical dexterity of the users; response rates to surveys may also increase if privacy
protection measures are heightened. A very interest-
ing study (Chin et al., 2018) focused specifically on
polling older adults with respect to healthcare infor-
mation. The authors conclude that at times new web-
sites arise which provide clearer advice to patients.
One useful strategy may be to display both old and
new websites when health information has been up-
dated, in order to emphasize the new contributions
for the older adult population.
3 THE POTENTIAL OF AI TRUST MODELING
In order to examine in closer detail how algorithms for judging the trustworthiness of agents in multiagent systems may be of use in assisting older adults online with healthcare misinformation, we first conduct a deeper investigation of how this particular user base differs from others when considering trust in other parties. One observation in the literature is that older adults have smaller social circles, being less likely to build new online contacts on their own initiative
(Steijn, 2014). This suggests that, when presented with healthcare information online, their conclusions may tend to agree with those of other users having similar backgrounds. In this way, the effects of misinformation may accumulate. Another distinguishing viewpoint of the older adult community, outlined in (Lüders and Brandtzæg, 2017), is that at times they regard the Internet more as a functional tool than as an avenue for social interaction. An outcome may be less critical thinking about the content that is provided.
It is very important to treat each online user as an
individual and not to make generalizations based on
common habits that may be observed. Nonetheless,
when providing assistance online to someone iden-
tified as an older adult, some options may be pro-
vided which will facilitate beneficial outcomes. Es-
pecially for the consideration of online healthcare
information, all users would benefit from including
known experts in the medical field in their social cir-
cle. Older adults may acquire more discrimination
of online content, if they were shown some statistics
of how many other users within their platform have
disagreed with the opinion that they may choose to
promote (e.g. retweet). Some of the online statis-
tics that are not obviously displayed can become more
prominent. In part, this suggestion aligns with earlier
recommendations of ours about fake reviews (Section
2.2), to enable older adults to be more aware of true
negatives rather than focusing on positives. In a sim-
ilar vein, the information in online social networks,
when directed to users identified as older adults, can
draw out more explicitly the origin of the advice that
is being provided (e.g. if a medical organization
known worldwide has initiated the information).
Two central approaches for reasoning about trust-
worthiness of online sources using multiagent trust
modeling have emerged recently. The first outlines
how discussions which emerge following a post in
a social media environment such as Reddit can provide important clues as to the reputation of the post that launched the discussion (Parmentier and Cohen,
2019). The second sheds light on how clusters of
users with similar behaviours may receive person-
alized recommendations for whom to trust within a
review-based context, using Yelp as the exemplar
(Parmentier and Cohen, 2020). This is based on a
multi-faceted set of trust indicators such as how often
a trustee’s ratings agreed with those of the trustor.
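A minimal sketch of such an indicator follows: it measures how often two users' ratings of co-rated items agree within a tolerance. The data layout and tolerance are illustrative assumptions, not the exact formulation of (Parmentier and Cohen, 2020).

```python
# Sketch of one multi-faceted trust indicator: agreement between the
# ratings of a trustor and a trustee on items both have rated.
def rating_agreement(trustor: dict, trustee: dict, tol: float = 1.0) -> float:
    """Fraction of co-rated items on which the two users agree."""
    shared = trustor.keys() & trustee.keys()
    if not shared:
        return 0.0  # no shared evidence yet
    agree = sum(abs(trustor[item] - trustee[item]) <= tol for item in shared)
    return agree / len(shared)

# Hypothetical star ratings of the same health-related businesses.
alice = {"clinic_a": 5, "pharmacy_b": 2, "supplement_c": 1}
bob = {"clinic_a": 4, "pharmacy_b": 2, "supplement_c": 5}
print(rating_agreement(alice, bob))  # 2 of 3 co-rated items agree
```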
In order to move forward with algorithms such as these when assisting the user base of older adults in particular, there are some key ideas which hold
promise. The Reddit study explored the contributions
of specific features in the text such as negative lan-
guage or sentiment. But one interesting feature which
seemed to be particularly helpful in identifying ques-
tionable reputation was the difference in scores within
the ensuing discussion of a post. We have already
mentioned that older adults may need more assistance
with negativity when reacting online. As such, sce-
narios where older adults are involved in the discus-
sion may well be ones with significant disagreement
amongst users. Further study of this particular indi-
cator of user reputation may be quite fruitful, in or-
der to provide assistance to older adults. The Yelp
study located clusters of users based on their pat-
terns of trusting others. But we may be trying to
assist a set of users already identified as all being
older adults. We may know the general features of this set of users, such as those we have drawn out in this paper regarding attention to negative reactions. This may suggest new trust indicators for the multi-faceted trust model, which could assist in grouping older adult users within the same clusters. In addition, what is known beforehand about these users can serve as a prior for the Bayesian reasoning employed within the proposed multi-faceted trust modeling. The personalized attention that is delivered can then be made explicitly appropriate for this user base.
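The sketch below illustrates this prior idea with a simple Beta-Bernoulli model: pseudo-counts encode what is assumed about a user cluster before any individual evidence about a source arrives. The prior values are illustrative assumptions only, not parameters from the cited models.

```python
# Sketch of demographic knowledge as a Bayesian prior on trust: positive
# and negative interactions with a source update a Beta distribution
# whose prior pseudo-counts encode assumptions about the user cluster.
def beta_trust(pos: int, neg: int, prior_pos: float, prior_neg: float) -> float:
    """Expected trustworthiness under a Beta(prior_pos, prior_neg) prior."""
    return (pos + prior_pos) / (pos + neg + prior_pos + prior_neg)

# For a cluster of older adult users known to be strongly affected by
# vivid negative content, a cautious prior discounts unverified sources.
cautious_prior = (1.0, 3.0)  # leans untrustworthy until proven otherwise
neutral_prior = (1.0, 1.0)

pos_obs, neg_obs = 4, 1  # observed interactions with a source
print(beta_trust(pos_obs, neg_obs, *cautious_prior))  # ~0.56
print(beta_trust(pos_obs, neg_obs, *neutral_prior))   # ~0.71
```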
4 LOOKING TO THE FUTURE FOR OLDER ADULTS AND ONLINE MISINFORMATION
In this paper, we have reported on some challenges
which arise when trying to assist older adult users
with online misinformation about healthcare. We
have made clear that different contexts are relevant
and important: social media, fake reviews and less
helpful search engine results. We have also explained
why older adult users may require special considera-
tion in each of these scenarios. And we have tried to
emphasize the additional concerns which arise when
the discussions centre around healthcare. For each
of the specific areas in which we would like to see
improved experiences for older adult users, we have
highlighted some specific recommendations: actions
to take, including the use of various automated, intel-
ligent algorithms and the importance of doing deeper
explorations of the actual needs and preferences of the
user base, online. Looking to the future, it would be valuable to consider some more dramatic efforts towards more beneficial outcomes for our older adult users in their healthcare decisions. Several of these suggestions emphasize improved interactions with this user base.
A first suggestion is to offer additional support
when the various technologies are being used. An
ambient help system which does not interrupt but al-
lows for additional guidance may be useful to install,
both with respect to search engine use and in navi-
gating posts in online social networks. The work of
(Matejka et al., 2011) serves as an inspiration for this
step forward. Some current efforts to engage older adult users (Yu and Moffatt, 2019) have also shown the importance of being careful with these users when it comes to layout preferences and privacy considerations. In the case of healthcare, where messages
may seem mixed and difficult to interpret, one cre-
ative suggestion to support older adults is to consider
integrating friendly voice assistants. We feel this so-
lution may be appealing, in much the same way that
robotic companions have been accepted (Bemelmans
et al., 2012). In general, we are suggesting that a
somewhat different emphasis on what to support and
consider for this user base may be required.
We also have specific plans for advancing from solutions for detecting rumors about healthcare on Twitter (Ghenai and Mejova, 2017). Taking COVID-19 as a case study (an issue of critical concern to older adults), we can collect a set of tweets and then classify them as rumors or non-rumors. Later, we can scrutinize the sources of the rumor tweets to better understand the origins of this particular misinformation campaign. We can also collect COVID-19 articles from the RSS feeds of already labelled media sources, as was suggested in a recent research challenge (the ICWSM 2020 dataset challenge). After identifying sources which contain suspected misinformation, we can look further into whether certain domains are more responsible for the spread. This then brings us to the important concern of examining influential spreaders within the networks (Starbird et al., 2018).
Since many studies to date have focused on the spread
of news misinformation, we would aim to study more
carefully whether the results differ when the focus is
on health misinformation.
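A sketch of one downstream step of this plan appears below: given collected tweets and any rumor classifier, it tallies which linked domains the suspected rumor tweets point to. The keyword classifier and the data are stand-ins for illustration; tweet collection itself is assumed to happen upstream.

```python
# Sketch: classify collected tweets as rumor / non-rumor, then tally the
# domains linked from the rumor tweets. The keyword "classifier" below
# is a placeholder for a trained model.
from collections import Counter
from urllib.parse import urlparse

def tally_rumor_domains(tweets, classify):
    """classify(text) -> True if suspected rumor; returns domain counts."""
    domains = Counter()
    for text, urls in tweets:
        if classify(text):
            domains.update(urlparse(u).netloc for u in urls)
    return domains

collected = [
    ("5G towers spread the virus, retweet!", ["http://fakehealth.example/5g"]),
    ("Vaccine trial enters phase 3.", ["https://who.int/news"]),
    ("Miracle cure suppressed by doctors", ["http://fakehealth.example/cure"]),
]
suspicious = lambda t: any(w in t.lower() for w in ("miracle", "5g", "cure"))
print(tally_rumor_domains(collected, suspicious))
```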
With respect to COVID-19, it would also be instructive to learn whether opinions change over time, as new sources of information are presented to users. Towards this end, we would aim to run a longitudinal study (the same survey during different time periods) with older adult users. The specific exploration
of COVID-19 is quite enlightening, as an international collaboration among fact checkers has been established for this particular topic: 90 professionals from 39 countries have debunked 495 falsehoods in 15 languages. The #CoronaVirusFact/#DatosCoronaVirus
alliance has published six international reports in En-
glish and created a special search list on Twitter
(poy.nu/2019CoronaVirusFacts) to help citizens eas-
ily get the latest verified content online.
Moving from (Ghenai and Mejova, 2018) to detecting misinformation about COVID-19, we raise the following points. First, Twitter is currently making a great effort to keep its content rumor-free regarding COVID-19. As a result, when collecting data, we might not find a large number of tweets containing COVID-19 rumors, due to stricter regulations regarding the pandemic; having less data might affect the performance of the model. Second, (Ghenai and Mejova, 2018) did not consider the geographic distribution of rumors in the Zika or cancer studies. For COVID-19, however, we believe we need to consider geographic location when building the model, as many conspiracy theories and rumors differ by location. The misinformation tracking models in (Ghenai and Mejova, 2018) also did not look at temporal changes in rumors. Keeping track of time when building the model will be important for COVID-19, as regulations and rumors tend to change rapidly.
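To illustrate the geographic and temporal dimensions argued for above, the sketch below buckets rumor observations by (region, ISO week), giving a model a way to track how COVID-19 rumors shift across places and over time. The records and region codes are invented for illustration.

```python
# Sketch of geo-temporal bucketing for rumor tracking; the records and
# region codes are invented.
from collections import Counter
from datetime import date

records = [
    ("salt water cure", "NG", date(2020, 3, 2)),
    ("5g conspiracy", "UK", date(2020, 4, 6)),
    ("5g conspiracy", "UK", date(2020, 4, 8)),
    ("bleach cure", "US", date(2020, 4, 20)),
]

def region_week_counts(rows):
    buckets = Counter()
    for rumor, region, day in rows:
        week = day.isocalendar()[1]  # ISO week number as the time bucket
        buckets[(region, week)] += 1
    return buckets

print(region_week_counts(records))
```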
5 FINAL REMARKS
This position paper has drawn attention to the fact
that the user base of older adults merits special con-
sideration, when developing technological solutions
to assist with misinformation about health that may
occur online. We make clear that different contexts
of use arise and need to be examined: social me-
dia, fake reviews and search engines. We also clar-
ify that for these online experiences, healthcare is a
prominent topic in focus for this base of users, who
may also be particularly vulnerable. Through some
specific recommendations about what to present and
what to highlight for these users and through some
original suggestions for moving forward with tech-
nologies that we have developed to date, with greater
focus on this user base, this position paper provides
some important messages and some concrete starting
points for progress. A predominant theme of this message is enabling older adults who are online to be better educated about potential misinformation. As such,
combining the steps forward outlined in this paper
with independent outreach to these individuals, mak-
ing them aware of valued sources and the importance
of careful scrutiny of online content, is also an essen-
tial part of the solution, in our view.
While what we discover now about older adults
may not be true tomorrow for the next generation
of these users, continuing to conduct dedicated user
studies will assist in understanding this demographic;
healthcare will always be an important issue for this
user base and helping to navigate misinformation
should therefore continue to be an ongoing concern.
REFERENCES
Bailey, P. E. and Leon, T. (2019). A systematic review and
meta-analysis of age-related differences in trust. Psy-
chology and aging, 34(5):674.
Becker, S. A. (2004). A study of web usability for older
adults seeking online health resources. ACM Trans-
actions on Computer-Human Interaction (TOCHI),
11(4):387–406.
Bemelmans, R., Gelderblom, G., Jonker, P., and de Witte,
L. D. (2012). Socially assistive robots in elderly
care: a systematic review into effects and effective-
ness. Journal of the American Medical Directors As-
sociation, 13 2:114–120.e1.
Chin, J., Moeller, D., Johnson, J., Duwe, E., Graumlich, J.,
Murray, M., and Morrow, D. (2018). A multi-faceted
approach to promote comprehension of online health
information among older adults. The Gerontologist,
58:686–695.
Czaja, S. J. and Lee, C. C. (2012). Older adults and in-
formation technology: opportunities and challenges.
The Human-Computer Interaction Handbook: Fun-
damentals, Evolving Technologies and Emerging Ap-
plications. Taylor & Francis, pages 825–40.
Fornaciari, T., Cagnina, L. C., Rosso, P., and Poesio, M.
(2020). Fake opinion detection: how similar are
crowdsourced datasets to real data? Language Re-
sources and Evaluation, pages 1–40.
Fox, S. (2010). Four in ten seniors go online.
Washington, DC: Pew Internet & American Life
Project. https://www.pewresearch.org/internet/2010/
01/13/four-in-ten-seniors-go-online/.
Ghenai, A. and Mejova, Y. (2017). Catching zika fever: Ap-
plication of crowdsourcing and machine learning for
tracking health misinformation on twitter. Proceed-
ings of ICHI, pages 518–518.
Ghenai, A. and Mejova, Y. (2018). Fake cures: User-centric
modeling of health misinformation in social media.
Proceedings of CSCW, 2:1–20.
Ghenai, A., Smucker, M. D., and Clarke, C. L. (2020). A
think-aloud study to understand factors affecting on-
line health search. In Proceedings of the 2020 Con-
ference on Human Information Interaction and Re-
trieval, pages 273–282.
Ghosh, D. and Iyengar, V. (2019). Older adults are espe-
cially prone to social media bubbles. Scientific Amer-
ican Blog Network.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson,
B., and Lazer, D. (2019). Fake news on twitter
during the 2016 us presidential election. Science,
363(6425):374–378.
Guess, A., Nagler, J., and Tucker, J. (2019). Less than you
think: Prevalence and predictors of fake news dissem-
ination on facebook. Science Advances, 5.
Guidry, J. P., Jin, Y., Orr, C. A., Messner, M., and Meganck,
S. (2017). Ebola on instagram and twitter: How
health organizations address the health crisis in their
social media engagement. Public relations review,
43(3):477–486.
Hájek, P., Barushka, A., and Munk, M. (2020). Fake consumer review detection using deep neural networks integrating word embeddings and emotion mining. Neural Computing and Applications, pages 1–16.
Jia, S., Zhang, X., Wang, X., and Liu, Y. (2018). Fake
reviews detection based on lda. 2018 4th In-
ternational Conference on Information Management
(ICIM), pages 280–283.
Li, H., Fei, G., Wang, S., Liu, B., Shao, W., Mukherjee,
A., and Shao, J. (2017). Bimodal distribution and co-
bursting in review spam detection. Proceedings of the
26th International Conference on World Wide Web.
Liao, Q. V. and Fu, W.-T. (2014). Age differences in
credibility judgments of online health information.
ACM Transactions on Computer-Human Interaction
(TOCHI), 21(1):1–23.
Liu, W., He, J., Han, S., Cai, F., Yang, Z., and Zhu, N.
(2019). A method for the detection of fake reviews
based on temporal features of reviews and comments.
IEEE Engineering Management Review, 47:67–79.
Lüders, M. and Brandtzæg, P. (2017). ‘My children tell me it’s so simple’: A mixed-methods approach to understand older non-users’ perceptions of social networking sites. New Media & Society, 19:181–198.
Matejka, J., Grossman, T., and Fitzmaurice, G. W. (2011).
Ambient help. In CHI.
Mukherjee, A., Kumar, A., Liu, B., Wang, J., Hsu, M.,
Castellanos, M., and Ghosh, R. (2013a). Spotting
opinion spammers using behavioral footprints. In
KDD ’13.
Mukherjee, A., Venkataraman, V., Liu, B., and Glance, N.
(2013b). What yelp fake review filter might be doing?
In ICWSM.
Myhre, J. W., Mehl, M. R., and Glisky, E. L. (2016). Cogni-
tive Benefits of Online Social Networking for Healthy
Older Adults. The Journals of Gerontology: Series B,
72(5):752–760.
Ott, M., Choi, Y., Cardie, C., and Hancock, J. T. (2011).
Finding deceptive opinion spam by any stretch of the
imagination. In Proceedings of the 49th Annual Meet-
ing of the Association for Computational Linguistics:
Human Language Technologies, pages 309–319.
Oyeyemi, S. O., Gabarron, E., and Wynn, R. (2014). Ebola,
twitter, and misinformation: a dangerous combina-
tion? Bmj, 349:g6178.
Parmentier, A. and Cohen, R. (2019). Learning user rep-
utation on reddit. In IEEE/WIC/ACM International
Conference on Web Intelligence, pages 242–247.
Parmentier, A. and Cohen, R. (2020). Personalized multi-
faceted trust modeling in social networks. In The
Canadian Conference on Artificial Intelligence.
Seo, H., Blomberg, M., Altschwager, D., and Vu, H. T. (2020). Vulnerable populations and misinformation: A mixed-methods approach to underserved older adults’ online information assessment. New Media & Society, page 146144482092504.
Shao, J., Du, W., Lin, T., Li, X., Li, J., and Lei, H. (2019).
Credulity rather than general trust may increase vul-
nerability to fraud in older adults: A moderated me-
diation model. Journal of elder abuse & neglect,
31(2):146–162.
Shuqin, Y. and Jing, F. (2019). Fake reviews detection based
on text feature and behavior feature. 2019 IEEE con-
ference on Data Science and Systems, pages 2007–
2012.
Starbird, K., Arif, A., Wilson, T., Van Koevering, K., Yefi-
mova, K., and Scarnecchia, D. (2018). Ecosystem or
echo-system? exploring content sharing across alter-
native media domains. In Twelfth International AAAI
Conference on Web and Social Media.
Steijn, W. (2014). A developmental perspective regarding
the behaviour of adolescents, young adults, and adults
on social network sites. Journal of psychosocial re-
search, 8.
Stewart, R., Hotopf, M., Dewey, M., Ballard, C., Bisla, J.,
Calem, M., Fahmy, V., Hockley, J., Kinley, J., Pearce,
H., Saraf, A., and Begum, A. (2014). Current preva-
lence of dementia, depression and behavioural prob-
lems in the older adult care home sector: the south
east london care home survey. Age and ageing, 43
4:562–7.
Sturkenboom, M. C. (2015). The narcolepsy-pandemic in-
fluenza story: can the truth ever be unraveled? Vac-
cine, 33:B6–B13.
Sun, H., Morales, A., and Yan, X. (2013). Synthetic review
spamming and defense. In KDD ’13.
von Helversen, B., Abramczuk, K., Kopeć, W., and Nielek, R. (2018). Influence of consumer reviews on online purchasing decisions in older and younger adults. Decision Support Systems, 113:1–10.
Wu, Y., Ngai, E., Wu, P., and Wu, C. (2020). Fake online
reviews: Literature review, synthesis, and directions
for future research. Decis. Support Syst., 132:113280.
Wylie, L. E., Patihis, L., and McCuller, L. L. (2014). Mis-
information effect in older versus younger adults: A
meta-analysis and review. In The elderly eyewitness
in court, pages 52–80. Psychology Press.
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., and Zhao,
B. Y. (2017). Automated crowdturfing attacks and de-
fenses in online review systems. In Proceedings of
Conference on Computer and Communications Secu-
rity, pages 1143–1158.
Yu, J. and Moffatt, K. (2019). Improving accessibility of
social media for older adults. In Proceedings of CSCW
2019 Accessibility workshop.