User Interface Factors of Mobile UX: A Study with an Incident
Reporting Application
Lasse Einfeldt¹ and Auriol Degbelo² (https://orcid.org/0000-0001-5087-8776)
¹xdot GmbH, Münster, Germany
²Institute for Geoinformatics, University of Münster, Münster, Germany
Keywords:
Mobile UX, Mobile Form Design, Map UX, Environmental Monitoring.
Abstract:
Smartphones are now ubiquitous, yet our understanding of the user interface factors that maximize mobile user experience (UX) is still limited. This work presents a controlled experiment that investigated factors affecting the usability and UX of a mobile incident reporting app. The results indicate that the sequence of user interface elements matters when striving to increase UX, and that there is no difference between tabs and scrolling as navigation modalities in short forms. These findings can serve as building blocks for empirically-derived guidelines for mobile incident reporting.
1 INTRODUCTION
User experience (UX) has gained attention from many sides since the turn of the millennium. Although many authors noted the lack of clear definitions and of a shared understanding of user experience in early research (Forlizzi and Ford, 2000; McCarthy and Wright, 2004; Hassenzahl, 2005; Wright et al., 2005), a lot of research has dealt with user experience since then. There have been reviews of UX in human-computer interaction (e.g. Bargas-Avila and Hornbæk (2011); Pettersson et al. (2018); Kieffer et al. (2019)) and work investigating factors of mobile UX in general (e.g. Wigelius and Väätäjä (2009); Arhippainen and Tähti (2003)). Nonetheless,
our understanding of user interface factors which
maximize mobile user experience is still limited. In-
vestigating these factors is important for at least two
reasons. First, mobile devices are peculiar by virtue
of their size, their (still) relatively reduced process-
ing power, and input modalities (e.g. post-WIMP in-
teraction). That is, insights gathered while assessing
user experience on desktop (or other devices) are not
readily portable to the world of mobile UX. Second,
an understanding of these factors is important to de-
velop design heuristics, which can then be integrated
as constraints during computer-generated user inter-
face design (Oulasvirta, 2017).
This work is primarily concerned with the im-
pact of positioning and type of navigation modality
on the overall usability and user experience of mobile
geospatial applications. Incident reporting apps (of
which maps are a central component) are of interest
here for two reasons. From the theoretical point of view (and as mentioned in Kray et al. (2017); Roth (2013)), no consolidated set of guidelines has yet emerged on how to design interactions with maps. A few works (Schöning et al., 2014; Arhippainen, 2013; Kraak et al., 2018) made some useful recommendations, but these do not address the UX of mobile geospatial apps (i.e. maps)
directly. Thus, research in this area must remain ac-
tive for study outcomes to crystallize into empirically-
derived guidelines in the future. From the practi-
cal point of view, maximizing UX in the context of
incident reporting is a catalyst for uptake and pro-
longed use. Put differently, a positive user experience
is crucial to guarantee prolonged contributions by cit-
izens. As a starting point for the work, the mobile
app “Meine Umwelt” was used. The rationale for its
choice is introduced next.
Reports of ecological data are an important data
source for German federal agencies dealing with the
preservation of the environment. Reported data can
have several topics including findings of neophytes,
illegally disposed waste, or endangered animals. This
information can be used to find items of interest in
the wild easily, instead of searching for them. A
system used for reporting this data is the mobile
app “Meine Umwelt” (Kooperation-Umweltportale,
2019). “Meine Umwelt” was created by the “Kooper-
ation Umweltportale”, a cooperation of German fed-
eral states for the development of environment-related
apps. More than 10.000 downloads in the Google
Play Store, more than 3000 active users on Android,
in addition to more than 2.250 unique downloads on
iOS build the current user base. Since the app is de-
veloped for federal agencies in Germany, means for
measuring the interactions (e.g. interaction logs) with
the application cannot be implemented into the live
app in the app stores. An assessment of the interaction
with the app to identify possibilities for improvement
can only happen through a lab-based study offline.
The contributions of this work are twofold: (i)
an investigation of the effect of map placements on
the user experience within a mobile app for reporting
ecological data: a takeaway from the experiment is
that the sequence of UI elements matters when trying to maximize mobile UX; (ii) a comparison of tab-based and scrolling navigation for short forms on mobile devices: contrary to Harms et al. (2015), who found that scrolling performs worse than tabs for the interaction
with long forms on smartphones, the difference be-
tween both interaction modalities for short forms was
not significant in this work.
2 RELATED WORK
As indicated in (Ricker and Roth, 2018), mobile de-
vices enable users to volunteer their local knowl-
edge and experience while situated in place, provid-
ing timely and unstructured information about chang-
ing geographic conditions. Since “Meine Umwelt” is
dedicated to the reporting of environmental data, re-
lated work on reporting systems is briefly presented in
this section. In addition, research on factors of mobile
UX is briefly introduced, followed by work on form
design on mobile devices, and interactive maps.
Reporting Systems. Winckler et al. (2016) focused
on the UX dimensions important for incident report-
ing systems. Interviews with participants were used
to gain insights into the users’ perception of the in-
vestigated app. They found users to prefer a selection
of reportable items from a menu. This provides an
overview of all items and avoids generic forms trying
to fit all items at once. Moreover, an interactive map
was clearly demanded by the users. Another require-
ment found in their interviews was the necessity to
provide an identification, to avoid fake reports. Pic-
tures and videos of the item should also be included
in the report. Related to incident reporting are citizen
observatories, defined as “the participation of citizens
in monitoring the quality of the environment they live
in” (Liu et al., 2014). Here, no recent incident or event
is present as a motivation for the report. Instead, ob-
servatories are more focused on ecological data and
observations than on incidents in an urban context.
The acquired data should be used by the government
and if possible made available to the public as data
and/or service (Liu et al., 2014). Citizens’ observa-
tories are described as a cheap and easy way for the
administration to collect data about the environment.
The collection of data about the environment where citizens live is exactly what the prototypes presented later in this study were designed for. In addition, a mobile application makes it possible to reach a broad audience beyond the environmentalists who are already highly active in preservation. Having data reported by local citizens as stakeholders can lead to better environmental decisions (Reed, 2008). Such data is easy to collect and, as a review of ten years of community-based monitoring found, among the most efficient monitoring methods (Conrad and Hilchey, 2011). Preece (2016) ref-
erences multiple technologies and concepts used for
citizen science, like passive and active data collec-
tion, mobile apps, web portals, webcams, drones, and
gamification. All of these can be useful approaches to collect data that fulfills given requirements. In addition, these concepts can be combined, for example an active data collection platform available both as an app and a web portal. She also mentions some re-
porting systems like Floracaching, iSpotNature.org,
and iNaturalist.org. The study in this work uses ac-
tive data collection via mobile apps. The domains
of the reports are comparable to the one in iSpotNa-
ture.org, which provides a wide range of species to
report. In contrast, other reporting services like Flo-
racaching focus on one domain only, i.e. plants.
Factors of Mobile UX. An early listing of impor-
tant dimensions affecting mobile UX, in general, was
provided in (Arhippainen and Tähti, 2003). These di-
mensions were: the user, the social factors, the cul-
tural factors, the context of use, and the given prod-
uct. Along similar lines, Subramanya and Yi (2007)
divided factors contributing to a good user experi-
ence into three categories: device-related (i.e. hard-
ware), communication-related (i.e. provide a feeling
of face-to-face communication as much as possible),
and application-related (i.e. UI related); Wigelius
and Väätäjä (2009) identified five dimensions affect-
ing mobile UX: the social context, the spatial context,
the temporal context, the infrastructural as well as the
task contexts; and Ickin et al. (2012) identified seven
factors that influence mobile UX in their study: appli-
cation interface design, application performance, bat-
tery efficiency, phone features, application and con-
nectivity cost, user routines, and user lifestyle. Regarding the investigation of UX, Korhonen et al. (2010) proposed that mobile UX can be traced back to two fea-
tures of the user’s context: the triggering context (i.e.
single contextual factor that changes the user’s expe-
rience stream in a positive or negative direction), and
the core experience (i.e. experience that was most
memorable to the user during the interaction). The
lessons learned about UI factors in this work aim to extend this body of knowledge with insights from UI element design and placement.
Form Design on Mobile Devices. Harms et al.
(2015) investigated the use of long forms on smart-
phones. They tested four different designs and found scrolling to perform the worst, while the other three designs (tabs, menus, and collapsible fieldsets) worked equally well. This finding matches the framework of Zhang and Adipat (2005), who stated that the cognitive load imposed on the user should be minimized by avoiding long lists. One way to avoid scrolling in long forms is to structure the content into categories, each fitting the screen size. At the same time, the framework of Zhang and Adipat (2005) proposes to require as little interaction as possible (which switching between categories would add). Hence, the struc-
turing of content on mobile phones should be de-
signed thoughtfully. While Couper et al. (2011) inves-
tigated the placement of buttons in long forms (and
not their structuring), their finding that the design of forms can affect user behavior speaks in favor of further investigating form design in general.
These studies all examined the design of, and interac-
tion with long forms. The question remains whether
the findings are also applicable to short forms, and if
scrolling in forms should be avoided in general.
Interactive Maps. Degbelo et al. (2019) compared
the merits of form-based and map-based interaction
for geodata creation tasks. They reported that the
sweet spot of interactive maps (on desktop devices)
seems not so much their impact on productivity, but
rather their positive influence on the overall UX.
Users reported that maps are more stimulating and at-
tractive than forms for information creation tasks. Re-
garding the mobile context of interactive maps, Buri-
gat and Chittaro (2011) investigated mobile maps and
the use of three different approaches to visualizing the
references to off-screen content of maps. In another
study, Burigat et al. (2008) found zoomable maps
with an overview window (overview+detail approach)
to be useful for map interaction. A third study by
Burigat and Chittaro (2013) showed increased perfor-
mance in task completion time with overview+detail
layouts, which users could manipulate. The focus was
on interfaces, which allow map manipulation through
interactions with the overview and highlighting of ob-
jects. Finally, Horbiński et al. (2020) provided evi-
dence that positioning of UI elements on mobile maps
matters. Users indeed expressed different preferences
for the positioning of buttons for features such as ge-
olocation, search, or routing. Though these studies
provide valuable insights into the use of interactive
maps on mobile devices, they do not offer an understanding of the factors affecting mobile map user experience, which is the topic of this article.
3 USER STUDY
The goal was to investigate differences in users’ per-
ception of the navigation modalities and sequences
of the interactions. The interfaces were designed to
investigate the influence of map positioning as well
as different form designs on a smartphone. This
study can be viewed, following Schmidt (2009), as
a ‘follow-up study with a replication condition’ to
Harms et al. (2015)’s work. The focus was on two
questions. RQ1: what is the influence of different
form designs on user experience and usability? The
focus here was on short forms (i.e. up to 10 fields).
RQ2: what is the influence of UI component se-
quences on user experience and usability?
Variables. Independent variables: The sequence of
the UI components, and the design of the form UI
were variables controlled by the different prototypes;
dependent variables: usability, user experience, and
task performance. Usability and user experience were
measured using the System Usability Scale (SUS,
Brooke (1995)) and the User Experience Question-
naire (UEQ-S, Schrepp et al. (2017)) respectively.
Since it was not possible to integrate interaction logs
into “Meine Umwelt”, the time to complete the tasks
was measured using screen recordings of the smart-
phone used during the experiment for all prototypes.
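For readers unfamiliar with the two questionnaires, the following is a minimal sketch (not the analysis code used in the study) of how SUS and UEQ-S scores are typically derived for one participant; the example answer vectors are purely illustrative.

```python
# A minimal sketch (not the study's analysis code) of how SUS and
# UEQ-S scores are typically derived for one participant.

def sus_score(answers):
    """answers: 10 ratings on a 1-5 scale, in questionnaire order.
    Standard SUS scoring: odd (positively worded) items contribute
    (rating - 1), even (negatively worded) items contribute (5 - rating);
    the sum is scaled by 2.5 to obtain a 0-100 score."""
    assert len(answers) == 10
    total = sum((a - 1) if i % 2 == 1 else (5 - a)
                for i, a in enumerate(answers, start=1))
    return total * 2.5

def ueq_s_scores(answers):
    """answers: 8 ratings on a 1-7 scale, assumed to be oriented so that
    7 is the positive pole. Ratings are shifted to the -3..+3 range;
    items 1-4 form the pragmatic scale, items 5-8 the hedonic scale."""
    assert len(answers) == 8
    scaled = [a - 4 for a in answers]
    pragmatic = sum(scaled[:4]) / 4
    hedonic = sum(scaled[4:]) / 4
    return pragmatic, hedonic, sum(scaled) / 8   # overall = mean of all items

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))    # -> 80.0
print(ueq_s_scores([6, 5, 6, 5, 4, 5, 4, 5]))       # -> (1.5, 0.5, 1.0)
```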
Study Design. Besides the application “Meine
Umwelt”, two prototypes were developed for the ex-
periment (Figure 1). The prototypes were designed
to separate the effects of UI element sequence and form design. Therefore, three applications were
used: Map + Selection + Form (scroll) [hereafter, Pro-
totype 1]; Map + Selection + Form (tab) [hereafter,
Prototype 2]; Selection + Form (scroll) + map [here-
after, Base condition] (“Meine Umwelt”). A within-
subject design was used in the experiment. Each par-
ticipant completed three ecological reporting tasks,
with different levels of difficulty. The order of the
tasks was counterbalanced using a Latin Square ap-
proach (see supplementary material).
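As a hedged illustration of the counterbalancing (an assumption about the general procedure, not the authors' exact assignment, which is in the supplementary material), a cyclic 3x3 Latin square for the three applications could be generated as follows:

```python
# Sketch (assumption, not the authors' exact procedure): a cyclic 3x3
# Latin square that counterbalances the presentation order of the
# three applications (MU, P1, P2).

apps = ["MU", "P1", "P2"]

def latin_square(items):
    """Return a cyclic Latin square: each item appears exactly once
    in each row and exactly once in each column position."""
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

orders = latin_square(apps)
for participant in range(18):                  # 18 participants took part
    order = orders[participant % len(orders)]  # 6 participants per order
    print(f"Participant {participant + 1}: {order}")
```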
Tasks. Each prototype was tested with three differ-
ent tasks of equivalent properties and difficulty. Prop-
erties were the complexity of the task, the complexity
of the place, and the point of time. The complexity
of the task refers to the different forms provided in the tasks. The Base condition provides six reportable categories. All of these were used across the tasks, with some used multiple times to reach the total of nine tasks. The plant category “Ambrosia” was used in all task groups, because the complexity of this task was higher than for all other categories. The form to report Ambrosia had three more input fields than the remaining categories, resulting in a more demanding task (as more information has to be remembered and filled in by the user). Each task group had one task whose point of time required the user to go back several months in the calendar, as opposed to the other dates used (e.g. “right now”, “yesterday”). Lastly, the complexity of
the place was higher for one task per group. This was
used to make users interact with the map more than
just searching for street names. The tasks were in German and are available in the supplementary material (https://doi.org/10.6084/m9.figshare.13174550).
The study was approved by the local ethics board and
pilot tested with two participants. A result of the pilot
study was a slight adjustment of the tasks, to improve
their understandability. Also, a search bar was imple-
mented for the prototypes to enable place searches on
the map. The results from the two participants are not
included in the analysis.
Participants. 18 participants (10 females and 8
males) were recruited by e-mail and word of mouth.
Participants all spoke German because the study and
app “Meine Umwelt” were in German. The partic-
ipants included four landscape ecologists and biolo-
gists, who are involved in landscape and environmen-
tal preservation, along with other users, who might
use the app more casually. This mix ensures a repre-
sentative group to the user group of “Meine Umwelt”.
The average age of the participants was 22.7 (SD:
2.7).
4 RESULTS
Table 1 presents the results of the study. In line
with recommendations from previous work (Dragicevic, 2016), the whole analysis was done using interval
estimates. Confidence intervals provide much richer
information than p-values alone. A confidence in-
terval that does not include zero indicates statistical
significance; the tighter the interval, and the farther
from zero, the stronger the evidence. The analysis
was done using the BootES package (Kirby and Ger-
lanc, 2013) in R. The number of bootstrap resamples
was N = 2000. Task performance (i.e. TimeOnTask)
was similar across all three prototypes.
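The analysis itself was run with the BootES package in R; purely as an illustration of the interval-estimate approach (a sketch with made-up numbers, not the study data), the following Python snippet bootstraps a 95% CI for a mean within-subject difference and applies the "interval excludes zero" significance criterion:

```python
# Sketch (the study used the BootES package in R; this is a minimal
# Python illustration of the same interval-estimate idea, with made-up
# numbers, not the study data). Bootstrap a 95% CI for the mean
# within-subject difference between two conditions with N = 2000
# resamples; the difference counts as significant when the interval
# excludes zero.

import random

def bootstrap_ci_mean(diffs, n_resamples=2000, alpha=0.05):
    means = []
    for _ in range(n_resamples):
        sample = [random.choice(diffs) for _ in diffs]  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(diffs) / len(diffs), (lo, hi)

sus_p1 = [72.5, 65.0, 70.0, 77.5, 60.0, 75.0]   # illustrative SUS values only
sus_mu = [60.0, 55.0, 62.5, 70.0, 52.5, 65.0]
diffs = [a - b for a, b in zip(sus_p1, sus_mu)]

mean_diff, (lo, hi) = bootstrap_ci_mean(diffs)
print(f"mean diff = {mean_diff:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
print("significant" if lo > 0 or hi < 0 else "not significant")
```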
Regarding usability, the Base condition “Meine
Umwelt” had an average value of 59.7. The differ-
ence between the Base condition and Prototype 1 was
significant (9.4, 95%CI [1.5, 18.1]), and so was the
difference between the Base condition and Prototype
2 (7.9, 95%CI [0.7, 15.9]). In contrast, the difference
between Prototype 1 and Prototype 2 was not signifi-
cant with respect to usability ratings.
As regards user experience, the Base condition
(hereafter MU) was rated with a pragmatic quality
of 1.01 by the participants. The hedonic quality was
lower at 0.38 and the overall rating was 0.70. Overall,
users rated the user experience of the Prototype 1 (P1)
significantly higher than the user experience of the
Base condition (+0.57, 95%CI [0.25, 0.97]). Proto-
type 2 (P2) also collected significantly higher ratings
than the Base condition (+0.54, 95%CI [0.14, 0.96])
for the overall user experience. The difference be-
tween MU and P1 was significant for the pragmatic
quality (0.85, 95%CI [0.2, 1.6]), and so was the dif-
ference between the MU and P2 for the pragmatic as-
pects (0.85, 95%CI [0.01, 1.7]). The difference be-
tween P1 and P2 was not significant for the pragmatic
quality. The differences between all prototypes were
not significant for the hedonic quality.
Regarding the size of the effects, Hedges’ g was
computed for all significant differences. This is in
line with recommendations from previous work (e.g.
Sauro, 2014; Lakens, 2013) for studies with fewer than
20 participants. The values obtained were 0.52 (MU
vs P1, usability); 0.46 (MU vs P2, usability); 0.71
(MU vs P1, user experience total); 0.56 (MU vs P2,
user experience total); 0.50 (MU vs P1, user experi-
ence pragmatic); and 0.44 (MU vs P2, user experience
pragmatic). These suggest a medium effect. The corresponding Cohen's d values were 0.54 (MU vs P1, usability); 0.48 (MU vs P2, usability); 0.74 (MU vs P1, user experience total); 0.59 (MU vs P2, user experience total); 0.53 (MU vs P1, user experience pragmatic); and 0.46 (MU vs P2, user experience pragmatic).
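The relation between the reported Cohen's d and Hedges' g values follows from the standard small-sample correction; the sketch below (an illustration of that correction, assuming df = n - 1 for the within-subject comparisons, which is an assumption about the estimator rather than a description of the BootES internals) is consistent, up to rounding, with the pairs of values reported above.

```python
# Sketch: the standard small-sample correction that turns Cohen's d
# into Hedges' g, g = d * (1 - 3 / (4*df - 1)). Assuming df = n - 1
# with n = 18 participants (an assumption about the estimator), the
# correction approximately reproduces the reported pairs of values,
# e.g. d = 0.54 -> g ~ 0.52 and d = 0.74 -> g ~ 0.71.

def hedges_g(d, n):
    df = n - 1
    return d * (1 - 3 / (4 * df - 1))

for d in (0.54, 0.48, 0.74, 0.59, 0.53, 0.46):
    print(d, "->", round(hedges_g(d, 18), 2))
```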
Figure 1: Top: P1 & P2 (map as landing screen) vs MU (map integrated into a form); Middle: P1 with scrollable form vs P2
with tabs to structure the form vs MU (Baseline); Bottom: Example species type in P1 & P2 vs example species type in MU.
Gender effects (male vs female) and background
domain effects (landscape ecologist/biologist vs oth-
ers) were tested. These were all non-significant on
all dimensions (TimeOnTask, usability, and user ex-
perience). A learning effect was apparent in the data
for task completion time. For 16 of 18 participants, the first app interaction took the most time. Nevertheless, because of the Latin square design of the experiment and the resulting different app orders across participants, the learning effect was spread over all applications and therefore did not bias the results of any single app. The main difference between the tested applications was the positioning of the map and the form design. Both prototypes, which use the map as the landing screen of the application, scored higher on both usability and user experience. The influence of form design was not apparent, as both prototypes scored similar values and completed in similar times. These
results are now discussed in detail.
5 DISCUSSION
This section touches upon five topics: interpretation
of the task performance results, the effect of form de-
sign on UX (RQ1), the effect of the sequence of UI
elements on UX (RQ2), implications of the results for
theory and design, and the limitations of the study.
Functional Similarity of the Apps. The user study showed similar task performance across the tested applications. All three required approximately the same time to complete the tasks (Table 1). This indicates that the applications are equally suited for the task at hand, namely the reporting of ecological data. There were no clear indications of any app hindering users from effectively performing the provided tasks. A goal of designing
the two additional prototypes was to provide a simi-
lar design, to be able to compare outcomes. With all
applications performing on a similar level regarding
the time required to solve the tasks, this goal seems to
have been achieved.
Effect of Form Design. Considering the first re-
search question, the results have shown no signif-
icant difference between scrollable forms and tab-
structured forms. Put differently, the sequences “Map
+ Selection + Form (scroll)” [P1] and “Map + Selec-
tion + Form (tab)” [P2] can be considered equivalent
with regard to task completion, usability and UX. The
main implication is that the findings of Harms et al.
(2015), who found scrolling to be worse (in terms of usability) for navigation in long forms, cannot be generalized to forms of all lengths. The usability of forms with up to 10 input fields is not significantly im-
proved by a tab-based design. Thus, form designs do
not noticeably influence usability and user experience
in all cases.
Effect of Sequence of UI Elements. Significant
differences between the Base condition (MU) and
Prototype 1 were found for the user experience rat-
ings. The differing variable between the two applica-
tions was the positioning of the map. Looking closely
at the dimensions of user experience, hedonic UX was
not significantly impacted, while pragmatic UX was
rated differently by the participants. Likewise, usabil-
ity ratings were much higher for Prototype 1. This
answers the second research question: positioning the
interactive map before the form in the sequence of in-
teractions results in better user experience and usabil-
ity. Given that perceived usability is strongly associ-
ated with pragmatic user experience (see e.g. Diefen-
bach et al. (2014); Pettersson et al. (2018)), the con-
vergence of usability and UEQ ratings during the ex-
periment increases our confidence that pragmatic user
experience has been positively affected by the change
in the positioning of the map component. In addition,
given that the sequences “Map + Selection + Form
(scroll)” [P1] and “Map + Selection + Form (tab)”
[P2] can be considered equivalent (see above), the
fact that similar observations are made for Prototype
2 also provides confirmatory evidence for the result.
There may be several reasons for this, and these are
now discussed using two theories: ‘cognitive fit the-
ory’ (Vessey, 1991) and ‘memory-based theory of se-
quence effects’ (Cockburn et al., 2017).
One reason could be that the sequence ‘location reporting => species type selection => species reporting’ (sequence 1) provides a better cognitive fit than the sequence ‘species type selection => species reporting => location reporting’ (sequence 2). Cognitive fit theory (Vessey, 1991) posits that problem-solving efficiency and effectiveness increase when problem-solving aids (in this context, UI elements) support
strategies required to perform a task. In the case of
sequence 2, the map popping up to enter the location
interrupts the interaction of filling a form (see Fig-
ure 1, left). In contrast, sequence 1 provides a full-
screen map interface to enter the location and addi-
tionally encapsulates the location input. Hence, users
are done with entering location data when they start
to provide additional data (e.g. species type, date,
height/number of plants, see Figure 1, middle) using
forms. Thus, sequence 1 offers a ‘cleaner’ task sepa-
Table 1: Results. Average values per prototype are reported first, followed by the average of the differences between the prototypes. Cells highlighted in blue indicate significant values (i.e. confidence intervals that do not include zero).

Time (seconds):   MU 318 [298, 339];  P1 306 [279, 335];  P2 323 [287, 379]
Time (Diff):      P1 vs MU -13 [-42, 23];  P2 vs MU 4 [-38, 56];  P1 vs P2 -17 [-84, 31]
SUS score:        MU 59.7 [52.1, 66.0];  P1 69.2 [65.3, 72.4];  P2 67.6 [63.2, 70.8]
SUS (Diff):       P1 vs MU 9.4 [1.5, 18.1];  P2 vs MU 7.9 [0.69, 15.83];  P1 vs P2 1.53 [-3.3, 5.0]
UEQ (pragmatic):  MU 1.01 [0.32, 1.65];  P1 1.86 [1.57, 2.10];  P2 1.86 [1.38, 2.18]
UEQ prag. (Diff): P1 vs MU 0.85 [0.22, 1.63];  P2 vs MU 0.85 [0.01, 1.69];  P1 vs P2 0 [-0.38, 0.44]
UEQ (hedonic):    MU 0.38 [-0.03, 0.82];  P1 0.64 [0.13, 1.14];  P2 0.61 [0.17, 1.03]
UEQ hed. (Diff):  P1 vs MU 0.26 [-0.05, 0.52];  P2 vs MU 0.24 [-0.01, 0.44];  P1 vs P2 0.23 [-0.22, 0.25]
UEQ (total):      MU 0.70 [0.23, 1.17];  P1 1.27 [0.98, 1.51];  P2 1.24 [0.97, 1.60]
UEQ tot. (Diff):  P1 vs MU 0.57 [0.25, 0.97];  P2 vs MU 0.54 [0.14, 0.96];  P1 vs P2 0.03 [-0.19, 0.30]
ration and hence a better cognitive fit. It is worth men-
tioning that cognitive fit, in the strictest sense, only predicts improved efficiency and effectiveness under certain conditions; these improvements were not observed here (see Table 1).
Yet, if the scope of cognitive fit theory is broadened to
non-instrumental aspects of interaction, the explana-
tion above is plausible. A single experiment may not
justify broadening the scope of the theory, but based
on the results, the following provisional postulate can
be formulated: user experience and/or performance
increase when problem-solving aids support strate-
gies required to perform a task.
Another reason for differences in the scores might
be that SUS and UEQ measure perceived usability and perceived UX, respectively. Therefore, the first im-
pression of the map as the landing screen for an ap-
plication might simply be perceived more positively.
This explanation would concur with Lindgaard et al.
(2006), who found that users form their judgment
about the visual appeal of web pages during the first
50 milliseconds of their interaction. That finding was
confirmed in (Tractinsky et al., 2006), who also pro-
vided evidence that first impressions about the attrac-
tiveness of web pages are not only formed quickly, but also lasting. Since users find maps more stimulating and attractive than forms for information provision (see Degbelo et al. (2019)), it is plausible that their first impression when seeing the map first was much more positive and lasted until the end of the interac-
tion. Besides, given that UX was measured after the
experiment, a memory-based view of UX could be
valuable at this point. According to memory-based
theories of UX (Cockburn et al., 2017), three factors
influence people’s memory of experiences: primacy
(i.e. over-weighted influence of the initial moments of
an experience), recency (i.e. over-weighted influence
of the terminating moments of the experience), and
peak-end (i.e. over-weighted influence of the most in-
tense moments of the experience). Since the map was
present at the beginning in sequence 1 and at the end
in sequence 2, producing in one case a positive pri-
macy effect, and in the other a positive recency effect,
the results suggest that there may be cases where pri-
macy effects with a UI element weigh more strongly than recency effects with that element. The explanations
provided in this and the previous paragraph are ar-
guably tentative, but are also useful hypotheses to cor-
roborate in follow-up experiments.
Implications for Theory and Design. “Meine
Umwelt” is an incident reporting app, and as such,
the lessons learned apply to incident reporting apps
more broadly. As to theory, Winckler et al. (2013)
provided a comprehensive model for tasks related to
incident reporting. In their model, they suggested
that sequence does not matter for the sub-tasks: ‘de-
scribe the incident’, ‘locate the incident’, and ‘inform
time for the incident’. The results above help make the scope of their theoretical model more precise: sequence does not matter from the task completion time point of view, but does matter from the pragmatic UX point of view. As to design, Norman (2009), calling on designers to design for memorable experiences, formulated these rules of thumb: “What is the most
important part of an experience? Psychologists em-
phasize what they call the primacy and recency ef-
fects, with recency being the most important. In other
words, what is most important? The ending. What is
most important after that? The start. So make sure the
beginning and the end are wonderful”. The end may
not always be more important than the start according
to the results above.
Limitations. Though the prototypes were designed
as closely as possible to the baseline application
“Meine Umwelt”, some minor differences can still
be detected when comparing the applications side by
side (e.g. captions in the pictures, or the choice of
the map tile provider, see Figure 1, left). These dif-
ferences could not be removed, and stem from the
different technologies used for the app development.
“Meine Umwelt” uses an API key for Google Maps
(and thus Google Maps as a base map) while the
prototypes implemented used Open Street Map as a
base map. Besides, while “Meine Umwelt” used the
Cordova framework, React Native was used to de-
velop the prototypes. The native components pro-
vided by React Native have some slightly different
properties than those of Cordova (e.g. the placeholders for <input> tags are displayed in a slightly different way, see Figure 1, middle). These differences (i.e. map [googlemaps] vs map [openstreetmap], and form [cordova] vs form [reactnative]) are minor nonetheless and are mentioned here for the sake of completeness only. There is no recent study known to the authors comparing, on mobile devices, the user experience of Google Maps vs. OpenStreetMap, or of Cordova apps vs. React Native apps. Nonetheless, Schnur et al. (2018) reported that the perceived visual complexity of Google Maps was consistently lower than that of OpenStreetMap for several levels of detail. Tuch et al. (2009), in the context of websites, reported an inverse-linear relationship between visual complexity and pleasure: start pages with low visual complexity were rated by users as more pleasurable. A replication of several studies (Miniukovich and Marchese, 2020) confirmed this inverse-linear relationship for websites, and Miniukovich and De Angeli (2014) observed a negative correlation between visual complexity and aesthetics for mobile apps as well. Putting these findings together suggests that, if the difference map [googlemaps] vs. map [openstreetmap] had been important, P1 & P2 (OpenStreetMap, Figure 1, top left) would have obtained lower user experience ratings than MU (Google Maps, Figure 1, top right). The opposite was observed during the study.
6 CONCLUSION
This article has investigated the effect of sequence of
UI elements and type of forms within a mobile appli-
cation for reporting ecological data. A user study has
shown significant preferences for the map as a start-
ing element, instead of the map as an ending element.
Besides, a tab-based, structured design was tested
against a scrollable view. Results have shown no
significant difference between these designs in short
forms. With respect to earlier research (Harms et al.,
2015), it can be concluded that scrolling does not perform worse for all form lengths. In short, the
sequence of user interface elements on mobile devices
matters, and the type of form design matters, depend-
ing on the length of the forms. Designers should keep
in mind both (besides button placement identified in
previous work Horbi
´
nski et al. (2020)) while build-
ing their next incident reporting app. Future work
can replicate this study, for instance, factoring in data
about the complexity of the base maps, and collect-
ing qualitative data about what users dis/liked. Ad-
ditionally, future studies can further investigate why
perceived user experience and usability are better for
some sequences than for others, based on a revised
version of the cognitive fit theory, and memory-based
theories of UX.
ACKNOWLEDGMENTS
We thank xdot GmbH for their support and for sharing
the code of “Meine Umwelt”.
REFERENCES
Arhippainen, L. (2013). A tutorial of ten user experience heuristics. In Lugmayr, A., Franssila, H., Paavilainen, J., and Kärkkäinen, H., editors, International Conference on Making Sense of Converging Media, Academic MindTrek ’13, pages 336–337, Tampere, Finland. ACM.
Arhippainen, L. and Tähti, M. (2003). Empirical evaluation of user experience in two adaptive mobile application prototypes. In MUM 2003. Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, pages 27–34. Linköping University Electronic Press.
Bargas-Avila, J. A. and Hornbæk, K. (2011). Old wine in
new bottles or novel challenges: a critical analysis of
empirical studies of user experience. In Tan, D. S.,
Amershi, S., Begole, B., Kellogg, W. A., and Tungare,
M., editors, Proceedings of CHI ’11, pages 2689–
2698, Vancouver, British Columbia, Canada. ACM
Press.
Brooke, J. (1995). SUS - A quick and dirty usability scale.
Usability Evaluation in Industry, 189:4–7.
Burigat, S. and Chittaro, L. (2011). Visualizing references
to off-screen content on mobile devices: A compari-
son of Arrows, Wedge, and Overview + Detail. Inter-
acting with Computers, 23(2):156–166.
Burigat, S. and Chittaro, L. (2013). On the effectiveness
of Overview+Detail visualization on mobile devices.
Personal and Ubiquitous Computing, 17(2):371–385.
Burigat, S., Chittaro, L., and Parlato, E. (2008). Map, di-
agram, and web page navigation on mobile devices:
the effectiveness of zoomable user interfaces with
overviews. In Proceedings of MobileHCI ’08, pages
147–156.
Cockburn, A., Quinn, P., and Gutwin, C. (2017). The effects
of interaction sequencing on user experience and pref-
erence. International Journal of Human-Computer
Studies, 108:89–104.
Conrad, C. C. and Hilchey, K. G. (2011). A review of
citizen science and community-based environmental
monitoring: Issues and opportunities. Environmental
Monitoring and Assessment, 176(1-4):273–291.
Couper, M. P., Baker, R., and Mechling, J. (2011). Place-
ment of navigation buttons in Web surveys. Survey
Practice, 4(1):11.
Degbelo, A., Kruse, J., and Pfeiffer, M. (2019). Interac-
tive maps, productivity and user experience: A user
study in the e-mobility domain. Transactions in GIS,
23(6):1352–1373.
Diefenbach, S., Kolb, N., and Hassenzahl, M. (2014).
The ’hedonic’ in human-computer interaction: his-
tory, contributions, and future research directions. In
Wakkary, R., Harrison, S., Neustaedter, C., Bardzell,
S., and Paulos, E., editors, Designing Interactive Sys-
tems Conference 2014 (DIS’14), pages 305–314, Van-
couver, British Columbia, Canada. ACM.
Dragicevic, P. (2016). Fair statistical communication in
HCI. In Robertson, J. and Kaptein, M., editors,
Modern Statistical Methods for HCI, pages 291–330.
Springer, Cham.
Forlizzi, J. and Ford, S. (2000). The building blocks of
experience: an early framework for interaction de-
signers. In Proceedings of the 3rd Conference on Designing Interactive Systems, pages 419–
423. ACM.
Harms, J., Kratky, M., Wimmer, C., Kappel, K., and
Grechenig, T. (2015). Navigation in long forms on
smartphones: scrolling worse than tabs, menus, and
collapsible fieldsets. In Abascal, J., Barbosa, S., Fet-
ter, M., Gross, T., Palanque, P., and Winckler, M.,
editors, Human-Computer Interaction INTERACT
2015, pages 333–340. Springer, Cham.
Hassenzahl, M. (2005). The Thing and I: Understanding the
Relationship Between User and Product. In Funology:
From Usability to Enjoyment, chapter 3, pages 31–42.
Kluwer Academic Publishers, 2 edition.
Horbiński, T., Cybulski, P., and Medyńska-Gulij, B. (2020).
Graphic design and button placement for mobile map
applications. The Cartographic Journal, 0(0):1–13.
Ickin, S., Wac, K., Fiedler, M., Janowski, L., Hong, J.-H.,
and Dey, A. K. (2012). Factors influencing quality
of experience of commonly used mobile applications.
IEEE Communications Magazine, 50(4):48–56.
Kieffer, S., Rukonic, L., Kervyn de Meerendré, V., and
Vanderdonckt, J. (2019). Specification of a UX pro-
cess reference model towards the strategic planning of
UX activities. In Chessa, M., Paljic, A., and Braz,
J., editors, Proceedings of the 14th International Joint
Conference on Computer Vision, Imaging and Com-
puter Graphics Theory and Applications, pages 74–
85, Prague, Czech Republic. SCITEPRESS.
Kirby, K. N. and Gerlanc, D. (2013). BootES: An R pack-
age for bootstrap confidence intervals on effect sizes.
Behavior Research Methods, 45(4):905–927.
Kooperation-Umweltportale (2019). Meine Umwelt.
https://play.google.com/store/apps/details?
id=de.bw.umwelt.meineumwelt.
Korhonen, H., Arrasvuori, J., and Väänänen-Vainio-Mattila, K. (2010). Analysing user experience of
Mattila, K. (2010). Analysing user experience of
personal mobile products through contextual factors.
In Angelides, M. C., Lambrinos, L., Rohs, M., and
Rukzio, E., editors, Proceedings of the 9th Interna-
tional Conference on Mobile and Ubiquitous Mul-
timedia (MUM 2010), page 11, Limassol, Cyprus.
ACM.
Kraak, M., Ricker, B., and Engelhardt, Y. (2018). Chal-
lenges of mapping sustainable development goals in-
dicators data. ISPRS International Journal of Geo-
Information, 7(12):482.
Kray, C., Schmid, F., and Fritze, H. (2017). Guest editorial:
map interaction. GeoInformatica, 21(3):573–576.
Lakens, D. (2013). Calculating and reporting effect sizes
to facilitate cumulative science: a practical primer for
t-tests and ANOVAs. Frontiers in Psychology, 4.
Lindgaard, G., Fernandes, G., Dudek, C., and Brown, J.
(2006). Attention web designers: You have 50 mil-
liseconds to make a good first impression! Behaviour
& Information Technology, 25(2):115–126.
Liu, H.-Y., Kobernus, M., Broday, D., and Bartonova, A.
(2014). A conceptual approach to a citizens’ obser-
vatory supporting community-based environmental
governance. Environmental Health, 13(1):107.
McCarthy, J. and Wright, P. (2004). Technology as experi-
ence, volume 11. MIT Press.
Miniukovich, A. and De Angeli, A. (2014). Visual impres-
sions of mobile app interfaces. In Roto, V., Häkkilä, J., Väänänen-Vainio-Mattila, K., Juhlin, O., Olsson,
T., and Hvannberg, E. T., editors, Proceedings of
NordiCHI ’14, pages 31–40, Helsinki, Finland. ACM
Press.
Miniukovich, A. and Marchese, M. (2020). Relationship
between visual complexity and aesthetics of web-
pages. In Bernhaupt, R., Mueller, F. F., Verweij, D.,
Andres, J., McGrenere, J., Cockburn, A., Avellino, I.,
Goguey, A., Bjørn, P., Zhao, S., Samson, B. P., and Ko-
cielnik, R., editors, Proceedings of CHI 2020, pages
1–13, Honolulu, Hawaii, USA. ACM.
Norman, D. A. (2009). Memory is more important than
actuality. interactions, 16(2):24.
Oulasvirta, A. (2017). User interface design with combina-
torial optimization. Computer, 50(1):40–47.
Pettersson, I., Lachner, F., Frison, A.-K., Riener, A., and
Butz, A. (2018). A Bermuda Triangle? A review of
method application and triangulation in user experi-
ence evaluation. In Mandryk, R. L., Hancock, M.,
Perry, M., and Cox, A. L., editors, Proceedings of CHI
’18, pages 1–16, Montreal, Quebec, Canada. ACM
Press.
Preece, J. (2016). Citizen science: new research challenges
for human–computer interaction. International Jour-
nal of Human-Computer Interaction, 32(8):585–612.
Reed, M. S. (2008). Stakeholder participation for environ-
mental management: A literature review. Biological
Conservation, 141(10):2417–2431.
Ricker, B. and Roth, R. (2018). Mobile maps and respon-
sive design. In Wilson, J. P., editor, Geographic In-
formation Science & Technology Body of Knowledge,
number Q2.
Roth, R. E. (2013). Interactive maps: What we know and
what we need to know. Journal of Spatial Information
Science, 6:59–115.
Sauro, J. (2014). MeasuringU: Understanding effect sizes
in user research. https://measuringu.com/effect-sizes/,
last accessed: November 03, 2020.
Schmidt, S. (2009). Shall we really do it again? The pow-
erful concept of replication is neglected in the social
sciences. Review of General Psychology, 13(2):90–
100.
Schnur, S., Bektaş, K., and Çöltekin, A. (2018). Measured
and perceived visual complexity: a comparative study
among three online map providers. Cartography and
Geographic Information Science, 45(3):238–254.
Schöning, J., Hecht, B. J., and Kuhn, W. (2014). Inform-
ing online and mobile map design with the collec-
tive wisdom of cartographers. In Wakkary, R., Har-
rison, S., Neustaedter, C., Bardzell, S., and Paulos,
E., editors, Designing Interactive Systems Conference
2014, pages 765–774, Vancouver, British Columbia,
Canada. ACM.
Schrepp, M., Hinderks, A., and Thomaschewski, J. (2017).
Design and evaluation of a short version of the user ex-
perience questionnaire (UEQ-S). International Jour-
nal of Interactive Multimedia and Artificial Intelli-
gence, 4(6):103.
Subramanya, S. and Yi, B. K. (2007). Enhancing the user
experience in mobile phones. Computer, 40(12):114–
117.
Tractinsky, N., Cokhavi, A., Kirschenbaum, M., and Sharfi,
T. (2006). Evaluating the consistency of immedi-
ate aesthetic perceptions of web pages. International
Journal of Human-Computer Studies, 64(11):1071–
1083.
Tuch, A. N., Bargas-Avila, J. A., Opwis, K., and Wilhelm,
F. H. (2009). Visual complexity of websites: Effects
on users’ experience, physiology, performance, and
memory. International Journal of Human-Computer
Studies, 67(9):703–715.
Vessey, I. (1991). Cognitive fit: A theory-based analysis of
the graphs versus tables literature. Decision Sciences,
22(2):219–240.
Wigelius, H. and Väätäjä, H. (2009). Dimensions of context affecting user experience in mobile work. In Gross, T., Gulliksen, J., Kotzé, P., Oestreicher, L., Palanque,
P. A., Prates, R. O., and Winckler, M., editors, Pro-
ceedings of INTERACT 2009, pages 604–617, Upp-
sala, Sweden. Springer.
Winckler, M., Bach, C., and Bernhaupt, R. (2013). Identi-
fying user experience dimensions for mobile incident
reporting in urban contexts. IEEE Transactions on
Professional Communication, 56(2):97–119.
Winckler, M., Bernhaupt, R., and Bach, C. (2016). Identifi-
cation of UX dimensions for incident reporting sys-
tems with mobile applications in urban contexts: a
longitudinal study. Cognition, Technology and Work,
18(4):673–694.
Wright, P., McCarthy, J., and Meekison, L. (2005). Making
Sense of Experience. In Funology: From Usability to
Enjoyment, pages 43–53. Kluwer Academic Publish-
ers, 2 edition.
Zhang, D. and Adipat, B. (2005). Challenges, Methodolo-
gies, and Issues in the Usability Testing of Mobile Ap-
plications. International Journal of Human-Computer
Interaction, 18(3):293–308.