Virtual Avatar Creation Support System for Novices with
Gesture-Based Direct Manipulation and Perspective Switching
Junko Ichino (https://orcid.org/0000-0001-7048-5339) and Kokoha Naruse
Faculty of Informatics, Tokyo City University, Yokohama, Japan
Keywords: Virtual Reality, Avatar, Creativity, Gesture-Based Direct Manipulation, Life-Size, First-Person Perspective, Third-Person Perspective, Embodied Interaction.
Abstract: Given the increasing importance of virtual spaces as environments for self-expression, it is necessary to
provide a method for users to create self-avatars as they wish. Most existing avatar-creation software requires users to have knowledge of 3D modeling or to set various parameters, such as leg lengths and sleeve lengths, individually by moving sliders or through keyboard input; these operations are not intuitive and require time to learn. Thus, we propose a system, targeted at novices in avatar creation, that supports the creation of human-like avatars through intuitive operations in virtual spaces. The system is characterized by the
following two points: (1) users can directly manipulate their own life-size self-avatars in virtual spaces using
gestures and (2) users can switch between first-person and third-person perspectives. We conducted a
preliminary user study using our prototype. The results indicate the basic effectiveness of the proposed system, while also demonstrating that substantial room for improvement remains in the guide objects used to manipulate the manipulable parts.
1 INTRODUCTION
Research on the application of computer technology
in creative contexts has been conducted for many
years. However, tools that make use of advanced technology do not fully activate human creativity and sensitivity but rather inhibit them (Black, 1990). When users hold a mouse or keyboard capable of precise and detailed input, and view precise, well-ordered objects on a display as small as 20 to 30 inches, they become absorbed in the details and fall into mere “tasks.” As a result, an important mode of thought, that of developing an overall mental image of the desired creation and making bold alterations to the creative work, can be lost. This leads to designs that are superficially pretty but lacking in content and substance (Tano, 1999).
On the other hand, embodied interaction is
attracting attention as a field that combines body,
mind, cognition, and emotion when people interact
with a digital environment (England, et al., 2009). In
Where the Action Is (Dourish, 2001), Dourish states
that while our bodies are our most familiar presence,
it is a presence that is difficult to consider objectively;
consequently, interaction design has mainly been
based on the Cartesian principle of the mind–body
dichotomy. He stated that future interaction design
should carefully consider users’ Heideggerian state of
immersion in the everyday world, namely
embodiment, and that digital technology must be used
appropriately. That is, embodiment refers not only to the body itself, but also to a wide range of phenomena, including emotions, feelings, and intuition, that arise through the body. Therefore, in a digital environment that is designed with embodiment in mind, the user experience will rely on rich mappings among physicality, body gestures and movements, tangible artifacts, and the interface, resulting in experiences that relate directly to the feelings and actions of users.
In this study, we explored the possibilities of embodied interaction in a creative context: virtual avatar creation. Avatars serve as the alter egos of real-world users in virtual spaces. Two main means
are available for users to obtain their own avatars:
purchasing off-the-shelf avatars and creating their
own avatars. Users may be unable to purchase an
ideal off-the-shelf avatar. Moreover, most existing
avatar-creation software requires users to have
knowledge of 3D modeling or to set individual parameters, such as the leg length and sleeve length, by moving sliders or through keyboard input; these operations are not intuitive and require time to learn. This may be a
barrier for users who wish to create and personalize
their own avatars (Freeman, et al., 2020).
Thus, we propose a system that supports the
creation of human-like avatars through intuitive
operations in virtual spaces. Specifically, (1) users
can directly manipulate life-size self-avatars in virtual
spaces using gestures and (2) users can switch
between first-person and third-person perspectives.
The target users are novices who are beginning to take
an interest in avatar creation.
2 RELATED WORK
2.1 Studies on Automatic Generation of
Virtual Avatars
Various methods have been proposed for the
automatic generation of virtual avatars, e.g., (Ichim,
et al., 2015; Nagano, et al., 2018; Li, et al., 2019;
Murthy, et al., 2021; Hu, et al., 2021). The majority
of these methods use photographs of the users as input
to reproduce their real-world appearance in the virtual
world (Ichim, et al., 2015; Nagano, et al., 2018; Li, et
al., 2019; Murthy, et al., 2021; Hu, et al., 2021) or to
emphasize the features of their real-world appearance
(Hu, et al., 2021). Although techniques based on
automatic generation offer the advantage of rapid and
efficient avatar creation, avatars that are desired by
the users cannot always be generated. Given the
increasing importance of virtual spaces as
environments for self-expression, it is necessary to
support not only the automatic generation of avatars from user photographs, but also the interactive creation of avatars that users desire, through interaction between the user and the system. However, to date, limited research has
been conducted in this area.
2.2 Software to Support Creation of
Virtual Avatars
We analyzed existing software that is available for
virtual avatar creation and divided them into four
types, as detailed in the following sub-sections. All
the types enable users to create avatars while
interacting with the system.
2.2.1 3D Modeling Software
Blender (blender.org, 2022) and Metasequoia4
(tetraface Inc., 2022) (Figure 1, upper left) are
software programs that can create 3D models in
general, including virtual avatars. These programs
enable users to create avatars with complex shapes
that are difficult to create with other software types,
and thus, offer a high degree of creative freedom.
However, several challenges exist: knowledge of
3D modeling is required; the operation is
complicated; and it is difficult to imagine the shape,
thickness, and size of the 3D avatar on a 2D display,
and ultimately, how the avatar will be represented in
virtual spaces.
2.2.2 Avatar-Creation Software for Desktop
PCs
VRoid Studio (Pixiv Inc., 2021) (Figure 1, lower left)
is an example of software that is specialized for avatar
creation on desktop PCs. In VRoid Studio, users
generally adjust the parameters relating to the shape
of the avatar by moving the slider or entering
numerical values using the keyboard. Users can select
parts or draw hairstyles, faces, and clothes directly
using a tablet pen. Although the degree of freedom
offered is slightly lower than that of the
aforementioned 3D modeling software, the operation
is less complicated.
However, the limitations are as follows: numerous
parameters need to be adjusted; it is difficult to map
the parameters to the avatar geometry; and it is
difficult to imagine the shape, thickness, and size of
the 3D avatar on a 2D display, and ultimately, how
the avatar will be represented in virtual spaces.
2.2.3 Avatar-Creation Software for
Smartphones
Software that is specialized for avatar creation on
smartphones includes Custom Cast (Custom Cast
Inc., 2018) (Figure 1, right). In Custom Cast, users
select body parts to create and adjust the shape using
slider operations. As with the avatar creation software
for desktop PCs described above, the complexity of
the operation is not very high.
However, the difficulties are as follows: it is
cumbersome to operate the entire app using fingertips
on a small screen (i.e., the fat finger problem); and the
small screen makes it more difficult than with
software for desktop PCs to imagine the shape,
thickness, and size of the 3D avatar, and ultimately,
how the avatar will be represented in virtual spaces.
Figure 1: Examples of existing software for virtual avatar creation.
2.2.4 Avatar-Creation Capabilities on
Social Virtual Reality Platforms
Cluster is an example of a social virtual reality (VR)
platform with avatar-creation capabilities (Cluster
Inc., 2017) (Figure 1, center). Cluster enables users to
select costumes in virtual spaces for the default avatar
and adjust the avatar shape using slider operations.
The interface is similar to that of avatar-creation
software for smartphones. In the software that was
described in the previous three sub-sections, users
view and create their avatars on a 2D display (i.e.,
third-person perspective), whereas Cluster allows
users to view and create their avatars in virtual spaces
(i.e., first-person perspective).
However, the interface for creating avatars is
almost the same as that of the avatar creation software
for smartphones, which is not necessarily optimal for
creating avatars in 3D space. Moreover, although
Cluster supports both third- and first-person
perspectives, users can only use one of these
perspectives during the avatar creation.
2.3 Existing Software Difficulties
Table 1 summarizes the difficulties associated with
the software that is used to support the creation of
virtual avatars. On this basis, we analyze the
problems with existing software for creating virtual
avatars.
A common problem in many existing software programs is that they force users, who are engaged in the creative activity of designing and creating avatars, to adjust parameters. (i) To embody the ideal avatar that is initially drawn in the user’s mind, the ideal avatar must be converted into several parameters that are convenient for the computer; this mapping between parameters and the avatar’s shape and size is not easy for novices. (ii) To convey these parameters to the computer, users must perform detailed tasks, such as moving sliders and entering keyboard input; this is tedious for novices. Furthermore, as users repeat steps (i) and (ii), they become absorbed in the “task” of adjusting the parameters and, as a result, may lose the broader perspective of viewing the avatar’s appearance as a whole and of thinking flexibly and freely.
Another frequent issue in existing software
packages is that users cannot experience their own
avatars in virtual spaces during the creation process.
Similar to the concept of purchasing or making
clothing in the real world, users wish to be aware of
how their avatar appears to them (first-person
perspective) and how it appears to others (third-
person perspective) when creating their own avatar.
In many existing software programs, users create their avatars only from the third-person perspective, which may result in a large discrepancy between the avatar that the user originally wished to create and the completed avatar when the latter is viewed from the first-person perspective in virtual spaces. Although Cluster supports both first- and third-person perspectives, it is not possible to switch between the two during the creation process.
Table 1: Software difficulties in supporting virtual avatar creation.

3D modeling software:
- Knowledge of 3D modeling is required
- Complex operation
- Difficult to imagine the shape, thickness, and size of the 3D avatar on a 2D display, and ultimately, how the avatar will be represented in virtual spaces

Avatar creation software for desktop PCs:
- Many parameters to adjust
- Difficult to map the parameters to the avatar geometry
- Difficult to imagine the shape, thickness, and size of the 3D avatar on a 2D display, and ultimately, how the avatar will be represented in virtual spaces

Avatar creation software for smartphones:
- Fat finger problem
- Difficult to imagine the shape, thickness, and size of the 3D avatar on a 2D display, and ultimately, how the avatar will be represented in virtual spaces

Avatar creation capabilities on social VR platforms:
- The same interface as smartphone avatar creation software; may not be optimal for avatar creation in 3D space
- Users can only use either the third- or first-person perspective during avatar creation
3 SYSTEM CONCEPTS
Table 2 displays the system concepts that are derived
from the analysis of problems with existing software
for virtual-avatar creation in the previous section.
Concept (1), namely gesture-based direct manipulation of life-size avatars, is derived from the problem described in the second paragraph of Section 2.3. Users should be able to change the body, hair, and clothes of a life-size avatar by directly touching them with their hands, rather than by setting parameters, so that they can intuitively embody the ideal avatar in their minds.
Concept (2), namely switchable perspectives (first-person or third-person), is derived from the problem described in the final paragraph of Section 2.3. Users should be able to create their avatars in virtual spaces from both the first- and third-person perspectives, so that they can determine, while creating, how the avatars appear in the virtual space from both their own and others’ perspectives.
Table 2: System concepts for supporting virtual-avatar creation for novices.

(1) Gesture-based direct manipulation of life-size avatars
(2) Switchable perspectives (first-person or third-person)
4 INTERACTION DESIGN
We designed interactions that satisfy the system
concepts listed in Table 2. Figure 6 depicts a usage scenario of the proposed system.

Figure 6: Usage scenario of the proposed system.
4.1 Gesture-Based Direct Manipulation
of Life-Size Avatars
To satisfy Concept (1) in Table 2, users of this system are represented as full-body self-avatars, as general users of virtual spaces are, and they create avatars for themselves. Users can directly change their own body, hair, and clothes through gestures such as “pulling,” “pushing in,” “lifting,” “stretching,” and “pressing in”. Parts 1 to 8 in Figure
2 indicate the manipulable parts, which are enclosed
by red dashed lines and can be directly manipulated
by users with gestures in our system. Users can
change the shape of their own (avatar’s) body, hair,
and clothes by moving the guide object that is
represented by a blue wedge next to the manipulable
part in Figure 2.
The interaction for changing the shape of a manipulable part is described in the following, using the sleeve fullness (Figure 2, 1) as an example. When a user wishes to change the degree of sleeve fullness, they first select, from the multiple blue wedge-shaped guide objects [Figure 3(c)] placed along the avatar’s forearm, the one at the point they wish to change. When they pinch it, only that guide object turns red [Figure 3(a)]. If the user then lifts the red guide object, the degree of fullness of the manipulable part [Figure 3(b)] associated with it changes. Guide
objects can also be hidden if the user wishes to view
their avatar’s appearance. Table 3 summarizes the interactions for changing each manipulable part (Figure 2, 1 to 8).

Figure 2: Manipulable parts (red dashed ellipses) and guide objects (blue wedges) for body, hair, and clothing. Users can change the shapes of the manipulable parts by moving the corresponding guide objects.

Figure 3: User’s perspective when looking down at their (avatar’s) right arm and changing the degree of sleeve fullness of that arm with their (avatar’s) left hand.

Figure 4: Before and after each manipulation.

Table 3: Gesture-based direct manipulation of life-size avatar for changing body, hair, and clothes.

Manipulable part (Figure 2) | User action (before) | System feedback (after) | Illustration
Clothes: Sleeve fullness (1) | Pinch the part of the sleeve to be inflated and pull/push it in | Sleeve bulges/dents | Figure 3
Clothes: Sleeve length (2) | Pinch the cuff and pull/push it in | Sleeve lengthens/shortens | Figure 4, upper left
Clothes: Skirt length (3) | Grasp the edge of the skirt and pull it up/down | Skirt lengthens/shortens | Figure 4, upper center
Clothes: Skirt spread (3) | Grasp the edge of the skirt and spread/narrow it | Skirt widens/shrinks | -
Hair: Front hair length (4) | Pinch the end of the front hair and pull it up/down | Front hair lengthens/shortens | Figure 4, upper right
Hair: Direction and parting of front hair (4) | Hold the front hair and move it from side to side | Direction and parting of front hair changes | -
Hair: Side hair length (5) | Pinch the end of the side hair and pull it up/down | Side hair lengthens/shortens | -
Hair: Back hair length (6) | Pinch the end of the back hair and pull it up/down | Back hair lengthens/shortens | -
Body and face: Face shape/cheek fullness (7) | Touch the cheek with the palm of the hand and press it in/expand it | Face shape becomes slimmer/rounded | Figure 4, lower left
Body and face: Waist position (8) | Grasp the waist and pull it up/down | Waist rises/lowers | Figure 4, lower center
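To make the interaction concrete, the following is a minimal Unity C# sketch of the guide-object behavior described in this subsection (only the pinched guide turns red, and guides can be hidden). The class name, the pinch callbacks, and the color handling are our own illustrative assumptions; in the prototype, the pinch events would come from the controller input.

```csharp
using UnityEngine;

// Illustrative sketch: one wedge-shaped guide object attached to a
// manipulable part. Only the pinched guide turns red; guides can be
// hidden so that the user can view the avatar's appearance.
public class GuideObject : MonoBehaviour
{
    [SerializeField] private Renderer wedgeRenderer;

    private static readonly Color IdleColor = Color.blue;
    private static readonly Color ActiveColor = Color.red;

    public bool IsPinched { get; private set; }

    // Called when the user pinches this wedge (wired to controller input).
    public void OnPinchStart()
    {
        IsPinched = true;
        wedgeRenderer.material.color = ActiveColor; // only this guide turns red
    }

    // Called when the user releases the pinch.
    public void OnPinchEnd()
    {
        IsPinched = false;
        wedgeRenderer.material.color = IdleColor;
    }

    // Guides can be hidden if the user wishes to view the avatar's appearance.
    public void SetVisible(bool visible)
    {
        wedgeRenderer.enabled = visible;
    }
}
```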
4.2 Perspective Switching
To satisfy Concept (2) in Table 2, our system supports
both first- and third-person perspectives, thereby
allowing the users to switch between the two
perspectives in virtual spaces (Figure 5).

Figure 5: User’s perspective in first-person (left) and third-person (right) perspective modes.
In the first-person perspective mode (Figure 5,
left), the users are represented as full-body self-
avatars in virtual spaces. The target avatar for creation
is themselves (i.e., a self-avatar). The users touch
their own (self-avatar’s) manipulable parts with their
own (self-avatar’s) hands. As the users cannot view
the full-body target avatar (themselves) in the first-
person perspective mode, a mirror is placed in front
of the self-avatar. Manipulable parts that are suitable
for manipulation in the first-person perspective mode
include the sleeves, front hair, side hair, face shape,
and waist position.
In the third-person perspective mode (Figure 5,
right), the users are represented as hand self-avatars
in the virtual spaces. The target for creation is a fixed
avatar in front of the user. The users touch the
manipulable part of the target avatar using their own
(self-avatar’s) hands. In the third-person perspective
mode, the users can view the entire body of the target
avatar from the outside. Manipulable parts that are
suitable for manipulation in the third-person
perspective mode include the skirt, back hair, and
waist position.
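The mode switching can be illustrated with the following minimal Unity C# sketch. The scene-object references reflect the setup described above (a full-body self-avatar and a mirror in the first-person mode; a hand self-avatar and a fixed target avatar in the third-person mode), while the class and field names are illustrative assumptions rather than the actual implementation.

```csharp
using UnityEngine;

// Illustrative sketch of perspective switching: which avatar
// representation and scene objects are active in each mode.
public class PerspectiveSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject fullBodySelfAvatar; // first-person: user's own body
    [SerializeField] private GameObject mirror;             // first-person: placed in front of the user
    [SerializeField] private GameObject handSelfAvatar;     // third-person: hands only
    [SerializeField] private GameObject targetAvatar;       // third-person: fixed avatar in front of the user

    public void SetFirstPerson()
    {
        fullBodySelfAvatar.SetActive(true);
        mirror.SetActive(true);  // the user cannot see their whole body, so a mirror is shown
        handSelfAvatar.SetActive(false);
        targetAvatar.SetActive(false);
    }

    public void SetThirdPerson()
    {
        fullBodySelfAvatar.SetActive(false);
        mirror.SetActive(false);
        handSelfAvatar.SetActive(true);
        targetAvatar.SetActive(true); // the creation target stands in front of the user
    }
}
```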
5 PROTOTYPE
Based on the designed interactions that were outlined
in the previous section, we developed a prototype of
a virtual avatar creation support system that enables
gesture-based direct manipulation of life-size avatars
and perspective switching.
5.1 System Configuration
We developed the prototype using Unity. As for the hardware, a VIVE Pro Eye was used as the head-mounted display (HMD), and a Valve Index controller was used for input. The SteamVR plugin was used to acquire the HMD coordinates, detect the controller input, and manipulate objects.
Final IK (Unity Technologies, 2022) was used to
reflect the HMD and controller coordinates in the
movements of the avatar. VRoid Studio (Pixiv Inc.,
2021) was used for the avatar and clothing materials.
The software consists of three primary functions (Figure 7). The function for changing the manipulable parts is described in the following subsection.

Figure 7: Software configuration.
5.2 Change Function for Manipulable
Parts
This function enables users to manipulate a guide object using gestures to change the shape of the corresponding manipulable part. First, the system calculates the distance by which the user has moved the guide object, that is, the amount of movement from the initial position that is set for each manipulable part. Subsequently, the system multiplies this amount of movement by a scalar multiplier that is set for each manipulable part and applies the result to the blendshape of the manipulable part that is being changed.
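The following is a minimal Unity C# sketch of this change function. The field names, the example scalar value, and the clamping to Unity's 0 to 100 blendshape weight range are illustrative assumptions; in our prototype, the scalar multiplier is set per manipulable part.

```csharp
using UnityEngine;

// Illustrative sketch of the change function for manipulable parts:
// the guide object's displacement from its initial position is scaled
// and applied to the blendshape of the associated manipulable part.
public class ManipulablePart : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer targetMesh; // avatar mesh with blendshapes
    [SerializeField] private int blendShapeIndex;            // blendshape of this part (e.g., sleeve fullness)
    [SerializeField] private Transform guideObject;          // the blue wedge that the user moves
    [SerializeField] private float scalarMultiplier = 100f;  // per-part scaling (example value)

    private Vector3 initialGuidePosition;

    private void Start()
    {
        // The initial position set for each manipulable part.
        initialGuidePosition = guideObject.position;
    }

    private void Update()
    {
        // Amount of movement of the guide object from its initial position.
        float movement = Vector3.Distance(guideObject.position, initialGuidePosition);

        // Multiply by the per-part scalar and apply to the blendshape
        // (Unity blendshape weights run from 0 to 100).
        float weight = Mathf.Clamp(movement * scalarMultiplier, 0f, 100f);
        targetMesh.SetBlendShapeWeight(blendShapeIndex, weight);
    }
}
```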
6 USER STUDY
We conducted a preliminary user study with three
participants using our prototype. This study consisted
of two tasks: a scenario task and a free task. In the
scenario task, the participants were instructed on the operation of the system and subsequently performed each function of the system as instructed. In the free task, the participants operated the system freely for 10 minutes. Finally, they completed questionnaires.
6.1 Questionnaires
To evaluate the effectiveness of the two proposed system concepts presented in Table 2, we conducted
an open-ended questionnaire and asked the
respondents to provide feedback freely, mainly
regarding the system concepts.
Moreover, three scales were used to evaluate the
general usefulness of the system.
Igroup Presence Questionnaire (IPQ): we used
three of the four subscales of the IPQ (General
Presence, Spatial Presence, and Involvement)
(igroup.org, 2016) to evaluate the presence and
involvement of the system. A total of 10 questionnaire items were included, rated on a 7-point Likert scale.
System Usability Scale (SUS): we used the SUS (Usability.gov, 2022) to evaluate the usability of the system. A total of 10 questionnaire items were included, rated on a 5-point Likert scale (a sketch of the standard SUS scoring follows this list).
Intrinsic Motivation Inventory (IMI): we used four of the seven subscales of the IMI (Interest/Enjoyment, Perceived Competence, Effort/Importance, and Value/Usefulness) (Self-Determination Theory, 2022) to evaluate the intrinsic motivation elicited by the system. A total of 25 questionnaire items were included, rated on a 7-point Likert scale.
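For reference, SUS responses are scored in the standard way, which is not specific to our system: each odd-numbered item contributes (response - 1), each even-numbered item contributes (5 - response), and the sum is multiplied by 2.5 to give a score from 0 to 100. A short sketch:

```csharp
// Standard SUS scoring: ten responses, each from 1 to 5.
public static class SusScoring
{
    public static float Compute(int[] responses)
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++)
        {
            // Items are 1-indexed in the questionnaire, so an even array
            // index i corresponds to an odd-numbered item.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5f; // yields a score from 0 to 100
    }
}
```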
6.2 Results and Discussion
6.2.1 Effectiveness of System Concepts
Positive comments relating to our system concepts
included the following:
“It was exciting to touch my own body (in first-person perspective mode) while creating the avatar, and it was also interesting to see and touch myself from the outside (in third-person perspective mode).”
“I liked the fact that I could see how my avatar
would actually look in virtual spaces when I was
creating it.”
“I think it’s good that I can customize my avatar
with intuitive operations, and that I can switch
perspectives.”
“It was great to be able to adjust the subtle length
of the hair on the front and sides while looking in
the mirror (from a first-person perspective). It
was also nice to be able to adjust the length of the
back hair by looking at it from behind (from a
third-person perspective). It would be interesting
to be able to change not only the length and
parting of the hair, but also the degree of wave of
the hair.”
These comments suggest that the participants positively evaluated the two system concepts. In particular, they noted that the proposed system would be useful for adjusting parts that are more difficult to specify using parameters, such as the hair parting and sleeve fullness.
However, several negative comments regarding
the system concept were provided:
“It was difficult to understand the range of each
changeable arrow (guide object) on the sleeves,
and it became a mess.”
“In this system, we touch wedges (guide objects)
to change the shape of clothes and the length of
hair, but I thought it would be better if we could
touch clothes and hair (directly) instead of
wedges, just as we usually touch them.”
These comments suggest that substantial room for
improvement exists in the guide objects that are used
to manipulate the manipulable parts. It may be
possible to increase the “resolution” of the guide
objects (i.e., reduce the changeable range of a single
guide object and locate the guide objects more
densely). If the resolution of the guide objects is
sufficiently high, users will be able to change the
shape of the sleeve as if they are “touching the cloth
directly,” without being aware of the guide objects. In
this case, it would be better to use a hand-tracking
function instead of the controller of the current
system to recognize detailed movements of the user’s
fingertips.
Furthermore, all the participants requested the ability to customize more parts. In addition to the manipulable parts demonstrated in this study (Figure 2, Table 3), it would be possible to add design elements that are difficult to set with parameters, such as the degree of hair wave mentioned by one of the participants.
6.2.2 General Usefulness of System
The results of the ratings of the participants for each
subscale of each scale (IPQ, SUS, and IMI) are
presented in Table 6. Although there were only three
participants, their ratings on the IPQ, SUS, and IMI scales were generally high, suggesting the usefulness of our
system. Overall, the evaluation scores of P3 were lower than those of the other participants. This may be because P3 was the only participant who had no prior experience of using VR systems.
Table 6: Results of participant ratings.

Scale | Subscale             | Full points | P1   | P2   | P3   | Mean
IPQ   | General Presence     | 6           | 6.00 | 6.00 | 6.00 | 6.00
IPQ   | Spatial Presence     | 6           | 4.40 | 4.40 | 4.80 | 4.53
IPQ   | Involvement          | 6           | 4.75 | 5.75 | 5.25 | 5.25
SUS   | (overall)            | 100         | 95.0 | 90.0 | 62.5 | 82.5
IMI   | Interest/Enjoyment   | 7           | 6.71 | 7.00 | 6.43 | 6.71
IMI   | Perceived Competence | 7           | 5.83 | 7.00 | 3.67 | 5.50
IMI   | Effort/Importance    | 7           | 6.80 | 7.00 | 6.80 | 6.87
IMI   | Value/Usefulness     | 7           | 6.86 | 7.00 | 6.71 | 6.86
7 CONCLUSION
We have proposed a system that supports the activity
of creating virtual avatars for novices using embodied
interaction. First, we analyzed the problems with
existing software for the creation of virtual avatars.
Based on these problems, we derived two system
concepts: gesture-based direct manipulation of life-
size avatars and perspective switching. Subsequently,
we designed an interaction to satisfy these two
concepts and developed a prototype. In the first-person perspective mode, users are represented as full-body self-avatars in virtual spaces and create their avatars by, for example, changing the length of the self-avatar’s sleeves by pulling on them with the self-avatar’s hand, or changing the parting of the self-avatar’s front hair in the mirror by touching it with the self-avatar’s hand. In the third-person perspective mode, users are represented as hand self-avatars in front of the target avatar for creation in virtual spaces, and they create their avatars by, for example, changing the length of the target avatar’s back hair with the hand self-avatar from behind, or changing the length of the target avatar’s skirt with the hand self-avatar.
We conducted a preliminary user study with three
participants using the prototype. The results suggest
that these two system concepts were generally
positively accepted. In particular, the results suggest that the proposed system is useful for adjusting parts that are more difficult to specify using parameters, such as the parting of hair and the fullness of sleeves.
REFERENCES
Black, A. (1990). Visible Planning on paper and on screen: The impact of working medium on decision-making by novice graphic designers. Behaviour and Information Technology 9, 4, 283–296.
Tano, S. (1999). Analysis of Obstruction and Promotion of Human Creative Work by Information Systems. In Proceedings of the Human Interface Symposium ’99, Human Interface Society, 791–796.
England, D., Randles, M., Fergus, P., and Taleb-Bendiab,
A. (2009). Towards an Advanced Framework for
Whole Body Interaction. Virtual and Mixed Reality
(VMR 2009), 32–40.
Dourish, P. (2001). Where the Action Is: The Foundations
of Embodied Interaction. MIT Press.
Freeman, G., Zamanifard, S., Maloney, D., and Adkins, A.
(2020). My Body, My Avatar: How People Perceive
Their Avatars in Social Virtual Reality. In Proceedings
of the 2020 CHI Conference on Human Factors in
Computing Systems Extended Abstracts (CHI EA ’20).
Association for Computing Machinery, New York, NY,
USA, 1–8.
Ichim, A. E., Bouaziz, S., and Pauly, M. (2015). Dynamic
3D avatar creation from hand-held video input. ACM
Transactions on Graphics 34, 4, 1–14.
Nagano, K., Seo, J., Xing, J., Wei, L., Li, Z., Saito, S.,
Agarwal, A., Fursund, J., and Li, H. (2018). paGAN:
real-time avatars using dynamic textures. ACM
Transactions on Graphics 37, 6, 1–12.
Li, Z., Chen, L., Liu, C., Gao, Y., Ha, Y., Xu, C., Quan, S.,
and Xu, Y. (2019). 3D Human Avatar Digitization from
a Single Image. In Proceedings of the 17th
International Conference on Virtual-Reality
Continuum and its Applications in Industry (VRCAI
’19). Association for Computing Machinery, New
York, NY, USA, 1–8.
Murthy, S. D., Höllerer, T., and Sra, M. (2021).
IMAGEimate - An End-to-End Pipeline to Create
Realistic Animatable 3D Avatars from a Single Image
Using Neural Networks. In Proceedings of the 27th
ACM Symposium on Virtual Reality Software and
Technology (VRST ’21). Association for Computing
Machinery, New York, NY, USA, 1–3.
Hu, L., Zhang, B., Zhang, P., Qi, J., Cao, J., Gao, D., Zhao,
H., Feng, X., Wang, Q., Zhuo, L., Pan, P., and Xu, Y.
(2021). A Virtual Character Generation and Animation
System for E-Commerce Live Streaming. In
Proceedings of the 29th ACM International Conference
on Multimedia (MM ’21). Association for Computing
Machinery, New York, NY, USA, 1202–1211.
blender.org. (2022). Home of the Blender project - Free and Open 3D Creation Software. https://www.blender.org/
tetraface Inc. (2022). Metasequoia 4.
https://www.metaseq.net/jp/
Pixiv Inc. (2021). VRoid Studio. https://vroid.com/studio
Custom Cast Inc. (2018). Custom Cast. https://customcast.jp/
Cluster Inc. (2017). Cluster. https://cluster.mu/
Unity Technologies. (2022). Final IK. https://assetstore.unity.com/packages/tools/animation/final-ik-14290
igroup.org. (2016). Igroup Presence Questionnaire (IPQ). http://www.igroup.org/pq/ipq/index.php
Usability.gov. (2022). System Usability Scale (SUS). https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
Self-Determination Theory. (2022). Intrinsic Motivation Inventory (IMI). https://selfdeterminationtheory.org/category/questionnaires/page/3/