Inside - Outside Model Viewing
A Low-cost Hybrid Approach to Visualization and Demonstration of 3D Models
Ivan A. Nikolov
Research Assistant, Department of Architecture, Design and Media Technology,
Aalborg University, Rendsburggade 14, 9000 Aalborg, Denmark
Keywords:
Virtual Reality, Augmented Reality, Real-time Rendering, Head Mounted Displays.
Abstract:
Visualization of large scale 3D models has become an important part of the development cycle in many fields
like building design, machine design, construction and many more. Whether for design communication or
demonstration, it is necessary for the viewers to fully understand the different components of the model, their
proportions compared to each other and the overall design. A variety of augmented reality (AR) applications
have been created for overall visualization of large scale models. For tours inside 3D renderings of models,
many immersive virtual reality (VR) applications exist. Both types of applications have their limitations, omit-
ting either important details in the AR case or the full picture in the case of VR. This paper presents a low-cost
way to demonstrate models using a hybrid virtual environment system (HVE), combining virtual reality and
augmented reality visualization. The solution is built using a fully occluding head mounted display (HMD),
together with off-the-shelf web cameras and a game controller for interaction. A proof of concept is created us-
ing a commercial game engine, which is used in a subsequent case study. Based on this study, we demonstrate
the validity of our proposed system.
1 INTRODUCTION
Virtual Reality (VR) technology has gathered a large
following in recent years, thanks both to the push of
large companies and to the emergence of new
low-cost, easy to use, lightweight and high resolution
Head-Mounted Displays (HMD). With the develop-
ment of faster mobile and camera solutions and
more robust tracking algorithms, AR has also been on
the rise.
the rise. These solutions make it possible for devel-
opers to build real-time mixed reality (MR) applica-
tions. A problem that arises with the development
of virtual reality systems is that of visualization of,
and proper interaction with, the objects in the envi-
ronment. To achieve a proper experience of immer-
sion and interaction with the environment, different
methods are explored. Recent state-of-the-art research
investigates tablet-based interaction (Krum et al., 2014),
as well as gesture-based interaction (Shen et al., 2014).
Another possibility for helping the user to prop-
erly interact with the virtual environment is to provid-
e a better and more diverse outlook on it. This can
be achieved by implementing a virtual reality system
that uses a combination of multiple points of view and
interaction possibilities. Such systems are demon-
strated in (Wang and Lindeman, 2014) and (Wang and
Lindeman, 2015), where multiple points of view and
methods of interaction ensure a smooth workflow and
interaction for users.
This paper introduces a novel approach based
on this type of hybrid virtual environment system,
which uses a mixed reality visualization. We focus
on a low-cost hardware system using an Oculus Rift
DK2 as an HMD, together with two web cameras for
passthrough. For interaction, an Xbox 360 controller is
chosen for its relative simplicity and versatility. Two
points of view are implemented - a first person view
and a bird’s eye view. The bird’s eye view of the
models is rendered as an augmented reality visualiza-
tion on a marker connected to the controller. The first
person view employs a virtual reality approach to go
up close to different parts of the 3D model and get
a better understanding of their proportions. In addi-
tion, the starting position of the first person view can
be changed by users, providing a more customizable
experience. A user study was conducted to validate
the approach and demonstrate its positive effects on
a user’s workflow. Furthermore, the mixed reality
approach reduced cyber sickness.
2 RELATED WORK
2.1 Immersive Virtual Reality
Applications
Immersive virtual reality applications have become
commonplace in a number of fields that require
visualization of designs in progress or of finished
products without the need for a physical product,
such as architecture, engineering, and con-
struction. Efforts have been made to implement this
kind of interactive visualization into the pipeline of
CAD/CAM production, such as the work of (Stark
et al., 2010), and to integrate it as part of product
lifecycle management, as in the work of (Mahdjoub
et al., 2010). Additionally, such immersive virtual re-
ality applications are becoming easier
and more natural to work with, removing inherent limita-
tions such as user fatigue, uncomfortable controls and
disorientation. The research of (Mine et al., 2014)
works around these limitations to achieve an applica-
tion that can be used for longer periods of time. It
is also shown that immersive virtual reality applica-
tions can be a vital part of the demonstration process,
giving users information that could not be
shown as directly in any other way. The research of
(Marks et al., 2014) demonstrates the validity of this
for the ship construction industry, while the work of
(Bednarz et al., 2015) shows the positive effects of
immersive virtual reality applications on the mining
industry.
2.2 Augmented Reality Applications
Augmented and mixed reality applications have
become regular substitutes for bulkier models and
miniatures that are harder to move and assemble. These
applications are commonly used for building visual-
izations, machine and parts demonstrations. Other
strengths include the expanded degrees of interaction
and user involvement, as well as the comfort of using
a wide array of platforms to work with. Fast access to
vital information throughout the design and construc-
tion phases of buildings is a necessity and the work of
(Zollmann et al., 2014) shows that augmented reality
visualization can give this kind of information. A
combination between 2D schematics and 3D aug-
mented reality visualization gives all necessary data,
without interrupting the workflow, as shown by (Côté
et al., 2014). The acceleration of decision making
in construction using AR is further demonstrated
by (Wang et al., 2014). (Figueiredo et al., 2014)
show that visualizing 3D models for demonstration
and training purposes using an augmented real-
ity approach also gives a better outlook on them.
3 METHODOLOGY
3.1 Visualization Rig
The visualization rig consists of a number of major
components that work together to achieve both the
first person and the bird’s eye views of the models.
The components are as follows - a consumer grade
VR headset, two identical off-the-shelf web cameras
fixed in stereoscopic view on a custom mount, two
smart phone fish-eye lenses for extending the field
of view of the cameras, an augmented reality target
tracking system and a rendering engine.
The VR headset in use is an Oculus Rift DK2,
which offers marker-based tracking of linear head mo-
tion using infrared LEDs on the headset and a sin-
gle infrared-sensitive camera. The headset can also
track rotational head movements thanks to its built-
in accelerometers. Because the proposed system does
not require free physical movement, these out of the
box capabilities are enough for a proper visualization.
As the headset provides a fully occluded virtual re-
ality visualization and the mixed reality approach of
the solution requires passthrough capabilities, a cus-
tom solution needs to be built. Two identical Canyon
CNE-CWC3 web cameras, together with iPhone fish-
eye lenses, are chosen for the project, as they pro-
vide an inexpensive solution with 1080p resolution,
45 fps, and a wide field of view, thus minimizing the
symptoms of cyber sickness described by (LaViola Jr,
2000) and (Sharples et al., 2008). Both cameras are
mounted onto the Oculus Rift so that they form a comfort-
able stereoscopic view with an eye distance of 6.35 cm.
As the aim of the project is to create a low-cost, easily
replicated, mixed reality viewing solution, we decided
against using professional machine-vision cameras.
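Since the prototype's source code is not included in the paper, the following Unity C# sketch only illustrates how such a passthrough feed could be wired up: each physical web camera is streamed into a textured quad placed in front of one of the eye cameras, offset by half of the 6.35 cm eye distance mentioned above. The component, field and device names are assumptions made for this example, not the actual implementation.

```csharp
using UnityEngine;

// Illustrative sketch: feeds one physical web camera into a quad placed in
// front of one eye camera. Two instances (left and right), offset by half of
// the 6.35 cm eye distance used in the prototype, form the stereoscopic
// passthrough. Per-eye layer masking, which a real rig would also need,
// is omitted here for brevity.
public class PassthroughEye : MonoBehaviour
{
    public string deviceName;                 // e.g. the left or right web camera
    public Renderer eyeQuad;                  // quad parented to the eye camera
    public float horizontalOffset = 0.03175f; // +/- half of 6.35 cm, in metres

    private WebCamTexture feed;

    void Start()
    {
        // Request the 1080p / 45 fps mode the cameras support.
        feed = new WebCamTexture(deviceName, 1920, 1080, 45);
        eyeQuad.material.mainTexture = feed;
        feed.Play();

        // Shift the quad sideways so each eye sees its own camera image.
        Vector3 p = eyeQuad.transform.localPosition;
        eyeQuad.transform.localPosition = new Vector3(horizontalOffset, p.y, p.z);
    }

    void OnDisable()
    {
        if (feed != null) feed.Stop();
    }
}
```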
3.2 Augmented Reality Solution
For visualization of the 3D models onto the
passthrough video feed, an augmented reality target
positioning solution needs to be chosen. In keeping
with our goal of making reimplementation and subse-
quent experimentation easier, we chose a free track-
ing solution. A second requirement is that the solution
should work properly with both desktop and mobile
systems alike. The third requirement is that the
solution needs to be compatible with
the Unity game engine. Taking these requirements into
consideration, we chose NyARToolKit, a free and easily
adaptable implementation of ARToolKit. More information
on the library can be found in (NyA, 2015).
As virtual interactions are done through an Xbox 360
controller, and the user is thus engaged in holding
the controller, a solution is offered
for physically interacting with the marker for track-
ing, without letting go of the controller. The target
marker is mounted on top of the controller, letting the
user rotate and move the marker while interacting,
through button and stick manipulation, with the 3D
model rendered on top of it. The final pro-
totype rig and the controller-marker rig can be seen in
Figure 1.
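As an illustration of this anchoring, the sketch below parents the miniature model to a pose that an AR tracker (such as the NyARToolKit Unity integration) would update from the marker on the controller, while a controller stick adds a virtual rotation on top of the physical one. The tracker interface, axis name and component names are placeholders and do not reflect the library's actual API.

```csharp
using UnityEngine;

// Hypothetical sketch: keeps the bird's-eye model glued to the pose of the
// printed marker on the controller and lets the right stick spin the model
// around its up axis without letting go of the controller.
public class MarkerModelAnchor : MonoBehaviour
{
    public Transform markerPose;    // assumed to be updated every frame by the AR tracker
    public Transform birdsEyeModel; // the miniature model rendered on the marker
    public float rotateSpeed = 90f; // degrees per second from the right stick

    private float yawOffset;        // accumulated virtual rotation

    void Update()
    {
        // Virtual interaction on top of the physical one: the right stick of
        // the Xbox 360 controller rotates the model (axis name is project-specific).
        float stick = Input.GetAxis("RightStickHorizontal");
        yawOffset += stick * rotateSpeed * Time.deltaTime;

        // Physical interaction: follow the tracked marker pose.
        birdsEyeModel.position = markerPose.position;
        birdsEyeModel.rotation = markerPose.rotation * Quaternion.Euler(0f, yawOffset, 0f);
    }
}
```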
Figure 1: Left - the hardware rig consisting of an Oculus
Rift DK2, dual web cameras and fish-eye lenses. Right
- the marker-controller rig with custom made marker. By
combining the two, both physical and virtual interaction can
be achieved at the same time, while in the bird’s eye aug-
mented reality view.
Figure 2: Implemented interactions for the prototype, di-
vided into inside and outside interactions, depending on
which view they suit best. The selected interactions are the
ones most widely used in the state of the art research.
3.3 Proof of Concept Application
As part of demonstrating the capabilities of the sug-
gested approach, we build a proof of concept applica-
tion using the Unity 4.6 engine. A number of interaction
possibilities are implemented into the application.
Figure 3: Overview of the two views implemented in the
prototype - outside bird’s eye augmented reality view and
inside first person virtual reality view.
These interactions are chosen in such a way that
they may be used to test the users’ experience while ma-
nipulating parts of the application. Additionally, these
interactions are the most widely used in the state of
the art research of immersive virtual and augmented
reality interactive systems like the ones described by
(Steptoe et al., 2014), (Wang and Lindeman, 2014)
and (Wang and Lindeman, 2015). These interactions
are divided into inside and outside interactions, de-
pending on which view they suit - the first person one
or the bird’s eye one. The selected interactions and
the view to which they are allocated can be seen in
Figure 2.
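Although the prototype's code is not reproduced in the paper, the following sketch shows one way the unified controller mapping could be routed to the interaction set of the currently active view; the enum, method and button names are purely illustrative.

```csharp
using UnityEngine;

// Hedged sketch of the inside/outside split of interactions: a single
// controller mapping is dispatched to whichever interaction set matches
// the currently active view.
public enum ViewMode { OutsideBirdsEye, InsideFirstPerson }

public class InteractionRouter : MonoBehaviour
{
    public ViewMode currentView = ViewMode.OutsideBirdsEye;

    void Update()
    {
        // The same physical button triggers the interaction that suits the view.
        if (Input.GetButtonDown("Interact"))
        {
            if (currentView == ViewMode.OutsideBirdsEye)
                RotateOrScaleMiniature();
            else
                InteractWithSelectedObject();
        }
    }

    void RotateOrScaleMiniature()      { /* outside (bird's eye) interaction */ }
    void InteractWithSelectedObject()  { /* inside (first person) interaction */ }
}
```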
A free model of a villa is selected, so as to offer di-
verse interior and exterior objects and wider spaces
for easier testing. The model is rendered differently
for the outside and the inside views. For the out-
side view the 3D objects are rendered with only ba-
sic lighting and shadows and without any advanced
screen effects or animations. This is done to make
the model as lightweight as possible and to help the
user focus on the model itself without any additional
distractions. The inside view model is much more
detailed, to complement the immersive nature of the
view. A skybox, realistic water, particle effects, trees
and surrounding terrain are added. Visualizations of
both views can be seen in Figure 3.
The switch between the two views is achieved by com-
bining two effects - a smooth transition of the camera
from one position to the other and a fade-to-black
effect, which prevents disorientation of the user.
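One possible way to realise this switch in Unity is sketched below: a coroutine fades a full-screen UI image to black, moves the camera rig between two anchor transforms, and fades back in. The anchor transforms, durations, overlay object and the exact ordering of the fade relative to the movement are assumptions made for this illustration only.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Hedged sketch of the view switch: fade out, move the camera rig between
// the outside (bird's eye) and inside (first person) anchors, fade back in.
public class ViewSwitcher : MonoBehaviour
{
    public Transform cameraRig;      // parent of the HMD camera
    public Transform outsideAnchor;  // bird's eye augmented reality viewpoint
    public Transform insideAnchor;   // first person virtual reality start position
    public Image fadeOverlay;        // full-screen black UI image, alpha 0 by default
    public float fadeTime = 0.5f;
    public float moveTime = 1.0f;

    // Called e.g. when the user presses the view-switch button on the controller.
    public void Switch(bool toInside)
    {
        StartCoroutine(SwitchRoutine(toInside));
    }

    IEnumerator SwitchRoutine(bool toInside)
    {
        Transform target = toInside ? insideAnchor : outsideAnchor;

        yield return StartCoroutine(Fade(0f, 1f));   // fade out to black

        // Smoothly move and rotate the rig towards the target viewpoint.
        Vector3 fromPos = cameraRig.position;
        Quaternion fromRot = cameraRig.rotation;
        for (float t = 0f; t < moveTime; t += Time.deltaTime)
        {
            float k = Mathf.Clamp01(t / moveTime);
            cameraRig.position = Vector3.Lerp(fromPos, target.position, k);
            cameraRig.rotation = Quaternion.Slerp(fromRot, target.rotation, k);
            yield return null;
        }
        cameraRig.position = target.position;
        cameraRig.rotation = target.rotation;

        yield return StartCoroutine(Fade(1f, 0f));   // fade back in
    }

    IEnumerator Fade(float from, float to)
    {
        Color c = fadeOverlay.color;
        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            c.a = Mathf.Lerp(from, to, Mathf.Clamp01(t / fadeTime));
            fadeOverlay.color = c;
            yield return null;
        }
        c.a = to;
        fadeOverlay.color = c;
    }
}
```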
Another idea that we implement, made possible by the
combination of virtual and augmented reality visualiza-
tion, is a passthrough "window" to the real world while
the user is in the virtual reality first person view. This
is implemented to lower the possibility of disorientation
or cyber sickness. The positive effect of giving the user
the possibility to view his or her own body and the real
world is demonstrated by (Buchmann et al., 2005) and
(Bruder et al., 2009). Thus, users can orient themselves
better, as well as get a quick look at the real world
without breaking the immersion of the inside viewing of
the model. Some examples of the possible interactions
are given in Figure 4.
Figure 4: Examples of the interactions in the inside virtual reality first person view. From left to right - interact with the
selected object, change the color of the selected object, and take a photo of what the user is seeing plus create a waypoint for
later return to that point in the 3D model. The final image demonstrates the passthrough "window" to the real world.
4 EVALUATION
4.1 Usability Study Overview
The prototype inside-outside model viewer is tested
in a preliminary user study with a number of partici-
pants with various work and study backgrounds. The
participants are chosen from both genders and from
different age groups. In total, 12 participants are fea-
tured in the study, of whom 4 are female and 8 male.
The users are roughly divided into categories depend-
ing on their background and computer knowledge.
The first group of 7 participants has a high degree
of computer knowledge and has used or observed
the use of an HMD in relation to their academic back-
ground; it also includes participants with mixed skill sets
in design who have worked with modelling and CAD ap-
plications. The second group of 5 contains partici-
pants with non-technical backgrounds -
tourism, economics, marketing and management -
and a lower degree of computer knowledge.
The study determines the feasibility of our pro-
posed system by giving users the possibility to inter-
act with both the bird’s eye and first person views.
Additionally, to avoid results biased towards users
who are well versed in working with modelling pro-
grams or have experience with virtual reality, the test-
ing is also conducted on users who have little or no
such experience. The study consists of an initial cal-
ibration and a number of testing scenarios. A free-
flowing, think-aloud method with a relaxed approach
is used for conducting the study, similar to the one
used by (Zhao and McDonald, 2010). This helps the
researcher to get an intimate look into the experience
and thought process of the participants. A seven-point
scale is chosen over the usual five-point one, as it
is shown by the research of (Munshi, 2014) to receive
fewer neutral answers, with lower measurement error
and higher precision.
4.2 Results
When we asked about the degree to which the combi-
nation of the two views helps the overall under-
standing of the model, participants were polarised
in their responses depending on their skill set. Seventy-
five percent of the users with non-technical skill sets,
as well as ten percent of users with technical skills, felt
a disconnect between the outside and inside views. An-
swers like ”...for me the two views were too different
so I could not even match them in my head. For me
they gave me a lot of information but I could not con-
nect it.” and ”No, because you had to focus a lot on
that one (outside view) first. You could see everything
on the model, but the moment you go down you lose
the sense where you are... But maybe it was because I
did not zoom out that much.” demonstrate the prob-
lems the users were facing. The sheer amount
of information that the users get from the setup can be
overwhelming and requires a certain time for inexperienced
users to get used to. Figure 5 demonstrates the
difference between the conclusions of users from the
two demographics. Both the inside and the outside views
were positively received, with all users appreciating
both the model-in-hand approach of the augmented re-
ality view and the hands-on experience of the virtual
environment.
On the other hand, most of the participants with
a technical or design background were positive that
the hybrid setup helped them. The ability to select where
the inside walker is placed was also considered
a feature that helped orient the users. This is indi-
cated by comments like ”The two views help. I can
go and look at the whole thing and then select where
I want to be. Less walking around and it’s more in-
teresting”. Furthermore, all participants easily
got into the workflow of using the two views, switch-
ing between them after interacting with parts of the
models, searching for new ones to interact with, or
checking how they look in the bird’s eye view.
In addition, even though most participants felt that
the transition from the outside to the inside view is smooth
and gave them a deeper feeling of being there once
in the inside view, the same was not true for
the reverse transition.
Figure 5: How the users found this paper’s approach helpful
for viewing and understanding the overall proportions and
size of the 3D model and its parts.
Users generally felt disoriented
and dizzy once coming back to the outside view. This
presents a problem, as constant switches between the
two views are required for normal work with the pro-
totype. A suggestion for alleviating this problem is
to make the switch to the outside view much slower, as
well as to inform the user how he or she is oriented
in the real world before the transition is completed.
Another aspect that is taken into account in the
user study is the degree to which participants devel-
oped symptoms of cyber sickness, as our proposed
setup was also directed at alleviating nausea, dizzi-
ness and disorientation. This is why we asked users if
they experienced any symptoms. The answers show
that more than half of the participants experienced
varying levels of discomfort and disorientation. It is
also seen that most of the symptoms were experienced
in the inside view, where the users are subjected to a
full virtual reality. The results are visualized in Fig-
ure 6. Most users indicated that the switch between
the two views was very helpful in removing most
of the effects from the inside view, as well as in helping
them orient themselves in the real world after longer
use. The passthrough ”window” to the real world was
also helpful with that, as well as with making communi-
cation with people in the real world easier.
5 CONCLUSION AND FUTURE
WORK
The paper outlines our suggested flexible and low-
cost approach for combining virtual and augmented
reality visualization for a more comprehensive 3D
model demonstration. The approach centres around
a two-view system - an inside first person virtual real-
ity view and an outside bird’s eye augmented reality
view. A unified control scheme makes interactions in
both views intuitive and shortens the learning curve.
Figure 6: Degree of occurrence of symptoms of cyber sick-
ness in participants in the study, for both the augmented and
virtual reality views. Everything above 5 was considered a
basis for stopping the experiment.
Furthermore, a smooth transition between the views
ensures ease of work over extended periods of time. We
also take full advantage of the passthrough capabili-
ties by introducing a ”window” to the ”real world”,
while in the first person virtual reality view, which
helps to alleviate cyber sickness, disorientation and
dizziness.
We also present a user study on the built proof
of concept prototype for demonstrating its feasibil-
ity, which focuses on the experience of users and
demonstrates the positive impact of the system on the
demonstration of 3D models to people with different
degrees of understanding of the technology. Addi-
tionally, the combination of the two views helps users
navigate the models and better judge the scale of dif-
ferent parts. The overall opinion of the users is that
the prototype is easy to use and has a low learn-
ing curve. A comparison of our proposed system to
the state of the art is planned for the future to further
verify the positive impacts of our work and also to
find the weak points. There are a number of places
where the system can be improved. A hand tracking
system would offer a much deeper sense of immersion,
as well as a high level of precision. One such solution
is the Sixense STEM VR. Together with a more ro-
bust set of interaction options, this scheme can elevate
the usefulness of our system. This can be further im-
proved by also introducing position tracking and ex-
panding the ”window” idea to include visualization
of the user’s hands.
REFERENCES
NyARToolKit (2015). http://nyatla.jp/nyartoolkit/wp/. Accessed:
2015-04-23.
Bednarz, T., James, C., Widzyk-Capehart, E., Caris, C., and
Alem, L. (2015). Distributed collaborative immersive
virtual reality framework for the mining industry. In
Machine Vision and Mechatronics in Practice, pages
39–48. Springer.
Bruder, G., Steinicke, F., Rothaus, K., and Hinrichs, K.
(2009). Enhancing presence in head-mounted display
environments by visual body feedback using head-
mounted cameras. In CyberWorlds, 2009. CW’09. In-
ternational Conference on, pages 43–50. IEEE.
Buchmann, V., Nilsen, T., and Billinghurst, M. (2005). In-
teraction with partially transparent hands and objects.
In Proceedings of the Sixth Australasian conference
on User interface-Volume 40, pages 17–20. Australian
Computer Society, Inc.
Côté, S., Beauvais, M., Girard-Vallée, A., and Snyder, R.
(2014). A live augmented reality tool for facilitating
interpretation of 2d construction drawings. In Aug-
mented and Virtual Reality, pages 421–427. Springer.
Figueiredo, M. J., Cardoso, P. J., Goncalves, C. D., and Ro-
drigues, J. M. (2014). Augmented reality and holo-
grams for the visualization of mechanical engineering
parts. In Information Visualisation (IV), 2014 18th In-
ternational Conference on, pages 368–373. IEEE.
Krum, D. M., Phan, T., Cairco Dukes, L., Wang, P., and Bo-
las, M. (2014). A demonstration of tablet-based inter-
action panels for immersive environments. In Virtual
Reality (VR), 2014 IEEE, pages 175–176. IEEE.
LaViola Jr, J. J. (2000). A discussion of cybersickness
in virtual environments. ACM SIGCHI Bulletin,
32(1):47–56.
Mahdjoub, M., Monticolo, D., Gomes, S., and Sagot, J.-C.
(2010). A collaborative design for usability approach
supported by virtual reality and a multi-agent system
embedded in a plm environment. Computer-Aided De-
sign, 42(5):402–413.
Marks, S., Estevez, J. E., and Connor, A. M. (2014). To-
wards the holodeck: fully immersive virtual reality
visualisation of scientific and engineering data. In
Proceedings of the 29th International Conference on
Image and Vision Computing New Zealand, page 42.
ACM.
Mine, M., Yoganandan, A., and Coffey, D. (2014). Mak-
ing vr work: Building a real-world immersive model-
ing application in the virtual world. In Proceedings of
the 2Nd ACM Symposium on Spatial User Interaction,
SUI ’14, pages 80–89, New York, NY, USA. ACM.
Munshi, J. (2014). A method for constructing likert scales.
Available at SSRN 2419366.
Sharples, S., Cobb, S., Moody, A., and Wilson, J. R. (2008).
Virtual reality induced symptoms and effects (vrise):
Comparison of head mounted display (hmd), desktop
and projection display systems. Displays, 29(2):58–
69.
Shen, J., Luo, Y., Wang, X., Wu, Z., and Zhou, M. (2014).
Gpu-based realtime hand gesture interaction and ren-
dering for volume datasets using leap motion. In Cy-
berworlds (CW), 2014 International Conference on,
pages 85–92. IEEE.
Stark, R., Israel, J., and Wöhler, T. (2010). Towards
hybrid modelling environments - merging desktop-CAD
and virtual reality-technologies. CIRP Annals-
Manufacturing Technology, 59(1):179–182.
Steptoe, W., Julier, S., and Steed, A. (2014). Presence and
discernability in conventional and non-photorealistic
immersive augmented reality. In Mixed and Aug-
mented Reality (ISMAR), 2014 IEEE International
Symposium on, pages 213–218. IEEE.
Wang, J. and Lindeman, R. (2014). Coordinated 3d inter-
action in tablet-and hmd-based hybrid virtual environ-
ments. In Proceedings of the 2nd ACM symposium on
Spatial user interaction, pages 70–79. ACM.
Wang, J. and Lindeman, R. (2015). Coordinated hybrid
virtual environments: Seamless interaction contexts
for effective virtual reality. Computers & Graphics,
48:71–83.
Wang, X., Truijens, M., Hou, L., Wang, Y., and Zhou, Y.
(2014). Integrating augmented reality with building
information modeling: Onsite construction process
controlling for liquefied natural gas industry. Automa-
tion in Construction, 40:96–105.
Zhao, T. and McDonald, S. (2010). Keep talking: an anal-
ysis of participant utterances gathered using two con-
current think-aloud methods. In Proceedings of the
6th Nordic Conference on Human-Computer Interac-
tion: Extending Boundaries, pages 581–590. ACM.
Zollmann, S., Hoppe, C., Kluckner, S., Poglitsch, C.,
Bischof, H., and Reitmayr, G. (2014). Augmented re-
ality for construction site monitoring and documenta-
tion. Proceedings of the IEEE, 102(2):137–154.