TOUCHING VIRTUAL REALITY
An Effective Learning Chance for Visually Impaired People
F. De Felice, F. Renna, G. Attolico and A. Distante
Consiglio Nazionale delle Ricerche – Istituto di Studi sui Sistemi Intelligenti per l’Automazione, Italy
Keywords: Learning support, Haptic technology, Multimodal interaction, Virtual reality, Visually impaired people.
Abstract: This paper presents a Virtual Reality (VR) system that allows visually impaired users to explore Virtual
Environments (VEs) through haptic/acoustic interaction. The system may have many interesting educational
applications: visually impaired people can access and learn informative contents conveyed by
appropriately designed and rendered 3D VEs. Moreover, a visual 3D scene editor allows the domain experts
responsible for the learning process to design the VE even if they are not well versed in VR. This tool enables
easy prototyping and fast modification of the haptic/acoustic rendering to fit user feedback: the
design of the learning experience therefore arises from the cooperation of the domain expert with the final users.
1 INTRODUCTION
Virtual Reality can help visually impaired users to
learn information expressed as 3D virtual
environments (VEs), which can represent the shape
of objects but also more abstract concepts.
Haptic devices such as the PHANToM (Massie,
Salisbury, 1994), gesture inputs such as the
CyberGlove (Immersion), and Text-To-Speech and
Speech Recognition technologies enable a more
intuitive and natural Human-Machine interaction.
Force feedback, besides traditional auditory and
visual rendering, allows the tactile manipulation and
exploration of interactive 3D virtual objects.
Haptics and VR have been investigated to
enhance the learning of concepts involving three-
dimensional spatial data by sighted students (Jones,
Bokinsky, 2002). (Magnusson, Rassmus-Gröhn,
2005; Yu, Brewster, 2002; Jacobson, 2002) present
valid educational haptic/acoustic VR applications for
visually impaired users: they facilitate the
comprehension of information usually conveyed by
physical artefacts, which are less effective and more expensive.
A VE can offer several views of a scene to convey
the information of interest in an ordered and
progressive way. Haptic and acoustic effects make
it easier to acquire and comprehend the characteristics
of each 3D view, improving their integration into a
meaningful mental schema.
The proposed multimodal system (OMERO)
combines the use of touch, vision and hearing for the
exploration of multiple views of 3D virtual
environments. Blind and sighted people can share
their knowledge by experiencing (each through their
own interaction modality) the same virtual scene.
The VE must be designed by domain experts to
make the cognitive process simple and effective. A
visual editor enables non-expert users to associate a
multimodal description (MD) with a virtual scene. It
also involves the final users in the design process:
their feedback drives the customization of the MD.
OMERO has experimentally proved to be an
effective learning tool in several different domains.
2 THE OMERO MULTIMODAL FEATURES
OMERO has been designed to offer an enhanced
multimodal virtual experience. Its aim is not to
mimic the interaction with physical objects through the
exploration of their approximated digital versions.
Its goal is to design a digital representation of reality
whose information contents and characteristics make
the perception, comprehension and learning of
contents that can be expressed as spatial data easier
and more effective, in particular for visually
impaired people.
The user sits in front of the multimodal
workstation (figure 1) and interacts with the virtual
world via the haptic device, the keyboard and the
audio speakers. The PHANToM Desktop, a single
point haptic device, allows users to perceive the
scene as if they were touching, with the tip of a pencil,
a physical scaled model placed on the desk.
Multimodal effects (haptic and vocal) enhance the
interaction. Moreover, sighted people can
communicate with the blind user: the model is visually
rendered on the screen and a red sphere shows the
current 3D position of the haptic device tip in the
virtual world. The point of view of the visual
rendering can be changed by suitable GUI
commands (thumbwheels or a “bring me to” button)
to follow the avatar movements; this does not
affect the stability of the haptic reference system.
Figure 1: The exploration setup.
The MD of the scene is fundamental for an
effective learning process and influences data
representation and data retrieval.
Data Representation defines the type and amount
of information represented by the virtual scene, its
mapping to the 3D components of the virtual world
and its organization into several semantic views.
Active Objects are parts of the scene that
activate a specifically defined action when touched
by the user. The haptic interface can generate tactile
effects (such as vibration, viscosity, …) that can
convey further information beyond shapes. Active
objects can be haptic, acoustic or haptic/acoustic and
can provide data (e.g. historical/artistic descriptions,
dimensions, material, etc.) by vocal messages.
Active objects can also be dynamic; their dynamic
behaviour can be activated either automatically,
whenever the user touches them, or on demand by
proper commands.
Scenarios try to overcome the serial nature of
touch, which does not provide the quick and unitary
perception of scenes that sight does. A complex
virtual world, rich in details, generates long
sequences of local perceptions that are hard to
integrate into a coherent, meaningful mental schema.
Scenarios are sets of active objects representing
semantically consistent and coherent views of the
information content of the scene. When users
select a scenario, they focus on the information
associated with its active objects, temporarily
discarding all the other data.
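As a minimal sketch of how active objects and scenarios might be organized in code, consider the following Python fragment; class names, fields and the TTS stand-in are invented for illustration and are not taken from the actual OMERO implementation:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    # Hypothetical sketch: names are illustrative, not the actual OMERO API.

    @dataclass
    class ActiveObject:
        name: str
        haptic_effect: Optional[str] = None        # e.g. "vibration" or "viscosity"
        vocal_message: Optional[str] = None        # spoken when the object is touched
        on_touch: Optional[Callable[[], None]] = None  # optional dynamic behaviour

        def touch(self) -> None:
            # Called by the haptic loop when the stylus contacts the object.
            if self.vocal_message:
                print(f"[speech] {self.vocal_message}")  # stand-in for a TTS call
            if self.on_touch:
                self.on_touch()

    @dataclass
    class Scenario:
        # A semantically consistent view: only its objects are rendered.
        name: str
        objects: List[ActiveObject] = field(default_factory=list)

    # Selecting a scenario focuses rendering on its active objects only,
    # temporarily discarding everything else in the scene.
    rivers = Scenario("rivers and lakes",
                      [ActiveObject("Ofanto", "vibration", "The Ofanto river")])
    rivers.objects[0].touch()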
Data Retrieval concerns how the user interacts
with the virtual world: the navigation (how the user
can move inside the scene) and the exploration (how
3D objects transmit their associated information).
The following features support the navigation
task and facilitate the visit of the scene:
Containment box: a virtual box surrounding the
scene that prevents blind users from moving too far
from their goals. It has proved to avoid wasting
time in useless regions of the workspace and makes
it easier to find the objects of interest.
Guided path: a sort of guided visit around the
virtual environment. Suitable attractive forces drive
the exploration along predefined paths (De Felice et
al., 2005). It has proved valuable for becoming familiar
with the scene and for building complete and effective
mental schemas.
Dragging: dynamically selects which part of
a large model is shown in the workspace. Inside
OMERO the haptic stylus can drag either the virtual
scene or the containment box (by pushing on its walls).
Scaling: dynamically changes the relative size of
the model with respect to the user's fingertip (which,
having a fixed size in the real world, can prevent the
perception of small details). Increasing the size of a
model makes small details accessible and reduces
the dexterity required for their correct perception.
If the user is touching an object, scaling is applied
with respect to the contact point (as sketched below),
keeping a meaningful reference that prevents the user
from being confused by the changes in the environment.
Similar dragging and scaling techniques can be
found in (Magnusson, Rassmus-Gröhn, 2003).
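The following Python sketch illustrates the simple geometry behind three of these navigation features (containment-box clamping, guided-path attraction and scaling about the contact point); the function names and gain values are assumptions made for illustration, not OMERO source code:

    import numpy as np

    def clamp_to_containment_box(tip, box_min, box_max):
        # Containment box: keep the stylus tip inside the box around the scene.
        return np.minimum(np.maximum(tip, box_min), box_max)

    def guided_path_force(tip, path, k=0.5):
        # Guided path: spring-like attractive force pulling the stylus tip
        # towards the nearest sample of a predefined path (gain k is made up).
        nearest = path[np.argmin(np.linalg.norm(path - tip, axis=1))]
        return k * (nearest - tip)

    def scale_about_contact(vertices, contact, s):
        # Scaling: resize the scene by factor s while keeping the contact
        # point fixed, preserving a meaningful reference for the user.
        return contact + s * (vertices - contact)

    # Example: doubling the scene size leaves the touched vertex unchanged.
    scene = np.array([[0.10, 0.00, 0.00], [0.20, 0.10, 0.00]])
    print(scale_about_contact(scene, scene[0], 2.0))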
3 MULTIMODAL DESCRIPTION
Multimodal rendering can provide information
through different sensory channels (redundancy) and
in alternative forms (polymorphism). The MD
(“how” the rendering is done) has been decoupled
from the structural description of the virtual scene
(“what” is rendered). Thus the same geometrical
representation can be rendered in different ways by
modifying the associated multimodal rendering.
The system loads a VRML file containing all the
information about the geometry of the virtual scene
and an XML file, based on a schema called OMDL
(OMERO Multimodal Description Language), that
describes the multimodal appearance of the scene.
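Since the OMDL schema itself is not detailed in this paper, the following Python sketch uses invented tag and attribute names purely to illustrate this decoupling between the VRML geometry (“what”) and the multimodal appearance (“how”):

    import xml.etree.ElementTree as ET

    # Invented OMDL-like fragment: the real schema is not given in the paper.
    OMDL_EXAMPLE = """
    <omdl scene="castle.wrl">
      <scenario name="passages">
        <activeObject node="door_01" haptic="vibration"
                      speech="Passage to the inner courtyard"/>
      </scenario>
    </omdl>
    """

    root = ET.fromstring(OMDL_EXAMPLE)
    print("geometry file:", root.get("scene"))   # the VRML "what"
    for scenario in root.iter("scenario"):       # the multimodal "how"
        for obj in scenario.iter("activeObject"):
            print(scenario.get("name"), obj.get("node"),
                  obj.get("haptic"), obj.get("speech"))

Keeping the rendering attributes in a separate file means the same geometry can be re-rendered in a different way by editing only the XML, without touching the VRML model.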
Creating complex virtual scenes is a difficult task
(Magnusson, Rassmus-Gröhn, 2004). Proper tools
are required to make this design phase faster and
easier. VRML models can be created using several
applications (CAD systems, Google SketchUp, etc.).
A visual editor, providing an intuitive and
straightforward authoring of the rendering, has been
developed (figure 2).
Figure 2: The look-and-feel Visual Editor.
Contextual menus allow users, even if not well
grounded in VR, to visually edit active objects and
scenarios using the mouse or the haptic tool. The people
responsible for the learning process and the final
users cooperate in the design: the former select
solutions fitting their cognitive aims, the latter
provide feedback about the multimodal rendering.
4 OMERO APPLICATIONS
The previously described features of the system have
been implemented and verified in several fruitful
educational applications.
Svevian Castle. A VRML model of the accessible
areas of the ground floor of the Norman-Svevian
Castle in Bari (Italy) has been realized
(figure 1). Its complex topology requires a huge
amount of information to be transmitted to the user.
The multimodal application has been tested, using
different protocols, on two groups, each
composed of four visually impaired people who had
never visited the castle before (De Felice et al.,
2007). The test sessions were followed by a real visit
to the castle to check both the effectiveness of the
proposed features and their best use to produce an
intuitive and simple multimodal interaction.
The application has also been proposed, during
the ‘International Day of People with a Disability’, to
twenty blind visitors of the castle. They started with
a basic model representing the whole plan of the
castle, with active objects highlighting the passages
between different environments, and then moved to
an enlarged version of the main environments and of
their internal objects, with the associated historical
and artistic information. More than half of them
experienced a more conscious real visit of the castle
thanks to the mental schema constructed during the
virtual experience. They easily found real objects
whose presence had been emphasized in the virtual
model by attractive forces and vibrations. The most
curious among them found the vocal explanations
about history, dimensions, building materials and
curiosities really interesting and stimulating.
A blind child with psychomotor problems was
able to concentrate, to move correctly in the virtual
environment and to recognize and open doors.
Figure 3: The streets scenario of the Apulia map model.
Geographical Map. The virtual model of the Apulia
region (figure 3), constructed from GIS data, has
been organized into several semantic views
(provinces, rivers and lakes, towns, highways) that
blind users have used to progressively build a
complete mental schema of the territory.
It has been proposed to eight visually impaired
users to check its potentialities (De Felice et al.,
2007). It has then been informally experienced by
eleven visually impaired users during a meeting of the
Italian National Blind Association. Ten of them
came from other Italian regions and had almost no
knowledge of Apulia. The only native user, exploiting
her known contextual cues, found it very easy to move
through the map and judged the haptic interaction
realistic and effective. Two users were unable to
complete the exploration of the model: one was very
tired from previous meeting activities while the other
had some hand coordination problems. The other
users moved through the scenarios with growing
interest, also due to their increasing familiarity with
the haptic device. All the users were able to learn
new information. Even the native user increased her
knowledge by discovering new characteristics.
This type of application received great interest
from blind users, who provided many suggestions
to improve the information contents and the
interaction modalities of the VE.
5 CONCLUSIONS
A framework allowing visually impaired people to
access virtual reality through a multimodal interaction
including touch has been presented. The haptic
feedback extends the visual and auditory interaction
and enables blind users to access the information
content of virtual scenes effectively and efficiently.
This multimodal interaction and the
multilayered representation of the real world
strongly help visually impaired people to construct a
mental schema of the scenes.
The OMERO system proposes the use of virtual
reality to generate an augmented experience that
conveys information of different natures (shapes,
geometric properties, abstract concepts) in an
integrated and compact way.
The experiences with blind users suggest
that the multimodal interaction needs to be tailored
to the specific user: the OMDL schema allows a
quick and easy design and implementation of the
rendering without affecting the geometrical structure
of the virtual scene. A visual editor allows this
design to be carried out even by domain experts
without specific training in virtual reality.
Almost all the visually impaired users found the
use of the system natural and reached
satisfactory results, providing positive feedback
about this new tool. The approach represents a way
to overcome some serious limitations of the direct
exploration of physical objects and opens new,
active and exciting learning opportunities to the
blind community.
ACKNOWLEDGEMENTS
The project has been carried out with the support of
the Italian Union of the Blind and of the Regional
Direction for Cultural Heritage and Landscape of Apulia.
REFERENCES
Massie, T.H., Salisbury, J.K., 1994. A device for probing
virtual objects. In ASME Winter Annual Meeting,
Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems.
Immersion website:
www.immersion.com/3d/products/cyber_glove.php
Jones, M.G., Bokinsky, A., et al., 2002.
NanoManipulator Applications in Education: The
Impact of Haptic Experiences on Students’ Attitudes
and Concepts. In HAPTICS’02, 10th Symp. on Haptic
Interfaces for Virtual Environments & Teleoperator
Systems. IEEE Computer Society Press.
Magnusson, C., Rassmus-Gröhn, K., 2005. Audio haptic
tools for navigation in non-visual environments. In
Enactive’05, 2nd Int. Conference on Enactive
Interfaces, Genoa, Italy, November 17-18, 2005.
Yu, W., Brewster, S., 2002. Multimodal Virtual Reality
Versus Printed Medium in Visualization for Blind
People. In Fifth International ACM Conference on
Assistive Technologies, Edinburgh, Scotland, July
8-10, 2002.
Jacobson, R.D., 2002. Representing spatial information
through multimodal interfaces. In 6th International
Conference on Information Visualisation, 2002.
De Felice, F., Gramegna, T., Renna, F., Attolico, G.,
Distante, A., 2005. A Portable System to Build 3D
Models of Cultural Heritage and to Allow Their
Exploration by Blind People. In HAVE’05, IEEE
International Workshop on Haptic Audio Visual
Environments and their Applications, Ottawa, Ontario,
Canada, October 1-2, 2005.
Magnusson, C., Rassmus-Gröhn, K., 2003. Non-visual
zoom and scrolling operations in a virtual haptic
environment. In 3rd International Conference
Eurohaptics 2003, Dublin, Ireland, July 2003.
Magnusson, C., Rassmus-Gröhn, K., 2004. A Dynamic
Haptic-Audio Traffic Environment. In Eurohaptics 2004,
Munich, Germany, June 5-7, 2004.
De Felice, F., Renna, F., Attolico, G., Distante, A., 2007. A
Haptic/Acoustic Application to Allow Blind the
Access to Spatial Information. In Proc. IEEE World
Haptics 2007, Second Joint EuroHaptics Conference
and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems, Tsukuba,
Japan, March 22-24, 2007, pp. 310-315.