Background - We started this project as an Inclusive Design Challenge given by Microsoft in our UX Design class. The challenge was to design for inclusivity in a "deskless workspace". After exploring multiple options, my team decided to work on making digital art in VR accessible and used Google Tilt Brush as the starting point.
Problem -
VR is inaccessible to people with limb disabilities due to the use of hand controllers.
About 300 million people worldwide have some form of limb disability, and because most VR applications rely on hand controllers, they are not accessible to people with these disabilities. Someone with cerebral palsy, multiple sclerosis, paralysis, or a missing arm is completely excluded from utilizing the full potential of VR.

Solution – We designed ‘Craft’, a tool built on principles of multimodality that allows people to create art in Virtual Reality using voice commands and eye gestures. Multimodal human-computer interaction refers to interacting with the virtual and physical environment through natural modes of communication such as speech and gaze.

Team – Cherisha Agarwal, Joanna Yen, Pratik Jain, Raksha Ravimohan, Simi Gu, Srishti Kush
My role – I worked on research, persona definition, interaction design, rapid prototyping, user interviews, usability testing and documentation.
Duration – April 2018 – Nov 2018
Tools used – Sketch | InDesign | Unreal | Photoshop | JavaScript
Process
Card Sorting
Since we had the freedom to create any product we wanted, we brainstormed the professions and disabilities the team was interested in focusing on. After shortlisting the professions, we followed this process –
Listing all the tasks the person performs, to identify the points at which a disability would prevent them from working normally.
Card sorting to surface the problems we potentially wanted to work on. It helped us narrow our ideas down to a few scenarios and disabilities, such as an artist with limited hand mobility and a cashier with physical disabilities using a kiosk.
After discussing the idea with our professor, Dana Karwas, and getting feedback from the Microsoft team, we decided to go ahead with the idea of artists with limited hand mobility creating drawings in VR.
Secondary Research
The current market is flooded with options for drawing art in Virtual Reality, including Tilt Brush, Quill, Medium, and Blocks by Google.
Surprisingly, the apps and headsets meant for VR lack accessibility features; apps like Tilt Brush are completely unusable for someone with limited hand mobility. Here are some of our observations –
Drawing and selection of tools can be done only through handheld controllers at present.
Drawing with voice by dictating coordinates is not intuitive, and natural language processing is not yet advanced enough to reliably transform a user's commands into strokes.
Stakeholder Interviews
To help us develop our idea, we needed expert insights. Since we were foraying into unexplored territory, we needed new perspectives to better understand the interactions and complications involved. Some of the interviews we conducted were with –
1) Todd Bryant, NYU Professor for VR.
2) Serap Yigit, a User Experience Researcher at Google, to learn about user research techniques and usability testing.
3) Claire Kearney-Volpe, NYU Professor at the Ability Lab, who guided us to focus on multimodal interactions and put us in touch with potential users at Adapt Community Network.
4) User interviews at Adapt.
These interviews gave us real-world feedback on how users reacted to our idea. Some of the insights were –



To get more clarity on our idea, we decided to speak with artists and creative technologists to understand their workflow. These artists were mainly NYU students or working professionals in illustration, cinema, music, 3D modelling, graphic design and games. They helped us identify the different pain points and the learning curve.




Prototyping & Iteration
We proceeded to create a basic prototype to demonstrate and test our concept. To make the tool accessible, we needed to empathize with our users and identify pain points and intuitive ways of interacting. Since our users have limited hand mobility, we decided on multimodal interaction, using voice and eye gestures to perform tasks.
Our first prototype was a paper prototype, for which we listed the features we intended to add. We chose features that were widely understood and intuitive; many of these insights came from the artist interviews we had conducted earlier. The list of tools included drawing tools, system tools and functionality tools. We then printed each tool icon on an A4 sheet, as demonstrated in the pictures below.


User Testing Paper Prototype
We tested the paper prototype with designers and artists who had full hand mobility and experience with VR as well as 2D and 3D drawing software, to get quick feedback on usability. We also tested it with members of the ADAPT community who had limited hand movement but were interested in art and in the concept of drawing using eye movements.
Here is how the prototype worked -
a) The users wore a cap with a laser pen attached to it, or fixed the pen to their glasses.
b) The tools on the wall could be selected by projecting the laser beam onto them using head movements.
c) Once the user gazed at a particular tool, the selected tool was highlighted using a blue-violet light.
d) To draw, the user would move the laser point across the canvas and a student would trace the trajectory using a marker. Some tools, such as scale and the colour palette, were made expandable, and we used separate sheets of paper as pop-ups.
A few insights from users are below -




User Journey

Interaction Map

Information Architecture

Product Features

Final Prototype
After understanding our user journey, rebuilding our information architecture by reorganizing the tool structure, and finalizing our interactions, we proceeded to create the final high-fidelity prototype in Unreal Engine. It included an onboarding AI assistant called Crafty to help new users get familiar with the interface. The assistant guided the user through the features and functionality of each tool and was available if the user got stuck at any point.
The final prototype work covered movement tracking, an interactable user interface, time-based gaze selection, a teleport function, a painting function and changing the environment. Completing these tasks gave us a working prototype that users could interact with inside the immersive environment; a sketch of the gaze-selection logic follows below.
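To make the time-based gaze selection concrete, here is a minimal Unreal C++ sketch of the idea (the component, property and function names are hypothetical illustrations, not taken from our project files): each tick it traces a ray along the camera's forward vector, so head movement stands in for eye gaze, and a tool is selected once the user has dwelled on it for a set time.

```cpp
// GazeSelectComponent.h -- hypothetical name; an illustrative sketch, not the project's actual code.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Camera/PlayerCameraManager.h"
#include "Engine/World.h"
#include "GazeSelectComponent.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UGazeSelectComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    UGazeSelectComponent() { PrimaryComponentTick.bCanEverTick = true; }

    // Seconds the user must keep gazing at a tool before it is selected.
    UPROPERTY(EditAnywhere)
    float DwellTime = 1.5f;

    UPROPERTY(EditAnywhere)
    float TraceDistance = 1000.f;

    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override
    {
        Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

        // Trace along the HMD camera's forward vector (head movement stands in for eye gaze).
        APlayerCameraManager* Cam = UGameplayStatics::GetPlayerCameraManager(this, 0);
        if (!Cam) return;

        const FVector Start = Cam->GetCameraLocation();
        const FVector End = Start + Cam->GetCameraRotation().Vector() * TraceDistance;

        FHitResult Hit;
        GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility);

        AActor* HitActor = Hit.GetActor();
        if (HitActor && HitActor == FocusedTool)
        {
            // Still looking at the same tool: accumulate dwell time.
            FocusTimer += DeltaTime;
            if (!bSelectionFired && FocusTimer >= DwellTime)
            {
                OnToolSelected(FocusedTool);  // e.g. highlight it and make it the active brush
                bSelectionFired = true;       // fire once per dwell
            }
        }
        else
        {
            // Gaze moved to a new target (or empty space): reset the dwell timer.
            FocusedTool = HitActor;
            FocusTimer = 0.f;
            bSelectionFired = false;
        }
    }

private:
    UPROPERTY()
    AActor* FocusedTool = nullptr;

    float FocusTimer = 0.f;
    bool bSelectionFired = false;

    void OnToolSelected(AActor* Tool)
    {
        // Placeholder: in the prototype the selected tool was highlighted with a blue-violet light.
        UE_LOG(LogTemp, Log, TEXT("Selected tool: %s"), *Tool->GetName());
    }
};
```

Resetting the timer whenever the gaze leaves the target is what keeps the user from triggering accidental selections while simply scanning the tool wall.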

The AI tool - Crafty
Our team decided to use Unreal Engine to build the high-fidelity prototype because Unreal is well suited to building Virtual Reality content. To make a working prototype, there were several tasks we needed to accomplish, as explained above. Images of the working prototype are demonstrated below:
The most basic function of an eye-tracking painting tool in virtual reality was realized in our prototype using head movements. Users could choose tools, paint, teleport and change the environment with the final prototype, but it was still constrained by some technical limitations. A few functions, such as erase, undo and redo, could not yet be realized in Unreal, but we hope to make them work using other software and hardware. We also hope to look into eye-tracking technology so that tools can be selected and strokes drawn using gaze movements alone. In our current prototype, the voice instructions are manually monitored; we would like to automate this functionality as well to make a fully multimodal solution for our users.
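In the same spirit, the painting function can be sketched as follows (again a hypothetical Unreal C++ illustration rather than the project's actual code): while painting is active, a point is sampled a fixed distance along the head's forward direction every tick, and consecutive samples are joined into a stroke; debug lines stand in here for the brush strokes the real prototype rendered.

```cpp
// HeadPaintComponent.h -- hypothetical name; an illustrative sketch of painting via head movement.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Camera/PlayerCameraManager.h"
#include "DrawDebugHelpers.h"
#include "HeadPaintComponent.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UHeadPaintComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    UHeadPaintComponent() { PrimaryComponentTick.bCanEverTick = true; }

    // Toggled externally, e.g. by the (manually triggered) start/stop drawing voice command.
    UPROPERTY(BlueprintReadWrite)
    bool bIsPainting = false;

    // How far in front of the head the stroke is drawn.
    UPROPERTY(EditAnywhere)
    float BrushDistance = 300.f;

    UPROPERTY(EditAnywhere)
    FColor BrushColor = FColor::Magenta;

    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override
    {
        Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

        if (!bIsPainting)
        {
            bHasLastPoint = false;  // break the stroke when painting stops
            return;
        }

        APlayerCameraManager* Cam = UGameplayStatics::GetPlayerCameraManager(this, 0);
        if (!Cam) return;

        // Sample a point a fixed distance along the head's forward direction.
        const FVector Point = Cam->GetCameraLocation()
                            + Cam->GetCameraRotation().Vector() * BrushDistance;

        if (bHasLastPoint)
        {
            // Connect consecutive samples into a persistent stroke segment.
            DrawDebugLine(GetWorld(), LastPoint, Point, BrushColor,
                          /*bPersistentLines=*/true, /*LifeTime=*/-1.f,
                          /*DepthPriority=*/0, /*Thickness=*/2.f);
        }

        LastPoint = Point;
        bHasLastPoint = true;
    }

private:
    FVector LastPoint = FVector::ZeroVector;
    bool bHasLastPoint = false;
};
```

Toggling bIsPainting would correspond to the start/stop drawing voice command, which in our prototype was monitored manually by a team member during testing.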
To view the complete details of this project, check out the project booklet here.
XR Startup Bootcamp
This project received a grant of $10,000 from NYC Media Lab's XR Startup Initiative. During the 12-week intensive sessions, our team worked on the business model and product-market fit and conducted about 120 customer and expert interviews. We also showcased our prototype at various exhibits - NYVR Expo '18, Media Lab Summit '18, Exploring Future Reality '18, R-Lab XR Showcase, Science Fair BCG Ventures and Verizon 5G Labs.