VR Interaction Prototypes
Tools: Oculus Quest, C#, Unity, Blender, Muse EEG headband
Using the body for display and input
I wanted to explore the potential of using my body as a display and an input surface that’s always there. Given how current hand tracking struggles with occlusion, the forearms seem to be the next best choice.
The haptic feedback is incredible. Even though I was expecting it, I was surprised at how good it felt. The back of our arm is quite sensitive, and it’s an area we don’t touch very often. It would be interesting to see if there’s a drop in sensitivity over time as people get used to it. I think the intensity of the experience also comes from the tactile feedback from BOTH arms PLUS the visual input all converging at one point. Even through clothes the feeling is quite strong, though obviously not ideal.
I placed more interactive content on the non-dominant hand and informational, overview-type content (e.g. health data) on the dominant hand, where less interaction is expected. This way, interaction performed by the non-dominant hand can afford to be more imprecise and forgiving.
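For illustration, the placement rule boils down to something like this minimal Unity sketch; the anchor transforms and the handedness flag are assumptions about the rig, not my exact setup:

```csharp
using UnityEngine;

// Sketch of the placement rule: interactive content on the non-dominant forearm,
// glanceable/informational content on the dominant one. Anchors are assumed to
// follow the tracked wrists.
public class ForearmContentLayout : MonoBehaviour
{
    [SerializeField] Transform leftForearmAnchor;
    [SerializeField] Transform rightForearmAnchor;
    [SerializeField] GameObject interactivePanel;   // buttons, toggles, etc.
    [SerializeField] GameObject informationalPanel; // e.g. health overview
    [SerializeField] bool rightHanded = true;

    void Start()
    {
        // The non-dominant arm hosts the interactive panel (touched by the dominant hand).
        Transform nonDominant = rightHanded ? leftForearmAnchor : rightForearmAnchor;
        Transform dominant = rightHanded ? rightForearmAnchor : leftForearmAnchor;

        interactivePanel.transform.SetParent(nonDominant, false);
        informationalPanel.transform.SetParent(dominant, false);
    }
}
```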
I tried various approaches for changing the currently active content. I first tried assigning each content type to a specific finger pinch, but it was confusing to remember the finger assignments, especially when the other hand was using a different interaction (touching). The simpler pinch to go back/forward was better, but still suffered from pinching imprecision.
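The back/forward pinch switcher amounts to something like the sketch below, assuming the Oculus Integration OVRHand component; the finger bindings and page setup are illustrative rather than the exact ones I used:

```csharp
using UnityEngine;

// Sketch of the "pinch back/forward" content switcher. Assumes the Oculus
// Integration OVRHand component on the pinching hand.
public class PinchContentSwitcher : MonoBehaviour
{
    [SerializeField] OVRHand hand;              // tracked hand used for pinching
    [SerializeField] GameObject[] contentPages; // forearm content, one page active at a time

    int current;
    bool indexWasPinching, middleWasPinching;

    void Update()
    {
        if (hand == null || !hand.IsTracked) return;

        bool indexPinch = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        bool middlePinch = hand.GetFingerIsPinching(OVRHand.HandFinger.Middle);

        // Advance only on the pinch-start edge so a held pinch doesn't skip pages.
        if (indexPinch && !indexWasPinching) Show(current + 1);   // "forward"
        if (middlePinch && !middleWasPinching) Show(current - 1); // "back"

        indexWasPinching = indexPinch;
        middleWasPinching = middlePinch;
    }

    void Show(int index)
    {
        current = (index + contentPages.Length) % contentPages.Length;
        for (int i = 0; i < contentPages.Length; i++)
            contentPages[i].SetActive(i == current);
    }
}
```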
Anchoring content around the forearm was easy to comprehend and can benefit from muscle memory. But aside from the limited wrist rotation (~180°), even small movements can induce fatigue, especially at the limits of the range of motion. To work well it also needs strong visual feedback on "where you are", and you lose the auto-hiding when the arm is facing away from the head.
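Mapping wrist roll to content slots is essentially this kind of logic (a sketch; the wrist and elbow transforms are assumed to come from the hand-tracking skeleton, and the slot count and angle range are illustrative):

```csharp
using UnityEngine;

// Sketch of anchoring content "around" the forearm: wrist roll is mapped to one
// of a few slots arranged around the arm.
public class ForearmRollSelector : MonoBehaviour
{
    [SerializeField] Transform wrist;    // wrist joint of the display arm
    [SerializeField] Transform elbow;    // elbow joint, defines the forearm axis
    [SerializeField] GameObject[] slots; // content slots arranged around the arm
    [SerializeField] Transform head;     // used as an "up" reference

    void Update()
    {
        if (wrist == null || elbow == null) return;

        Vector3 forearmAxis = (wrist.position - elbow.position).normalized;

        // Project the wrist's up vector and a reference up onto the plane
        // perpendicular to the forearm, then measure the signed roll between them.
        Vector3 wristUp = Vector3.ProjectOnPlane(wrist.up, forearmAxis).normalized;
        Vector3 referenceUp = Vector3.ProjectOnPlane(head != null ? head.up : Vector3.up, forearmAxis).normalized;
        float roll = Vector3.SignedAngle(referenceUp, wristUp, forearmAxis); // -180..180

        // Only ~180 degrees of comfortable rotation, so divide that range into slots.
        float t = Mathf.InverseLerp(-90f, 90f, roll);
        int active = Mathf.Clamp(Mathf.FloorToInt(t * slots.Length), 0, slots.Length - 1);

        for (int i = 0; i < slots.Length; i++)
            slots[i].SetActive(i == active);
    }
}
```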
The forearm naturally lends itself to swiping parallel to its length. Swiping perpendicularly by flexing the finger (toward the center of the body) feels good, while the opposite direction is awkward (you end up touching with the fingernail). There is also better fidelity when starting a swipe from the top of the forearm rather than from the side.
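Distinguishing the two swipe directions can be done by projecting the fingertip's travel onto the forearm axis, roughly like this sketch (the transforms and thresholds are assumptions, not measured values):

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch of swipe detection on the forearm: the touching fingertip's motion is
// split into a component along the forearm and a component across it.
public class ForearmSwipeDetector : MonoBehaviour
{
    [SerializeField] Transform fingertip;  // index tip of the touching hand
    [SerializeField] Transform wrist;      // wrist of the display arm
    [SerializeField] Transform elbow;      // elbow of the display arm
    [SerializeField] float touchRadius = 0.05f;   // how close counts as "touching"
    [SerializeField] float swipeDistance = 0.04f; // metres of travel that count as a swipe

    public UnityEvent onSwipeParallel;
    public UnityEvent onSwipePerpendicular;

    Vector3 touchStart;
    bool touching;

    void Update()
    {
        // Closest point on the forearm segment to the fingertip.
        Vector3 forearmAxis = (wrist.position - elbow.position).normalized;
        float along = Vector3.Dot(fingertip.position - elbow.position, forearmAxis);
        float armLength = Vector3.Distance(elbow.position, wrist.position);
        Vector3 closestOnArm = elbow.position + forearmAxis * Mathf.Clamp(along, 0f, armLength);
        bool nowTouching = Vector3.Distance(fingertip.position, closestOnArm) < touchRadius;

        if (nowTouching && !touching) touchStart = fingertip.position; // swipe begins
        if (!nowTouching && touching)                                   // swipe ends
        {
            Vector3 delta = fingertip.position - touchStart;
            float parallel = Mathf.Abs(Vector3.Dot(delta, forearmAxis));
            float perpendicular = Vector3.ProjectOnPlane(delta, forearmAxis).magnitude;

            if (Mathf.Max(parallel, perpendicular) > swipeDistance)
            {
                if (parallel >= perpendicular) onSwipeParallel?.Invoke();
                else onSwipePerpendicular?.Invoke();
            }
        }
        touching = nowTouching;
    }
}
```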
Using a real desk for haptic feedback
The idea is that you could, on the fly, define your own interactive surface on top of a physical one. This then enables you to create a proper working environment with a mix of tactile surface interactions that we are familiar with and free-space interactions above and around the desk.
The proper application for this, however, is AR.
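Defining the surface itself can be as simple as fitting a plane through a few touched corners, along these lines (a sketch; the capture gesture and the 1 cm touch threshold are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of defining an interactive surface on top of a real desk: the user taps
// three corners with a tracked fingertip, a plane is fitted through them, and
// subsequent fingertip contact with that plane is treated as a "touch".
public class DeskSurface : MonoBehaviour
{
    [SerializeField] Transform fingertip;          // tracked index fingertip
    [SerializeField] float touchThreshold = 0.01f; // 1 cm above the plane counts as touching

    readonly List<Vector3> corners = new List<Vector3>();
    Plane deskPlane;
    bool defined;

    // Call this (e.g. from a pinch gesture) while touching a corner of the desk.
    public void CaptureCorner()
    {
        if (defined) return;
        corners.Add(fingertip.position);
        if (corners.Count == 3)
        {
            deskPlane = new Plane(corners[0], corners[1], corners[2]);
            defined = true;
        }
    }

    void Update()
    {
        if (!defined) return;
        float height = deskPlane.GetDistanceToPoint(fingertip.position);
        if (Mathf.Abs(height) < touchThreshold)
        {
            // Project the fingertip onto the plane to get a 2D "cursor" on the desk.
            Vector3 onSurface = deskPlane.ClosestPointOnPlane(fingertip.position);
            Debug.Log($"Touching desk at {onSurface}");
        }
    }
}
```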
SteadyHand - easier object stacking
You know how you hold your other palm underneath when stacking objects, ready to catch them in case they fall? What if that palm could actually, magically, hold whatever you are stacking? Or even define how the pieces should stack, perhaps perfectly aligned in a grid?
I also found it tiresome to keep the non-dominant hand steady, so I designed an easy way to create a copy of your hand and leave it there to keep doing the work.
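Leaving a copy of your hand behind is conceptually just freezing the current hand visuals in place, something like this sketch (the hand visual root and the toggle gesture are assumptions about the setup):

```csharp
using UnityEngine;

// Sketch of the SteadyHand idea: on command, spawn a frozen copy of the tracked
// hand at its current pose so it keeps "holding" the stack.
public class SteadyHandCopy : MonoBehaviour
{
    [SerializeField] Transform handVisualRoot;  // root of the rendered hand (mesh + bones)
    GameObject frozenCopy;

    // Call this from a gesture (e.g. a pinch on the other hand) to place or remove the copy.
    public void ToggleCopy()
    {
        if (frozenCopy != null)
        {
            Destroy(frozenCopy);
            return;
        }

        // Duplicate the current hand visuals at their current pose and leave them static.
        frozenCopy = Instantiate(handVisualRoot.gameObject, handVisualRoot.position, handVisualRoot.rotation);

        // Strip any scripts that would keep animating the copy; keep renderers and colliders.
        foreach (var behaviour in frozenCopy.GetComponentsInChildren<MonoBehaviour>())
            Destroy(behaviour);

        // Placeholder collider so stacked objects can rest against the frozen palm;
        // in practice you would fit it to the palm shape.
        if (frozenCopy.GetComponent<Collider>() == null)
            frozenCopy.AddComponent<BoxCollider>();
    }
}
```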
Head lean to see behind
It has occurred to me that we still tend to place VR content on a single plane around the user. In that sense, all the depth away from the user (or behind the content plane) seems to be wasted. I wanted to try out an idea where you lean to the side in order to “see behind” the content in the foreground, much like you would in real life. It could be useful as a quick, transient interaction to access less-used information and move it between the foreground and background.
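The interaction itself is little more than mapping lateral head offset to the foreground's visibility, roughly as sketched here (the camera reference, the CanvasGroup fade and the lean distances are illustrative choices):

```csharp
using UnityEngine;

// Sketch of "lean to see behind": the head's lateral offset from a calibrated
// resting position fades out the foreground content, revealing what sits behind it.
public class LeanToPeek : MonoBehaviour
{
    [SerializeField] Transform head;                // the main camera / centre-eye anchor
    [SerializeField] CanvasGroup foregroundContent; // content plane to fade while leaning
    [SerializeField] float leanStart = 0.08f;       // metres of lean before fading begins
    [SerializeField] float leanFull = 0.25f;        // lean at which the foreground is fully hidden

    Vector3 restingPosition;

    void Start() => restingPosition = head.position;

    void Update()
    {
        // Lateral (left/right) component of the head offset, measured along the head's right axis.
        Vector3 offset = head.position - restingPosition;
        float lateral = Mathf.Abs(Vector3.Dot(offset, head.right));

        // Fade the foreground as the lean grows, so the background becomes visible.
        float t = Mathf.InverseLerp(leanStart, leanFull, lateral);
        foregroundContent.alpha = 1f - t;
    }
}
```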
Telekinesis like a Jedi (VR + Muse Headband)
A small prototype to simulate learning how to use “The Force” to move objects at a distance (the scene in The Empire Strikes Back where Yoda teaches Luke by lifting his X-Wing). It uses the first-generation Muse headband for detecting The Force, an Oculus Quest with hand tracking, and OSC for messaging between the two. Once in a relaxed state of mind, the lights come on and the ship can be lifted with your hand.
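On the Quest side, the band powers arrive as plain OSC-over-UDP messages. A minimal receiver looks something like this sketch, which only handles single messages with float arguments; the port and the exact addresses the Muse data arrives on are assumptions about the setup:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Minimal sketch of the OSC link: listen on a UDP port and decode the simplest
// case, a single OSC message whose arguments are 32-bit floats.
public class MuseOscReceiver : MonoBehaviour
{
    [SerializeField] int port = 5000;

    UdpClient udp;

    void Start() => udp = new UdpClient(port);

    void Update()
    {
        // Drain any packets that arrived since the last frame.
        while (udp.Available > 0)
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            ParseMessage(udp.Receive(ref remote));
        }
    }

    void ParseMessage(byte[] data)
    {
        int offset = 0;
        string address = ReadOscString(data, ref offset);  // e.g. a band-power address
        string typeTags = ReadOscString(data, ref offset);  // e.g. ",ffff" (one 'f' per float)

        var values = new List<float>();
        for (int i = 1; i < typeTags.Length && typeTags[i] == 'f'; i++)
        {
            // OSC floats are big-endian; reverse for a little-endian host.
            byte[] be = { data[offset + 3], data[offset + 2], data[offset + 1], data[offset] };
            values.Add(BitConverter.ToSingle(be, 0));
            offset += 4;
        }

        Debug.Log($"{address}: {string.Join(", ", values)}");
        // These values would feed the relaxation heuristic sketched further below.
    }

    // OSC strings are null-terminated and padded to a multiple of 4 bytes.
    static string ReadOscString(byte[] data, ref int offset)
    {
        int end = offset;
        while (end < data.Length && data[end] != 0) end++;
        string s = System.Text.Encoding.ASCII.GetString(data, offset, end - offset);
        offset = (end + 4) & ~3; // skip the terminator and the padding
        return s;
    }

    void OnDestroy() => udp?.Close();
}
```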
Muse used to expose detection of concentration and relaxation, which I was going to use as a proxy for The Force, but that went away when they removed the public SDK. Without it, I had to resort to a primitive combination of high delta and theta and low gamma band power. Unfortunately this works poorly, as interpreting brain waves isn’t easy (I wish Muse shared some of their algorithms!).
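That “primitive combination” is essentially a smoothed threshold on the three band powers, along these lines (the thresholds and smoothing constant here are placeholders, not the values from the prototype):

```csharp
using UnityEngine;

// Sketch of the "Force" heuristic: relative band powers for delta, theta and gamma
// (as received over OSC from the Muse) are smoothed and thresholded.
public class ForceDetector : MonoBehaviour
{
    [SerializeField] float deltaThetaThreshold = 0.6f; // "high" slow-wave activity
    [SerializeField] float gammaThreshold = 0.3f;      // "low" fast-wave activity
    [SerializeField] float smoothing = 0.1f;           // simple exponential smoothing factor

    float delta, theta, gamma;

    public bool ForceActive { get; private set; }

    // Called by the OSC receiver with relative band powers (0..1), averaged over channels.
    public void OnBandPowers(float deltaIn, float thetaIn, float gammaIn)
    {
        delta = Mathf.Lerp(delta, deltaIn, smoothing);
        theta = Mathf.Lerp(theta, thetaIn, smoothing);
        gamma = Mathf.Lerp(gamma, gammaIn, smoothing);

        // "Relaxed" proxy: slow waves high, fast waves low.
        ForceActive = delta > deltaThetaThreshold
                   && theta > deltaThetaThreshold
                   && gamma < gammaThreshold;
    }
}
```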
I also encountered issues with the quality of the data from the headband, especially when it’s positioned right above the Quest. It’s extremely sensitive, so it has to be positioned perfectly (which is hard given the space the VR headset occupies). Any blink also disrupts the data, so this was a single recording with minimal blinking. Still, it was a fun experiment with a consumer-grade EEG device.