Having arrived with the whole team, we sat around the table with one of the co-founders and briefly discussed the project we're developing and the benefits of using the Manus VR gloves. After a tour around the office we got hands-on with the gloves themselves. The development team has created a small room with some objects the user can toy around with: some balls and cubes to pick up and throw around, a pole, a sort of DJ table with records that can be spun, and a few buttons.
During one of the Virtual Technology classes the teacher mentioned that the human brain is great at adopting limbs that aren't yours or aren't physically connected. Seeing your hands and fingers move in VR the way you move them in reality with the Manus gloves is a great example of this feat. The moment I put on the Vive headset it felt natural. Instinctively you close your fingers to pick up objects, and the same goes for VR. When pressing buttons, we generally do so with our index fingers, and once again, the same works in VR. The gloves work as intuitively as we had researched beforehand, and interacting with the world comes naturally without having to explain any controls or mechanical rules to the user. While the other team members got their chance to try out the gloves, I had the chance to talk with one of the developers. The Manus VR team works with Unity3D as well and provides an extensive library that comes with the gloves.
Knowing this and having experienced the gloves ourselves, we were convinced that adding Manus VR to the project is a worthwhile investment, both for KLM and for us. KLM and its employees get a much more accessible simulation experience with a drop-in, drop-out principle where controls don't have to be explained. And for us it is a chance to work and learn with a new technology, and thereby also to discover and research new topics made possible by using the gloves. In the field, will the gloves really give that extra immersion? Will it still feel right without actually having the physical objects present?

Audio is always an important part of a game, even more so in training simulations. Generally audio is done binaurally, panning between the left and right ears to give a sense of the direction the sound is emitted from. The last Virtual Technology class' subject was spatial audio in Virtual Reality. With Virtual Reality it becomes that much more important to have a proper audio system to immerse the user in the world. There is a lot to learn from how sounds and the human ears work in real life. One of the biggest aspects is how a sound is perceived differently depending on the direction it comes from. So-called HRTFs (Head-Related Transfer Functions) are a sort of filter that changes the way the sound is perceived (a YouTube video showcasing the difference between audio with and without HRTF in Counter-Strike: Global Offensive can be found here). Another thing to take into account is how sound travels through space. Air itself already has an effect on sound waves, but so does every other material. Concrete absorbs and transfers sound differently than a piece of cloth does. There's even a difference between concrete that's connected to the world and concrete that's disconnected. The students were tasked with finding a way to replicate some of these features in Unity and to see how Unity handles audio. Out of the box, Unity's audio features are rather limited.
There's an option to enable 3D sound, which opens up spatial audio, panning, rolloff, the Doppler effect and some other settings. On top of this, hidden in the project settings, the audio source can be set to make use of HRTF filters, provided by Microsoft. Unity audio is relatively complete for simple, average projects but lacks features like reverberating sound off surfaces and materials. For our project we wanted to delve deeper into audio and Virtual Reality.
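As a rough illustration of what that built-in 3D sound amounts to without HRTF filters, the panning and rolloff can be sketched in a few lines of Python. The function name, the constant-power pan law and the 1/d rolloff are my own choices for the sketch, not Unity's actual implementation:

```python
import math

def pan_gains(listener_pos, listener_fwd, source_pos):
    """Constant-power stereo panning plus inverse-distance rolloff,
    roughly mimicking plain 3D sound without HRTF. Positions and the
    forward vector are 2D (x, z) tuples on the horizontal plane."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dz), 1.0)   # clamp so gain can't blow up near the ear
    # signed angle between the listener's forward direction and the source
    azimuth = math.atan2(dx, dz) - math.atan2(listener_fwd[0], listener_fwd[1])
    # map the azimuth to a pan value in [-1 (left) .. +1 (right)];
    # note this folds front/back together, which is exactly what HRTFs fix
    pan = math.sin(azimuth)
    theta = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2 across the stereo field
    rolloff = 1.0 / dist                  # simple logarithmic-style distance falloff
    return math.cos(theta) * rolloff, math.sin(theta) * rolloff  # (left, right)

# a source directly to the right of a listener facing +z
left, right = pan_gains((0.0, 0.0), (0.0, 1.0), (1.0, 0.0))
```

Because the pan only depends on the left/right angle, a source in front and a source behind at the same angle sound identical, which is the gap the HRTF option is meant to close.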
To demonstrate the differences and the pros and cons of Steam Audio versus Unity audio, I created a scene with a concrete room and a few walls. Within this room there are two spheres that can be turned on and off. One uses Steam Audio and the other uses Unity audio. The spheres circle around the player, passing behind the concrete walls in their path. In the case of the Unity audio sphere, no difference can be heard when the sphere disappears behind the concrete walls in the middle. The only noticeable feature is the HRTF filters and the direction the audio comes from. The Steam Audio sphere, on the other hand, loses a lot of its volume and higher pitches once it travels behind the walls. The sound becomes clearer once the sphere reaches the corners of the walls, before being visible again. During the next Virtual Technology class I let the teacher and some other students try the application and listen to the differences. With both spheres everybody was able to tell where the audio was coming from, whether it was above or below them. The added value of Steam Audio, however, is the fact that the perceived sound changes based on the position of the source and the objects between the source and the player. Steam Audio offers more features, like placing sound probes in the rooms where sound can travel, to better replicate the HRTF filters. For the KLM training simulation we decided to continue using Steam Audio. An airplane muffles sound in an interesting manner due to its materials and shapes, and Steam Audio has the power to simulate these filters.
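The occlusion effect heard with the Steam Audio sphere can be sketched conceptually: cast a ray from the listener to the source, and if a wall is in the way, drop the volume and low-pass the sound. This is only a top-down 2D toy model; the function names, the attenuation factor and the cutoff frequencies below are illustrative assumptions, not Steam Audio's actual values or API:

```python
def segment_hits_aabb(p0, p1, box_min, box_max):
    """2D slab test: does the segment p0 -> p1 intersect the axis-aligned box?"""
    t0, t1 = 0.0, 1.0
    for axis in range(2):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:
            # segment is parallel to this slab; miss if outside it
            if not (box_min[axis] <= p0[axis] <= box_max[axis]):
                return False
        else:
            ta = (box_min[axis] - p0[axis]) / d
            tb = (box_max[axis] - p0[axis]) / d
            ta, tb = min(ta, tb), max(ta, tb)
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:
                return False
    return True

def occlusion_filter(listener, source, walls):
    """Return (volume multiplier, low-pass cutoff in Hz) for a source.
    An occluded source gets quieter and loses its high frequencies,
    like the Steam Audio sphere passing behind the concrete walls."""
    occluded = any(segment_hits_aabb(listener, source, lo, hi) for lo, hi in walls)
    if occluded:
        return 0.4, 1200.0   # muffled: reduced gain, highs rolled off
    return 1.0, 22050.0      # clear line of sight: unfiltered

# a wall between the listener at the origin and a source straight ahead
wall = ((-0.5, 0.9), (0.5, 1.1))
vol, cutoff = occlusion_filter((0.0, 0.0), (0.0, 2.0), [wall])
```

A real solution also accounts for diffraction around the wall corners, which is why the sphere already becomes clearer at the edges of the walls before it is visible again.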
For the class Virtual Technology I was tasked to do research into a subject of choice, related to virtual or augmented reality. During that time the team had been discussing the idea of having gesture recognition in the training simulation as a means of communicating with other characters: gestures and poses to block passengers from passing, beckon passengers to come over, or send them somewhere else. We have discussed this concept with KLM as well, and they mentioned that it would not be a priority for the training, but it could be extra functionality that pushes the realism of the training. Gesture recognition in virtual reality is relatively new and not a lot of information is available as of yet. A few games have implemented a basic version of gesture recognition where players can wave at a character and the character will wave back. Another example, Left-Hand Path, takes it a step further and implements different motions to cast different spells. This requires a way of recognizing the different motions of the controllers and comparing them to a library of pre-set motions. After having done some research into available tools and examples (the full research can be found at the bottom of this entry in PDF format), I called upon the VR Technology teacher, Juriaan, to discuss gesture recognition. Juriaan mentioned that there is a distinct difference between gestures and poses. A pose defines a stationary, single-frame stature, like crossing your arms in front of you. A gesture is a series of poses over time, a motion like beckoning. Looking for a gesture requires knowing the position and orientation of the controllers and comparing these to the pre-set values of a gesture. A problem with this is the discrepancy in the margins of the motions. Someone could make a beckoning motion while having the controller pointed downwards, while the gesture is meant to trigger an event when the controller points forwards.
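The pose-versus-gesture distinction can be sketched in a few lines: a pose is a single-frame check against a preset orientation within an angular margin, and a gesture is an ordered series of such poses matched over recorded frames. The helper names and the 30-degree tolerance are my own assumptions for the sketch, not part of any existing library:

```python
import math

def angle_between(a, b):
    """Angle in degrees between two 3D direction vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos))

def matches_pose(controller_dir, pose_dir, tolerance_deg=30.0):
    """A pose is a stationary, single-frame stature: the controller's
    orientation must lie within an angular margin of the preset direction."""
    return angle_between(controller_dir, pose_dir) <= tolerance_deg

def matches_gesture(frames, gesture_poses, tolerance_deg=30.0):
    """A gesture is a series of poses over time: each preset pose must be
    hit in order somewhere in the recorded frames (extra frames allowed)."""
    i = 0
    for f in frames:
        if i < len(gesture_poses) and matches_pose(f, gesture_poses[i], tolerance_deg):
            i += 1
    return i == len(gesture_poses)
```

The tolerance parameter is exactly where the margin problem lives: too tight and a beckoning motion with the controller dipped downwards never triggers, too loose and accidental motions fire events.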
It's a difficult task to clearly distinguish between a deliberate gesture and an accidental one. For the time being I have set this aside, as more pressing matters are at hand. However, we still keep the idea in the back of our heads, should the project reach a finished state and we have spare time to explore gesture recognition further.

Gesture Recognition in Virtual Reality