Picture Credits – Pixels.
You can now interact with a virtual reality environment using mouth gestures alone, thanks to a new technology developed by Professor Lijun Yin (Department of Computer Science) and his team of researchers at the State University of New York (SUNY), Binghamton.
Presently, the market is loaded with VR headsets at every price point, cheap and expensive alike, but these headsets cover only the upper half of the face and cannot recognize (let alone utilize) the user's entire face. The team evidently saw this as a significant shortcoming and set out to remove it by producing this new tech.
How VR applications have sprouted in the past few years
Even though the exact origins of Virtual Reality remain disputed, the past few years have seen an immense rise in the development of VR applications.
The first Head Mounted Display (HMD) was engineered at the Philco Corporation in the 1960s for helicopter pilots who needed to be able to see their surroundings while flying in the dark. The next big development took place in 1968, when Ivan Sutherland created an HMD connected to a computer that enabled the user to see the virtual world. Since then, we have not looked back.
Although it is the gaming applications that have gained stellar popularity, VR has been introduced in other fields as well; sport, healthcare, military, scientific visualization, entertainment, education, and media, to name a few. The proposed use of VR in shopping by Walmart is also a clear sign of VR being the next big thing in retail.
Most recent developments in HMDs have been small tweaks, such as making them lighter or using more comfortable fabric. The VR field clearly has room for deeper innovation, and Yin and his team seem to have seized the opportunity.
How does the new headset work?
The new depth-based recognition framework interprets mouth gestures in real time as a medium of interaction within the virtual reality environment. It uses a new 3D edge map approach to locate and trace the mouth's features, then classifies those features into seven distinct gesture classes.
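The article does not publish the team's code, so the following is only a minimal, hypothetical sketch of what such a pipeline might look like: a depth frame of the mouth region is reduced to an edge map by thresholding depth discontinuities, a simple geometric feature is measured from it, and a toy rule classifier assigns a gesture label. The function names, thresholds, and labels are illustrative assumptions, not the paper's actual features or its seven classes.

```python
# Hypothetical sketch of a depth-based mouth-gesture pipeline.
# NOT the authors' actual method; all names and thresholds are assumptions.

def edge_map(depth, threshold=0.1):
    """Mark pixels where depth changes sharply relative to the next column."""
    rows, cols = len(depth), len(depth[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols - 1):
            if abs(depth[r][c] - depth[r][c + 1]) > threshold:
                edges[r][c] = 1
    return edges

def mouth_opening(edges):
    """Vertical extent (in rows) of the detected mouth contour."""
    rows_with_edges = [r for r, row in enumerate(edges) if any(row)]
    if not rows_with_edges:
        return 0
    return max(rows_with_edges) - min(rows_with_edges) + 1

def classify(depth):
    """Toy classifier over illustrative labels (not the paper's classes)."""
    opening = mouth_opening(edge_map(depth))
    if opening == 0:
        return "neutral"
    if opening <= 2:
        return "smile"
    return "open_mouth"

# A tiny synthetic frame: an "open mouth" shows depth discontinuities
# spread across several rows of the mouth region.
frame = [
    [0.5, 0.5, 0.5, 0.5],
    [0.5, 0.9, 0.9, 0.5],
    [0.5, 0.9, 0.9, 0.5],
    [0.5, 0.9, 0.9, 0.5],
    [0.5, 0.5, 0.5, 0.5],
]
print(classify(frame))  # open_mouth
```

A real system would of course work on dense depth-camera frames and a learned classifier; the point here is only the shape of the pipeline: depth → edge map → features → gesture class.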
Practical Application Testing
Picture Credits – Unsplash.
The team tested this application on a number of independent individuals with a simple game. On donning the VR Headset, the subject individual had to guide his or her character through a forest, eating as many cakes as possible. The control actions for the game were as follows:
- Head rotation to select direction
- Mouth gestures to govern movement
- Smiling to eat a cake
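The control scheme above can be sketched as a per-frame mapping from inputs to a game command. This is a hypothetical illustration of the article's description, not the team's implementation; the field names and gesture labels are assumptions.

```python
# Hypothetical sketch of the game's per-frame control mapping.
# Names and gesture labels are illustrative assumptions.

def control(head_yaw_deg, gesture):
    """Map one frame of head rotation and mouth gesture to a game command."""
    command = {"turn": head_yaw_deg,  # head rotation selects direction
               "move": False,
               "eat": False}
    if gesture == "open_mouth":       # a mouth gesture governs movement
        command["move"] = True
    elif gesture == "smile":          # smiling eats a nearby cake
        command["eat"] = True
    return command

print(control(15.0, "smile"))  # {'turn': 15.0, 'move': False, 'eat': True}
```

Keeping direction (head) and action (mouth) in separate input channels is what lets the player steer and trigger actions simultaneously, which matches the hands-free design the team describes.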
The system worked accurately, attaining high recognition rates, and was then validated through a real-time virtual reality application.
Even though the framework is in its nascent prototype phase, Professor Yin has high hopes for its numerous, useful applications in the future alongside entertainment.
Communication could become far richer; to quote Yin, “Imagine if it felt like you were in the same geometric space, face to face, and the computer program can efficiently depict your facial expressions and replicate them so it looks real.”
Individuals in the military, and even medical patients, can benefit greatly from this system, as they will be able to perform exercises that might not be possible for them in their regular lives. The medical field already uses VR for disabled patients, and with this framework their experiences will become more lifelike.
Post Submitted By – Sukhman K Attwal.
Site : https://www.ionizermag.com
Sukhman K Attwal
Twitter Handle: @AttwalSukhman
Editor-in-Chief and founder of Ionizer Mag, I am an engineer by profession. I am a Technology and Science enthusiast, with a proclivity for Artificial Intelligence. My venture allows me to make everyone understand and engage with Science & Tech in an easy manner.