Posts Tagged ‘virtual reality’

In 1999, the Institute of Medicine published a study concluding that medical errors in the US cost the lives of as many as 98,000 people each year (and run up a $17 to $29 billion bill to boot). Ten years later, the Safe Patient Project reported that, rather than showing improvement, the situation in the intervening decade may actually have gotten worse—to the tune of more than 100,000 deaths each year as a result of “preventable medical harm.” Given that the CDC puts the number of deaths from hospital infections alone at around 99,000 annually, the SPP’s number seems conservative.

Let me put this into perspective. A Boeing 737—the most popular airliner family in service today—typically seats somewhere between 130 and 200 people, depending on the model. So consider this: the Safe Patient Project’s estimate of preventable fatalities is akin to more than 500 fully loaded airliners plummeting to Earth and killing everyone on board—every year. How long do you think the FAA—or the public, for that matter—would stand for that?
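For the curious, here’s the back-of-the-envelope arithmetic behind that comparison, sketched in a few lines of Python; the seat count is an assumed typical figure, not an official one.

```python
# Back-of-the-envelope check of the airliner comparison above.
# The seat count is an assumption (a typical single-class 737 layout),
# not an official figure.
preventable_deaths_per_year = 100_000   # Safe Patient Project estimate
seats_per_737 = 180                     # assumed typical capacity

equivalent_crashes = preventable_deaths_per_year / seats_per_737
print(round(equivalent_crashes))        # -> 556 fully loaded airliners per year
```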

Fortunately there’s a solution: video games.

Being a videogamer doesn’t get a lot of respect in a lot of mainstream professions, but it has been instrumental to me in becoming a surgeon.”

Red Dragon simulator, ISIS

That’s Dr. Andy Wright, surgeon and core faculty member at the University of Washington’s Institute for Simulation and Interprofessional Studies (ISIS). The Institute’s goal is to use technology to improve the quality of healthcare education, patient safety, and surgical outcomes. Simulations are particularly effective as they allow trainees to easily repeat procedures until they’re successful, and provide a safe place for them to fail when they’re not. In Dr. Wright’s experience, the skill and manual dexterity necessary to play video games proficiently translate directly to surgical simulators—resulting in more effective training and fewer accidents in the OR.

Gamers have a higher level of executive function. They have the ability to process information and make decisions quickly, they have to remember cues to what’s going around [them] and [they] have to make split-second decisions.”

Accomplished gamers show heightened abilities to focus on critical elements while maintaining peripheral awareness of the surrounding environment, function amidst distraction, and effectively improvise if a situation doesn’t go according to plan. Past studies have repeatedly demonstrated this, and it makes sense: effectively navigating through and surviving a video game’s virtual world demands it. There are other characteristics of video games that make them particularly well-suited to prepare surgeons for the operating room: you interact with the game’s world through a video screen, and you have to be adept at manipulating images and items with a handheld controller. These skills are especially useful in the areas of laparoscopic (see my previous post here) and robot-assisted surgery.

da Vinci Surgical System

Take da Vinci, for example. It’s a robotic surgical system that allows surgeons to perform delicate, complex procedures through tiny incisions. The da Vinci system combines 3D, high-definition video with four interactive robot arms (there’s even a dual-console option where trainees can watch an actual procedure, and a switching mechanism that allows surgeons and trainees to exchange control during an operation). Surgeons manipulate these arms using precision controllers that scale the speed and range of their movements down to the much smaller size of the surgical instruments, allowing for unparalleled accuracy. Put simply, the most advanced robotic surgical system in the world employs an interface intimately familiar to video gamers.
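To make that interface idea concrete, here is a minimal, purely illustrative Python sketch of the motion-scaling principle the paragraph describes. The scaling factor, filter, and clamp values are assumptions chosen for demonstration; this is not the da Vinci’s actual control software.

```python
# Illustrative sketch only: a simplified motion-scaling step of the kind a
# robotic surgical console might use. The constants and structure here are
# assumptions for demonstration, not the real da Vinci implementation.

SCALE = 0.2          # assumed scaling: 10 mm of hand travel -> 2 mm of instrument travel
MAX_STEP_MM = 1.0    # assumed safety clamp on instrument movement per update

def scale_motion(controller_delta_mm, prev_filtered=(0.0, 0.0, 0.0), alpha=0.3):
    """Map a hand-controller displacement (mm) to an instrument displacement (mm).

    Applies crude low-pass filtering (tremor reduction), motion scaling,
    and a per-step safety clamp.
    """
    filtered = tuple(alpha * d + (1 - alpha) * p
                     for d, p in zip(controller_delta_mm, prev_filtered))
    scaled = tuple(SCALE * f for f in filtered)
    clamped = tuple(max(-MAX_STEP_MM, min(MAX_STEP_MM, s)) for s in scaled)
    return clamped, filtered

# Example: the surgeon's hand moves 10 mm to the right in one update.
step, state = scale_motion((10.0, 0.0, 0.0))
print(step)   # roughly (0.6, 0.0, 0.0): a sub-millimetre instrument movement
```

The point is the mapping itself: large, comfortable hand motions become tiny, smoothed instrument motions.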

Take gaming into the land of simulation, though, and you can start tapping into the medium’s real power. Virtual reality (VR) simulators are an effective means of getting fledgling surgeons comfortable with a variety of procedures, allowing them to perform a given surgery dozens of times before ever opening up a live patient. They also provide an environment in which surgeons can, in essence, fail safely. Within a simulation, they can develop critical skills and expertise without putting anyone at risk, experimenting with different techniques, learning what does—and doesn’t—work, and becoming safer and more effective. A 2002 Yale University study provided strong evidence for this: surgical residents trained in VR were 29 percent faster and six times less likely to make mistakes than their non-VR trained colleagues.

You can also customize a simulation to closely reflect reality, matching the conditions and characteristics of actual patients. In 2009, Halifax neurosurgeon Dr. David Clarke made history when he became the first person to remove a brain tumor from a patient less than 24 hours after removing the same tumor virtually, on a 3D rendering of that same patient. Two years later, doctors in Mumbai performed patient-specific instrumentation (PSI) knee replacement surgery on a patient after first running the operation virtually on an exact 3D replica of the patient’s knee.

Earlier this year, VR training took another leap forward: using the online virtual world Second Life, London’s St. Mary’s Hospital developed three VR environments—a standard hospital ward, an intensive care unit, and an emergency room—and built modules for three common scenarios (at three levels of complexity, for interns, junior residents, and senior residents) within them. According to Dr. Rajesh Aggarwal, a National Institute for Health Research (NIHR) clinician scientist in surgery at Imperial College London’s St. Mary’s Hospital,

The way we learn in residency currently has been called ‘training by chance,’ because you don’t know what is coming through the door next. What we are doing is taking the chance encounters out of the way residents learn and forming a structured approach to training. What we want to do—using this simulation platform—is to bring all the junior residents and senior residents up to the level of the attending surgeon, so that the time is shortened in terms of their learning curve in learning how to look after surgical patients.”

After running interns and junior and senior residents through the VR training, researchers compared their performances of specific procedures against those of attending surgeons. They found substantial performance gaps between interns, residents, and attendings—validating the VR scenarios as assessment and training tools. As Dr. Aggarwal explained,

What we have shown scientifically is that these three simulated scenarios at the three different levels are appropriate for the assessment of interns, junior residents, and senior residents and their management of these cases.”

In the future, the team at St. Mary’s plans to study how this type of VR training can improve the clinical outcomes of patients treated by residents—ultimately using this tool to bring their interns’ and residents’ skills up to the level of the attendings, help them better manage clinical patients, and, at the end of the day, save lives.

In my previous two posts, I talked a lot about avatars and some of the rather intriguing and exciting developments happening around virtual worlds. In brief, both are evolving far beyond what early digital pioneers could have envisioned when they took their first steps into virtual reality. We have the ability today, as demonstrated by LucasArts’ E3 reveal of Star Wars: 1313, to render near-photo-realistic environments in real time. Absolute photo-realism is a mere skip in time away—five years at the outside—and completely immersive, realistic virtual worlds experienced through any connected device are already on the horizon.

This does not spell the death of the avatar. Avatars will, I’m fairly certain, always have a place within gaming and VR. But we’re fast approaching a technological revolution the likes of which humanity has never experienced. Soon we will have the ability to interact with virtual environments directly, without filtering our actions through a digital intermediary.

Very soon, in fact. Prototypes of the Oculus Rift VR headset show great promise in delivering fully-immersive 3D gaming to the masses. With it, you can climb into the cockpit of a starfighter and engage in ship-to-ship combat, surrounded on all sides by the vastness of space and the intensity of interstellar battle (as with EVE VR). Combine the Rift with the Wizdish omni-directional treadmill and a hands-free controller like the Xbox Kinect, and you can transform a standard first-person shooter into a full-body experience—allowing you to ditch the thumbsticks and literally walk through a game’s virtual environment, controlling your actions via the natural motion of your hands, feet, arms, legs, and head.
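As a rough illustration of how those inputs might come together, here is a minimal Python sketch of a single update step that steers an avatar using head orientation and walking speed. The device readings and the function itself are invented placeholders, not the actual Rift, Wizdish, or Kinect APIs.

```python
import math

# Purely illustrative: how a game loop might fuse head tracking and an
# omni-directional treadmill into avatar movement. The inputs are invented
# placeholders, not readings from the real Rift, Wizdish, or Kinect SDKs.

def update_avatar(pos, head_yaw_deg, treadmill_speed_mps, dt):
    """Move the avatar in the direction the player is facing,
    at the speed the player is physically walking."""
    yaw = math.radians(head_yaw_deg)
    dx = math.sin(yaw) * treadmill_speed_mps * dt
    dz = math.cos(yaw) * treadmill_speed_mps * dt
    return (pos[0] + dx, pos[1], pos[2] + dz)

# Example frame: the player faces 90 degrees (east) and walks at 1.4 m/s.
pos = update_avatar((0.0, 0.0, 0.0), head_yaw_deg=90.0, treadmill_speed_mps=1.4, dt=1/60)
print(pos)  # the avatar steps roughly 2.3 cm east this frame
```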

In the world of augmented reality (AR), Google Glass will hit the streets later this year, providing a wearable, hands-free interface between the real and the virtual, enhancing your interaction with reality, and allowing you to transmit your view of the world across the globe.

But the real future lies in a recently Kickstarted project by startup tech company Meta. Meta’s developing a system for interacting with the virtual world that tears open the envelope of the possible, and drives the line dividing virtual from real one step closer to extinction. It’s a combination of stereoscopic 3D glasses and a 3D camera to track hand movements—allowing for a level of gestural control of the virtual world previously unseen (think Minority Report or The Matrix Reloaded).
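To give a flavor of what gestural control involves under the hood, here is a small, hypothetical Python sketch that turns two tracked fingertip positions into a “grab” on a virtual object. The tracking data, thresholds, and function names are assumptions for illustration, not Meta’s actual SDK.

```python
import math

# Illustrative only: one way a gesture layer might turn tracked 3D hand
# keypoints into a "grab" action on a virtual object. The thresholds and
# data are invented; this is not Meta's (or anyone's) actual SDK.

PINCH_THRESHOLD_CM = 2.5   # assumed fingertip separation that counts as a pinch
GRAB_RADIUS_CM = 5.0       # assumed distance within which a pinch grabs an object

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update_grab(thumb_tip, index_tip, obj_pos, held):
    """If thumb and index fingertips pinch together near the object, grab it;
    while held, the object follows the midpoint of the pinch."""
    pinching = distance(thumb_tip, index_tip) < PINCH_THRESHOLD_CM
    midpoint = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    if pinching and (held or distance(midpoint, obj_pos) < GRAB_RADIUS_CM):
        return midpoint, True      # the object snaps to the hand
    return obj_pos, False          # released (or never grabbed)

# One tracked frame: fingertips 2 cm apart, right next to a virtual cube.
pos, held = update_grab((0, 0, 30), (0, 2, 30), obj_pos=(1, 1, 30), held=False)
print(pos, held)  # (0.0, 1.0, 30.0) True -> the cube is now "in hand"
```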

Unlike the Oculus Rift, the Meta system offers hands-on physical control and works in real space, not just within a virtual gaming world. And unlike Google Glass, Meta creates a completely immersive virtual environment. According to a recent article by Dan Farber on CNET,

Meta’s augmented reality eyewear can be applied to immersive 3D games played in front of your face or on [a] table, and other applications that require sophisticated graphical processing, such as architecture, engineering, medicine, film and other industries where 3D virtual object manipulation is useful. For example, a floating 3D model of a CAT scan could assist doctors in surgery, a group of architects could model buildings with their hands rather than a mouse, keyboard or stylus and car designers could shape models with the most natural interface, their hands.”

In other words, with the Meta system, the wearer can actually enter and manipulate the virtual world directly, altering it to serve whatever purpose s/he desires. Atheer, another recent entry into the field of AR, is working on a similar system. Like Meta, Atheer’s technology uses a 3D camera and stereoscopic glasses for gesture control, and also provides direct access to the virtual world. According to Atheer founder and CEO Sulieman Itani,

We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in [the] physical world around you. In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at [a] restaurant, and when the friend walks into the restaurant, only they can see it.”

The biggest difference between the two systems is that Atheer isn’t building a hardware-specific platform; it will be able to run on top of existing systems like Android, iOS, Windows Mobile, or Xbox. Apps built specifically with the Atheer interface in mind will be able to take full advantage of the technology. Those that aren’t optimized for Atheer will present users with a virtual tablet that they can operate by touch, exactly like an iPad or Galaxy Tab. Here’s Itani’s take:

This is important for people moving to a new platform. We reduce the experience gap and keep the critical mass of the ecosystem. We don’t want to create a new ecosystem to fragment the market more. Everything that runs on Android can be there, from game engines to voice control.”

We’re rapidly approaching a critical stage in our technological evolution, nearing the point at which we’ll be able to work, play, and live, at least part-time, in hyper-realistic, fully-immersive virtual worlds, just as we do in real-world spaces today. So, what does this mean? Games will get better. Virtual worlds will become richer and more complex, and take on greater significance as we spend more time in them. And we, as humans, may begin to lose our grasp of the real. I spoke to Kim Libreri at Industrial Light and Magic about this, and he agreed that this could become a real problem. The human brain, he said, is a very flexible learning device. With the gains in fidelity that we’ll see in AR over the next decade, he believes that, as the human race evolves, it’s going to become more and more difficult to separate what’s real from what’s not. The longer we coexist within those places, the harder it will become, and this could begin to create problems. As Libreri said,

There’s a real, tangible threat to what can happen to you in the cyber world, and I think as things visualize themselves more realistically . . . think about cyber-crime in an AR world. Creatures chasing you. It’s gonna be pretty freaky. People will have emotional reactions to things that don’t really exist.”

It might also render people particularly susceptible to suggestion. It’s a bit like the movie Inception, where ideas are planted deep within a subject’s subconscious—except this would be easier. If you can’t distinguish the virtual from the real, a proposal put to you in a virtual world could carry the same weight as one presented to you in reality. If this sounds like science fiction, consider for a moment how vulnerable we are to the power of subliminal suggestion or even the direct messages within everyday advertising. No, if anything, the intricacies of our approaching reality will most likely far exceed our ability to imagine them.

Of course, there are positives and negatives to any technology. Through highly accurate simulations, immersive virtual worlds could allow people to visit places they might otherwise not be able to. They could also provide greater and more intuitive access to information, and the ability to use and manipulate it in ways that are today all but unthinkable. Whether the looming future of alternate reality will be predominantly good or bad is irrelevant. It is coming, one way or another, and there’s nothing we can do to stop it. If our history and the ever-increasing speed of development and change are predictive—or at least indicative—of future events, then alternate reality, in whatever form it assumes, is poised to bring an end to the separation of virtual and real. And if that happens, we will all bear witness to the birth of a singularity beyond which the nature of human interaction—and indeed, of humanity itself—will be changed, fundamentally, forever.

You can learn more about the Oculus Rift here.

… and the Wizdish treadmill here…

… and here.

There’s a video demo of Google Glass here.

And a demo of EVE VR here:

More info about EVE VR is here.

You can learn more about Meta and their computing eyewear here…

… and here.

You can read about Atheer here:

And for an overview of the future of AR, check out this article:

Avatars are all the rage these days. Facebook profile pics, World of Warcraft and EVE Online characters, Second Life and OpenSim personas—these are just a few examples of a growing phenomenon. We all seem to have some sort of digital representation of ourselves that we project into cyberspace—and we spend a fair amount of time designing and customizing these avatars, getting their appearances just right.

Who can blame us, really? After all, they are us, the digital faces we present to the virtual world. That doesn’t mean they have to perfectly replicate our real-world identities, though. In fact, the beauty of designing an avatar is the ability to get creative, to choose exactly who we want to be, to build our ideal selves.

At first blush, it would appear that this is a one-way exchange: through the creative process, we affect the avatar, which we then use to interact with the virtual world. Certainly, with things like static photos and images, this is the case. However, with respect to a 3D digital persona that responds to our commands, it gets a little more complicated. In fact, as several researchers are discovering, situations that we experience virtually through our avatars can impact and even alter our reality.

Palo Alto research scientist Nick Yee dubbed this the Proteus Effect, after the Greek sea god Proteus, who could assume many different forms (and whose name lends itself to the adjective protean—changeable). He first described it in 2007 while studying how an avatar’s appearance and height affected the way people behaved in the virtual world. In his initial research, Yee provided study subjects with avatars that were attractive or unattractive, tall or short, and then watched them interact with a virtual stranger (controlled by one of Yee’s lab assistants). Here’s what Yee’s team discovered:

We found that participants who were given attractive avatars walked closer to and disclosed more personal information to the virtual stranger than participants given unattractive avatars. We found the same effect with avatar height. Participants in taller avatars (relative to the virtual stranger) negotiated more aggressively in a bargaining task than participants in shorter avatars.”

Yee’s work demonstrated clearly that an avatar’s appearance could change how someone acted within a virtual environment and interacted with its residents.

Okay, so what? It’s interesting, but what relevance does it have to the real world?

In 2009, Yee asked the same question: did changes in virtual-world behavior translate to physical reality? He revisited his 2007 study, adding another task: after concluding their virtual interaction, Yee had each participant create a personal profile on a mock dating site and then, from a group of nine possible matches, select the two s/he’d most like to get to know. Without fail, Yee says,

… we found that participants who had been given an attractive avatar in a virtual environment chose more attractive partners in the dating task than participants given unattractive avatars in the earlier task. This study showed that effects on people’s perceptions of their own attractiveness do seem to linger outside of the original virtual environment.”

The Proteus Effect has been credited with more than just creating more aggressive negotiators or making people feel better about themselves: weight loss, substance abuse treatment, environmental consciousness, perception of obstacles… all affected by people’s experiences through their avatars within virtual reality. According to Maria Korolov, founder and editor of the online publication Hypergrid Business—who has been studying virtual worlds since their inception—people who exercise within a virtual world…

… will exercise an hour more on average the next day in real life, because they think of themselves as an exercising-type person. It changes the way you think.”

Researchers at the University of Kansas Medical Center back this up. A weight-loss study there found that people who lost weight through either virtual or face-to-face exercise programs were better at keeping the weight off if they took part in maintenance programs delivered through Second Life.

Regarding substance abuse, Preferred Family Healthcare, Inc., found that treatment outcomes for participants in their virtual programs were as good as or better than those for people who took part in real-life counseling. More significantly, fewer people dropped out of virtual treatment—vastly so: virtual programs saw a 90 percent completion rate, as opposed to 30-35 percent completion for programs at a traditional, physical facility.

Environmental consciousness may seem like a stretch, but researchers at Stanford University’s Virtual Human Interaction Lab found that people who felled a massive, virtual sequoia used less paper in the real world than those who only imagined cutting down a tree.

And in perhaps the most interesting example, a study at the University of Michigan showed that participants whose avatar had a backpack attached to it consistently overestimated the heights of virtual hills—but only if they’d created the avatar themselves. Participants assigned an avatar by the researchers were much more accurate in their estimates. Said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory at Penn State, who worked on the study,

You exert more of your agency through an avatar when you design it yourself. Your identity mixes in with the identity of that avatar and, as a result, your visual perception of the virtual environment is colored by the physical resources of your avatar… If your avatar is carrying a backpack, you feel like you are going to have trouble climbing that hill, but this only happens when you customize the avatar.”

Of course, there is a dark side to the Proteus Effect. A study co-written by Jorge Peña, assistant professor in the College of Communication at the University of Texas at Austin, Cornell University professor Jeffrey T. Hancock, and graduate student Nicholas A. Merola (also at Austin) showed that avatars can be used to prime negative responses in users within a virtual world. In two separate studies, researchers randomly assigned participants dark- or white-cloaked avatars, or avatars wearing physician or Ku Klux Klan-like uniforms, then asked them either to write a story about a picture or to play a video game on a virtual team and come to a consensus on dealing with infractions. Those given the dark cloaks or KKK-like robes consistently showed negative or antisocial behavior. What really causes concern, though, is that they were completely unaware they’d been primed to do so. According to Peña,

By manipulating the appearance of the avatar, you can augment the probability of people thinking and behaving in predictable ways without raising suspicion. Thus, you can automatically make a virtual encounter more competitive or cooperative by simply changing the connotations of one’s avatar.”

Behavior modification through manipulation of appearance is nothing new: Traditional, face-to-face psychological experiments have shown that changes in dress can affect a person’s behavior or perception of themselves. That this also happens in the virtual world says something interesting about the human brain and its ability to distinguish reality from virtuality.

It should also give us pause. We’re rushing headlong into a brave new virtual world, and the rush seems all but unstoppable. This, in itself, is not a bad thing. However, as we move forward, we would do well to proceed deliberately and with caution. Our history is rife with examples of decisions made in ignorance of their potential outcomes, and of good intentions corrupted. If we are going to plunge into the virtual, we must consider the consequences—intended or otherwise—that our choices, and our actions, may beget.

You can read a summary of Nick Yee’s work here.

For a discussion of the University of Kansas study, check this link.

You can read about the Preferred Family Healthcare study here.

The Virtual Human Interaction lab study is here.

Check out the University of Michigan study here.

And you can find a discussion of the potential negative aspects of avatar manipulation here.

Picture this: you’re behind the wheel of a military Humvee on the road to Fallujah, your unit’s team leader in the seat next to you and half a squad of Marines in the back. Tensions are high: Iraq is still a hotbed of violence, you’re traveling a dangerous road, and everyone knows the risks. Still, nothing’s happened yet. You’re just beginning to relax when a roadside bomb—one of the infamous IEDs—rips through the truck with a deafening roar. Your team leader dies instantly, but you barely have time to notice because the Humvee’s now on its back. Screams sound from behind you. Looking back, you can see your team through the billowing smoke—and it’s not pretty. A Hollywood makeup artist with an unlimited budget and a taste for the macabre would have a hard time duplicating the scene. Some of the men are dead, the rest horribly wounded. Something is burning in the back, and the noise and smoke are overwhelming. You need to do something, but what?

Try taking off the VR headset.

Fortunately for you, this was only a simulation. But for many US servicemen and women, variations on the above scene are all too real. And for those who survive, healing from the physical wounds may be the easy part.

Post-traumatic stress disorder, or PTSD, has always been a serious problem, and it’s getting worse: Iraq and Afghanistan are unique in the history of US military conflict (the length of deployments, faster-than-usual redeployment, and so on), and they seem to be contributing to growing mental health problems. According to Steven Huberman, PhD, dean of Touro College’s School of Social Work in New York City,

Since the deployment to Iraq and Afghanistan started… we’re seeing a significant difference from other military involvements, in the number and types of injuries, the types of deployments, the nature of the military force, and the impact on families and kids.”

PTSD is often hard to identify, always difficult to treat, and has far-reaching impacts on sufferers and their families. In order to recover, victims have to confront the memories and emotions surrounding the traumatic event and eventually work through them. Ignoring them only creates more severe problems. The trick is confronting the memories safely.

Enter Virtual Iraq, an immersive 3D virtual world that allows a PTSD patient to relive a traumatic situation in a safe environment. Based on the videogame Full Spectrum Warrior, Virtual Iraq places the patient into a therapist-controlled combat scenario. During the scenario, the therapist exposes the veteran to the sights and sounds of battle at a level that he or she is emotionally capable of handling. As the patient progresses, the therapist can turn up the heat, enhancing the realism of the scene by delivering additional sounds and images—jets flying over, insurgents coming out of palm groves, IEDs, explosions—into the environment. The videogame gives the patient a controlled setting in which to confront their emotions and ultimately gain control over the PTSD.
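As a purely illustrative sketch of that “turn up the heat” control loop, here are a few lines of Python showing how a clinician-driven system might step graded stimuli in and out of a scene. The stimulus tiers and class are invented for illustration; this is not the actual Virtual Iraq software.

```python
# Illustrative sketch of clinician-controlled graded exposure.
# Stimulus tiers and class names are invented, not Virtual Iraq's real design.

STIMULUS_TIERS = [
    ["ambient street noise"],
    ["ambient street noise", "distant gunfire"],
    ["ambient street noise", "distant gunfire", "jet flyover"],
    ["ambient street noise", "distant gunfire", "jet flyover", "IED explosion"],
]

class ExposureSession:
    def __init__(self):
        self.level = 0  # start at the mildest tier

    def active_stimuli(self):
        return STIMULUS_TIERS[self.level]

    def step_up(self):
        """Therapist raises intensity by one tier, if a higher tier exists."""
        self.level = min(self.level + 1, len(STIMULUS_TIERS) - 1)

    def step_down(self):
        """Therapist backs off immediately if the patient is overwhelmed."""
        self.level = max(self.level - 1, 0)

session = ExposureSession()
session.step_up()
print(session.active_stimuli())  # ['ambient street noise', 'distant gunfire']
```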

Says Albert “Skip” Rizzo, a clinical psychologist at the University of Southern California—and Virtual Iraq’s developer,

VR puts a person back into the sights, sounds, smells, feelings of the scene… You know what the patient’s seeing, and you can help prompt them through the experience in a very safe and supportive fashion. As you go through the therapy, the patient may be invited to turn on the motor. Eventually, as they tell their story, you find out that it wasn’t just a vehicle in front, it was a vehicle with five other friends… The guy that died was going to be discharged in two months. You start to see a rich depth of story.”

This type of treatment—called virtual reality exposure therapy (VRET)—isn’t limited to combat vets, though. There are virtual environments for treating much more common fears, including flying, heights, storms, and public speaking. Virtually Better—the company behind Virtual Iraq—also has environments designed around specific traumatic events: Vietnam, Hurricane Katrina, and the attacks of September 11, 2001.

Here’s Skip Rizzo again, this time in his roles as associate director of the Institute for Creative Technologies and research professor of psychiatry and gerontology at the University of Southern California, Los Angeles:

Results from uncontrolled trials and case reports are difficult to generalize from and we are cautious not to make excessive claims based on these early results. However, using accepted diagnostic measures, 80% of the treatment completers in our initial VRET sample showed both statistically and clinically meaningful reductions in PTSD, anxiety and depression symptoms, and anecdotal evidence from patient reports suggested that they saw improvements in their everyday life situations. These improvements were also maintained at three-month post-treatment follow-up.”

Perhaps the best testament to the effectiveness of Virtual Iraq, though, comes from this 22-year-old Marine injured during combat operations in Iraq:

By the end of therapy I felt more like one person. Toward the end, it was pretty easy to talk about what had happened over there. We went over all the hot spots in succession. I could talk about it without breaking down. I wasn’t holding anything back. I felt like the weight of the world had been lifted.”

This young man—and there are many others—gained his life back in large part through the healing power of a videogame.

Maybe videogames do have something positive to offer after all.

A quick Google search for Virtual Iraq will give you more information than you ever wanted, but here’s a selection of the best links:

Here’s an article from the New York Times Health section.

The New Yorker magazine published an article on Virtual Iraq here.

Check out this article about Virtual Iraq from Veterans Today.

NPR has a similar story here.

The US Army’s official web page has a story on VRET and PTSD here.

Here’s Fast Company’s take.

A discussion of videogames and PTSD is here.

And you can find Virtually Better’s website here.