Archive for the ‘Virtual reality’ Category

In 1999, the Institute of Medicine published a study that concluded the following: medical errors in the US cost the lives of as many as 98,000 people each year (and run up a $17 billion to $29 billion bill to boot). Ten years later, the Safe Patient Project reported that, rather than showing improvement, in the intervening decade the situation may have actually gotten worse, to the tune of more than 100,000 deaths each year as a result of “preventable medical harm.” Given that the CDC puts the number of deaths from hospital infections alone at around 99,000 annually, the SPP’s number seems conservative.

Let me put this into perspective. A Boeing 737, the most popular aircraft family in service today, typically seats around 180 people. So consider this: the Safe Patient Project’s estimate of preventable fatalities is akin to more than 550 airliners plummeting to Earth and killing everyone on board, every year. How long do you think the FAA, or the public, for that matter, would stand for that?
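The airliner comparison is simple arithmetic, and it’s worth seeing how much the answer swings with the assumed seating configuration. A minimal sketch (the 100,000 figure is the estimate quoted above; the seat counts are illustrative assumptions, not exact fleet data):

```python
def full_planes(deaths_per_year: int, seats_per_plane: int) -> int:
    """Number of fully loaded airliners whose combined seats fit within
    the annual death toll (floor division, so a conservative count)."""
    return deaths_per_year // seats_per_plane

# Safe Patient Project estimate: 100,000+ preventable deaths per year.
for seats in (180, 360):
    print(seats, full_planes(100_000, seats))
# 180 seats (a typical single-class 737) -> 555 planes
# 360 seats (a large wide-body jet)      -> 277 planes
```

Either way you slice it, the toll is the equivalent of hundreds of fatal crashes a year.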

Fortunately there’s a solution: video games.

Being a videogamer doesn’t get a lot of respect in a lot of mainstream professions, but it has been instrumental to me in becoming a surgeon.”

Red Dragon simulator, ISIS

That’s Dr. Andy Wright, surgeon and core faculty member at the University of Washington’s Institute for Simulation and Interprofessional Studies (ISIS). The Institute’s goal is to use technology to improve the quality of healthcare education, patient safety, and surgical outcomes. Simulations are particularly effective as they allow trainees to easily repeat procedures until they’re successful, and provide a safe place for them to fail when they’re not. In Dr. Wright’s experience, the skill and manual dexterity necessary to play video games proficiently translate directly to surgical simulators—resulting in more effective training and fewer accidents in the OR.

Gamers have a higher level of executive function. They have the ability to process information and make decisions quickly, they have to remember cues to what’s going around [them] and [they] have to make split-second decisions.”

Accomplished gamers show heightened abilities to focus on critical elements while maintaining peripheral awareness of the surrounding environment, function amidst distraction, and effectively improvise if a situation doesn’t go according to plan. Past studies have repeatedly demonstrated this, and it makes sense: effectively navigating through and surviving a video game’s virtual world demands it. There are other characteristics of video games that make them particularly well-suited to prepare surgeons for the operating room: you interact with the game’s world through a video screen, and you have to be adept at manipulating images and items with a handheld controller. These skills are especially useful in the areas of laparoscopic (see my previous post here) and robot-assisted surgery.

da Vinci Surgical System

Take da Vinci, for example. It’s a robotic surgical system that allows surgeons to perform delicate, complex procedures through tiny incisions. The da Vinci system combines 3D, high definition video with four interactive robot arms (there’s even a dual-console option where trainees can watch an actual procedure, and a switching mechanism that allows surgeons and trainees to exchange control during an operation). Surgeons manipulate these arms using precision controllers that scale the speed and range of their movements down to the much smaller size of the surgical instruments, allowing for unparalleled accuracy. Put simply, the most advanced robotic surgical system in the world employs an interface intimately familiar to video gamers.
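The motion-scaling idea at the heart of that interface is easy to sketch. This is an illustrative toy, not the actual da Vinci control software: the 3:1 ratio and the function name here are assumptions for the example.

```python
def scale_motion(hand_delta_mm, motion_scale=3.0):
    """Map a surgeon's hand displacement (x, y, z, in mm) to an instrument
    displacement by dividing each axis by the motion-scaling ratio.
    At 3:1, a 9 mm hand movement becomes a 3 mm instrument movement."""
    return tuple(axis / motion_scale for axis in hand_delta_mm)

# A large, comfortable hand motion maps to a small, precise instrument motion:
print(scale_motion((9.0, -6.0, 3.0)))  # (3.0, -2.0, 1.0)
```

Real systems also filter out hand tremor before applying the scaled motion, which is part of why the instrument tip can be steadier than the unaided hand.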

Take gaming into the land of simulation, though, and you can start tapping into the medium’s real power. Virtual reality (VR) simulators are an effective means of getting fledgling surgeons comfortable with a variety of procedures, allowing them to perform a given surgery dozens of times before ever opening up a live patient. They also provide an environment in which surgeons can, in essence, fail safely. Within a simulation, they can develop critical skills and expertise without putting anyone at risk, experimenting with different techniques, learning what does—and doesn’t—work, and becoming safer and more effective. A 2002 Yale University study provided strong evidence for this: surgical residents trained in VR were 29 percent faster and six times less likely to make mistakes than their non-VR trained colleagues.

You can also customize a simulation to closely reflect reality, matching the conditions and characteristics of actual patients. In 2009, Halifax neurosurgeon Dr. David Clarke made history when he became the first person to remove a brain tumor from a patient less than 24 hours after removing the same tumor virtually, on a 3D rendering of that patient. Two years later, doctors in Mumbai performed patient-specific instrumentation (PSI) knee replacement surgery after first running the operation virtually on an exact 3D replica of the patient’s knee.

Earlier this year, VR training took another leap forward: using the online virtual world Second Life, London’s St. Mary’s Hospital developed three VR environments—a standard hospital ward, an intensive care unit, and an emergency room—and built modules for three common scenarios (at three levels of complexity, for interns, junior residents, and senior residents) within them. According to Dr. Rajesh Aggarwal, a National Institute for Health Research (NIHR) clinician scientist in surgery at Imperial College London’s St. Mary’s Hospital,

The way we learn in residency currently has been called ‘training by chance,’ because you don’t know what is coming through the door next. What we are doing is taking the chance encounters out of the way residents learn and forming a structured approach to training. What we want to do—using this simulation platform—is to bring all the junior residents and senior residents up to the level of the attending surgeon, so that the time is shortened in terms of their learning curve in learning how to look after surgical patients.”

After running interns and junior and senior residents through the VR training, researchers compared their performances of specific procedures against those of attending surgeons. They found substantial performance gaps between interns, residents, and attendings—validating the VR scenarios as training tools. As Dr. Aggarwal explained,

What we have shown scientifically is that these three simulated scenarios at the three different levels are appropriate for the assessment of interns, junior residents, and senior residents and their management of these cases.”

In the future, the team at St. Mary’s plans to study how this type of VR training can improve clinical outcomes for patients treated by residents: ultimately using this tool to bring their interns’ and residents’ skills up to the level of the attendings, help them better manage clinical patients, and, at the end of the day, save lives.

In my previous two posts, I talked a lot about avatars and some of the rather intriguing and exciting developments happening regarding virtual worlds. In brief, both are evolving far beyond what early digital pioneers could have envisioned when they took their first steps into virtual reality. We have the ability today, as demonstrated by LucasArts’ E3 reveal of Star Wars 1313, to render near-photo-realistic environments in real time. Absolute photo-realism is a mere skip in time away (five years at the outside), and completely immersive, realistic virtual worlds experienced through any connected device are already on the horizon.

This does not spell the death of the avatar. Avatars will, I’m fairly certain, always have a place within gaming and VR. But we’re fast approaching a technological revolution the likes of which humanity has never experienced. Soon we will have the ability to interact with virtual environments directly, without filtering our actions through a digital intermediary.

Very soon, in fact. Prototypes of the Oculus Rift VR headset show great promise in delivering fully immersive 3D gaming to the masses. With it, you can climb into the cockpit of a starfighter and engage in ship-to-ship combat, surrounded on all sides by the vastness of space and the intensity of interstellar battle (as with EVE VR). Combine the Rift with the Wizdish omni-directional treadmill and a hands-free controller like the Xbox Kinect, and you can transform a standard first-person shooter into a full-body experience, ditching the thumbsticks and literally walking through a game’s virtual environment, controlling your actions via the natural motion of your hands, feet, arms, legs, and head.

In the world of augmented reality (AR), Google Glass will hit the streets later this year, providing wearers with a hands-free interface between the real and the virtual, enhancing their interaction with reality, and allowing them to transmit their view of the world across the globe.

But the real future lies in a recently Kickstarted project from tech startup Meta, which is developing a system for interacting with the virtual world that tears open the envelope of the possible and drives the line dividing virtual from real one step closer to extinction. The system combines stereoscopic 3D glasses with a 3D camera that tracks hand movements, allowing for a level of gestural control of the virtual world previously unseen (think Minority Report or The Matrix Reloaded).

Unlike the Oculus Rift, the Meta system offers physical, gesture-based control and works in real space (not strictly within a virtual gaming world). And unlike Google Glass, Meta creates a completely immersive virtual environment. According to a recent article by Dan Farber on CNET,

Meta’s augmented reality eyewear can be applied to immersive 3D games played in front of your face or on [a] table, and other applications that require sophisticated graphical processing, such as architecture, engineering, medicine, film and other industries where 3D virtual object manipulation is useful. For example, a floating 3D model of a CAT scan could assist doctors in surgery, a group of architects could model buildings with their hands rather than a mouse, keyboard or stylus and car designers could shape models with the most natural interface, their hands.”

In other words, with the Meta system, the wearer can actually enter and manipulate the virtual world directly, altering it to serve whatever purpose s/he desires. Atheer, another recent entry into the field of AR, is working on a similar system. Like Meta, Atheer’s technology uses a 3D camera and stereoscopic glasses for gesture control, and also provides direct access to the virtual world. According to Atheer founder and CEO Sulieman Itani,

We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in [the] physical world around you. In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at [a] restaurant, and when the friend walks into the restaurant, only they can see it.”

The biggest difference between the two systems is that Atheer isn’t building a hardware-specific platform; it will be able to run on top of existing systems like Android, iOS, Windows Mobile, or Xbox. Apps built specifically with the Atheer interface in mind will be able to take full advantage of the technology. Those that aren’t optimized for Atheer will present users with a virtual tablet that they can operate by touch, exactly like an iPad or Galaxy Tab. Here’s Itani’s take:

This is important for people moving to a new platform. We reduce the experience gap and keep the critical mass of the ecosystem. We don’t want to create a new ecosystem to fragment the market more. Everything that runs on Android can be there, from game engines to voice control.”

We’re rapidly approaching a critical stage in our technological evolution, nearing the point at which we’ll be able to work, play, and live, at least part-time, in hyper-realistic, fully-immersive virtual worlds, just as we do in real-world spaces today. So, what does this mean? Games will get better. Virtual worlds will become richer and more complex, and take on greater significance as we spend more time in them. And we, as humans, may begin to lose our grasp of the real. I spoke to Kim Libreri at Industrial Light and Magic about this, and he agreed that this could become a real problem. The human brain, he said, is a very flexible learning device. With the gains in fidelity that we’ll see in AR over the next decade, he believes that, as the human race evolves, it’s going to become more and more difficult to separate what’s real from what’s not. The longer we coexist within those places, the harder it will become, and this could begin to create problems. As Libreri said,

There’s a real, tangible threat to what can happen to you in the cyber world, and I think as things visualize themselves more realistically . . . think about cyber-crime in an AR world. Creatures chasing you. It’s gonna be pretty freaky. People will have emotional reactions to things that don’t really exist.”

It might also render people particularly susceptible to suggestion. It’s a bit like the movie Inception, where ideas are planted deep within a subject’s subconscious, except this would be easier. If you can’t distinguish the virtual from the real, a proposal put to you in a virtual world could seem every bit as credible as one presented in reality. If this sounds like science fiction, consider for a moment how vulnerable we are to the power of subliminal suggestion, or even to direct messages within everyday advertising. No, if anything, the intricacies of our approaching reality will most likely far exceed our ability to imagine them.

Of course, there are positives and negatives to any technology. Through highly accurate simulations, immersive virtual worlds could allow people to visit places they might otherwise never see. They could also provide greater and more intuitive access to information, and the ability to use and manipulate it in ways that are today all but unthinkable. Whether the looming future of alternate reality will be predominantly good or bad is, in a sense, irrelevant: it is coming, one way or another, and there’s nothing we can do to stop it. If our history and the ever-increasing speed of development and change are predictive, or at least indicative, of future events, then alternate reality, in whatever form it assumes, is poised to bring an end to the separation of virtual and real. And if that happens, we will all bear witness to the birth of a singularity beyond which the nature of human interaction, and indeed of humanity itself, will be changed, fundamentally, forever.

You can learn more about the Oculus Rift here.

… and the Wizdish treadmill here…

… and here.

There’s a video demo of Google Glass here.

And a demo of EVE VR here:

More info about EVE VR is here.

You can learn more about Meta and their computing eyewear here…

… and here.

You can read about Atheer here:

And for an overview of the future of AR, check out this article:

In my last post, I said that avatars were all the rage, and they are, and most likely will only become more so within the next five years. Why then? That’s when Philip Rosedale believes we’ll see intricately detailed virtual worlds that begin to rival reality. If you don’t know Rosedale, you know his work: back in the early 2000s, he created the first massively multiuser 3D virtual experience. It was more alternate reality than game, and, with perhaps a nod to his desire to build a world that would become an essential component of daily existence, Rosedale called his creation Second Life.

For many people, Second Life became just that. Users (residents, in the SL lexicon) could log in, explore the world, build virtual places of their own, shop, find support groups, run businesses, have relationships… in short, everything people do in “real” life. The experience is engaging, so much so that some actually find it as involving as their first lives, if not more so. However, no one would mistake Second Life for reality: visually, it resembles an animated film. The fidelity is good, but it has nothing on the real world. To become truly immersive, the virtual component must be thoroughly unquestionable: convincing enough to fool our brains into believing that we’re in the grip of the real.

That’s where we’re headed (rushing headlong towards it, in fact), and Rosedale is at the fore in getting us there. At the recent Augmented World Expo in Santa Clara, CA, he dropped a few hints as to what his new company, High Fidelity, is cooking up. At its heart could be a 3D world that’s virtually indistinguishable from reality, offering a vastly increased speed of interaction, employing body-tracking sensors for more lifelike avatars, and applying the computing power of tens of millions of devices in the hands of end users the world over. Within five years, he believes, any mobile device will be able to access and interact with photo-realistic virtual worlds in real time, with no discernible lag.

We’re already seeing the first signs of this. Last summer, I spoke with Kim Libreri at ILM (this was before the Disney purchase) regarding the stunning but now-doomed Star Wars 1313:

We’ve got to a point where many of the techniques that we would have taken as our bread-and-butter at ILM a decade ago are now absolutely achievable in real-time. In fact, you know, a render at ILM—or any visual effects company—can take ten hours per single frame. This is running at thirty-three milliseconds—that’s like a million times faster than what we would have on a normal movie. But the graphics technology is so advanced now that the raw horsepower is there to do some incredible things.”

Star Wars 1313

And this is today. Within five years, he told me, they’ll achieve absolute, indistinguishable-from-reality photo-realism. Regarding the ability of mobile devices to connect to the type of virtual world Rosedale envisions, he’s a little more conservative: here the bottleneck isn’t computing power but the speed of Internet connectivity, which depends on factors beyond raw hardware. Still, Libreri sees that barrier falling within 10 years. And that’s it: we’ll have removed the last obstacle to delivering hyper-realistic, fully immersive virtual worlds to any device, anywhere. From that point on, the possibilities will be limitless, bounded only by the extent of our imagination.

The implications of this, though, are another matter entirely—and one I’ll take up in my next post. Until then, I’ll leave you with a taste of the possible: Star Wars 1313 videos here

… and here.

You can read more about Philip Rosedale’s Augmented World Expo talk here.

And you can learn more about the Augmented World Expo here.

Avatars are all the rage these days. Facebook profile pics, World of Warcraft and EVE Online characters, Second Life and OpenSim personas—these are just a few examples of a growing phenomenon. We all seem to have some sort of digital representation of ourselves that we project into cyberspace—and we spend a fair amount of time designing and customizing them, getting their appearances just right.

Who can blame us, really? After all, they are us, the digital faces we present to the virtual world. That doesn’t mean they have to perfectly replicate our real-world identities, though. In fact, the beauty of designing an avatar is the ability to get creative, to choose exactly who we want to be, to build our ideal selves.

At first blush, it would appear that this is a one-way exchange: through the creative process, we affect the avatar, which we then use to interact with the virtual world. Certainly, with things like static photos and images, this is the case. However, with respect to a 3D digital persona that responds to our commands, it gets a little more complicated. In fact, as several researchers are discovering, situations that we experience virtually through our avatars can impact and even alter our reality.

Palo Alto research scientist Nick Yee dubbed this the Proteus Effect, after the Greek sea god Proteus, who could assume many different forms (and whose name gives us the adjective protean: changeable). He first described it in 2007 while studying how an avatar’s appearance and height affected the way people behaved in the virtual world. In his initial research, Yee provided study subjects with avatars that were attractive or unattractive, tall or short, and then watched them interact with a virtual stranger (controlled by one of Yee’s lab assistants). Here’s what Yee’s team discovered:

We found that participants who were given attractive avatars walked closer to and disclosed more personal information to the virtual stranger than participants given unattractive avatars. We found the same effect with avatar height. Participants in taller avatars (relative to the virtual stranger) negotiated more aggressively in a bargaining task than participants in shorter avatars.”

Yee’s work demonstrated clearly that an avatar’s appearance could change how someone acted within a virtual environment and interacted with its residents.

Okay, so what? It’s interesting, but what relevance does it have to the real world?

In 2009, Yee asked the obvious follow-up question: did changes in virtual-world behavior translate to physical reality? He revisited his 2007 study, adding another task: after concluding the virtual interaction, each participant created a personal profile on a mock dating site and then, from a group of nine possible matches, selected the two he or she would most like to get to know. The results were consistent. Yee says,

… we found that participants who had been given an attractive avatar in a virtual environment chose more attractive partners in the dating task than participants given unattractive avatars in the earlier task. This study showed that effects on people’s perceptions of their own attractiveness do seem to linger outside of the original virtual environment.”

The Proteus Effect has been credited with more than just creating more aggressive negotiators or making people feel better about themselves: weight loss, substance abuse treatment, environmental consciousness, perception of obstacles… all are affected by people’s experiences through their avatars within virtual reality. According to Maria Korolov, founder and editor of the online publication Hypergrid Business, who has been studying virtual worlds since their inception, people who exercise within a virtual world…

… will exercise an hour more on average the next day in real life, because they think of themselves as an exercising-type person. It changes the way you think.”

Researchers at the University of Kansas Medical Center back this up. A weight-loss study there found that people who lost weight through either virtual or face-to-face exercise programs were more successful at keeping it off if they took part in maintenance programs delivered through Second Life.

Regarding substance abuse, Preferred Family Healthcare, Inc., found that treatment outcomes for participants in their virtual programs were as good as or better than those for people who took part in real-life counseling. More significantly, fewer people dropped out of virtual treatment—vastly so: virtual programs saw a 90 percent completion rate, as opposed to 30-35 percent completion for programs at a traditional, physical facility.

Environmental consciousness may seem like a stretch, but researchers at Stanford University’s Virtual Human Interaction Lab found that people who felled a massive, virtual sequoia used less paper in the real world than those who only imagined cutting down a tree.

And in perhaps the most interesting example, a study at the University of Michigan showed that participants whose avatars carried a backpack consistently overestimated the heights of virtual hills, but only if they’d created the avatars themselves. Participants assigned an avatar by the researchers were much more accurate in their estimates. Said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory at Penn State, who worked on the study,

You exert more of your agency through an avatar when you design it yourself. Your identity mixes in with the identity of that avatar and, as a result, your visual perception of the virtual environment is colored by the physical resources of your avatar… If your avatar is carrying a backpack, you feel like you are going to have trouble climbing that hill, but this only happens when you customize the avatar.”

Of course, there is a dark side to the Proteus Effect. A study co-written by Jorge Peña, assistant professor in the College of Communication at the University of Texas at Austin, Cornell University professor Jeffrey T. Hancock, and graduate student Nicholas A. Merola (also at Austin) showed that avatars can be used to prime negative responses in users within a virtual world. In two separate studies, researchers randomly assigned participants dark- or white-cloaked avatars, or avatars wearing physician or Ku Klux Klan-like uniforms. Participants were then asked to write a story about a picture, or to play a video game on a virtual team and come to consensus on how to deal with infractions. Those in the dark cloaks or KKK robes consistently showed negative or antisocial behavior. What really causes concern, though, is that they were completely unaware that they’d been primed to do so. According to Peña,

By manipulating the appearance of the avatar, you can augment the probability of people thinking and behaving in predictable ways without raising suspicion. Thus, you can automatically make a virtual encounter more competitive or cooperative by simply changing the connotations of one’s avatar.”

Behavior modification through manipulation of appearance is nothing new: Traditional, face-to-face psychological experiments have shown that changes in dress can affect a person’s behavior or perception of themselves. That this also happens in the virtual world says something interesting about the human brain and its ability to distinguish reality from virtuality.

It should also give us pause. We’re rushing headlong into a brave new virtual world, and the rush seems all but unstoppable. This, in itself, is not a bad thing. However, as we move forward, we would do well to proceed deliberately and with caution. Our history is rife with examples of decisions made in ignorance of their potential outcomes, and of good intentions corrupted. If we are going to plunge into the virtual, we must consider the consequences, intended or otherwise, that our choices and our actions may beget.

You can read a summary of Nick Yee’s work here.

For a discussion of the University of Kansas study, check this link.

You can read about the Preferred Family Healthcare study here.

The Virtual Human Interaction lab study is here.

Check out the University of Michigan study here.

And you can find a discussion of the potential negative aspects of avatar manipulation here.

As any William Gibson fan will tell you, it was only a matter of time.

Just yesterday, the Department of Defense announced that it’s developing virtual reality contact lenses to enhance the intelligence, surveillance, and reconnaissance (ISR) abilities of soldiers on the battlefield. The lenses, which fit over the eye exactly like standard contact lenses, contain miniature, full-color displays onto which digital images can be projected. Unlike a laptop, PDA, or other handheld device, which places a screen between users and their environment, the lenses let wearers watch these images while keeping an unobstructed view of their surroundings, allowing them to react to events on the ground while receiving potentially critical intel. According to DARPA, which is working with Innovega and its iOptik lens technology to create the system, the lenses would

operate hands-free, provide similar or better magnification on-demand, while providing FOV [field-of-view] equal to that of the unaided eye.”

They would also cost less than existing equipment used for ISR activities, and would provide soldiers with a freedom of movement not possible with binoculars, night-vision goggles, and other traditional ISR gear. There’s an image of the lenses here.

Of course, this is hardly the U.S. military’s first foray into the realm of VR technology. In 2008, the U.S. Air Force built a simulated base in Second Life; in 2010, the Army posted requirements for a complex virtual world similar to Second Life’s massively multiplayer environment and began courting a systems integrator to build it (InformationWeek covered the story here).

2011, though, saw a flurry of activity, with both the Army and the Navy exploring the potential of virtual worlds to train their personnel for a variety of battle exercises, from firing torpedoes to preparing for encounters with IEDs and other explosive devices—right down to the nature and damage of an explosion, including haptic (tactile) feedback systems that would simulate being hit with debris.

But what does this all mean, really? What are the larger implications?

Last year, I spoke with Rob Lindeman, a game design and technology professor in Worcester Polytechnic Institute’s Department of Computer Science, and I asked him how far VR technology could go. Would it ever be possible to create a fully-immersive virtual world? Here’s what he had to say:

I think the answer is yes, and I think it’s going to be more Matrix-like than anything else. What I’ve found is that technology seems to be moving closer to the brain and bypassing more and more systems. So for instance there are these displays that draw images on the retina, so instead of showing you a display, it actually draws directly on your retina. So it bypasses the optics and it’s perfect resolution. There’s no pixelation, there’s actually no display, it’s literally drawing on it, so you have perfect resolution. And that’s one step closer to the brain. At some point, we’ll just be tapping into the optic nerve, tapping into the auditory nerve and just stimulating… sending nerve impulses to the brain. And then at some point we’ll start just tapping right into the area of the brain that we know and can tap into. And I think that’ll happen. I don’t know when it will happen. I don’t know what the motivation will be for it to happen, but I think it will happen.”

What seemed far-fetched and confined to the realms of science fiction less than 30 years ago is all but upon us. In another 30 years, we may be able to live much of our lives in a completely virtual world that’s indistinguishable from reality. Whether this turns out to be a boon or a curse will be debated long after its arrival. We can turn away in fear, or we can face the future and embrace the possibilities. However, the history of technological progress has taught us that there’s no turning back: once we have the means to create something, it’s a virtual certainty that we will. How we use it is up to us.

To read more about virtual reality contact lenses, click here.

This article talks about some of the next generation training tools being investigated by the DOD.

Here’s another article regarding the DOD and virtual worlds.

A similar article regarding the U.S. Navy’s virtual reality exploration is here.

The DOD has a special report with several articles about military virtual worlds here.

Public safety is a tricky business. It is, by its nature, risky: Paramedics, firefighters, police, EMTs, first responders—they have dangerous jobs, and often put themselves in harm’s way to help others. When they go to work, lives are often at stake—sometimes theirs, sometimes ours, and sometimes both. For reasons that should be obvious, adequate and effective training of individuals pursuing this line of work is absolutely critical: Call me crazy, but getting thrown cold into an emergency situation doesn’t strike me as the best way to assess your skills.

Okay, so training is important. However, it’s also expensive, and budgets for public safety at all levels—local, state and federal—are stretched even during times of economic prosperity. It’s time-intensive as well, can be limited in reach, and usually requires safety personnel to travel outside their communities—taking them off the streets and reducing their departments’ abilities to respond to emergencies at home. I don’t know about you, but I’m not aware of any towns nearby that have dedicated training facilities on-site.

So how do we reconcile the need for comprehensive training with the expense of providing it?

Wait for it…

By using videogames, of course (you expected a different answer, maybe?).

This is exactly what groups like Virtual Heroes do for a living. Using 3-D game engines and game design techniques, Virtual Heroes builds scenarios within an immersive, virtual environment to help military, public safety, and healthcare professionals respond to catastrophic events in the real world. One of their flagship products, HumanSim, allows healthcare workers to sharpen their skills in realistic situations without risking real lives. They also create simulations for commercial clients who want to expand their ability to deliver on-the-job safety training. I watched one targeted at electrical workers. Arc flashes are scary things…

Okay, but how does this impact you, me, a typical big-city urbanite, or the average citizen in small-town America? Let me bring this home. I live in western Massachusetts. Belchertown, to be exact. At the end of last week, our local paper, The Sentinel, reported a story from the next town over about police and fire personnel training on an immersive, 3-D driving simulator. Big deal, right?

Actually, it is. First, the simulator was brought to them—right into the police and fire departments—allowing more personnel to go through the training than if the departments had to send them off-site. Also, had there been an emergency during the training sessions (thankfully, there were none), every single police officer or firefighter would have been available to respond. Perhaps more importantly, though, this training was provided free of charge, allowing cash-strapped departments to offer driver training to all their personnel—something they wouldn’t be able to afford otherwise. According to Granby Police Chief Alan Wishart, who also took a turn behind the wheel,

We would not be able to do this on our own. If it wasn’t for them [Massachusetts Interlocal Insurance Association (MIIA)], it could be years before some of the officers saw this type of training. It’s a great opportunity for us.”

Granby Fire Lieutenant Brian Pike agreed, adding that they could provide additional scenarios—and there are hundreds available—without taking out the trucks.

Less emergency worker downtime, more personnel trained, zero cost to the community—it all adds up to more experienced public safety departments, better emergency response, and more lives saved. So the next time you hear a public safety success story, you may have to thank a videogame.

To read the original article, check out The Sentinel here.

You can learn more about Virtual Heroes here.

And you can read more about the Massachusetts Interlocal Insurance Association’s simulator here.

Picture this: you’re behind the wheel of a military Humvee on the road to Fallujah, your unit’s team leader in the seat next to you and half a squad of Marines in the back. Tensions are high: Iraq is still a hotbed of violence, you’re traveling a dangerous road, and everyone knows the risks. Still, nothing’s happened yet. You’re just beginning to relax when a roadside bomb—one of the infamous IEDs—rips through the truck with a deafening roar. Your team leader dies instantly, but you barely have time to notice because the Humvee’s now on its back. Screams sound from behind you. Looking back, you can see your team through the billowing smoke—and it’s not pretty. A Hollywood makeup artist with an unlimited budget and a taste for the macabre would have a hard time duplicating the scene. Some of the men are dead, the rest horribly wounded. Something’s burning in the back, and the noise and smoke are overwhelming. You need to do something, but what?

Try taking off the VR headset.

Fortunately for you, this was only a simulation. But for many US servicemen and women, variations on the above scene are all too real. And for those who survive, healing from the physical wounds may be the easy part.

Post-traumatic stress disorder, or PTSD, has always been a serious problem, and it’s getting worse: Iraq and Afghanistan are unique in the history of US military conflict (length of deployments, faster-than-usual redeployment, etc.), and seem to be contributing to growing mental health problems. According to Steven Huberman, PhD, dean of Touro College’s School of Social Work, in New York City,

Since the deployment to Iraq and Afghanistan started… we’re seeing a significant difference from other military involvements, in the number and types of injuries, the types of deployments, the nature of the military force, and the impact on families and kids.”

PTSD is often hard to identify, always difficult to treat, and has far-reaching impacts on sufferers and their families. In order to recover, victims have to confront the memories and emotions surrounding the traumatic event and eventually work through them. Ignoring them only creates more severe problems. The trick is confronting the memories safely.

Enter Virtual Iraq. Virtual Iraq is an immersive, 3-D virtual world that allows a PTSD patient to re-live a traumatic situation in a safe environment. Based on the videogame Full Spectrum Warrior, Virtual Iraq places the patient into a therapist-controlled combat scenario. During the scenario, the therapist exposes the veteran to the sights and sounds of battle at a level that he or she is emotionally capable of handling. As the patient progresses, the therapist can turn up the heat, enhancing the realism of the scene by delivering additional sounds and images—jets flying over, insurgents coming out of palm groves, IEDs, explosions—into the environment. The videogame provides a safe environment for the patient to confront their emotions and ultimately gain control over the PTSD.

Says Albert “Skip” Rizzo, a clinical psychologist at the University of Southern California—and Virtual Iraq’s developer,

VR puts a person back into the sights, sounds, smells, feelings of the scene… You know what the patient’s seeing, and you can help prompt them through the experience in a very safe and supportive fashion. As you go through the therapy, the patient may be invited to turn on the motor. Eventually, as they tell their story, you find out that it wasn’t just a vehicle in front, it was a vehicle with five other friends… The guy that died was going to be discharged in two months. You start to see a rich depth of story.”

This type of treatment—called virtual reality exposure therapy (VRET)—isn’t limited to combat vets, though. There are virtual environments for treating much more common fears, including flying, heights, storms and public speaking. Virtually Better—the company behind Virtual Iraq—also has other environments designed around specific traumatic events: Vietnam, Hurricane Katrina and the attacks on 9/11/2001.

Here’s Skip Rizzo again, this time in his roles as Associate Director – Institute for Creative Technologies, and Research Professor – Psychiatry and Gerontology, University of Southern California, Los Angeles:

Results from uncontrolled trials and case reports are difficult to generalize from and we are cautious not to make excessive claims based on these early results. However, using accepted diagnostic measures, 80% of the treatment completers in our initial VRET sample showed both statistically and clinically meaningful reductions in PTSD, anxiety and depression symptoms, and anecdotal evidence from patient reports suggested that they saw improvements in their everyday life situations. These improvements were also maintained at three-month post-treatment follow-up.”

Perhaps the best testament to the effectiveness of Virtual Iraq, though, comes from this 22-year-old Marine injured during combat operations in Iraq:

By the end of therapy I felt more like one person. Toward the end, it was pretty easy to talk about what had happened over there. We went over all the hot spots in succession. I could talk about it without breaking down. I wasn’t holding anything back. I felt like the weight of the world had been lifted.”

This young man—and there are many others—gained his life back in large part through the healing power of a videogame.

Maybe videogames do have something positive to offer after all.

A quick Google search for Virtual Iraq will give you more information than you ever wanted, but here’s a selection of the best links:

Here’s an article from the New York Times Health section.

The New Yorker magazine published an article on Virtual Iraq here.

Check out this article about Virtual Iraq from Veterans Today.

NPR has a similar story here.

The US Army’s official web page has a story on VRET and PTSD here.

Here’s Fast Company’s take.

A discussion of Videogames and PTSD is here.

And you can find Virtually Better’s website here.