Posts Tagged ‘Second Life’

In 1999, the Institute of Medicine published a study that concluded the following: medical errors in the US cost the lives of as many as 98,000 people each year (and run up a $17 to $29 billion bill to boot). Ten years later, the Safe Patient Project reported that, rather than showing improvement, in the intervening decade the situation may have actually gotten worse—to the tune of more than 100,000 deaths each year as a result of “preventable medical harm.” Given that the CDC puts the number of deaths from hospital infections alone at around 99,000 annually, the SPP’s number seems conservative.

Let me put this into perspective. A large, wide-body airliner like the Boeing 777 seats around 360 people, give or take. So consider this: the Safe Patient Project’s estimate of preventable fatalities is akin to 277 airliners plummeting to Earth and killing everyone on board—every year. How long do you think the FAA—or the public, for that matter—would stand for that?
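That comparison is easy to sanity-check with the figures above:

```python
# Back-of-the-envelope check of the airliner comparison above.
annual_preventable_deaths = 100_000   # Safe Patient Project estimate
seats_per_airliner = 360              # a large airliner, roughly

# Floor division: how many fully loaded planes those deaths amount to.
full_planes_lost = annual_preventable_deaths // seats_per_airliner
print(full_planes_lost)  # 277
```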

Fortunately there’s a solution: video games.

Being a videogamer doesn’t get a lot of respect in a lot of mainstream professions, but it has been instrumental to me in becoming a surgeon.”


Red Dragon simulator, ISIS

That’s Dr. Andy Wright, surgeon and core faculty member at the University of Washington’s WWAMI Institute for Simulation in Healthcare (WISH). The Institute’s goal is to use technology to improve the quality of healthcare education, patient safety, and surgical outcomes. Simulations are particularly effective as they allow trainees to easily repeat procedures until they’re successful, and provide a safe place for them to fail when they’re not. In Dr. Wright’s experience, the skill and manual dexterity necessary to play video games proficiently translate directly to surgical simulators—resulting in more effective training and fewer accidents in the OR.

Gamers have a higher level of executive function. They have the ability to process information and make decisions quickly, they have to remember cues to what’s going [on] around [them] and [they] have to make split-second decisions.”

Accomplished gamers show heightened abilities to focus on critical elements while maintaining peripheral awareness of the surrounding environment, function amidst distraction, and effectively improvise if a situation doesn’t go according to plan. Past studies have repeatedly demonstrated this, and it makes sense: effectively navigating through and surviving a video game’s virtual world demands it. There are other characteristics of video games that make them particularly well-suited to prepare surgeons for the operating room: you interact with the game’s world through a video screen, and you have to be adept at manipulating images and items with a handheld controller. These skills are especially useful in the areas of laparoscopic (see my previous post here) and robot-assisted surgery.

da Vinci Surgical System

Take da Vinci, for example. It’s a robotic surgical system that allows surgeons to perform delicate, complex procedures through tiny incisions. The da Vinci system combines 3D, high-definition video with four interactive robot arms (there’s even a dual-console option where trainees can watch an actual procedure, and a switching mechanism that allows surgeons and trainees to exchange control during an operation). Surgeons manipulate these arms using precision controllers that scale the speed and range of their movements down to the much smaller size of the surgical instruments, allowing for unparalleled accuracy. Put simply, the most advanced robotic surgical system in the world employs an interface intimately familiar to video gamers.
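That motion-scaling idea is simple to sketch in code. This is only an illustration of the concept, not the da Vinci system’s actual control software; the function name and the 5-to-1 scale factor are assumptions for the example:

```python
def scale_hand_motion(dx, dy, dz, scale=0.2):
    """Scale a surgeon's hand displacement (in mm) down to a much
    smaller instrument displacement, the way robot-assisted systems
    map large, comfortable hand movements onto fine instrument tips.
    A scale of 0.2 means a 10 mm hand motion becomes 2 mm at the tip."""
    return (dx * scale, dy * scale, dz * scale)

# A 10 mm hand movement along one axis becomes a 2 mm instrument movement.
print(scale_hand_motion(10.0, 0.0, 0.0))  # (2.0, 0.0, 0.0)
```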

Take gaming into the land of simulation, though, and you can start tapping into the medium’s real power. Virtual reality (VR) simulators are an effective means of getting fledgling surgeons comfortable with a variety of procedures, allowing them to perform a given surgery dozens of times before ever opening up a live patient. They also provide an environment in which surgeons can, in essence, fail safely. Within a simulation, they can develop critical skills and expertise without putting anyone at risk, experimenting with different techniques, learning what does—and doesn’t—work, and becoming safer and more effective. A 2002 Yale University study provided strong evidence for this: surgical residents trained in VR were 29 percent faster and six times less likely to make mistakes than their non-VR-trained colleagues.

You can also customize a simulation to closely reflect reality, matching the conditions and characteristics of actual patients. In 2009, Halifax neurosurgeon Dr. David Clarke made history when he became the first person to remove a brain tumor from a patient less than 24 hours after removing the same tumor virtually, on a 3D rendering of that patient’s brain. Two years later, doctors in Mumbai performed PSI knee replacement surgery on a patient after first running the operation virtually on an exact 3D replica of the patient’s knee.

Earlier this year, VR training took another leap forward: using the online virtual world Second Life, London’s St. Mary’s Hospital developed three VR environments—a standard hospital ward, an intensive care unit, and an emergency room—and built modules for three common scenarios (at three levels of complexity, for interns, junior residents, and senior residents) within them. According to Dr. Rajesh Aggarwal, a National Institute for Health Research (NIHR) clinician scientist in surgery at St. Mary’s Imperial College,

The way we learn in residency currently has been called ‘training by chance,’ because you don’t know what is coming through the door next. What we are doing is taking the chance encounters out of the way residents learn and forming a structured approach to training. What we want to do—using this simulation platform—is to bring all the junior residents and senior residents up to the level of the attending surgeon, so that the time is shortened in terms of their learning curve in learning how to look after surgical patients.”

After running interns and junior and senior residents through the VR training, researchers compared their performances of specific procedures against those of attending surgeons. They found substantial performance gaps between interns, residents, and attendings—validating the VR scenarios as training tools. As Dr. Aggarwal explained,

What we have shown scientifically is that these three simulated scenarios at the three different levels are appropriate for the assessment of interns, junior residents, and senior residents and their management of these cases.”

In the future, the team at St. Mary’s plans to study how this type of VR training can improve clinical outcomes of patients treated by residents—ultimately using this tool to bring their interns’ and residents’ skills up to the level of the attendings, help them better manage clinical patients, and, at the end of the day, save lives.

In my last post, I said that avatars were all the rage, and they are—and most likely will only become more so within the next five years. Why then? That’s when Philip Rosedale believes we’ll see intricately detailed virtual worlds that begin to rival reality. If you don’t know Rosedale, you know his work: in 2003, he launched the first massively multiuser 3D virtual experience. It was more alternate reality than game, and with perhaps a nod to his desire to build a world that would become an essential component of daily existence, Rosedale called his creation Second Life.

For many people, Second Life became just that. Users (residents, in the SL lexicon) could log in, explore the world, build virtual places of their own, shop, find support groups, run businesses, have relationships… in short, everything people do in “real” life. The experience is engaging—so much so that some actually find it as involving as their first lives, if not more so. However, no one would mistake Second Life for reality: visually, it resembles an animated film—the fidelity is good, but it has nothing on the real world. To become truly immersive, the virtual component must be utterly convincing—enough to fool our brains into believing that we’re in the grip of the real.

That’s where we’re headed—rushing headlong, in fact—and Rosedale is at the fore of the effort to get us there. At the recent Augmented World Expo in Santa Clara, CA, he dropped a few hints as to what his new company, High Fidelity, is cooking up: at its heart, a 3D world that’s virtually indistinguishable from reality, offering a vastly increased speed of interaction, employing body-tracking sensors for more lifelike avatars, and applying the computing power of tens of millions of devices in the hands of end users the world over. Within five years, he believes, any mobile device will be able to access and interact with photo-realistic virtual worlds in real time, with no discernible lag.

We’re already seeing the first signs of this. Last summer, I spoke with Kim Libreri at ILM (this was before the Disney purchase) regarding the stunning but now-doomed Star Wars 1313:

We’ve got to a point where many of the techniques that we would have taken as our bread-and-butter at ILM a decade ago are now absolutely achievable in real-time. In fact, you know, a render at ILM—or any visual effects company—can take ten hours per single frame. This is running at thirty-three milliseconds—that’s like a million times faster than what we would have on a normal movie. But the graphics technology is so advanced now that the raw horsepower is there to do some incredible things.”

Star Wars 1313

And this is today. Within five years, he told me, they’ll achieve absolute, indistinguishable-from-reality photo-realism. Regarding the ability of mobile devices to connect to the type of virtual world Rosedale envisions, he’s a little more conservative. In this case, the bottleneck isn’t computing power but the speed of Internet connectivity, which depends on more factors. Still, Libreri sees that barrier being cleared within 10 years. And that’s it—we’ll have removed the last obstacle to delivering hyper-realistic, fully immersive virtual worlds to any device, anywhere. From that point on, the possibilities will be limited only by the extent of our imagination.

The implications of this, though, are another matter entirely—and one I’ll take up in my next post. Until then, I’ll leave you with a taste of the possible: Star Wars 1313 videos here

… and here.

You can read more about Philip Rosedale’s Augmented World Expo talk here.

And you can learn more about the Augmented World Expo here.

Avatars are all the rage these days. Facebook profile pics, World of Warcraft and EVE Online characters, Second Life and OpenSim personas—these are just a few examples of a growing phenomenon. We all seem to have some sort of digital representation of ourselves that we project into cyberspace—and we spend a fair amount of time designing and customizing them, getting their appearances just right.

Who can blame us, really? After all, they are us, the digital faces we present to the virtual world. That doesn’t mean they have to perfectly replicate our real-world identities, though. In fact, the beauty of designing an avatar is the ability to get creative, to choose exactly who we want to be, to build our ideal selves.

At first blush, it would appear that this is a one-way exchange: through the creative process, we affect the avatar, which we then use to interact with the virtual world. Certainly, with things like static photos and images, this is the case. However, with respect to a 3D digital persona that responds to our commands, it gets a little more complicated. In fact, as several researchers are discovering, situations that we experience virtually through our avatars can impact and even alter our reality.

Palo Alto research scientist Nick Yee dubbed this the Proteus Effect, after the Greek sea god Proteus, who could assume many different forms (and whose name lends itself to the adjective protean—changeable). He first described it in 2007 while studying how an avatar’s appearance and height affected the way people behaved in the virtual world. In his initial research, Yee provided study subjects with avatars that were attractive or unattractive, tall or short, and then watched them interact with a virtual stranger (controlled by one of Yee’s lab assistants). Here’s what Yee’s team discovered:

We found that participants who were given attractive avatars walked closer to and disclosed more personal information to the virtual stranger than participants given unattractive avatars. We found the same effect with avatar height. Participants in taller avatars (relative to the virtual stranger) negotiated more aggressively in a bargaining task than participants in shorter avatars.”

Yee’s work demonstrated clearly that an avatar’s appearance could change how someone acted within a virtual environment and interacted with its residents.

Okay, so what? It’s interesting, but what relevance does it have to the real world?

In 2009, Yee asked the same question: did changes in virtual-world behavior translate to physical reality? He revisited his 2007 study, adding another task: after concluding their virtual interaction, each participant created a personal profile on a mock dating site and then, from a group of nine possible matches, selected the two they’d most like to get to know. Sure enough, Yee says,

… we found that participants who had been given an attractive avatar in a virtual environment chose more attractive partners in the dating task than participants given unattractive avatars in the earlier task. This study showed that effects on people’s perceptions of their own attractiveness do seem to linger outside of the original virtual environment.”

The Proteus Effect has been credited with more than just creating more aggressive negotiators or making people feel better about themselves: weight loss, substance abuse treatment, environmental consciousness, perception of obstacles… all affected by people’s experiences through their avatars within virtual reality. According to Maria Korolov, founder and editor of the online publication Hypergrid Business—and who’s been studying virtual worlds since their inception—people who exercise within a virtual world…

… will exercise an hour more on average the next day in real life, because they think of themselves as an exercising-type person. It changes the way you think.”

Researchers at the University of Kansas Medical Center back this up. A weight loss study there found that people who lost weight through either virtual or face-to-face exercise programs were more effective at maintaining that weight loss if they took part in maintenance programs delivered through Second Life.

Regarding substance abuse, Preferred Family Healthcare, Inc., found that treatment outcomes for participants in their virtual programs were as good as or better than those for people who took part in real-life counseling. More significantly, fewer people dropped out of virtual treatment—vastly so: virtual programs saw a 90 percent completion rate, as opposed to 30-35 percent completion for programs at a traditional, physical facility.

Environmental consciousness may seem like a stretch, but researchers at Stanford University’s Virtual Human Interaction Lab found that people who felled a massive, virtual sequoia used less paper in the real world than those who only imagined cutting down a tree.

And in perhaps the most interesting example, a study at the University of Michigan showed that participants whose avatars carried a backpack consistently overestimated the heights of virtual hills—but only if they’d created the avatars themselves. Participants assigned an avatar by the researchers were much more accurate in their estimations. Said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory at Penn State, who worked on the study,

You exert more of your agency through an avatar when you design it yourself. Your identity mixes in with the identity of that avatar and, as a result, your visual perception of the virtual environment is colored by the physical resources of your avatar… If your avatar is carrying a backpack, you feel like you are going to have trouble climbing that hill, but this only happens when you customize the avatar.”

Of course, there is a dark side to the Proteus Effect. A study co-written by Jorge Peña, assistant professor in the College of Communication at the University of Texas at Austin, Cornell University professor Jeffrey T. Hancock, and graduate student Nicholas A. Merola (also at Austin) showed that avatars could be used to prime negative responses in users within a virtual world. In two separate studies, researchers randomly assigned participants dark- or white-cloaked avatars, or avatars wearing physician or Ku Klux Klan-like uniforms. Participants were then asked to write a story about a picture, or to play a video game on a virtual team and then come to a consensus on dealing with infractions. Those in the dark cloaks or KKK robes consistently showed negative or antisocial behavior. What really causes concern, though, is that they were completely unaware that they’d been primed to do so. According to Peña,

By manipulating the appearance of the avatar, you can augment the probability of people thinking and behaving in predictable ways without raising suspicion. Thus, you can automatically make a virtual encounter more competitive or cooperative by simply changing the connotations of one’s avatar.”

Behavior modification through manipulation of appearance is nothing new: traditional, face-to-face psychological experiments have shown that changes in dress can affect a person’s behavior or perception of themselves. That this also happens in the virtual world says something interesting about the human brain and its ability to distinguish reality from virtuality.

It should also give us pause. We’re rushing headlong into a brave, new virtual world, and it seems all but unstoppable. This, in itself, is not a bad thing. However, as we move forward, we would do well to proceed deliberately and with caution. Our history is rife with examples of decisions made ignorant of the potential outcome, and good intentions corrupted. If we are going to plunge into the virtual, we must consider the consequences—intended or otherwise—that our choices, and our actions, may beget.

You can read a summary of Nick Yee’s work here.

For a discussion of the University of Kansas study, check this link.

You can read about the Preferred Family Healthcare study here.

The Virtual Human Interaction lab study is here.

Check out the University of Michigan study here.

And you can find a discussion of the potential negative aspects of avatar manipulation here.

Thought you all might find this illuminating. Three separate ideas that all have a common thread, and illustrate the relevance of videogames beyond entertainment.

First, Robert Wood Johnson. The Robert Wood Johnson Foundation is the nation’s largest philanthropic organization devoted to public health. Its mission is to improve the quality of health and health care for all Americans, and, through its Pioneer Portfolio, it has been a driving force behind the Games For Health conference, as well as its regular sponsor. Why? According to Paul Tarini, Senior Programs Officer at RWJF,

We see games as both a really interesting therapeutic intervention, but also more and more… help us learn things about people’s health and how to improve their health than we ever could before.”

Check out the full interview here.

On to the U.S. Military and the Dismounted Soldier Training System. In development: a fully immersive virtual training environment for U.S. Army soldiers, featuring immediate performance feedback, injury simulation, a 360-degree view, and surround sound. The $57 million project is being built on the CryENGINE® 3 game engine, released by German developer Crytek in 2009 (Electronic Arts’ Crysis 2, from March 2011, was the first videogame developed with CryENGINE 3). Said Harry Martin, President and CEO of Intelligent Decisions (the company behind the development of this training system),

The goal of Dismounted Soldier is to provide our deploying soldiers with the best available training to ensure that they maintain the military advantage.”

And it’s based entirely on an engine used to build cutting-edge videogames. Here’s the full press release.

You can check out the announcement on the Off Duty Gamers website here.

And you can read more about CryENGINE 3 here.

Finally, virtual currency. Virtual currency has been around for years: many MMORPGs (World of Warcraft being the primary example) use it to allow players to make in-game purchases, and Second Life has its own currency, the Linden dollar, that players use when buying or selling virtual goods. The Linden even has an exchange rate with the U.S. dollar, which fluctuates based on supply and demand. With the rise of social networking, the market for virtual goods has exploded, and the virtual economy has grown with it: this year, in the U.S. alone, its estimated value is $2.2 billion (yes, billion). Worldwide, that number is a staggering $12.5 billion. Remember, these are virtual goods. Outside of their presence in videogames, they don’t exist. At all. And yet they’re worth billions. The developers of Empire Avenue want to use the game’s economy to drive the real-world economy. Check it out here.
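A floating exchange rate like the Linden’s works like any other currency conversion. Here’s a minimal sketch; the 250 L$-per-USD rate and the function name are hypothetical, since the real rate moves continuously with supply and demand:

```python
def linden_to_usd(lindens, rate=250):
    """Convert Linden dollars to US dollars, given an exchange rate
    expressed as Linden dollars per USD. The default of 250 L$/USD
    is a hypothetical figure for illustration; the real rate floats."""
    return lindens / rate

# At a rate of 250 L$/USD, 1,000 Linden dollars is worth 4 US dollars.
print(linden_to_usd(1_000))  # 4.0
```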

And you can learn more about Empire Avenue here.