Archive for the ‘Technology’ Category

In my last post, I said that avatars were all the rage, and they are—and most likely will only become more so within the next five years. Why then? That’s when Philip Rosedale believes we’ll see intricately detailed virtual worlds that begin to rival reality. If you don’t know Rosedale, you know his work: in the early 2000s, he created one of the first massively multiuser 3D virtual experiences. It was more alternate reality than game, and with perhaps a nod to his desire to build a world that would become an essential component of daily existence, Rosedale called his creation Second Life.

For many people, Second Life became just that. Users (residents, in the SL lexicon) could log in, explore the world, build virtual places of their own, shop, find support groups, run businesses, have relationships… in short, everything people do in “real” life. The experience is engaging—so much so that some find it as involving as their first lives, if not more so. However, no one would mistake Second Life for reality: visually, it resembles an animated film. The fidelity is good, but it has nothing on the real world. To become truly immersive, the virtual component must be thoroughly convincing—enough to fool our brains into believing that we’re in the grip of the real.

That’s where we’re headed—rushing headlong, in fact—and Rosedale is at the fore in getting us there. At the recent Augmented World Expo in Santa Clara, CA, he dropped a few hints as to what his new company, High Fidelity, is cooking up. At its heart is a 3D world that’s virtually indistinguishable from reality, offering vastly faster interaction, employing body-tracking sensors for more lifelike avatars, and harnessing the computing power of tens of millions of devices in the hands of end users the world over. Within five years, he believes, any mobile device will be able to access and interact with photo-realistic virtual worlds in real time, with no discernible lag.

We’re already seeing the first signs of this. Last summer, I spoke with Kim Libreri at ILM (this was before the Disney purchase) regarding the stunning but now-doomed Star Wars 1313:

We’ve got to a point where many of the techniques that we would have taken as our bread-and-butter at ILM a decade ago are now absolutely achievable in real-time. In fact, you know, a render at ILM—or any visual effects company—can take ten hours per single frame. This is running at thirty-three milliseconds—that’s like a million times faster than what we would have on a normal movie. But the graphics technology is so advanced now that the raw horsepower is there to do some incredible things.
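Libreri’s “million times faster” figure isn’t hyperbole; a quick back-of-the-envelope check, using only the two numbers from the quote, bears it out:

```python
# Compare a ten-hour-per-frame film render (from the quote)
# with a 33-millisecond real-time game frame.
film_render_s = 10 * 60 * 60   # 10 hours per frame, in seconds
realtime_s = 0.033             # 33 ms per frame, in seconds

speedup = film_render_s / realtime_s
print(f"{speedup:,.0f}x faster")  # roughly 1.1 million times faster
```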

And this is today. Within five years, he told me, they’ll achieve absolute, indistinguishable-from-reality photo-realism. Regarding the ability of mobile devices to connect to the type of virtual world Rosedale envisions, he’s a little more conservative. In this case, the bottleneck isn’t computing power but the speed of Internet connectivity, which depends on more factors. Still, Libreri sees that barrier being cleared within ten years. And that’s it—we’ll have removed the last obstacle to delivering hyper-realistic, fully immersive virtual worlds to any device, anywhere. From that point on, the possibilities will be limitless, bounded only by our imagination.

The implications of this, though, are another matter entirely—and one I’ll take up in my next post. Until then, I’ll leave you with a taste of the possible: Star Wars 1313 videos here

… and here.

You can read more about Philip Rosedale’s Augmented World Expo talk here.

And you can learn more about the Augmented World Expo here.

RealD 3D is so last year. If you’re old enough to remember the Dark Ages of 3D cinema, when you had to wear those funny red-and-green “glasses” and exert an effort of will to lose yourself in a movie, you may scoff at that claim. Until yesterday, I myself would have called you crazy for making it. After all, just two years ago, I sat in a darkened theater dumbstruck as the landscapes of Pandora leaped off the screen. Whatever your opinion of the film, there’s no denying Cameron’s technical achievement in bringing Avatar vividly and viscerally to life. Visually, it was captivating.

Since then, I’ve been stunned by the gorgeous, lush, flawless 3D images of dozens of movies—and I stand by my opening statement. In fact, I’ll do you one better: what we’ll see from films within the next five to ten years will make today’s 3D look like Hollywood’s primitive, color-shifted attempts of the early 20th century. The future of film is Holodeck-real. And it’s just around the corner.

In an effort to promote its PlayStation Store, Sony just released three short films that will change the cinema experience forever. Under contract with Sony, the UK-based Studio Output and Marshmallow Laser Feast attempted the impossible: projection-mapping an entire room in real time, in a single take—no post-production, digital editing, or addition of CGI effects after the fact.

For those of you unfamiliar with projection mapping, it’s a technique that projects animated, 3D-looking images onto a fixed surface (usually a wall or building). It’s a recent technology, but it’s being used more and more frequently in advertising and marketing circles. You can see some examples of standard projection mapping here.

Though projection mapping is relatively new, the real innovation behind the Sony videos is how they were made. Traditional projection mapping’s downfall is a matter of perspective: the effect holds up only from a single, fixed viewpoint. Creating a 3D, immersive movie with the audience at the center that also changes angles and viewpoints was therefore impossible. Sony’s production team cleared this hurdle by combining existing technology from two distinct realms of entertainment: movies and videogames. The team attached several PlayStation Move motion controllers to a standard Steadicam camera mount and synced them with PlayStation Eye motion-capture cameras (used with Sony’s PS3 console), so the projections could be updated to match the camera’s position as it moved. The results are… well, see for yourself. And as you’re watching, bear in mind that these are, according to everyone on the production team, entirely real-time and completely free of post-production editing or enhancement (scroll down to the end to see the videos).
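Sony hasn’t published the details of its pipeline, but the core idea is straightforward: every frame, take the camera pose reported by the motion trackers and re-render the projected imagery from that exact viewpoint, so the illusion holds even as the camera moves. Here is a minimal sketch of the per-frame projection step; the pinhole-camera model and all parameter values are illustrative assumptions, not Sony’s actual code:

```python
import numpy as np

def project_points(points_world, cam_pos, cam_R, focal, cx, cy):
    """Pinhole projection of 3D world points into the tracked
    camera's image plane. Re-running this every frame with the
    latest tracked pose is what keeps the projected scene
    looking correct from a moving viewpoint."""
    # Transform world points into camera coordinates
    # (cam_R is the world-to-camera rotation matrix).
    p_cam = (cam_R @ (points_world - cam_pos).T).T
    # Perspective divide onto the image plane.
    u = focal * p_cam[:, 0] / p_cam[:, 2] + cx
    v = focal * p_cam[:, 1] / p_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```

A point straight ahead of the camera lands at the image center (cx, cy); as the tracked camera translates or rotates, the same world point projects somewhere else, and the wall projection is redrawn accordingly.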

We’re still a few years away from the practical, large-scale application of this, but it’s now demonstrably possible. Sony’s opened the window on a new movie experience, and it’s only a matter of time before others follow.

Personally, I’m holding out for the battle of Hoth.

You can read a bit more about the videos here.

Public safety is a tricky business. It is, by its nature, risky: paramedics, firefighters, police officers, EMTs, first responders—they have dangerous jobs, and they often put themselves in harm’s way to help others. When they go to work, lives are at stake—sometimes theirs, sometimes ours, and sometimes both. For reasons that should be obvious, adequate and effective training for this line of work is absolutely critical: call me crazy, but being thrown cold into an emergency doesn’t strike me as the best way to test your skills.

Okay, so training is important. However, it’s also expensive, and budgets for public safety at all levels—local, state and federal—are stretched even during times of economic prosperity. It’s time-intensive as well, can be limited in reach, and usually requires safety personnel to travel outside their communities—taking them off the streets and reducing their departments’ abilities to respond to emergencies at home. I don’t know about you, but I’m not aware of any towns nearby that have dedicated training facilities on-site.

So how do we reconcile the need for comprehensive training with the expense of providing it?

Wait for it…

By using videogames, of course (you expected a different answer, maybe?).

This is exactly what groups like Virtual Heroes do for a living. Using 3D game engines and game design techniques, Virtual Heroes builds scenarios within immersive virtual environments to help military, public safety, and healthcare professionals respond to catastrophic events in the real world. One of its flagship products, HumanSim, allows healthcare workers to sharpen their skills in realistic situations without risking real lives. The company also creates simulations for commercial clients who want to expand their ability to deliver on-the-job safety training. I watched one targeted at electrical workers. Arc flashes are scary things…

Okay, but how does this affect you, me, a typical big-city urbanite, or the average citizen in small-town America? Let me bring this home. I live in western Massachusetts. Belchertown, to be exact. At the end of last week, our local paper, The Sentinel, reported a story from the next town over about police and fire personnel training on an immersive, 3D driving simulator. Big deal, right?

Actually, it is. First, the simulator was brought to them—right into the police and fire departments—allowing more personnel to go through the training than if the departments had to send them off-site. Also, had there been an emergency during the training sessions (thankfully, there were none), every police officer and firefighter would have been available to respond. Perhaps most importantly, the training was provided free of charge, allowing cash-strapped departments to offer driver training to all their personnel—something they couldn’t otherwise afford. According to Granby Police Chief Alan Wishart, who also took a turn behind the wheel,

We would not be able to do this on our own. If it wasn’t for them [Massachusetts Interlocal Insurance Association (MIIA)], it could be years before some of the officers saw this type of training. It’s a great opportunity for us.

Granby Fire Lieutenant Brian Pike agreed, adding that they could provide additional scenarios—and there are hundreds available—without taking out the trucks.

Less emergency worker downtime, more personnel trained, zero cost to the community—it all adds up to more experienced public safety departments, better emergency response, and more lives saved. So the next time you hear a public safety success story, you may have to thank a videogame.

To read the original article, check out The Sentinel here.

You can learn more about Virtual Heroes here.

And you can read more about the Massachusetts Interlocal Insurance Association’s simulator here.

Remember the scene in The Empire Strikes Back when Yoda lifts Luke’s X-wing out of the swamp with the Force? Or when Darth Vader, using only his mind, repeatedly smashes Luke with a variety of blunt objects? That was cool, right? I mean, who wouldn’t want to be able to move things just by thinking about them?

Now you can. And though it’s not quite the Force, there are a few games on the market that allow you to move objects or alter the gaming environment simply by concentrating. Mindflex, by Mattel, and Uncle Milton Industries’ Force Trainer rely on headsets that read players’ electrical brain waves and translate them into game input, allowing players to control items within the games themselves. The games have drawn their fair share of skepticism, but all that changes once people play them. Said Stanley Yang, CEO of NeuroSky, the company behind the technology inside Mindflex and Force Trainer,

That’s everyone’s initial reaction to the technology: It doesn’t work. It can’t work. Telekinesis is just something in the movies. And telekinesis in its pure form is really impossible. But this technology is as close as you will get.

Iceland-based developer MindGames also has two apps for sale on Apple’s App Store (where else?) that are controlled by brain waves. W.I.L.D. allows players to navigate the game landscape or complete a range of tasks by alternately concentrating and relaxing. Tug of Mind is also designed to encourage relaxation—this time through an angry avatar that gradually grows happier the longer a player stays calm.
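Under the hood, these games are simpler than “mind control” suggests: the headset boils raw EEG down to a single attention (or meditation) score on a 0–100 scale, and the game maps that score to one in-game parameter, such as the speed of the fan that levitates Mindflex’s ball. Here is a hypothetical sketch of that mapping; the function, threshold, and speed range are my assumptions, not the actual firmware:

```python
def attention_to_fan_speed(attention, threshold=40, max_speed=255):
    """Map a 0-100 attention score to a fan speed.
    Below the threshold the ball stays put; above it, speed
    scales linearly with how hard the player concentrates.
    (Illustrative values, not the real device's tuning.)"""
    if attention <= threshold:
        return 0
    frac = (attention - threshold) / (100 - threshold)
    return round(frac * max_speed)
```

In practice the score is noisy, so a real device would also smooth it over time; a single momentary dip in concentration shouldn’t drop the ball.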

The technology driving these games has applications far beyond entertainment, though. In yet another instance of life imitating art, both Honda and Toyota are pouring money into research on mind-control features, such as trunks and doors that open on thought command. The Defense Advanced Research Projects Agency has even gotten into the act, last year awarding Johns Hopkins University a cool $34.5 million to test mind-controlled prosthetic limbs. It may sound like science fiction, but the limbs could be available as early as next year. Before long, they could be giving new hope and new life to injured soldiers, amputees, paraplegics—the list goes on. And they’ll have games to thank for it.

Now how cool is that?

You can find the Mindflex/Force Trainer article here.

And the article about MindGames is here.

For details on mind-controlled prosthetics, check out this article…

… and this one.

A new technology developed at Disney Research, Pittsburgh (DRP) is poised to usher in a brave new world of gaming. Called Surround Haptics, it allows game players (and film viewers, for that matter) to feel a wide range of sensations—from gentle caresses to bone-jarring collisions. First demonstrated at the Emerging Technology Exhibition at SIGGRAPH 2011, it enhanced a driving simulator, letting drivers feel road imperfections, skidding, braking, acceleration, collisions, jumping and landing, and even objects striking the car. It’s all thanks to a gaming chair equipped with inexpensive vibrating actuators that transform digital information into physical sensation. Said Ivan Poupyrev, senior research scientist at DRP,

Although we have only implemented Surround Haptics with a gaming chair to date, the technology can be easily embedded into clothing, gloves, sports equipment and mobile computing devices. This technology has the capability of enhancing the perception of flying or falling, of shrinking or growing, of feeling bugs creeping on your skin. The possibilities are endless.

Endless, indeed. The ability to feel within a game environment would radically alter the game experience—especially when you consider the implications for accessible games.
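Part of what makes Surround Haptics feel continuous rather than buzzer-like is how it drives the chair’s grid of actuators: by vibrating neighboring actuators at complementary intensities, it creates a “phantom” sensation that seems to sit, and move, between them. Here is a rough one-dimensional sketch of the idea; the DRP work uses a perceptually tuned model rather than the simple linear interpolation shown here:

```python
def phantom_intensities(pos, actuators):
    """Given a virtual contact point `pos` and a sorted list of
    actuator positions, return drive intensities for the two
    actuators bracketing the point. As `pos` slides from one
    actuator toward the next, the felt vibration appears to
    move smoothly between them."""
    for a, b in zip(actuators, actuators[1:]):
        if a <= pos <= b:
            frac = (pos - a) / (b - a)
            return {a: 1.0 - frac, b: frac}
    return {}  # contact point outside the actuator grid
```

A contact point exactly midway between two actuators drives each at half intensity; a point sitting on an actuator drives that one alone.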

Of course, this sensory enhancement has other implications as well. It doesn’t take a genius to see how this could apply to adult-themed games—which brings a critical issue to light: when is a game no longer a game? Right now, virtual sex is limited to (for the most part) a visual and auditory experience, and the line between the game world and reality is pretty clear—though there are issues of betrayal and trust even with that. But what happens when you incorporate touch into the game, when you can actually “feel” your game partner? And what happens if you combine this with a virtual world like Second Life, where your game partner could easily be a real person at the other end of a network connection? If you’re single, it may not be that big a deal, but what about people who have partners in real life? Though there’s no direct contact between game partners, would such immersive virtual sex be considered cheating? Or, more simply, would the betrayal be any less real?

If the history of technological progress has taught us anything, it’s that once we develop the ability to do something, there’s no preventing its use. Sooner or later, haptic systems will make the inevitable leap to the bedroom. It will be up to us—individually and as a society—to adapt to this new reality.

And there are darker ways this could go, but I’d rather not get into that.

To read the original article, click here.

For more on Surround Haptics, click here.

And for an overview of haptic technology, click here.