
In my previous two posts, I talked a lot about avatars and some of the rather intriguing and exciting developments happening in virtual worlds. In brief, both are evolving far beyond what early digital pioneers could have envisioned when they took their first steps into virtual reality. Today, as demonstrated by LucasArts’ E3 reveal of Star Wars 1313, we can render near-photo-realistic environments in real time. Absolute photo-realism is a mere skip in time away (five years at the outside), and completely immersive, realistic virtual worlds experienced through any connected device are already on the horizon.

This does not spell the death of the avatar. Avatars will, I’m fairly certain, always have a place within gaming and VR. But we’re fast approaching a technological revolution the likes of which humanity has never experienced. Soon we will have the ability to interact with virtual environments directly, without filtering our actions through a digital intermediary.

Very soon, in fact. Prototypes of the Oculus Rift VR headset show great promise in delivering fully immersive 3D gaming to the masses. With it, you can climb into the cockpit of a starfighter and engage in ship-to-ship combat, surrounded on all sides by the vastness of space and the intensity of interstellar battle (as with EVE VR). Combine the Rift with the Wizdish omni-directional treadmill and a hands-free controller like Microsoft’s Kinect, and you can transform a standard first-person shooter into a full-body experience, ditching the thumbsticks to literally walk through a game’s virtual environment and controlling your actions with natural motions of hands, feet, arms, legs, and head.

In the world of augmented reality (AR), Google Glass will hit the streets later this year, providing a wearable, hands-free interface between the real and the virtual, enhancing your interaction with reality and allowing you to transmit your view of the world across the globe.

But the real future lies in a recently Kickstarted project by the startup Meta. Meta is developing a system for interacting with the virtual world that tears open the envelope of the possible and drives the line dividing virtual from real one step closer to extinction. It combines stereoscopic 3D glasses with a 3D camera that tracks hand movements, allowing a level of gestural control over the virtual world previously unseen outside of films like Minority Report or The Matrix Reloaded.

Unlike the Oculus Rift, the Meta system offers direct physical control, and it works in real space rather than a strictly virtual gaming world. And unlike Google Glass, Meta creates a completely immersive virtual environment. According to a recent article by Dan Farber on CNET,

“Meta’s augmented reality eyewear can be applied to immersive 3D games played in front of your face or on [a] table, and other applications that require sophisticated graphical processing, such as architecture, engineering, medicine, film and other industries where 3D virtual object manipulation is useful. For example, a floating 3D model of a CAT scan could assist doctors in surgery, a group of architects could model buildings with their hands rather than a mouse, keyboard or stylus and car designers could shape models with the most natural interface, their hands.”

In other words, with the Meta system, the wearer can actually enter and manipulate the virtual world directly, altering it to serve whatever purpose they desire. Atheer, another recent entry in the field of AR, is working on a similar system. Like Meta, Atheer’s technology uses a 3D camera and stereoscopic glasses for gesture control, and it likewise provides direct access to the virtual world. According to Atheer founder and CEO Sulieman Itani,

“We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in [the] physical world around you. In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at [a] restaurant, and when the friend walks into the restaurant, only they can see it.”

The biggest difference between the two systems is that Atheer isn’t building a hardware-specific platform; it will be able to run on top of existing systems like Android, iOS, Windows Mobile, or Xbox. Apps built specifically with the Atheer interface in mind will be able to take full advantage of the technology. Those that aren’t optimized for Atheer will present users with a virtual tablet that they can operate by touch, exactly like an iPad or Galaxy Tab. Here’s Itani’s take:

“This is important for people moving to a new platform. We reduce the experience gap and keep the critical mass of the ecosystem. We don’t want to create a new ecosystem to fragment the market more. Everything that runs on Android can be there, from game engines to voice control.”

We’re rapidly approaching a critical stage in our technological evolution, nearing the point at which we’ll be able to work, play, and live, at least part-time, in hyper-realistic, fully immersive virtual worlds, just as we do in real-world spaces today. So, what does this mean? Games will get better. Virtual worlds will become richer and more complex, and take on greater significance as we spend more time in them. And we, as humans, may begin to lose our grasp of the real. I spoke to Kim Libreri at Industrial Light & Magic about this, and he agreed that it could become a real problem. The human brain, he said, is a very flexible learning device. With the gains in fidelity that we’ll see in AR over the next decade, he believes it’s going to become more and more difficult to separate what’s real from what’s not. The longer we coexist within those virtual spaces, the harder the distinction will become, and that could begin to create problems. As Libreri put it,

“There’s a real, tangible threat to what can happen to you in the cyber world, and I think as things visualize themselves more realistically . . . think about cyber-crime in an AR world. Creatures chasing you. It’s gonna be pretty freaky. People will have emotional reactions to things that don’t really exist.”

It might also render people particularly susceptible to suggestion. It’s a bit like the movie Inception, where ideas are planted deep within a subject’s subconscious, except this would be easier. If you can’t distinguish the virtual from the real, a proposal put to you in a virtual world could carry the same persuasive weight as one presented in reality. If this sounds like science fiction, consider for a moment how vulnerable we are to subliminal suggestion, or even to the direct messages of everyday advertising. No, if anything, the intricacies of our approaching reality will most likely far exceed our ability to imagine them.

Of course, there are positives and negatives to any technology. Through highly accurate simulations, immersive virtual worlds could allow people to visit places they might otherwise never reach. They could also provide greater and more intuitive access to information, and the ability to use and manipulate it in ways that are today all but unthinkable. Whether the looming future of alternate reality will be predominantly good or bad is irrelevant. It is coming, one way or another, and there’s nothing we can do to stop it. If our history and the ever-increasing speed of development and change are predictive (or at least indicative) of future events, then alternate reality, in whatever form it assumes, is poised to bring an end to the separation of virtual and real. And if that happens, we will all bear witness to the birth of a singularity beyond which the nature of human interaction, and indeed of humanity itself, will be changed, fundamentally, forever.


In my last post, I said that avatars were all the rage, and they are—and most likely will only become more so within the next five years. Why then? That’s when Philip Rosedale believes we’ll see intricately detailed virtual worlds that begin to rival reality. If you don’t know Rosedale, you know his work: back in 2000, he created the first massively multiuser 3D virtual experience. It was more alternate reality than game, and with perhaps a nod to his desire to build a world that would become an essential component of daily existence, Rosedale called his creation Second Life.

For many people, Second Life became just that. Users (residents, in the SL lexicon) could log in, explore the world, build virtual places of their own, shop, find support groups, run businesses, have relationships… in short, do everything people do in “real” life. The experience is engaging, so much so that some actually find it as involving as their first lives, if not more so. No one, however, would mistake Second Life for reality: visually it resembles an animated film. The fidelity is good, but it has nothing on the real world. To become truly immersive, the virtual component must be thoroughly unquestionable, convincing enough to fool our brains into believing that we’re in the grip of the real.

That’s where we’re headed (rushing headlong, in fact), and Rosedale is at the fore in getting us there. At the recent Augmented World Expo in Santa Clara, CA, he dropped a few hints as to what his new company, High Fidelity, is cooking up. At its heart could be a 3D world that’s virtually indistinguishable from reality, offering a vastly increased speed of interaction, employing body-tracking sensors for more lifelike avatars, and drawing on the computing power of tens of millions of devices in the hands of end users the world over. Within five years, he believes, any mobile device will be able to access and interact with photo-realistic virtual worlds in real time, with no discernible lag.

We’re already seeing the first signs of this. Last summer, I spoke with Kim Libreri at ILM (this was before the Disney purchase) regarding the stunning but now-doomed Star Wars 1313:

“We’ve got to a point where many of the techniques that we would have taken as our bread-and-butter at ILM a decade ago are now absolutely achievable in real-time. In fact, you know, a render at ILM—or any visual effects company—can take ten hours per single frame. This is running at thirty-three milliseconds—that’s like a million times faster than what we would have on a normal movie. But the graphics technology is so advanced now that the raw horsepower is there to do some incredible things.”
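
Libreri’s “million times” figure, incidentally, holds up to a quick back-of-the-envelope check. Taking his two numbers at face value (ten hours per offline frame, thirty-three milliseconds per real-time frame):

```python
# Sanity check on Libreri's figures: a ten-hour offline film render
# versus one 33 ms real-time frame (roughly 30 frames per second).
offline_render_ms = 10 * 60 * 60 * 1000  # ten hours, in milliseconds
realtime_frame_ms = 33                   # one real-time frame

speedup = offline_render_ms / realtime_frame_ms
print(f"{speedup:,.0f}x")  # about 1,090,909x -- "like a million times faster"
```

So “a million times faster” is not hyperbole; it’s almost exactly right.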

And this is today. Within five years, he told me, they’ll achieve absolute, indistinguishable-from-reality photo-realism. Regarding the ability of mobile devices to connect to the type of virtual world Rosedale envisions, he’s a little more conservative. Here the bottleneck isn’t computing power but the speed of Internet connectivity, which depends on factors beyond raw hardware. Still, Libreri sees that barrier being cleared within ten years. And that’s it: we’ll have removed the last obstacle to delivering hyper-realistic, fully immersive virtual worlds to any device, anywhere. From that point on, the possibilities will be limitless, bounded only by the extent of our imagination.

The implications of all this, though, are another matter entirely, and one I’ll take up in my next post.