HandsFree3d Video and New Website

April 11, 2008

Last month I blogged about a new project I started in mid-January with Mitch Kapor. We’ve made great progress in ten short weeks and now have a pretty good set of demoable code. I promised back then that we’d be doing some video demos; the first of those went live this morning here.

We also put together a small website called handsfree3d, dropping the confusing “Segalen” code name in the process (too bad, I kind of liked it… oh well…).

More videos will be coming in the next few weeks and I’ll be blogging at handsfree3d, answering questions and requests as much as I can.

Hope to hear from you soon!


Segalen – A Keyboard-Free Second Life

March 3, 2008

So, since that fateful day of January 15th (see OSAF 2.0 and me), I’m a full-time employee of KEI (Kapor Enterprises, Inc.), working on a project code-named Segalen (named after Victor Segalen, for the curious) in Mitch Kapor’s “incubator”. That is to say, I’m basically working alone on the technical side, and with Mitch on the idea, the business aspects, and everything else.

The objective of the project is to dramatically change the way users interact with online virtual worlds. The online world (or metaverse, or MMO, or whatever you want to call it…) in question for this project is, of course, Second Life.

For the moment, the Segalen mission is summed up in one short sentence:

Keyboard-free Second Life!

That’s it! That’s what we’re trying to create. I’m hacking on the open source slviewer, implementing different ways to interact with SL (navigate, change camera focus, trigger animations, interact with objects) without ever having to learn those weird and complicated keyboard navigation sequences (Alt-zooming, anyone?) and, above all, letting users interact naturally in-world without doing everything through one single pointer (the mouse) and a handful of keystrokes.
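
To give a feel for the shape of this, here’s a minimal sketch of that kind of control loop. It’s purely illustrative, not the actual Segalen code: CameraTracker, BodyPose and ViewerActions are made-up stand-ins for the 3D camera driver and for the slviewer entry points the keyboard normally drives.

```cpp
#include <iostream>

// Purely illustrative sketch -- not actual Segalen/slviewer code. The
// names below are hypothetical stand-ins for the camera driver and for
// the viewer hooks that keystrokes normally feed.
struct BodyPose { float lean; bool hands_up; };

struct CameraTracker {
    BodyPose poll() { return BodyPose{0.1f, false}; }  // stub frame
};

struct ViewerActions {
    void move(float speed) { std::cout << "move " << speed << "\n"; }
    void fly(bool on)      { std::cout << (on ? "fly" : "land") << "\n"; }
};

int main() {
    CameraTracker tracker;
    ViewerActions viewer;
    // In the viewer this would run once per rendered frame: the tracked
    // body pose, not keystrokes, decides what the avatar does.
    for (int frame = 0; frame < 3; ++frame) {
        BodyPose pose = tracker.poll();
        viewer.move(pose.lean);
        viewer.fly(pose.hands_up);
    }
}
```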

To do so, we started exploring the use of new “3D” cameras, which capture not only RGB but also depth (the distance to the camera) for each pixel. That makes tracking body features in real time much easier than with plain “2D” cameras. Within the first weeks, I was able to hack far enough into the slviewer code to plug in a camera and start interacting in SL without using the keyboard. Things started working “for real” two weeks ago, and Mitch got a little excited and spilled the beans at the Metaverse Roadmap meeting at Stanford.
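
To see why per-pixel depth helps so much: the user can be separated from the room with a simple distance threshold, with no fragile background or skin-color models. Here’s a toy example; the frame layout and units are hypothetical, and real cameras differ.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy example of why depth helps: segment "the user" as everything nearer
// than a cutoff and take the centroid as a crude torso position. Frame
// layout and units are hypothetical; real depth cameras differ.
struct Centroid { float x, y; bool valid; };

Centroid findUser(const std::vector<uint16_t>& depth_mm, int w, int h,
                  uint16_t max_range_mm)
{
    long sum_x = 0, sum_y = 0, count = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint16_t d = depth_mm[y * w + x];
            if (d > 0 && d < max_range_mm) {  // 0 = no reading
                sum_x += x; sum_y += y; ++count;
            }
        }
    if (count == 0) return {0, 0, false};
    return {float(sum_x) / count, float(sum_y) / count, true};
}

int main() {
    // Fake 4x4 frame: a "user" blob at ~1.2 m against a far background.
    std::vector<uint16_t> frame = {
        4000, 4000, 4000, 4000,
        4000, 1200, 1200, 4000,
        4000, 1200, 1200, 4000,
        4000, 4000, 4000, 4000 };
    Centroid c = findUser(frame, 4, 4, 2000 /* keep pixels under 2 m */);
    if (c.valid) std::printf("user at (%.1f, %.1f)\n", c.x, c.y);  // (1.5, 1.5)
}
```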

I had to work extra hours over the following weekend to make the “segway” navigation he talked about actually work, and on the next Tuesday I gave a first demo to the whole KEI staff. It was received with cheers and applause. It felt great, though clearly there were a lot of challenges ahead.
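
For the curious, the “segway” idea is what the name suggests: lean forward to go forward, lean back to slow down or back up, like riding a Segway. Roughly, the code maps how far the torso has moved from a calibrated rest position to a walk speed. A sketch, with illustrative constants rather than the actual tuning:

```cpp
#include <algorithm>
#include <cstdio>

// Sketch of lean-to-move ("segway"-style) navigation: how far the torso
// leans from its calibrated rest position sets the walk speed. Constants
// and units are illustrative guesses, not the project's actual tuning.
float walkSpeed(float torso_z_m, float rest_z_m)
{
    const float kDeadZone = 0.03f;  // metres of lean ignored as noise
    const float kFullLean = 0.20f;  // lean giving maximum speed
    float lean = rest_z_m - torso_z_m;  // + = leaning toward the camera
    if (lean > -kDeadZone && lean < kDeadZone) return 0.0f;
    float speed = lean / kFullLean;         // proportional control
    return std::clamp(speed, -1.0f, 1.0f);  // -1 = back up, +1 = full ahead
}

int main() {
    const float rest = 1.50f;  // calibrated standing distance from camera
    std::printf("%.2f\n", walkSpeed(1.50f, rest));  // 0.00: standing still
    std::printf("%.2f\n", walkSpeed(1.40f, rest));  // 0.50: gentle lean in
    std::printf("%.2f\n", walkSpeed(1.20f, rest));  // 1.00: full speed
}
```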

Since then, I’ve completed the whole navigation UI: walk, turn, fly, jump, crouch, etc. It’s really cool and demos amazingly well! It’s also a truly different feeling to move around SL with your own body (so to speak) instead of being tied to that darn keyboard.
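
To give the flavor of the rest of it, a discrete pose-to-action mapping could look something like this (the gestures and thresholds below are made up for illustration, not what Segalen actually uses):

```cpp
#include <cstdio>

// One plausible pose-to-action mapping, purely for illustration -- the
// actual gestures and thresholds in the project may be quite different.
enum class Action { None, Jump, Crouch, FlyToggle };

struct Pose {
    float head_y_m;  // current head height above the floor
    float rest_y_m;  // calibrated standing head height
    bool  arms_out;  // both arms extended sideways
};

Action classify(const Pose& p)
{
    if (p.arms_out)                      return Action::FlyToggle;
    if (p.head_y_m > p.rest_y_m + 0.15f) return Action::Jump;    // hopped up
    if (p.head_y_m < p.rest_y_m - 0.25f) return Action::Crouch;  // ducked down
    return Action::None;
}

int main() {
    Pose ducked = {1.35f, 1.70f, false};
    std::printf("%d\n", classify(ducked) == Action::Crouch);  // prints 1
}
```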

Next on my list: direct interaction with objects in-world. I have some pretty cool ideas already and can’t wait to get enough code in there to start playing with them. The object-editing code, though, lives in a completely different (and, to me, unknown) part of the source tree… Well, I guess I’ll get there the way I did with navigation.

In short, I’m not done yet, but I’m making daily progress. I wanted to keep things quiet, but I saw that some people are starting to pick up on Mitch’s story, so I wanted you to hear what’s going on straight from the horse’s mouth. I don’t know exactly when I’ll be done with everything I want for a first really knock-your-socks-off demo (if such a thing even exists), but I think it will be weeks rather than months. Optimism is required in this kind of job!