For the technically inclined, more details after the video:
While the video is fairly self-explanatory, some factors got left out in editing. The main one is how we used Microsoft's Kinect camera to scan Chris's face.
So yes, not only did we photograph him to build the 3D head and face you see in the video, but there's yet another model that is significantly more detailed and accurate. However, where the face in the video has a texture based on photographs, the Kinect scan is purely a mesh, used in conjunction with the textured model as well as tracking markers we specified.
Technical enough yet?
To delve even deeper into the jargon/nerd-talk: we made two prototypes for tracking markers (which you can see briefly in the video). Our first attempt was a headband with three sticks attached to it, with tracking markers at the end of each stick. While that gave us lots of parallax and tracking data, it wasn't sturdy enough, and the wiggling around was giving us bad data. Our solution was to acquire an Orgasmatron (weird name, we know) and attach a single, larger tracker on top. That second rig yielded the best results, so we plan on developing it further for the actual project.
And a final bit of motion-tracking lingo: the 3D tracking is a mix of user-input tracking points and geometry/object tracking (which is why we generate the face from photos and Kinect scans).
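For the curious, here's a rough sketch of the idea of blending two tracking sources. This is a hypothetical illustration, not our actual pipeline: `fuse_tracks`, the coordinates, and the 50/50 weighting are all made up for the example, which just averages a hand-placed screen position with one predicted from tracked geometry.

```python
def fuse_tracks(manual, geometry, manual_weight=0.5):
    """Blend two (x, y) screen-space estimates of the same feature:
    one placed by hand, one projected from a tracked mesh.
    Weights here are purely illustrative."""
    gw = 1.0 - manual_weight
    return tuple(manual_weight * m + gw * g for m, g in zip(manual, geometry))

# Example: a marker tracked by hand vs. predicted from the scanned mesh
fused = fuse_tracks((412.0, 260.0), (410.0, 264.0))
print(fused)  # (411.0, 262.0)
```

Real trackers do something far more involved (solving for full camera and head pose), but the spirit is the same: hand-placed points keep the solver honest where the geometry track drifts.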
If you've made it this far, congratulations! Here are some stills of the process: