New media models: Video on plain paper, or text that controls video

Video on paper: MIT doctoral student Pranav Mistry’s “SixthSense” is a wearable device that can project video onto any page of a newspaper, snap a picture when you frame a scene in the rectangle formed by your fingers and thumbs, and project a stranger’s Facebook info onto his T-shirt for a quick introduction.

Mistry’s video on TED.com and this earlier demo with MIT Media Lab professor Pattie Maes show the prototype device in action. The hardware consists of a Bluetooth-equipped smartphone, a webcam, and a miniature projector, plus a mirror to redirect the image. (Yes, this is one MIT Media Lab demo that is literally done with mirrors.) The software is the hard part.

Mistry has a project website with photos, related articles, and this summary:

‘SixthSense’ is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. By using a camera and a tiny projector mounted in a pendant like wearable device, ‘SixthSense’ sees what you see and visually augments any surfaces or objects we are interacting with. It projects information onto surfaces, walls, and physical objects around us, and lets us interact with the projected information through natural hand gestures, arm movements, or our interaction with the object itself.
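How does the software part work? Mistry’s prototype reportedly tracks colored marker caps on the user’s fingertips with the webcam, then maps those positions into gestures. As an illustration only (not Mistry’s actual code), here’s a minimal TypeScript sketch of that idea using standard browser APIs: grab webcam frames onto a canvas and find the centroid of pixels matching a marker color.

```typescript
// Illustrative sketch only: SixthSense reportedly tracked colored marker
// caps on the fingertips. This is NOT Mistry's implementation, just the
// idea expressed with standard browser APIs.
async function trackFingertipMarker(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  function step(): void {
    ctx.drawImage(video, 0, 0);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let sumX = 0, sumY = 0, count = 0;
    for (let i = 0; i < frame.data.length; i += 4) {
      const r = frame.data[i], g = frame.data[i + 1], b = frame.data[i + 2];
      // Crude test for a bright red marker; a real system would need a
      // calibrated color model and lighting compensation.
      if (r > 180 && g < 80 && b < 80) {
        const p = i / 4;
        sumX += p % frame.width;
        sumY += Math.floor(p / frame.width);
        count++;
      }
    }
    if (count > 50) {
      // Centroid of matching pixels = estimated fingertip position, which
      // a gesture layer would then map onto the projected image.
      console.log("fingertip at", sumX / count, sumY / count);
    }
    requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
```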

While I doubt that video projection from an iPhone-size pendant will become the next way to “save the newspaper,” the technologies in the TED videos are fascinating — and not only Mistry’s demo.

Text controlling video: Reading-oriented new media fans, notice the “interactive transcript” link on those TED videos. The transcript doubles as an index to the video — click any sentence in the text and the Flash video player jumps to that point. (I’d missed this feature in earlier TED videos because I’ve watched so many of them embedded on other sites, including my own. Here’s its announcement.)
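TED’s player is Flash-based, but the underlying idea is simple, and easy to sketch with open web standards. Assuming a transcript where each sentence carries a start time in the markup (a generic illustration, not TED’s actual code), clicking a sentence just seeks the video:

```typescript
// Minimal click-to-seek transcript, assuming markup like:
//   <video id="talk" src="talk.mp4" controls></video>
//   <p class="transcript"><span data-start="0.0">Hello.</span> ...</p>
const video = document.querySelector<HTMLVideoElement>("#talk")!;

document.querySelectorAll<HTMLSpanElement>(".transcript span[data-start]")
  .forEach((sentence) => {
    sentence.addEventListener("click", () => {
      // Jump the player to the moment this sentence is spoken.
      video.currentTime = parseFloat(sentence.dataset.start!);
      video.play();
    });
  });
```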

The TED version isn’t the only “interactive transcript” technology around. See the New York Times video and transcript of President Obama’s speech in Cairo last June for another example; it uses a navigation timeline to choose points in both the text and video. MSNBC’s presidential inauguration coverage included something similar.
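Those timeline versions also sync in the other direction: as the video plays, the transcript tracks along with it. Again as a generic sketch (not the Times’ or MSNBC’s implementation), and reusing the data-start markup from the example above, the player’s timeupdate event can highlight whichever sentence is currently being spoken:

```typescript
// Reverse sync: highlight the sentence being spoken as the video plays.
// Assumes the same <video id="talk"> and data-start markup as above.
const player = document.querySelector<HTMLVideoElement>("#talk")!;
const sentences = Array.from(
  document.querySelectorAll<HTMLSpanElement>(".transcript span[data-start]")
);

player.addEventListener("timeupdate", () => {
  const t = player.currentTime;
  // Find the last sentence whose start time has already passed.
  let current: HTMLSpanElement | null = null;
  for (const s of sentences) {
    if (parseFloat(s.dataset.start!) <= t) current = s;
    else break;
  }
  // Toggle a CSS class so the current sentence can be styled.
  sentences.forEach((s) => s.classList.toggle("spoken", s === current));
});
```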

Thanks to simpsonmedia.net for more info on the interactive transcripts, and to Sree Sreenivasan for the pointer to both the transcript and the TED videos. For more advanced Adobe Premiere and Flash users, here’s a related tutorial.
