The camera of the future is not at CES — it's at Eyebeam, an art and technology incubator in the Chelsea neighborhood of New York City, where two artists-in-residence are currently shooting a documentary that's probably better described as interactive software than a feature film.
Ever since their first meeting at a coding conference in Pittsburgh, James George and Jonathan Minard have been perfecting their experimental filmmaking technique, which uses a DSLR + Kinect hybrid image capture process in conjunction with custom editing software. The results are twofold: a distinct set of stuttered images that can be spatially manipulated by the viewer, and a free dev kit that allows anyone with the right equipment to play along — like Hollywood 3D's dirtier, open-source cousin made into a video game.
Clouds, an upcoming documentary that was just funded on Kickstarter, is being shot entirely using this method, which George and Minard call RGB+D. Set within a framework resembling a modern "choose your own adventure" game, the finished film (if you could call it that) will be an interactive narrative offering a reflective look at the culture of creative coding and software art, as told by a sizable cast of artists, journalists, and curators.
George and Minard explain that the concept came from a keynote address by sci-fi author Bruce Sterling — one of the subjects of the documentary — in which he tried to predict what the cameras of the future would be like, describing a device "that absorbed photons from all angles at all times, turning the act of taking a picture into a computational problem of choosing which angle at which moment to visualize."
"We want 'Clouds' to feel like a continuous revelation in a space of ideas."
With its noted hacker-friendliness and its ability to capture three-dimensional images at 30 frames per second, the Kinect seemed an obvious choice to try to make that fiction a reality. George and Minard began beta testing their methods, using the process to document a Reddit AMA and an art hackathon at nearby gallery space 319 Scholes. But eventually the stylized footage led to bigger ideas.
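To get a feel for what the Kinect contributes here: each frame it captures is a depth map, which standard pinhole-camera math turns into a cloud of 3D points that can then be viewed from any angle. The sketch below is purely illustrative — the intrinsic values are ballpark figures for the original Kinect depth camera, not George and Minard's actual calibration, and `depth_to_points` is a hypothetical helper, not part of their RGB+D toolkit.

```python
# Back-projecting a Kinect-style depth map into a 3D point cloud.
# FX/FY/CX/CY are assumed pinhole intrinsics (focal lengths and
# principal point, in pixels) for a 640x480 depth image.
FX, FY = 594.2, 591.0
CX, CY = 320.0, 240.0

def depth_to_points(samples):
    """Turn ((u, v), depth_in_metres) samples into (x, y, z) points."""
    points = []
    for (u, v), z in samples:
        if z <= 0:  # the Kinect reports 0 where it has no depth reading
            continue
        # Standard pinhole back-projection: pixel offset from the
        # principal point, scaled by depth over focal length.
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points.append((x, y, z))
    return points
```

A pixel at the image center with one metre of depth lands at (0, 0, 1) in camera space; once every frame is a point set like this, "choosing the camera angle" really does become a rendering decision made after the fact, which is the premise the whole project builds on.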
"The fact that we were already visualizing this data in a virtual environment resembling a video game, with unlimited possible camera positions and visual styles, had opened up our thinking," write George and Minard in an email interview. "It dawned on us that the conversation we were recording had begun to resemble an interconnected network."
The two artists have begun parsing the various nodes of that network into conversation "seeds," allowing the viewer to ask questions and follow various digressions as the conversation progresses. The end result will be an executable program generating a virtual space in which viewing angles can be changed and individual ideas pursued. Suddenly, the videography ceases to be a lightly interactive depiction of events and starts to become more like a computer simulation, navigated in real time.
"The goal of building such a system is to leave the possibility space wide open, to create an experience that feels spontaneous and full of surprises ... We want Clouds to feel like a continuous revelation in a space of ideas."