
3D Not EZ: A New Way of Seeing

NAB 2010. I’m in Las Vegas now, strolling the Strip with Panasonic’s new one-piece 3D camera. It’s my second night out, and while my 3D skills are improving, I’m finding the whole experience rather humbling, with only a few usable shots mixed in with a lot of rubbish.

In the first place, the Panasonic AG-3DA1 camera is not a run-and-gun type affair. Rigorous control of the frame boundary, convergence point and placement of objects is imperative, as is the use of a steady tripod, to support a compelling (i.e. non-sickening) 3D experience. Thousands of drunken strollers cutting into and out of frame at all angles do not a pleasant 3D experience make!

Unlike more sophisticated rigs that use twin-mounted cameras, a beam splitter and precise control of the distance between the left and right “eyes”, the A1 uses a single twin-lens system with a fixed interocular setting. The advantage is greatly simplified operation, with key parameters such as image distortion and rotation automatically addressed inside the front-mounted binocular housing. The downside of the fixed distance between the left and right eyes is the inability to converge on objects closer than 8 feet (approximately 2.4 meters). Indeed, a shooting distance of 10-100 feet (approximately 3-30 meters) is ideal for producing the most compelling 3D images with the Panasonic A1.
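
For the technically curious, the geometry behind these distance limits is easy to sketch. The little Python calculation below uses a simple parallel-lens model with an assumed interaxial spacing, focal length and sensor width (illustrative guesses, not Panasonic’s published figures) to estimate on-screen parallax as a percentage of image width, and shows how quickly an object closer than the convergence point blows past the commonly cited 1-2% comfort budget.

```python
# Back-of-envelope stereo parallax sketch -- a simple parallel-lens model, not
# Panasonic's internal math. Interaxial, focal length and sensor width below
# are assumed values for illustration only.

def screen_parallax_pct(object_dist_m, convergence_dist_m,
                        interaxial_m=0.06,     # assumed lens spacing, roughly eye width
                        focal_mm=8.0,          # assumed focal length
                        sensor_width_mm=5.0):  # assumed sensor width
    """Approximate on-screen parallax as a percentage of image width.

    Negative -> object floats in front of the screen plane (negative space).
    Positive -> object sits behind the screen plane.
    """
    f = focal_mm / sensor_width_mm  # focal length as a fraction of frame width
    return 100.0 * f * interaxial_m * (1.0 / convergence_dist_m - 1.0 / object_dist_m)

if __name__ == "__main__":
    converge_at = 5.0  # metres, well inside the ~3-30 m sweet spot
    for z_m in (1.0, 2.4, 5.0, 10.0, 30.0):
        p = screen_parallax_pct(z_m, converge_at)
        zone = "in front of screen" if p < 0 else "behind screen"
        print(f"object at {z_m:5.1f} m -> parallax {p:+.2f}% of width ({zone})")
```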

Objects floating ahead of the convergence point in negative space can produce a feeling of nausea in the viewer, as can objects cutting through the frame boundary that appear in one eye and not the other. These “window violations” are a constant menace for 3D shooters, and a major reason why documentaries shot vérité-style with a 3D camera are inherently impractical.
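
A crude way to think about it: a window violation is simply an object in negative space that gets clipped by the left or right frame edge. The toy check below (positions as fractions of frame width, parallax convention as in the sketch above) flags exactly that situation; it is an illustration of the idea, not a production tool.

```python
# Toy check for edge ("window") violations: an object floating in front of the
# screen plane that is clipped by a lateral frame edge. Illustrative only.

def window_violation(left_edge, right_edge, parallax_pct):
    """True if an object in negative space is cut off by the frame's left or right edge.

    left_edge / right_edge are the object's horizontal extent as fractions of
    frame width (0.0 = left edge of frame, 1.0 = right edge).
    """
    in_front_of_screen = parallax_pct < 0
    clipped_by_frame = left_edge <= 0.0 or right_edge >= 1.0
    return in_front_of_screen and clipped_by_frame

# A passer-by cutting in from frame left while floating well in front of the
# screen plane -- the classic Strip-at-night window violation.
print(window_violation(left_edge=0.0, right_edge=0.25, parallax_pct=-4.5))   # True
print(window_violation(left_edge=0.30, right_edge=0.55, parallax_pct=-4.5))  # False
```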

The close-focus prohibition can be highly disconcerting to accomplished shooters long accustomed to working optimally at a range of six to eight feet (1.8-2.4 meters), invariably capturing close-ups at this distance to gain proper perspective with minimum distortion.

Accomplished 2D shooters are also long accustomed to anchoring the frame by placing objects at or near the frame edges. We trim head room as objects approach the camera, and use copious foreground action to enhance the three-dimensional illusion.

These techniques are neither applicable nor advisable with an actual 3D camera, especially one with a fixed interocular distance like the A1. Indeed, the veteran 2D shooter can forget about many of those hard-learned depth-inducing techniques: 3D requires far more rigorous control of the frame and frame boundary than any of our 2D work has demanded until now.

One area requiring specific expertise is setting and re-setting the 3D camera’s focus and convergence points, often simultaneously, mid-scene; the convergence point is typically set first and represents the screen plane. The 3D shooter must therefore always be cognizant of objects crossing behind and in front of the screen plane/convergence point; this awareness helps minimize, as far as possible, the discomfort viewers can experience when objects float in negative space ahead of the convergence point.
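
To make that concrete, here is a small sketch reusing the same toy parallax model, with an assumed 2% comfort ceiling (a common rule of thumb, not a Panasonic figure). It shows how pulling the convergence point mid-scene changes whether a fixed subject sits behind the screen plane or floats in negative space.

```python
# Screen-plane bookkeeping with the same toy parallax model as above.
# The 2% "comfort" ceiling is a common rule of thumb, not a published spec.

def screen_parallax_pct(z_m, c_m, interaxial_m=0.06, focal_mm=8.0, sensor_width_mm=5.0):
    # Approximate on-screen parallax as a % of image width (assumed optics).
    return 100.0 * (focal_mm / sensor_width_mm) * interaxial_m * (1.0 / c_m - 1.0 / z_m)

def classify(object_dist_m, convergence_dist_m, comfort_pct=2.0):
    p = screen_parallax_pct(object_dist_m, convergence_dist_m)
    if p < 0:
        zone = "negative space (in front of screen plane)"
    elif p > 0:
        zone = "positive space (behind screen plane)"
    else:
        zone = "on the screen plane"
    warning = " -- exceeds comfort budget" if abs(p) > comfort_pct else ""
    return f"{zone}, parallax {p:+.2f}%{warning}"

# Pulling convergence from 5 m out to 10 m mid-scene pushes a subject standing
# at 6 m from just behind the screen plane into negative space.
for c_m in (5.0, 10.0):
    print(f"converged at {c_m:4.1f} m: subject at 6 m -> {classify(6.0, c_m)}")
```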

Shooting proper 3D will require training and close attention to imaging fundamentals. Even for the most accomplished among us, the new 3D models, including the A1, will prove humbling at first as we work out how much (or how little) of our previous knowledge and experience can safely be applied.
