This year I am pleased and honored to be conducting a three-day camera and visual storytelling workshop at PDXFF16. This fun hands-on event will take place from August 31 – September 2 at the Pro Photo Event Center in the city’s NW Pearl District, 1801 NW Northrup Ave, Portland OR 97209.
While the workshop is designed primarily for aspiring camera craftsmen and cinematographers, the course is just as applicable to filmmakers of every stripe, including actors and screenwriters, and to visual storytellers in all media.
For more information and to register, see the general festival link: [www.portlandfilm.org] and the event-specific link: [https://www.eventbrite.com/e/professional-camera-visual-storytelling-tickets-26573603363]
This summer I am again in East Africa leading a camera and visual storytelling workshop at the Zanzibar International Film Festival. The hunger for knowledge in this part of the world never ceases to amaze me, as my students demonstrate an eagerness to learn and practice the fundamentals of good visual storytelling and effective camera operation.
3D's most recent demise has been nothing short of astonishing. As recently as four years ago 3D seemed on top of the world and gaining momentum, with filmmakers and TV broadcasters around the world seemingly poised to jump on the bandwagon. Major manufacturers like Sony and Panasonic poured tens of millions of dollars into new cameras, rigs, and displays, and subsidized to a huge extent startup venues like Sky 3D and DirecTV's Channel 101. By January 2013 virtually every large-screen display sold in the USA offered a 3D capability.
Then what happened? Despite the manufacturers' best efforts and massive financial investment, the public in Western countries never really cared for 3D. In Asia the public's view was more positive, but the interaxial handwriting was already on the wall in the principal markets of the USA and Europe.
Today, save for the few studio tentpole movies still distributed in 3D, the format has become all but irrelevant for most non-theatrical applications. You can blame it on the uncomfortable glasses, the underpowered, insufficiently bright displays, or the poorly developed skills of 3D filmmakers who, not understanding the physiological impact of stereo viewing, unwisely opted for maximum depth at the price of viewer comfort. It goes without saying that inflicting riveting pain on one's audience is not a good way to win its loyalty and affection!
Since the dawn of painting and photography the challenge for artists has always been how best to represent the 3D world in a 2D medium. Because the world we live in has depth and dimension, our filmic universe is usually expected to reflect this quality, presenting the most lifelike three-dimensional illusion possible in which our screen characters can live, breathe, and operate.
In the 2D world of cinema and TV the camera craftsman uses mainly texture and perspective to foster the desired three-dimensional illusion. While the 3D shooter makes use of many of the same tools, the stereo format inherently goes a long way toward promoting the feeling of a real-world experience. In fact the 3D shooter must often mitigate aggressive depth cues, as forced perspective can be very painful to viewers.
As a cinematographer and 3D specialist I attribute the format's lack of public acceptance to something rather fundamental. Of course the '3D' format isn't really 3D at all but stereo, which is much less immersive. Viewing a movie or TV broadcast in stereo requires substantial viewer effort, relying on a gimmick or 'loophole' in human physiology that allows viewers to separate focus from convergence: our eyes stay focused on the screen plane while converging on objects that appear in front of or behind it. As it turns out, a large part of the audience is simply unable or unwilling to perform this unnatural act of forming a 3D image in the mind; it can be tiring or painful, and not at all conducive to what is supposed to be an entertaining experience.
In spite of all this, the savvy cameraperson today understands that the central lesson of 3D, i.e. communicating the maximum number of depth cues to viewers, can greatly enhance the impact, breadth, and effectiveness of our traditionally composed 2D scenes.
On many shows we are increasingly shooting RAW and/or a multitude of compressed formats, with different cameras utilizing different flavors of log. ARRI Alexa, Canon C300, Sony FS7, Panasonic VariCam, GoPro, Blackmagic, DSLRs – just keeping all the color spaces and log profiles straight can be a major challenge.
The Academy Color Encoding System (ACES) has simplified things for folks at the high end of the food chain. But for the rest of us toiling in typical broadcast productions and independent features, the convoluted post-camera management of color has become an unwelcome hassle, with the wholesale wrangling of .cube, .aml, and .ctl files.
Thankfully we now have Lattice, a powerful and versatile LUT management tool that greatly minimizes this ongoing hassle. To be clear, it is not intended to compete with or replace grading applications like DaVinci Resolve. Think of it more as a LUT Swiss Army knife, able to view, transcode, and conform a wide array of color spaces and profiles.
So if you're shooting B-roll on a Canon C300 Mark II and need to conform to an A camera that is a Sony FS7, Lattice can convert the Canon Log2 files to Sony S-Log, which you can then import into Resolve for simple, straightforward color grading in a single consistent color space.
The Mac-based app features a very straightforward interface that offers plenty of hooks for tweaking. As cameras like the Panasonic VariCam 35 enable the creation of 3D LUTs in camera, it becomes a simple matter in Lattice to convert Panasonic's V-Log LUT to something else, like Sony S-Log or Blackmagic's Film Emulation (BMD) LUT.
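For the curious, here's roughly what happens under the hood whenever a tool applies a 3D LUT. The sketch below is my own illustration in Python, not Lattice's actual code: it parses the simple text-based .cube format and maps a float RGB image through the table with a crude nearest-neighbor lookup (production tools interpolate trilinearly for smooth results). The LUT filename in the comment is hypothetical.

```python
import numpy as np

def load_cube(path):
    """Parse a .cube 3D LUT into an (N, N, N, 3) float array.
    Per the cube spec, red varies fastest in the data block."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith('#'):
                continue
            if parts[0] == 'LUT_3D_SIZE':
                size = int(parts[1])
            elif parts[0][0] in '-.0123456789':
                rows.append([float(v) for v in parts])
    table = np.asarray(rows, dtype=np.float32)
    return table.reshape(size, size, size, 3)  # indexed [b, g, r]

def apply_lut(image, table):
    """Map a float RGB image in [0, 1] through the 3D LUT using
    nearest-neighbor lookup; real tools interpolate trilinearly."""
    n = table.shape[0]
    idx = np.clip(np.rint(image * (n - 1)).astype(int), 0, n - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]

# e.g. graded = apply_lut(frame, load_cube('clog2_to_slog3.cube'))
```

Even this toy version makes clear why a dedicated manager is welcome: every camera maker wraps essentially the same idea, a big lookup table, in its own file format and quirks.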
Lattice doesn't entirely eliminate the complexity and hassle of wrangling LUTs post-camera, but it sure makes the ordeal a whole lot easier.
I use it regularly and recommend it.
With the advent of post-camera filter software like Tiffen's Dfx4 and powerful color grading tools like DaVinci Resolve, it's easy to see how some shooters, even some relatively accomplished ones, no longer see the need for an on-camera filter. But this can be a big mistake, especially given today's onslaught of low- and mid-level 4K camcorders and DSLRs that produce overly harsh albeit very high-resolution images.
Truth is, the impact of some optical filters cannot be effectively recreated in post. The polarizer, for example, is the only filter capable of increasing contrast and resolution; it is simply not possible to add picture detail post-camera if the detail was not captured in the first place.
Then there is the matter of finishing filters like the Tiffen Satin and Black Satin, and the Schneider Digicon and Black Magic types. These filters have become more or less obligatory in recent years to mitigate the clinical, brash look characteristic of many low-cost 4K cameras. The Blackmagic URSA, AJA Cion, and other relatively economical camcorders and DSLRs often exhibit very poor shadow integrity along with a harsh roll-off in the highlights, which can only be effectively ameliorated by a proper finishing filter.
Today's sharper diffusion filters from Tiffen, Schneider, and others produce very little scatter and halation and are, for all intents and purposes, invisible to the viewer, while yielding a more flattering look with lower noise in the shadows and smoother, more pleasing highlights. Most importantly, these filters maintain sharpness in the pupil of the eye while subtly blending and softening skin tones around the eye sockets and face. This is possible because these filters are designed from the outset with the proper telecentricity to accommodate modern digital sensors with deep-bucket photosites.
Older-series Black Pro-Mist, Soft/FX, Fog, and Double Fog filters produce too much scatter to be useful with modern cameras fitted with CMOS sensors. Still, I find a 1/8 Tiffen Black Pro-Mist can be useful for matching certain vintage optics, like an old Cooke zoom, to latest-generation Zeiss CP.2 primes.
Around the country and the world, Blackmagic seems to have no trouble attracting folks to its showcase events, like this one in Burbank last Wednesday.
The winning formula? Okay, a free lunch certainly helps.
But so does appealing to the burgeoning number of former DSLR shooters and low- to mid-range producers looking to move up to more professional gear. The low end of the market has now all but displaced the once-dominant mid-range corporate and broadcast segments that were Sony and Panasonic's bread and butter.
Responding to the realities of the market, Blackmagic is capitalizing on this newest trend in a big way.
The data and power cables poking in and out of my MacBook Pro are pure crap. Take a look at how I'm managing my AC charger cord: I've added three cable ties over a sleeve of black camera tape to ensure adequate stress relief and thus forestall the eruption of arcs and sparks from a broken connection.
Why can’t cable manufacturers simply provide sufficient stress relief in the first place?
The matter of diseased, trouble-prone cables is not a new phenomenon. After 35 years in the business I know from hard-won experience that 90% of failures in the field are due to defective cables and connectors. Thinking about it now, my ARRI 16SR in 1976 was a true godsend. The camera utilized onboard batteries that completely eliminated failure-prone cables and plugs, and it gave me great peace of mind that this disproportionate cause of failure was gone forever.
Thunderbolt is an impressive technology, and we've grown to rely on its less-than-robust cables for our most critical tasks, from off-loading original camera footage to preparing our backup volumes. With data rates up to 40Gbps in Thunderbolt 3, the flow of data is sensitive to a range of cable snafus, especially at the point where the cable enters the rigid connector, where most fatigue failures occur.
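Because so much can go wrong between the card and the backup volume, I never trust a copy I haven't verified. Here is a minimal sketch of that safeguard in Python, my own illustration rather than any particular offload tool: copy each file, then re-read both sides and compare checksums so a flaky cable can't silently corrupt the transfer.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path, chunk=8 * 1024 * 1024):
    """Stream a file through SHA-256 in 8 MB chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload_and_verify(card_dir, backup_dir):
    """Copy every file from the card, then re-read both copies and
    compare checksums to catch corruption introduced in transit."""
    for src in Path(card_dir).rglob('*'):
        if not src.is_file():
            continue
        dst = Path(backup_dir) / src.relative_to(card_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if file_hash(src) != file_hash(dst):
            raise IOError(f'Checksum mismatch: {src}')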
For manufacturers, the cost of producing cables with proper stress relief might amount to a few pennies per unit. But for shooters and content creators whose business is capturing, transferring, and managing critical data, the extra dollar or two per cable at retail is worth it, if only to avoid the indignity of affixing a raft of cable ties simply to ensure a solid, reliable connection.
One of the truisms of our profession has long been that a camera system is only as good as its optics. While computational, non-optical approaches in devices like the iPhone have obviated the need for finely crafted optics in some applications, the demand for high-performance glass persists at the high end of our business, especially in light of the latest 4K and higher-resolution cameras.
The new Sony PXW-FS7 4K camcorder is well-balanced and robust, with superb ergonomics. Particularly notable, however, is Sony’s own 28-135mm F4 lens that comes with it. It is not the typical crappy package lens that usually accompanies new mid-range cameras.
Employing precisely sculpted aspheric elements and low-dispersion glass, the FS7 4K zoom represents a real breakthrough in economical Super 35mm lens technology. Remarkably free of chromatic aberrations – the main reason cheap lenses look cheap – the lens compares favorably with much pricier optics. Its relatively slow F4 maximum aperture poses less of a challenge these days for shooters employing cameras like the FS7 that shoot virtually noise-free at ISO 2000 and higher.
Sony's lens uses precise servo control of zoom and focus to enable a constant F-stop throughout the zoom range. Some shooters will object to the limited 5:1 zoom for documentary work, but keep in mind this is a large-format zoom. If you're looking for 23x capability you should consider a 2/3-inch camera like the VariCam HS, which is ideal for travel, sports, and certain wildlife titles that tend to employ long telephoto zooms.
With the advent of extreme low-light cameras like the ISO 5000 VariCam 35, the imperative to exclude, exclude, exclude unhelpful story elements from the frame takes on a more dramatic dimension. Shooting with the new VariCam in very low light at night, for example, means that streetlights suddenly appear blown out, the night sky over major cities is way too bright, and the intensity of incidental TV monitors and screens in the background must be substantially reduced.
This way of working upends how camera folks have managed light until now. In the film days we needed light, and plenty of it, to gain even a minimum exposure. Moderately sensitive digital cameras like the Alexa of several years ago gave us the freedom to illuminate scenes with smaller, low-wattage, more economical, cooler units. A blessing to be sure!
Today, given the latest ultra-sensitive digital cinema cameras entering the market, we are back to spending a lot of time addressing the lighting in our setups, not so much by adding massive hot lights, of course, but by employing more extensive lighting control; that is, by taking light away.
Given the new technology, my old harangue to students to exclude everything not supportive of the visual story has taken on even greater relevance and urgency.
I’m in Abu Dhabi this week continuing my ongoing travels around the globe offering camera workshops and pontificating about one irrelevant thing or another. This morning a major sandstorm blew in from Saudi Arabia and completely enveloped the NYU campus where I’m currently holed up.
The extremely fine sand, less than 50 microns in diameter, is highly abrasive and can lodge dangerously deep in one's lungs. It is also damaging to cameras and especially camera lenses, a lesson learned the hard way by many filmmakers and shooters in the region who are drawn inexorably, as I am, to the eerie, otherworldly feeling it imparts to the landscape.