Does the Continuously Variable ND Filter Spell the End of the Lens Iris?

The advent of the continuously variable neutral density filter (VND) raises an interesting question: Do we still need a lens iris at all? The iris ring and its underlying mechanism, fitted with fragile rotating blades, are at constant risk of damage from shock, moisture, and dust penetration; the iris itself adds cost and complexity while also reducing sharpness through diffraction artifacts when stopped down. The new Sony FS7 II features a VND that isn’t quite efficient enough to eliminate the lens iris entirely – but the day may be coming. For shooters, this could mean better-performing, sturdier lenses at lower cost.

The Sony FS7 II VND offers insufficient light attenuation to eliminate the lens iris entirely – but the day may be coming.

Exercise Your SSDs! Like Any Other Storage Drive

When designing a drive, engineers assume that some energy will always be applied. Leaving any drive on a shelf without power for years will almost certainly lead, eventually, to lost data. For those of us with vast archives stored on umpteen drives sitting idle in a closet or warehouse, the failure of a single large drive or RAID array can be devastating.

Mechanical drives are inherently slow, relatively speaking, so moving large files can significantly reduce one’s productivity. Beyond that, the high failure rate of mechanical drives is an ongoing threat. The gap between the read/write heads and the spinning platters is measured literally in wavelengths of light, so there isn’t much room for displacement due to shock. This peril from inadvertent impact comes in addition to the normal wear and tear of mechanical arms sweeping continuously back and forth inside the drive.

For documentary shooters operating in a rough-and-tumble environment, the move to flash storage is a godsend. SSDs contain no moving parts, so there is little worry about dropping a memory card or SSD while chasing a herd of wildebeests, or operating a camera in a high-vibration environment like a racecar or fighter jet.

SSDs also offer up to 100x the speed of mechanical drives, so it’s not surprising that HDDs are rapidly losing relevance. The lower cost per gig still makes mechanical drives a good choice for some reality TV and backup applications, but as camera files grow larger with higher-resolution 4K production, the SSD’s greater speed becomes imperative in order to maintain a reasonably productive workflow.

Right now the capacity of mechanical drives is approaching its upper limit. Due to heat and physical constraints, only so many platters can be stacked one atop the other inside an HDD. SSD flash memory, on the other hand, may be stacked in dozens of layers, with each new generation of module offering a greater number of bits. Samsung, for example, is moving from 256Gb flash chips in 48 layers to 512Gb chips in 64 layers.

So how much should you exercise your HDDs and SSDs? Samsung states its consumer drives can safely be left unpowered for about a year. In contrast, enterprise data drives found in rack servers are designed for heavy use with continuous data loads, and offer only a six-month window of reliability without power. Such guidelines are vital to keep in mind: some shooters may not touch a particular drive or memory card for many months or even years, and for them it is important to power up their SSDs from time to time to ensure satisfactory performance and reliability.
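To make the schedule concrete, here is a minimal sketch (in Python) of the power-up bookkeeping those guidelines imply. The one-year and six-month windows mirror the figures above; the dates and drive inventory are purely hypothetical.

```python
from datetime import date, timedelta

# Maximum recommended unpowered interval, per the guidelines above.
POWER_UP_INTERVAL = {
    "consumer": timedelta(days=365),    # ~1 year for consumer drives
    "enterprise": timedelta(days=182),  # ~6 months for enterprise drives
}

def is_due(last_powered: date, drive_class: str, today: date) -> bool:
    """Return True if the drive has sat unpowered past its safe window."""
    return today - last_powered > POWER_UP_INTERVAL[drive_class]

# A hypothetical archive drive, untouched for about 17 months:
print(is_due(date(2015, 1, 1), "consumer", date(2016, 6, 1)))  # → True (overdue)
```

Anyone with a closet full of archive drives could run something like this against a simple inventory list once a month.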

One other thing: SSDs have a limited life expectancy. The silicon in flash memory supports only so many write cycles, and will eventually wear out. As a practical matter, however, the EOL of solid-state drives should not pose much of a problem. Depending on the load and level of use, most consumer SSDs writing 10-20GB per day have an estimated EOL of 120 years. Most of us, I would think, will have replaced our cameras, recording media, and storage drives long before then.
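The arithmetic behind such EOL estimates is simple division: rated write endurance over daily write volume. The endurance figure below is a hypothetical illustration, not a spec for any particular drive, but it shows how 10-20GB per day can stretch into a century.

```python
def ssd_lifespan_years(endurance_tb: float, daily_writes_gb: float) -> float:
    """Estimated years until the drive's rated write endurance is exhausted."""
    return (endurance_tb * 1000) / (daily_writes_gb * 365)

# A hypothetical 600 TB endurance rating at ~14 GB/day of writes
# lands near the ~120-year estimate cited above.
print(round(ssd_lifespan_years(600, 14)))  # → 117
```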

All drives, including SSDs, require regular exercise. Ordinary consumer drives should be powered up at least once a year to maintain reliable access to stored data. Professional series and enterprise-level drives require powering up twice as often, about every six months, to ensure maximum efficiency and long life.


For Better or Worse… QuickTime Exits the Scene

With the introduction of Imagine Products’ PrimeTranscoder, the QuickTime-based paradigm introduced over 25 years ago may finally be changing for video and web-publishing professionals. Forgoing the aging QuickTime engine, PrimeTranscoder utilizes Apple’s latest AVFoundation technology instead, taking ample advantage of GPU acceleration and multi-core CPU distribution, among other tricks. To its credit, PrimeTranscoder supports virtually every professional camera format, from Avid, GoPro, and RED, to MXF, 4K, and even 8K resolution files. Underlying this greater speed and efficiency is completely native codec support, sharing the 20 or so types also handled inside Imagine’s HD-VU viewer.

Since 1991, Apple’s QuickTime engine has served as the de facto go-between for translating digital media files, enabling manufacturers like Sony and Panasonic to encode and decode their respective codecs on user devices, from editing platforms to viewers. Abandoning the QuickTime engine, PrimeTranscoder is leading the charge, processing ProRes and H.264 files more quickly and efficiently, but there is a downside as well. Forgoing the old but versatile QuickTime library greatly complicates support for certain popular legacy formats like MPEG-2, which is still widely used for preparing DVD and Blu-ray discs, especially in parts of Africa and South Asia where Internet connectivity is poor or nonexistent. MPEG-2, like most legacy formats, is not currently supported in PrimeTranscoder.

PrimeTranscoder is a new application, still early in its development. Indeed, one key legacy format, Avid DNxHD, has been added to the latest PT release, which Imagine says required quite a back flip to accomplish without a QuickTime translation. Like it or not, shooters, producers, and content creators of every stripe are transcoding more files for a multitude of purposes: for dailies, for streaming, for DVD, for display on a cinema screen. Letting go of the legacy stuff, for all of us, is very hard to do.

PrimeTranscoder’s main window is simple and intuitive. No need to consult a manual or quick-start guide.


‘Legacy’ formats contained in the QT library, like MPEG-2 (for DVD and Blu-ray), are not supported in PrimeTranscoder. The latest PT update does support Avid DNx, so where there’s a will, there’s a way to support the old formats.


Use Broad Lighting to Increase Three-Dimensionality

As filmmakers and directors of photography, we are constantly working to recreate the 3D world in a 2D medium. We use a variety of techniques to accomplish this: maximizing the use of linear perspective, for example, or, when blocking actors, placing one slightly behind the other. For the same reason, when lighting scenes, we aim to maximize texture, carefully crafting the direction and character of the light as manifested in the shadows and highlights. Broad lighting can also help foster the (usually) desired three-dimensional illusion, by imparting a natural wraparound texture, especially in the hair and side of the face of female talent. The Rosco LitePad Vector CCT is particularly effective in this regard, given today’s ultra-sensitive, high-dynamic-range cameras.

The diminutive 8-inch x 8-inch LitePad is ruggedly manufactured and offers much better than average performance, especially in the tungsten wavelengths where many LEDs tend to struggle. The light’s output, while somewhat less than that of the larger, more common 1 x 1 units on the market, is ideal for wraparound lighting when shooting talking heads, or in a dark car at night.

The LitePad Vector CCT LED is especially well-suited as a side or backlight for interviews and standups.


The Vector LitePad exhibits good red saturation and produces very smooth skin tones, an imperative for shooters these days working exclusively with LED lighting.

Shooting in Log Makes Sense for Most Shooters

We all know that shooting in log can improve exposure latitude and dynamic range. In short, log capture allows us to record more professional-looking images, as the brightest highlights in sun-dappled scenes, for example, may often be accommodated without clipping or loss of detail.

Practically, shooting in log may also mean utilizing less fill light, which can be helpful on low-budget run-and-gun style productions. I always try to reduce the amount of gear on a set in general, and shooting in log can help cut the grip and lighting complement substantially – a good thing in my opinion.

Remember: more gear = less work accomplished.

To give you an idea of how I work in log: when shooting with the VariCam LT, for example, I assign Y-Get to User Button 1. Placing the EVF crosshairs over the brightest part of a scene and adjusting the iris until the reading is 69 IRE, the camera recording V-Log will accommodate 3-4 stops of additional latitude before clipping or a noticeable loss of highlight detail. Setting exposure in this way is simple and effective, and eliminates the need for external exposure meters and pricey reference monitors.
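For those who like to sanity-check latitude claims, stop arithmetic is straightforward: each stop represents a doubling of light, so n stops of headroom corresponds to a 2^n luminance ratio. A quick sketch, where the 12x ratio is an illustrative assumption rather than a VariCam spec:

```python
import math

def stops_of_headroom(clip_luminance: float, exposed_luminance: float) -> float:
    """Stops between an exposed highlight and the clip point (each stop = 2x light)."""
    return math.log2(clip_luminance / exposed_luminance)

# If the clip point sits at roughly 12x the luminance of the value
# placed at 69 IRE, that leaves about 3.6 stops of headroom.
print(round(stops_of_headroom(12.0, 1.0), 1))  # → 3.6
```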

While shooting in log makes sense for most shooters, broadcast news and some non-fiction shooters may find the hassle and inconvenience of working with LUTs not worth it, or even possible, given the tight time constraints and lack of serious post-production in most news and public affairs programming.

This scene captured in log is shown with and without a LUT applied. Despite the advantages of shooting in log, some producers may see working with LUTs as a hassle and an inconvenience.

Offloading Huge Camera Files Over Time

Since the advent of tapeless workflows it has been necessary to safely and securely offload camera-original footage from memory cards and onboard drives. For many of us this can be a process fraught with trepidation, which is why I and most industry professionals use Imagine Products’ ShotPut Pro to handle the offloading chore.

Of course we need an effective checksum to verify the integrity of the transfer, and SPP has done that very well for almost a decade. In that regard, ShotPut Pro version 6 retains the key checksum options, from fastest to slowest; most data wranglers I know opt for xxHash (the fastest option) or MD5.
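At its core, a checksum verification like SPP’s simply hashes both copies of a file and compares the digests. Here is a minimal sketch using Python’s built-in MD5 (xxHash would require a third-party library, and this is of course not Imagine’s actual implementation):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so multi-gigabyte camera files don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_verified(source: str, copy: str) -> bool:
    """A copy is good only if both files hash to an identical digest."""
    return md5_of_file(source) == md5_of_file(copy)
```

A single flipped bit anywhere in the copy changes the digest entirely, which is exactly why data wranglers insist on checksummed offloads rather than a plain Finder copy.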

One valuable new feature in ShotPut Pro’s latest version is the PAUSE & RESUME function, since on so many productions these days we are pressed for time and forced to interrupt the transfer of large data files. On many shows there are simply not enough hours in the day to permit the uninterrupted offload of gargantuan 4K and higher-resolution camera files.

Provided the application is not closed or quit, SPP6 will resume and complete the transfer of large files without reinitiating the entire job. SPP6 completes the file currently in progress, so pausing does not truncate or interrupt a file mid-transfer.

How many times have we been forced to move from hotel room to moving vehicle to a new hotel room while attempting to offload large camera drives and media cards? ShotPut Pro 6 addresses the needs of frazzled data wranglers in precisely this unenviable position.

ShotPut Pro 6’s new PAUSE & RESUME feature enables users to interrupt a long, data-heavy offload for completion later. This ability to transfer large files in intervals is long overdue!

My Upcoming Workshop at Portland’s Cool Film Festival

This year I am pleased and honored to be conducting a three-day camera and visual storytelling workshop at PDXFF16. This fun hands-on event will take place from August 31 – September 2 at the Pro Photo Event Center in the city’s NW Pearl District, 1801 NW Northrup Ave, Portland OR 97209.

While the workshop is designed primarily for aspiring camera craftsmen and cinematographers, the course is just as applicable to filmmakers of every stripe, including actors and screenwriters, and visual storytellers in all media.

For more information and to register, click on the general festival link: [] and the event-specific link: []

Scene: London Film School Camera Workshop 2015

Road to Zanzibar

This summer I am again in East Africa leading a camera and visual storytelling workshop at the Zanzibar International Film Festival. The hunger for knowledge in this part of the world never ceases to amaze me as my students demonstrate an eagerness to learn and practice the fundamental lessons of good visual storytelling and effective camera operation.

Zanzibar is located off the coast of East Africa at 6º south latitude. The light at dusk is positively mesmerizing. Here a traditional dhow passes off the coast of Stone Town, 8 July 2016.


Despite the many technical challenges, DSLR cameras dominate the filmmaking landscape in this part of the world.


The opportunity to integrate local color is a great advantage of shooting in Zanzibar.


Lulu. The star of our scenario. In real life she is a very popular fashion model.


The emphasis of local cinema is on actors and performance, which is as it should be. Technical issues aside, it is, after all, what audiences really care about.



3D’s most recent demise has been nothing short of astonishing. As recently as four years ago, 3D seemed on top of the world and gaining momentum, with filmmakers and TV broadcasters around the world seemingly poised to jump on the 3D bandwagon. Pouring tens of millions of dollars into new cameras, rigs, and displays, major manufacturers like Sony and Panasonic heavily subsidized startup venues like Sky 3D and DirecTV’s Channel 101. By January 2013 virtually every large-screen display sold in the USA offered a 3D capability.

Then what happened? Despite the manufacturers’ best efforts and massive financial investment, the public in Western countries never really cared for 3D. In Asia the public’s view was more positive, but the interaxial handwriting was already on the wall in the principal markets of the USA and Europe.

Today, save for the few studio tentpole movies still distributed in 3D, the format has become all but irrelevant for most non-theatrical applications. You can blame it on the uncomfortable glasses, the underpowered, insufficiently bright displays, or the poorly developed skills of 3D filmmakers who, not understanding the physiological impact of stereo viewing, unwisely opted for maximum depth at the price of viewer comfort. It goes without saying that inflicting riveting pain on one’s audience is not a good way to win its loyalty and affection!

Since the dawn of painting and photography, the challenge to artists has always been how best to represent the 3D world in a 2D medium. Because the world we live in has depth and dimension, our filmic universe is usually expected to reflect this quality, presenting the most lifelike three-dimensional illusion possible in which our screen characters can live, breathe, and operate most transparently.

In the 2D world of cinema and TV, the camera craftsman uses mainly texture and perspective to foster the desired three-dimensional illusion. While the 3D shooter makes use of many of the same tools, the stereo format inherently goes a long way toward promoting the feeling of a real-world experience. In fact the 3D shooter must often mitigate the use of aggressive depth cues, as forced perspective can be very painful to viewers.

As a cinematographer and 3D specialist, I attribute the format’s lack of public acceptance to something rather fundamental. Of course the ‘3D’ format isn’t really 3D at all but stereo, which is much less immersive. Viewing a movie or TV broadcast in stereo requires substantial viewer effort, relying on a gimmick or ‘loophole’ in human physiology that allows viewers to separate focus from convergence. As it turns out, a large part of the audience is simply unable or unwilling to perform the unnatural act of forming a 3D image in its mind; it can be tiring or painful, and not at all conducive to what is supposed to be an entertaining experience.

In spite of all this, the savvy cameraperson today understands that the lessons of 3D, i.e. communicating the maximum number of depth cues to viewers, can greatly enhance the impact, breadth, and effectiveness of our traditionally composed 2D scenes.

LUTs for ‘Luttites’

On many shows we are increasingly shooting RAW and/or a multitude of compressed formats, with different cameras utilizing different flavors of log: ARRI Alexa, Canon C300, Sony FS7, Panasonic VariCam, GoPro, Blackmagic, DSLRs. Just keeping all the color spaces and log profiles straight can be a major challenge.

The Academy Color Encoding System (ACES) has simplified things for folks at the high end of the food chain. But for the rest of us, toiling in typical broadcast productions and independent features, the convoluted post-camera wrangling of color and LUTs has become an unwelcome hassle, with wholesale juggling of .cube, .aml, and .ctl files.

Thankfully we now have Lattice, a powerful and versatile LUT management tool that greatly minimizes this ongoing hassle. To be clear, it is not intended to compete with or replace grading applications like DaVinci Resolve. Think of it more as a LUT Swiss Army knife, able to view, transcode, and conform a wide array of color spaces and profiles.

So if you’re shooting B-roll on a Canon C300 Mark II and need to conform to the A camera, a Sony FS7, Lattice can convert the Canon Log2 files to Sony S-Log, which you then import into Resolve for simple and straightforward color grading in a single consistent color space.

The Mac-based app features a very straightforward interface, which offers plenty of hooks for tweaking. As cameras like the Panasonic VariCam 35 enable the creation of 3D LUTs in camera, it becomes a simple matter in Lattice to convert Panasonic’s V-Log file to something else, like Sony S-Log or Blackmagic’s Film Emulation (BMD) LUT.
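For the curious, the .cube files that tools like Lattice and Resolve exchange are plain text: a header declaring the lattice size, followed by R G B triples with red varying fastest. The toy parser below, with nearest-neighbor lookup (a real grading tool would interpolate trilinearly), is only a sketch of the format, not how Lattice works internally:

```python
def parse_cube(text: str):
    """Parse a minimal 3D .cube LUT: a size header plus R G B rows, red fastest."""
    size, table = 0, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        else:
            parts = line.split()
            if len(parts) == 3:  # a data row; other header lines are ignored
                table.append(tuple(float(p) for p in parts))
    return size, table

def apply_nearest(size, table, rgb):
    """Look up the nearest lattice point for an input color in [0, 1]."""
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r + g * size + b * size * size]  # red varies fastest

# A trivial 2-point identity LUT: input colors map to themselves.
cube = "LUT_3D_SIZE 2\n" + "\n".join(
    f"{r} {g} {b}" for b in (0.0, 1.0) for g in (0.0, 1.0) for r in (0.0, 1.0)
)
size, table = parse_cube(cube)
print(apply_nearest(size, table, (1.0, 0.0, 1.0)))  # → (1.0, 0.0, 1.0)
```

A real camera LUT would populate the table with graded values instead of an identity ramp, but the file layout is exactly this simple, which is why so many tools can exchange .cube files.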

Lattice doesn’t entirely eliminate the complexity and hassle of wrangling LUTs post-camera, but it sure makes the ordeal a whole lot easier.

I use it regularly and recommend it.

Lattice is a simple, powerful LUT management tool. If you shoot with multiple cameras and employ various color spaces, Lattice will conform the different LUT flavors to a single format for grading inside DaVinci Resolve (or another color correction environment).