Once Upon a Time in London

Preparing for my trip to London on the week of June 20th made me reflect on the fact that my very first post on this blog was about another trip to London; a trip during which I pitched a feature/web series project to the director of development of a storied English production company. At the time I was still waiting to hear back, so I didn’t go into specifics.

However, enough time has passed that it seems right to tell the tale. It was a fantastic experience, and in particular it involves a funny story about using the wrong tool for the job, culminating in the unveiling of the animatic I created, posted via Vimeo down near the bottom.

Getting this pitch meeting was a four-year odyssey of networking, yearly trips to London, and the various slings and arrows of project development. However, once I was over the initial hump of getting my contact at the company to read my script, which had sat on their stack for the previous year, and was reasonably sure that I might actually get a meeting, I commissioned a pile of artwork to support my writer/director pitch. The project is a period gothic adventure–horror tale with some nice action set pieces. My short filmography doesn’t exactly include an action film, so I wanted to make it clear that I’m fully capable of directing thrilling sword-fighting scenes (being a fencer myself, I have a bit of insight).

I hired illustrator and storyboard artist Ryan Beckwith to turn my chicken-scratching thumbnail storyboards into nicely-illustrated presentation boards for the first three scenes. I commissioned concept art from Bay Area painter Anna Noelle Rockwell, who also did a series of costume illustrations for the main characters. I had a practiced pitch. I was loaded for bear.

And then I waited. For various reasons, my interactions with this company were on a yearly cycle, and I had plenty to work on in-between meetings, so I stored my cache of artwork and attended to other business. Once in a while I’d mull over whether or not to convert my presentation storyboards into a whiz-bang animatic, but I decided to skip it, thinking the comic book style presentation of my boards might be more fun to browse. Besides, Ryan and I were already up to our eyeballs planning Starship Detritus (update—the final coloring is nearly done, and we’re going to begin animating shots again for the pilot), so it was easy to relegate to the back burner.

So, as is the way with these things, I got the nod for the actual meeting at the last minute, pretty much a “fly out here in four days and you might get a chance to present” kind of deal. I threw everything together, bought some little easels for the paintings, and re-rehearsed my pitches. Again, I wondered, “should I put together an animatic?” but there was no time.

So, there I am in London, at an afterparty for the event that brought me out there, having gotten an actual appointment for my pitch meeting. I’m at a pub with another producer as well as my initial contacts at the production company, drinking and chatting about pitches, and they start talking about how great it is to pitch with an animatic. And there I sit, feeling like an idiot, since I had a whole year to put something together and I didn’t.

The day before my meeting, I took a walk in St James’s Park, mulling. I’d brought JPEG scans of the boards on my MacBook Air, and I could maybe whip something together, but I didn’t have Final Cut Pro installed because the Air was my writing machine. However, I did have Keynote. Could I do this in Keynote? Absurd! But that night, back at my hotel, I started poking at it, and sure enough Keynote had some rudimentary slide timing tools for autoplaying a presentation. I started putting something together.

Calling my wife, Kaylynn, back in the States, I had her email me a few Yoko Kanno tracks from my iTunes library that I knew, from memory, would fit what I wanted (Yoko Kanno was even part of my pitch, as I would have loved to have her score the project). In a fever, I put the whole thing together and tuned it up by 4am, then caught what little sleep I could manage.

The next morning, while packing up for the meeting, I took one last look at my hacked-together Keynote animatic. Was I crazy? Would this fly? I watched it, and, well, it was fun! Not perfect, but it would be a heck of a lot more interesting than flipping through my stack of boards.

With a song in my heart, I went to the meeting. What I thought was going to be a 15-minute quick pitch ended up being a fully-engaged hour-and-a-half meeting. I was ON FIRE, and the executive I was meeting with seemed interested. He had his hesitations, but there was a genuine back and forth. And he watched the animatic. All of it (and I did have my finger on the stop key, looking for the slightest sign of boredom which, thankfully, didn’t appear). What follows is an H.264 movie of my Keynote presentation, and while the timing isn’t quite the same, it’s pretty much what I presented in London (vaguely NSFW, I suppose).

After my return, it took four months for them to finally pass on the project, but I’m philosophical about the experience. I’m glad I got the opportunity to make the big pitch I’d prepared for, and for the record, this script isn’t dead. It’s back on my stack again, but I’ve a plan to rework it within a new context when the time seems right. I’ve had too much fun to give up on it now.

My apologies to Yoko Kanno for the unauthorized use of her tracks. However, if you like what you hear, and you enjoy soundtrack music, you owe it to yourself to check out her work, which is eclectic and wonderful.

Color Correction Handbook 2nd Edition: Grading theory and technique for any application.
Color Correction Look Book: Stylized and creative grading techniques for any application.
What's New in DaVinci Resolve 15: Covering every new feature in Resolve 15 from Ripple Training.
DaVinci Resolve Tutorials: Far ranging DaVinci Resolve instruction from Ripple Training.

How I Stopped Worrying and Learned to Love Grade-Linking

Here’s one for my Resolve-using brethren. I’ve been trying for some time to come to terms with Resolve’s wonderful, yet at times terrifying auto-linking feature. I’m talking specifically about the fact that, when you first conform an EDL to a Media Pool full of files, any clips in the session that link to the same Media Pool file are automatically linked together in the Timeline, such that changes to one clip automatically ripple to the other linked clips.

It’s all fun and games if you’re grading well-managed media, with strictly delineated coverage and no lighting changes in the middle of an angle (or, ugh, a take). For well-shot, well-organized projects, auto-linking is a GIGANTIC time-saver, and I love me some saved time.

However, if you’re editing documentary footage, or working on a project where multiple angles of coverage are combined into mixed content files that defy logic, then this mechanism can be more trouble than it’s worth. I’ve talked to more than one colorist who, upon conforming any EDL, immediately uses the Batch Unlink command to force all clips in a session to use Local grades, which are never linked among clips. In this way, it’s possible to work strictly clip by clip, linking only via manually created groups of your own choosing.

Another approach that I’ve used is to continue using the auto-linked Remote grades, creating new Versions of automatically linked clips that I need to individually tweak. You see, when you create a new Version, it’s no longer linked to the other clips and you can grade independently. The only problem with versions is that media management mechanisms such as the ColorTrace tool automatically link to the “Default Version,” regardless of the version you had set for that clip. If you never copy grades from a session in one project to a session in another project, then this is irrelevant. Until later you decide you want to…

However, I like auto-linking, and I want to use it until I exhaust its timesaving possibilities.

Which, it turns out, is easy to do.

Simply put, when I first conform an EDL, I go ahead and use the auto-linked Remote Versions to start grading. Each grade I make ripples happily out among sensibly-linked clips, and if there are any linked clips that I want to make an individual adjustment to, I create a new Version for that clip and do what I need to do. In this way, I rough out all of the grades on my first pass. At this point, I’m not looking for seamless shot-matching, I just want to get the major grades assembled and distributed out amongst as much of the timeline as possible.

THEN, when I’ve hit the wall in terms of what I can do with the linked grades in that session, I do the magic thing—Batch Copy.

Right-clicking the Thumbnail Timeline and choosing Batch Copy; it's like having my gravy and drinking it too.

Unlike Batch Unlink, which switches every clip in the session to a blank Local grade (you can switch back to your grades using Batch Link, by the way), Batch Copy copies every clip’s current version to a new Local version, which is now (by definition) unlinked. Every clip has exactly the same grade that it had before, except that now none of them are linked, which means I can start digging into the nitty-gritty of making all the tiny manual adjustments that will make each scene play smoothly.

To summarize—I start my first pass using Resolve’s auto-linked Remote grades, and then I Batch Copy to a new set of Local grades that I can individually tweak without worrying about rippled grades causing problems.
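For anyone who finds the Remote/Local distinction slippery, the two-pass workflow above can be boiled down to a toy data model. This is plain Python with illustrative class names of my own invention, not the Resolve scripting API: linked clips share a single Remote grade object, and Batch Copy snapshots each clip's current grade into a private Local copy.

```python
# Toy model of Resolve-style Remote vs. Local grades.
# Class and method names are illustrative only.

class Grade:
    def __init__(self, lift=0.0, gain=1.0):
        self.lift = lift
        self.gain = gain

class Clip:
    """A timeline clip. Clips conformed from the same source file share
    one Remote grade object; a Local grade is private to the clip."""
    def __init__(self, name, remote_grade):
        self.name = name
        self.remote = remote_grade   # shared among auto-linked clips
        self.local = None            # set by batch_copy()

    @property
    def grade(self):
        # A Local grade, once present, takes precedence over the Remote one.
        return self.local if self.local is not None else self.remote

def batch_copy(clips):
    """Like Batch Copy: snapshot each clip's current grade into a
    private Local version, breaking the link without changing the look."""
    for clip in clips:
        g = clip.grade
        clip.local = Grade(lift=g.lift, gain=g.gain)  # independent copy

# First pass: two clips from the same media file share a Remote grade.
shared = Grade()
a = Clip("scene1_shot3", shared)
b = Clip("scene2_shot7", shared)
shared.gain = 1.2          # ripples to both linked clips

batch_copy([a, b])         # second pass: unlink everything

a.grade.gain = 1.5         # tweak a individually...
print(b.grade.gain)        # ...b is unaffected, no ripple
```

The key behavior this models is that Batch Copy changes nothing visually; it only changes what subsequent adjustments affect.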

For the last few projects I’ve done, this has worked really well, and it seemed worth sharing.


Don’t Make My Tools Easy. Make Them More Fun.

An Open Letter to Developers of
Software Tools for Specialists

I’ve worked with a number of software companies over the years, on various applications invariably relating to media creation. I’ve never been able to settle on which is more fun, participating in the development of new tools, or using those tools to actually create media. In my case, I like both too much. It’s just so satisfying to make a suggestion to an engineer, and watch it get implemented knowing that the lives of all who use that software will get a little less tiresome.

The great thing about software is that, hardware limitations aside, you can make things work pretty much however you want, if only you’re clever enough to figure out how. Whenever you hear someone from a software company say “we can’t do that,” what they’re really saying is:

  • “We don’t have the time to figure out how to do that.”
  • “We don’t want to do that.”
  • “We could do that, but your performance would be
    so horrible that you’d wish we hadn’t.”
  • “We’re planning on doing that, but that’s number 287
    on a list of 500 planned feature requests.”

To be fair, these are all legitimate issues. However, when a team of developers decides it’s time to try and add a particular new feature, there’s tremendous freedom of implementation. You can design a feature to take as many or as few steps as you want, use exotic input devices or simple keystrokes, take advantage of pre-built interface widgets or design new ones yourself, in the eternal struggle to second-guess the user’s preferences for how to do things. Which brings me to the reason for bringing this up in the first place.

You see, developers have a dilemma. To make an application more easily “discoverable” to new users, you generally have to create a more obvious user interface, with highly visible controls that are obviously labeled, that lead to activities that are easy to figure out and accomplish through trial and error, and constrained enough to keep the user from doing anything too insane that might result in catastrophic failure. The result is often software that’s drop-dead simple to use (one hopes). This is a good thing for folks who want to sit down and get something simple done without a lot of hand-wringing, and there are ample examples of this type of software UI that I like and use.

On the other hand, if you want to make a software application capable of complex functionality that’s fast and efficient, that’s doable as well. Design an interface that does away with needless mouse clicks, create a fast way to trigger functions (often keyboard shortcuts) that invoke specialized, streamlined workflows with open-ended functionality that impose as few restrictions on the user as possible, with as much variation and flexibility packed into as few interface widgets as is feasible. The result can be software that’s incredibly powerful and fast to use, but inscrutable to the point of presenting a dauntingly steep learning curve. This is a good thing for power users who are willing to put the work in, intent on rocking their software like Jimmy Page playing the guitar, blasting through creative work while thinking up new ways of problem-solving.

My point is that “easy” UI and power-user UI are almost mutually exclusive. Finding a way to merge the two is, I believe, the great challenge of our age of software design. That’s because simple software, while fast to learn and easy to use, becomes frustrating once you’ve plumbed its depths and reached its limits, forcing an excessive number of mouse-clicks and other laundry-lists of steps that, while making those tasks easy to learn, now turn every simple thing you want to do into a chore.

On the other hand, software for power users can be maddeningly frustrating to learn. Without someone to show you the ropes, enrollment in a class, or the patience to plow through user documentation (please god let it be well written), you may sit there with your expensive new software application clicking and pressing keys until the sun goes down without any clue of how to proceed. Even worse, it could take you days or weeks of this kind of torture until you finally become proficient enough to get done what you need to do without too much hair-pulling and Google searching. That said, after a painful apprenticeship, you learn the magic keyboard shortcuts and mouse gestures, at which point you spend the rest of your career with that software flying through your tasks with joyful precision, with others gaping slack-jawed at your wizardry.

In terms of user interface design, it’s really, really hard to accomplish both: discoverable software that also allows you to fly through tasks as a power user once you’re ready to ditch the training wheels. I sometimes fear this Sisyphean task may be well nigh impossible, but I don’t want to believe that because, well, it’s software. We can design things any way we like, and maybe if someone clever enough and wise enough comes along, the feat could be accomplished.

In the meantime, software for postproduction finds itself split between these camps: simple-to-use software that’s frustrating to do big projects with, and power-user software that’s time-consuming to learn. This is the great challenge of software development as I see it, and is an honorable undertaking for any developer.

However, somehow another notion has crept into the collective unconscious of the software development community. This notion is that people want their tasks to be made easier.

Allow me to clarify, because this is important. It’s one thing to say, “I’m creating a music application, and I want the software to be easy to use for writing music.” It’s entirely another thing to say “I want to write a software application that makes writing music easy.”

To use another example I’m most intimately acquainted with, color correction is difficult. You have to learn obscure things. As a result, it’s tempting to want to develop software so that, with the click of a button, any shot is auto-magically corrected to look amazing, and then the lucky amateur can move on to something much more worthwhile, like drinking beer. And before you accuse me of beating up on any particular application, I’m not; every contemporary grading, editing, and photo manipulation tool I can think of has some manner of auto color-correction functionality. It’s a universal aspiration.

Now, I’m not knocking simple tools for non-specialists. That’s like complaining about automatic transmissions in cars (and I’ll freely admit that I prefer driving automatics, I’m no race car driver). One-click auto correct, template-driven video editing and compositing, audio auto-leveling and auto-mixing functions, auto-tune and quantization for music, and many other auto-magical features are wonderful things and allow folks that lack specialized skills to create interesting media. I use many of these myself, and I’m glad to have them when the time is right.

However, we specialists want more than one-click solutions to our problems. Frankly, those of us who are looking for the experience of “driving stick” within a particular domain of software are so inclined because we don’t want anyone telling us what to do. We want to find original solutions to our creative problems, or at least to imagine that we’re coming up with our own secret sauce version of whatever it is we’re trying to accomplish. In the process, we want to exercise multiple iterative variations, we want to make tweaks both gross and minute, and we want to do all of this as quickly as possible with the least amount of wrist strain since we’re doing whatever it is we do all day, all week, all month, and if we’re financially lucky, all year.

As a specialist myself, I speak for other software specialists, and not for casual users, when I say that I don’t want software to make things easy. Because the only way to make a task easy is to tell someone how to do it, or do it for them.

Using a pencil is easy. Any two-year-old can use the hell out of a pencil. Using a paint brush is easy, you just dip it in paint and start smearing. Swinging a hammer is easy. When I was seven and my dad was building a house, he gave me a hammer, a pile of nails, and a piece of two by four and I was a hammering fool.

You know what’s not easy? Writing a novel. Painting a portrait. Chiseling a sculpture. For all of these activities, the user interface is unbelievably easy, but the task itself is hard. It requires knowledge. It requires practice. It requires skill. Perhaps a bit of talent if you’re so lucky.

Easy to learn UI, but does it make the task easier? Should it?

And that’s great. The whole point of art, as I see it, is the doing of it. The process, the act of learning, growing, figuring out how to make the results more interesting, more exciting, more effective and unique.

I’m going to wax idealistic for a moment. Nobody ever set out to become an editor of film and/or video because they said to themselves “I think I can make a pile of money editing corporate communications videos.” People become editors when they’re exposed to the process of creating meaning by juxtaposing one shot next to another, and get hooked. They want to figure out how to do it better. They want to try creating the kind of meaning that is most interesting to them, be it documentary, short form, or long form narrative.

I believe this is true of all creative specialists: musicians, writers, colorists, mixers, compositors, and animators. We all use software, and we all want to do something new, something great, something interesting. Compositors and sound designers create new worlds of experience by building and combining layers of information in novel and interesting ways. Writers and musicians discover the ability to impart entirely new experiences to audiences using the deceptively simple tools of language and instrumentation. Colorists discover deeper and more effective ways of fine-tuning images to guide the audience’s emotional reception of a scene. All of these occupations can fill a lifetime with a completely engaging struggle for improvement.

Along the way we all take day jobs doing all kinds of work that pays the bills, but the one thing that, I hope, keeps most of us going is that we’re all striving to figure out how to do what it is we do better. The day I feel like I’ve plateaued at a particular task is the day I lose interest, turning to something else where I feel there’s more to explore.

What does this have to do with software? Puzzling out that first creative adjustment to the color and contrast of an image is the best part of my day. I don’t want an auto-correct button, because that robs me of the joy of discovering the best adjustment for the first shot of a brand new project.

Instead, I’d much rather developers found ways to make the software more fun to use. Find the creative obstacles that impede a particular task, and create software that minimizes or eliminates them. Figure out what specialists want to do when the software frustrates them, limits their ability or the expressiveness of the operation they’re trying to perform, and create new tools to overcome these limits that aren’t cumbersome to use. These are the areas that need real improvement.

A good friend of mine, Michael Wohl, is fond of saying that good software gets the hell out of your way and lets you do what you want to do. I think this is a terrific goal for any developer.

Furthermore, every software-driven workflow inevitably involves some amount of digital drudgery. If it’s possible to identify repetitive, non-creative busywork that drains the joy out of a specialist’s day, then that’s another target for improvement. Are there new tools that could be added that would make the process of exploring new ways of processing the image more enjoyable, open-ended, creative, and exciting? Those are the improvements I want, things that let me focus less on house-cleaning, and more on the creative aspects of the process.

Lastly, give me more creative tools. And make them fun to use while keeping them as customizable and open-ended as possible so I can find my own particular tweaks. I want to look forward to coming to work, opening your software, and finding new creative uses for interesting tools that perhaps nobody else has yet discovered.

Don’t make my job easier. I don’t want to paint by numbers. Make my job more fun and expand my toolkit of creative possibilities, and I’ll buy every single software upgrade you make.


Hands-On With an FSI Monitor

I finally got my hands on the new Flanders LM-2461W thanks to Dan Desmet, owner of FSI. Alas, being in the middle of a move and with many obligations at NAB this year, I didn’t have as much time to work with the monitor as I would have liked, but I still managed to fire it up, run a few scenes on it, do a bit of grading, and get a solid sense of the monitor’s performance.

If you’ve been reading my blog for a while, you’ll remember I gave this monitor a shout out in my “What (Inexpensive) Display Should I Buy?” article. I made the recommendation mainly because of the glowing reviews I’d gotten in conversation with Robbie Carman, who’s a satisfied owner of the previous model, and from positive written reviews by another FSI monitor owner, Walter Biscardi. Between that, and glimpses of various iterations of this display at past NAB shows, I was comfortable recommending it even though I didn’t have hands-on experience with it.

Well, now I have. But before I go into what I found appealing about the Flanders display, I want to make it clear that in today’s world of competing monitor technologies, display selection is a very personal thing. No two hi-fi enthusiasts are going to like the same speakers, and no two colorists are necessarily going to like the same display. Different display technologies have different characteristics, making them more useful for some applications than others. Some people find high-end LCD appropriate for their needs, while others prefer plasma, and still others are better off with a projector setup. I’ll address what I think the Flanders is most suited for later on.

My most basic advice is to never, ever buy a monitor without evaluating it in person first, in a reasonably suitable environment with appropriate lighting. This year at NAB Flanders had set up a proper, shaded viewing booth, appropriately shielded from ambient glare, and backlit to provide a suitable surround. That’s how you want to view the monitor, as it’s the only way to really see how its black level and contrast stack up in a real-world suite.

I’m going to start with my overview of what I found appealing about the Flanders 2461 display from the outside in.


First off, FSI does the right thing by providing a host of connectivity options. Where other companies have traditionally made you pay a premium for SDI or HD-SDI inputs, Flanders includes SDI, HD-SDI, Dual Link, and 3G (now standard with the 2461) digital inputs, as well as Y’PbPr (for you analog holdouts). There’s even a DVI input if you want to use the FSI as a computer monitor (although keep in mind that DVI is limited to 8 bits per channel). That makes the FSI monitors incredibly flexible for just about any postproduction application.

Given the built-in support for single and dual link HD-SDI (or either via 3G), corresponding settings supporting 4:2:2 and 4:4:4 monitoring (either Y’CbCr or RGB, depending on your signal path) can be found in the menus. Again, dual link 4:4:4 support was once a quite expensive option, so having it built in makes it possible for any shop to accurately monitor high-end HD and digital cinema signal paths.

Chroma sampling menu options

If you’re monitoring via Y’PbPr (the analog signal standard, as opposed to Y’CbCr, which is the digital signal standard), there are also all the options you’d want for analog monitoring, including SMPTE/N10 (with now standard 0 IRE setup, or black level), Betacam (with its now non-standard 7.5 IRE setup for North American analog Betacam output), and MII (with a 7.5 IRE setup similar to Betacam, but slightly different saturation levels).

As a side note (and I can’t stress this enough), if you’re monitoring for eventual output to analog NTSC Beta SP, then you’d use the Betacam setting. If you’re monitoring to eventually output via the SDI or HD-SDI outputs to any other format, or if you’re simply using the Y’PbPr connection for monitoring a digital signal, then use SMPTE/N10. Its 0 IRE setup is the standard for any application other than analog NTSC Beta SP.

Different options for analog component input
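If the difference between the two setup levels seems abstract, here's a minimal sketch of the underlying arithmetic (an illustrative formula of my own, not anything from the FSI manual): with 7.5 IRE setup, the black-to-white range of the analog signal is compressed into 7.5–100 IRE, while 0 IRE setup uses the full 0–100 range.

```python
# Illustrative mapping from a normalized video level to analog IRE,
# showing the effect of the "setup" (pedestal) black level.

def to_ire(level, setup_ire=0.0):
    """Map a normalized luma level (0.0 = black, 1.0 = white) to IRE,
    with black raised to setup_ire and white pinned at 100 IRE."""
    return setup_ire + level * (100.0 - setup_ire)

print(to_ire(0.0, setup_ire=7.5))  # Betacam black sits at 7.5 IRE
print(to_ire(0.0, setup_ire=0.0))  # SMPTE/N10 black sits at 0 IRE
print(to_ire(1.0, setup_ire=7.5))  # white is 100 IRE either way
```

This is exactly why mismatched setup settings make blacks look lifted or crushed: the same digital black lands at two different analog levels.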

Commensurate with the wealth of video inputs, this is very much a multi-format display, with support for SD, HD, and 2K formats of all standard frame rates, frame sizes, and interlacing standards. I’ll let their technical specs link speak for itself. Bottom line, you shouldn’t have a problem monitoring any format of digital or analog video with the 2461.


I don’t typically discuss the power connectors of post gear, but in addition to the standard three pin computer power plug, the Flanders display also supports 24V DC portable power for field use. In conjunction with other features I’ll discuss later, this makes the FSI a flexible display for a wide variety of production situations as well.

DC power connector for on-set use

Color Fidelity

One of the main points of buying a monitor like this is that it can be properly calibrated to the required video standards.

Flanders takes a unique approach to monitor calibration, which is to carefully precalibrate each monitor that leaves their warehouse to be ready for you to use, as is. At a later time, they offer recalibration on demand, with you shipping your monitor back to them for the service whenever necessary (monitor recalibration is free for the life of the monitor, except for the cost of shipping). However, I’ve been assured that the panels are very stable, so Flanders only recommends full recalibration every 18-24 months.

However, the luminance of the fluorescent backlighting can diminish very slightly over time with regular use (a matter of months). Before sending your monitor in for recalibration because of the backlight, there’s a DIY calibration option that’s designed to account for this using an inexpensive colorimeter (the X-Rite i1 and i1 Display 2 probes are approved). You must first calibrate the probe to your brand-new FSI monitor when you receive it; then you can use that probe to periodically check the luminance level of the monitor (Dan Desmet recommends every two months), using the monitor’s backlight setting to compensate for any aging of the fluorescent backlighting tubes.

When the time comes to send your monitor in for an overall recalibration, I’ve been told that turnaround is about 24 hours, depending on the method of shipping you decide to use. Flanders’ own calibration is done using a Minolta CA-210 or CA-310 (itself calibrated by FSI using an even more precise spectroradiometer), the results of which are used to generate a 3D calibration LUT that’s loaded directly into the display, accounting for any shifts within the panel or backlight that occur over time. Interestingly, the Flanders color engine is designed to use 64x64x64 LUTs. According to what I know about calibration LUTs, this is overkill, but hey, there’s nothing wrong with overkill if it doesn’t cost you anything extra, and the extra precision should make anyone sleep better at night.
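For the curious, here's a rough idea of what applying a 3D calibration LUT involves. This is a self-contained Python toy, not FSI's actual color engine: each incoming RGB triplet is located within a 64x64x64 lattice, and the output color is trilinearly interpolated from the eight surrounding lattice points.

```python
# Toy 3D LUT application via trilinear interpolation. A real display
# does this in hardware per pixel; this sketch just shows the math.

N = 64  # lattice size per axis, as in the Flanders color engine

def identity_lut(n=N):
    """Build an identity LUT: each lattice point maps to itself."""
    return [[[(r / (n - 1), g / (n - 1), b / (n - 1))
              for b in range(n)] for g in range(n)] for r in range(n)]

def apply_lut(lut, rgb, n=N):
    """Trilinearly interpolate an output color for a normalized RGB input."""
    coords = [min(max(c, 0.0), 1.0) * (n - 1) for c in rgb]
    base = [min(int(c), n - 2) for c in coords]    # lower lattice corner
    frac = [c - i for c, i in zip(coords, base)]   # position inside the cell
    out = [0.0, 0.0, 0.0]
    # Blend the 8 corners of the surrounding cell, weighted by proximity.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                corner = lut[base[0] + dr][base[1] + dg][base[2] + db]
                for i in range(3):
                    out[i] += w * corner[i]
    return tuple(out)

lut = identity_lut()
print(apply_lut(lut, (0.25, 0.5, 0.75)))  # identity LUT returns the input
```

In a real calibration LUT the lattice entries are measured corrections rather than an identity mapping, but the interpolation step works the same way, which is also why a denser lattice (64 points per axis versus the more common 17 or 33) means less interpolation error between measured points.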

Flanders’ calibration service is a significant selling point, as the cost of purchasing your own high-quality colorimeter (to measure the display’s color characteristics), software for measurement and LUT generation, and an outboard video processor to apply the results (were you to use a different type of display lacking onboard LUT processing) could easily run you from $8K to $16K (USD) depending on the vendors you use (I’ve written an overview of the LUT calibration process here). Flanders takes this burden upon themselves so you don’t have to, and that’s a pretty good deal in and of itself.

The panel of the 2461 has a wide enough gamut to support a variety of standardized color spaces. These include:

  • Wide Gamut — The uncalibrated, native gamut of the panel used in the 2461.
  • SMPTE C — The gamut of the phosphors used by broadcast CRTs (such as the Sony BVM series).
  • Rec 709 — The published gamut standard for HD video.
  • EBU — Another gamut defined by CRT phosphors that was standardized by the EBU. It’s similar, but not identical to, the SMPTE C phosphor gamut.
  • DCI P3 — The gamut defined by the Digital Cinema Specification, for digital distribution and projection in theaters. Flanders says the 2461 supports 97 percent of the overall P3 gamut, being shy primarily in the most saturated greens. This is similar to the HP DreamColor’s stated 97 percent support of the gamut, and is pretty good. However, I suspect most users of this monitor will be most interested in the Rec 709 setting.

All of these are available via a convenient on-screen menu.

Options in the color space menu

Gamut is only one characteristic of professional display adjustment. Another is Gamma, which is a whole other topic. Suffice it to say the current thinking of some prominent industry experts is that, despite some disagreement between postproduction professionals in different segments of the industry, the preferred standard for the gamma of monitors displaying Rec 709 is 2.4 (when backlit and with subdued lighting). This has apparently been ratified by the International Telecommunications Union (ITU).

The most appropriate gamma to use also depends on the display’s color space. The published gamma standard of a display calibrated to DCI P3 is 2.6 (in a blackout theater environment). Apparently, audience testing has shown that higher gamma values look better in darker viewing environments, while lower gamma values look better in lighter viewing environments.

However, opinions still vary. For example, the EBU seems to have standardized on 2.35 for consumer displays (see the EBU TECH 3321 document). Flanders takes the high road by providing adjustable gamma anywhere from 1.0 to 2.8. Why so much adjustability? Well, if you wanted to use this monitor as a computer display, you could set it to the current 2.2 gamma standard of Windows or OS X.

Options in the gamma menu
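Since display gamma is just an exponent applied to the normalized signal, it's easy to see why the standard varies with viewing environment. Here's a quick illustrative sketch (my own arithmetic, nothing from the monitor's firmware) showing how the same mid-gray signal displays under the gamma values discussed above:

```python
# Illustrative sketch: relative linear light output for a normalized
# 0-1 video signal under different display gamma settings. Higher
# gamma renders the same midtone signal darker, which is why dimmer
# viewing environments call for higher gamma values.

def displayed_light(signal: float, gamma: float) -> float:
    """Relative linear light output (0-1) for a normalized 0-1 signal."""
    return signal ** gamma

for gamma in (2.2, 2.35, 2.4, 2.6):
    print(f"gamma {gamma}: 50% signal -> {displayed_light(0.5, gamma):.3f}")
```

Run it and you'll see a 50% signal drops from about 22 percent of peak output at gamma 2.2 to about 16 percent at gamma 2.6.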

I neglected to get a picture of the menu, but there’s also a selectable color temperature option. Most folks will likely be monitoring at 6500K (the North American and European standard), but this is also adjustable if necessary, with settings for 3200K, 5000K, 5600K, 6500K, and 9300K (the broadcast color temperature standard for many Asian countries).


Much as it pains me that interlacing is still firmly entrenched in the HD postproduction world, it’s still very much in use by producers and broadcasters worldwide. As a result, it’s still critical to be able to get a true look at the field order of a video signal, and this is another area where the Flanders excels. Incoming interlaced images are displayed properly, with sequential fields presented in order so it’s apparent if, for example, an effects shot has had the interlacing reversed accidentally. A conventional computer display that deinterlaces everything wouldn’t necessarily show this problem as obviously.

You can control how the 2461 handles the display of an interlaced signal, but the menu settings are not obvious. The Video menu has a Processing submenu. When set to Normal or Fast, an interlaced video signal is displayed with properly resolved, discrete sequential fields on the FSI monitor. When set to Noise Reduction, interlaced video signals are deinterlaced.
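As a toy illustration of what's at stake (the basic idea, not FSI's actual processing), here's how a frame divides into two fields, and why reversed field order scrambles the temporal sequence:

```python
# Toy sketch of interlacing: a frame is two fields woven together, one
# on the even scanlines, one on the odd. Field order determines which
# field is presented first in time; reverse it and motion judders
# backward-forward on every frame.

def split_fields(frame, upper_field_first=True):
    """Return a frame's two fields in presentation order."""
    upper = frame[0::2]  # scanlines 0, 2, 4, ...
    lower = frame[1::2]  # scanlines 1, 3, 5, ...
    return (upper, lower) if upper_field_first else (lower, upper)

frame = [f"scanline {n}" for n in range(6)]
first_field, second_field = split_fields(frame)
print(first_field)
```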

Manual Controls

Now, while carefully calibrated standards are important, it’s easy to forget that displays in a broadcast environment are simply evaluation instruments, and often manual adjustments to abuse the signal are useful for finding odd problems that won’t reveal themselves in normal use. For example, a shape-limited correction may look fine with ordinary calibration, but give itself away horribly on a display that’s a bit too bright. By temporarily cranking up the brightness on your monitor, you can see what your graded image will look like on a display that’s mis-calibrated (good to do if you’re working on a project destined for the wild west of the film festival circuit).

For this purpose, the Flanders monitors also include a host of manual controls for phase, chroma, bright, and contrast.

Manual monitor control knobs

Unlike the “center-click-for-detent” potentiometers of traditional monitors, the Flanders display uses infinite rotary controls with an onscreen digital readout. It’ll take a bit of getting used to for old-school monitor users, but this is a digital monitor through-and-through, and it gives you the comfort of seeing when your setting is precisely and accurately at its 0 detent.

On-screen control values

Furthermore, there are all the traditional controls you’d expect for over/underscan, H/V Delay, Blue Only, and Monochrome (all found in the menus).

This is also an interesting time to point out that the LAN jack on the back of the monitor will eventually support an application that can remotely control the monitor and its adjustment settings via your network. Furthermore, Flanders sells a customizable monitor remote unit (with a whiteboard face for easy relabeling) that can be used to control various features of your choosing via the GPI interface (connectable via CAT5 cable).

On the topic of instrument controls, the front panel has convenient pushbuttons for each of the video inputs. It’s a small thing, but I love that I don’t have to go clicking through menus or up/down button sequences to get to the input I want; one click and I’m there. Hooray!

Buttons controlling all inputs

Tools for On-set Use

There’s also a series of user-programmable Function buttons (F1-F5).

User definable function buttons

By default, these are mapped to useful utilities like video scopes, windows, measurement, and false color analysis (more on these later), and you can customize these via an additional set of onscreen menus.

User definable function menu

As a suite colorist, these options are of limited use to me, as I’ve already got a set of video scopes, and frankly I want a pure, unvarnished look at the image I’m supposed to be making improvements to. The last thing I need is stuff superimposed over the picture, distracting me from what I need to be paying attention to.

However, all of this is incredibly useful for an on-set crew, and to that purpose there are some options I’d like to call your attention to.

First, there’s a comprehensive set of video scopes available, with up to two visible at a time, including a full suite of waveform scopes (Luma, Parade, Y’CbCr parade and overlay), Vectorscope, Histograms, audio meters, and more.

Video scope overlays

These are a real convenience on the set, but if you’re in the studio I wouldn’t get too excited. In use, I noticed that the framerate of analysis was less than realtime, which is not useful if I’m sweating the QC on a broadcast show. Furthermore, popping onscreen scopes on and off via the F-buttons is great when you’re adjusting exposure on your camera, but as I mentioned, I don’t want anything superimposed over my image when I’m trying to color correct a scene. This monitor is no replacement for a set of dedicated video scopes in a postproduction environment.

There’s also a really interesting pixel analysis tool. If there’s an element in the scene that you absolutely have to have a numeric analysis for, it’s here.

On screen measurement tool

Another great feature for setting lights and exposure is a false-color mode for showing regions of maximum and minimum Luma.

False color mode for luma limits

The zones are adjustable, but by default are set to show you which areas of the image are falling into the upper and lower 10 percent of image tonality, as an aid to help you avoid overzealous clipping.

Chart showing false color breakdown
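The logic behind such a false-color display is simple enough to sketch. The thresholds below are the 10 percent defaults described above; the zone names are mine, not FSI's:

```python
# Hypothetical sketch of false-color zone classification. Pixels in the
# bottom 10% of tonality risk crushed shadows; pixels in the top 10%
# risk clipped highlights; everything else passes through unflagged.

LOW_THRESHOLD = 0.10
HIGH_THRESHOLD = 0.90

def false_color_zone(luma: float) -> str:
    """Classify a normalized 0-1 luma value against the default zones."""
    if luma <= LOW_THRESHOLD:
        return "low"
    if luma >= HIGH_THRESHOLD:
        return "high"
    return "normal"

print([false_color_zone(v) for v in (0.05, 0.50, 0.95)])
```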

There are many more features targeted to the on-location crew, including pixel zoom, focus assist (with a different false-color red overlay showing which areas of the image are analyzed to be in focus), timecode display, alarms, VU meters, and onscreen markers for Title Safe, Background, Center, etc., with settings for whatever frame aspect ratio you may want to view.

Finally, Flanders provides the ability (new in the 2461) to display two inputs side by side.

Side by side dual input display

You can also wipe either horizontally or vertically between inputs to compare cameras, view dual channels of a stereo rig, or whatever else you need to do.

Viewing Angle

But enough of the bells and whistles. Getting back to the grading suite experience with this monitor, I’d like to talk about viewing angle. Most LCD-panel based displays lack a wide horizontal viewing angle, which is a liability when you’ve got multiple clients in the suite, and the one sitting at the end of the client area is looking at an image that’s darker than the one seen by whoever’s sitting in the “sweet spot” behind you. This is one area where Plasma displays have an advantage.

The Flanders, in my opinion, has a suitably wide viewing angle for two to three people sitting in a typical triangle about the display (colorist in front, clients to either side, but not too far away from one another). Flanders claims 178˚ (which seems about right in my informal “head-swinging” test). In this configuration, everyone will be seeing pretty much the same thing, which is what you want. That said, it’s still a 24″ monitor. While that was considered a luxury for midrange grading suites seven years ago (when a 24″ Sony BVM would run somewhere around $30K), these days clients have been so spoilt by facilities using calibrated 60″ plasma displays, or projector-based mini-theaters, that a 24″ display might seem a bit meagre in a multi-client environment (I’m talking five agency clients sitting in a room).

Also, keep in mind that the current recommendation for ideal viewing distance from your eyeball to an HD screen is 3-4 times the vertical height of the display. For the Flanders (with a screen height of 12 3/4″), that means that the ideal viewing distance is approximately 38″-51″ (a bit over 3′-4′).
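Multiplied out exactly, the rule gives:

```python
# Worked example of the 3-4x picture-height viewing distance rule,
# using the 2461's 12 3/4 inch screen height.

screen_height_in = 12.75
near = 3 * screen_height_in  # closest recommended distance
far = 4 * screen_height_in   # farthest recommended distance
print(f"{near:.2f} to {far:.2f} inches")  # 38.25 to 51.00 inches
```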

Black Level

Another issue to keep in mind is that, being based on backlit LCD technology, the black level is not as deep as that of the CRTs we were once used to, a properly configured plasma, or the newfangled OLED monitors that are coming on the market (though OLED will cost you). For some, this may be an unconscionable liability. For others, the stable color and noiseless shadows of LCD make it superior to Plasma, which is the other cost-conscious alternative.

This, in the end, is purely a matter of user preference, and is one of the reasons I recommend you evaluate any display you want to purchase in person. Certainly it’s true that as far as black level goes, you can do better, though you’ll likely pay considerably more for the privilege (the Dolby PRM-4200 has great blacks, and it’s only around $50K).

However, I would point out that the Flanders monitor does well in a standard, backlit video suite configuration. Perceptually, a dim, backlit environment is going to provide the best appearance of good contrast for any monitor (one of the main reasons for a proper viewing environment when using any display). Once accustomed to the black level of these displays relative to the numerical black of one’s video scopes, I don’t think anyone would have a serious problem grading their shadows predictably and well on the 2461.


As far as image quality goes, when I fired up the 2461 and loaded some projects I had previously graded on my calibrated JVC RS2 projector (calibrated to Rec 709), there were no surprises. Granted, this was not a probe-driven numerical analysis, but everything looked exactly as I’d expected it to, and the range of color and contrast that presented itself was absolutely suitable for professional work. The material looked right, and a series of test images that I loaded revealed nothing improper. Colorimetrically speaking, I’m impressed.

Based on the size and viewing angle, I’d say this is the perfect monitor for a shop with smaller suites designed to accommodate 2-3 people, or situations where colorists are working largely unsupervised and the Flanders is their hero display. It’s also an excellent solution for editorial environments where there is a desire for the editing displays to match the grading displays in the hero suite, preventing the unwanted surprise of a program looking different as it travels from room to room.

So that’s my final analysis. Solid color fidelity, low-cost calibration for the life of the display, and simple, flexible connectivity and video signal support make this an easy monitor to integrate into any postproduction environment (from out of the box to grading with my DaVinci rig was about 7 minutes). The size and viewing angle of the Flanders make it most suited for smaller suites, and the pricing brings a high-quality display instrument within reach of smaller shops needing color-critical monitoring. Furthermore, if you’re a “preditor” that engages in production as well as post, the Flanders provides you terrific options for field use. If you’re in the market for a new monitor in this size, I’d recommend getting a demo to see if it’s for you.

Color Correction Handbook 2nd Edition: Grading theory and technique for any application.
Color Correction Look Book: Stylized and creative grading techniques for any application.
What's New in DaVinci Resolve 15: Covering every new feature in Resolve 15 from Ripple Training.
DaVinci Resolve Tutorials: Far ranging DaVinci Resolve instruction from Ripple Training.

Looking Back at NAB 2011

Another Vegas NAB

NAB was great fun this year. Lots of new announcements for the color grading crowd, and a visibly big jump in attendance from previous years. As always, it was good to catch up with colleagues from around the world whom I only see at either NAB or IBC, especially at after-hours events like the Media Motion Ball (made sweeter by my winning a copy of Sapphire plugins for AE; apologies to Scott Simmons, who was eyeing it from the next place in the winners’ queue).

At any rate, now that I’ve settled into my new house and have had a chance to more or less unpack my home office, I thought I’d share a few experiences I had at the show, as well as some interesting details of what was announced, from the colorist’s perspective. My apologies to the many companies I didn’t have time to chat with, at this point even four days is hardly enough time to see everything and talk to everyone I’d like.

Full disclosure: DaVinci invited me to spend some time at their booth, and I’ve been doing a bit of writing for them, so I had ample time to see the new features they unveiled on Monday. While there’s been plenty of chat about DaVinci’s various announcements, including XML import (with support for transfer modes and 12 different types of video transitions), multi-track timeline support, hue curves, RGB mixer, limitable noise reduction, improvements to 3D left/right eye color and contrast auto-matching, and 3D left/right eye auto geometry matching, what I find most interesting are the tiny implementation details of many of these features that show they’re really listening to what colorists want in their day to day work.

DaVinci's New Features on Display

For instance, the hue curves can be used from the DaVinci control surface, with the primary and secondary colors being mapped to knobs on the panel, and the fourth trackball being useful for moving selected curve points around on the surface of the curve. For the mouse users out there, holding the Shift key while clicking on a curve places a control point without adjusting the curve (great for locking part of a curve off from adjustment), while a small button underneath reveals bezier handles if you want to go nuts with custom curve shaping. My favorite implementation of the hue curves, however, is the ability to sample a range of color by dragging within the image preview, in order to automatically place control points for manipulating that range of color using any of the hue curves, or the Sat vs. Lum curve. Oh yeah, and the Sat vs. Lum curve is a welcome addition (especially given the control panel implementation). Film Master and Quantel have had this feature for years; it’s nice to see it available to Resolve users.

The RGB mixer is really interesting. Its default mode lets you mix any amount of R, G, and B into any channel, but you also have the option to subtract any amount of R, G, and B from any channel. I played around with subtracting bits of neighboring color channels (subtracting G and B from R, then subtracting R and B from G, then subtracting R and G from B) while adding to R, G, and B by the amount I subtracted from the other channels, and the resulting subtle “color purification” boosted saturation, but in a wholly different way than using the Sat knob. And of course, since it’s a standard tab within every correction node, it’s fully limitable. I’m sure there will be many crazy as well as utilitarian uses of this tool. An additional grayscale mode lets you mix the R, G, and B channels together to create different monochrome mixes, a welcome feature as I’d never quite figured out how to do this in prior versions of Resolve.
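To make the mixer math concrete, here's a sketch of the general idea in code. The matrix values are my own experiment, not Resolve's internals, and the luma weights at the end are the standard Rec 709 coefficients:

```python
# Illustrative sketch of RGB-mixer-style channel math: each output
# channel is a weighted sum of the three input channels, and negative
# weights perform the subtraction described above.

def mix_channels(rgb, weights):
    """Apply a 3x3 weight matrix (rows = output channels) to an RGB triple."""
    r, g, b = rgb
    return tuple(w[0] * r + w[1] * g + w[2] * b for w in weights)

# A "color purification" style matrix: boost each channel while
# subtracting a bit of its neighbors (values are arbitrary examples).
purify = [
    [1.2, -0.1, -0.1],  # R output
    [-0.1, 1.2, -0.1],  # G output
    [-0.1, -0.1, 1.2],  # B output
]
print(mix_channels((0.6, 0.5, 0.4), purify))

# Grayscale mode boils down to one mix applied to all three channels,
# e.g. the Rec 709 luma weights:
mono = [[0.2126, 0.7152, 0.0722]] * 3
print(mix_channels((0.6, 0.5, 0.4), mono))
```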

Sigi Ferstl (Company 3) was demoing the new 3D toolset on the show floor to keen audiences. I haven’t yet been required to do any amount of 3D, but the auto color and geometry matching features for making the left and right eyes align and match properly are welcome additions, as are various new monitoring modes for comparing the two eyes on one screen.

Resolve's 3D features being shown.

Speaking of 3D, it’s time to give Quantel some love. Speaking with David Throup, and with a great demo from Sam Sheppard, I got the lowdown on some of the new color-grading and 3D features that Quantel has come up with for Pablo.

Quantel has introduced improved auto fix tools for matching color and geometry between the left and right eyes in Pablo. There are also superimposed left/right eye vectorscope and histogram graphs (color coded per eye), which look to be a huge help when making those last few manual tweaks to parts of the image that just won’t auto-match (I’ve seen left/right eye demos on several grading systems now, and sadly there’s often a stubborn region that just won’t match, requiring manual fixing).

Most interestingly, two new measurement tools have been added for evaluating convergence. A “Depth Histogram” analyzes how much of the picture projects forward and backward from the center of the screen. As you can see in the image below, a center line represents the screen itself, and a histogram analysis shows quickly and precisely how much of the image is projecting forward, and how much is projecting backward. This is a really handy tool for quantifying the disparity within your image.

Quantel's depth histogram, showing the overall spread of image disparity.

Additionally, a user adjustable “Curtain Delimiter” places a square pattern in 3D space to serve as a visual indicator of your chosen limits for stereo disparity. This is key as broadcasters add disparity limits to their QC guidelines (I was told the BBC has implemented Vince Pace’s recommendations for disparity limits of no more than 1% forward and 2% back of the screen for home viewing). Both this and the depth histogram really take the fear out of convergence adjustments, as far as I’m concerned.

Quantel's Curtain Delimiter, providing a visible boundary for guidance on disparity limits.
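Neither tool's internals are public, but the underlying bookkeeping is easy to sketch. The sign convention below (positive disparity = in front of the screen) and the sample data are my own assumptions for illustration:

```python
# Hypothetical sketch of disparity bookkeeping: given per-pixel (or
# per-region) disparity values as a fraction of screen width, count how
# much of the image sits forward of vs. behind the screen plane, and
# flag anything outside the stated 1% forward / 2% back limits.

def depth_report(disparities, forward_limit=0.01, behind_limit=0.02):
    forward = sum(1 for d in disparities if d > 0)
    behind = sum(1 for d in disparities if d < 0)
    out_of_bounds = [d for d in disparities
                     if d > forward_limit or d < -behind_limit]
    return forward, behind, out_of_bounds

fwd, back, bad = depth_report([0.005, -0.015, 0.02, -0.03, 0.0])
print(fwd, back, bad)  # 2 2 [0.02, -0.03]
```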

Quantel’s big new feature for color grading is a set of customizable “Range Controls” for the lift/gamma/gain color balance controls. Three curves let you set custom tonal ranges of influence for each control, which are fantastic for post-primary adjustments that need more influence over the image than a secondary, but finer-tuned control than the standard lift/gamma/gain overlap. That, and the results are clean at the edges due to the mathematical joy at work. And one other thing: customized range controls can be baked into a LUT in a way that HSL qualifiers can’t, which is an interesting approach to creating even more customized looks via LUTs. Overall, a very nice addition.

Quantel's new customizable range controls.

I also got a better understanding of Pablo PA. It’s a full software version of Quantel Pablo that runs on Windows 7, requires Nvidia Quadro GPUs, and sells for $14K. It’s not feature-limited as far as the on-screen experience goes, it’s got all the editing, color, and 3D tools of the full Pablo, and can handle all SD, HD, and film resolutions. However, there are hardware limitations. There’s no video output, so all monitoring must be done via your computer display (possibly an HP DreamColor monitor being fed via DisplayPort for a calibrated look at the image). Also, there’s no support for control surfaces. However, the main point of this software is to serve as an assist station for a Pablo-using post house, so I suppose that’s not a huge bother. I’ll be very curious to see if this software’s capabilities grow over time.

Over at Filmlight’s suite at the Renaissance, I spoke with Mark Burton, who showed me the insanely wonderful new Blackboard II control surface. At $62K, I have no problem saying this is something I’ll likely never own. On the other hand, I also have no problem saying that, to date, I consider this to be one of the most significant advancements in control surface design that I’ve seen. And it’s not just because of the hand shaped wooden top. Take a look at the video below:

[vimeo video_id=”22777705″ width=”400″ height=”300″ title=”Yes” byline=”Yes” portrait=”Yes” autoplay=”No” loop=”No” color=”00adef”]

Filmlight is patenting their method of placing a flatpanel display underneath banks of buttons, with individual lenses (one for each button) projecting each part of the screen through a different button. The result is that every button on the panel can have custom labeling for every single mode of the Baselight software. Furthermore, buttons aren’t limited to mere text; they can display icons, images, even motion video. All the while, they’re still physical, touchable buttons that you can find with your fingers (and muscle memory) and press without taking your eyes off the screen. I found the layout to be logical, with banks of knobs (okay, one quibble: there could have been more knobs) and additional displays that can be used for UI. Both a “virtual” keyboard (two button banks to the left can be remapped as a QWERTY keyboard) and a real keyboard that flips up from the bottom of the panel, plus a touchpad for mouse navigation and a graphics tablet for drawing, round out the available controls.

It’s also worth pointing out that the Baselight grading software itself has just undergone a huge under-the-hood rewrite. At the moment, the biggest new thing being shown is a three-up monitoring UI layout, but more goodness to come was implied. I’m curious to see how the already impressive Baselight software continues to evolve, especially with such a flexible control surface.

I also took a look at one of the buzziest things at the show, the Baselight for Final Cut Pro plugin. Those who know are aware that Filmlight has had a Mac version of Baselight for a while, they just haven’t been interested in releasing it. This is their answer to folks wanting Baselight goodness on their Macs; essentially a version of Baselight that works inside of Final Cut Pro. It’s limited to four layers, but those layers can do everything that Baselight layers can do, and with exactly the same image quality. Of course, the real news is that with the Baselight plugin, exported XML from FCP to Baselight translates the “offline session” grades directly and precisely into Baselight grades for getting a start on the session. Of course, I don’t know how many colorists I’ve heard say “I don’t want the editor telling me what to do,” and I myself have blown away plenty of editor-created grades prior to creating my own take on the program. However, this would be a real boon for integrated shops where multi-disciplinarians can move from task to task, not having to worry about losing the work they’ve done.

The plugin they showed at NAB was still a work in progress (they’re planning on releasing at IBC in September), so I won’t comment on performance as it’s still being tuned. However, it’s an interesting development and a great option to have, and when they continue on to develop a Nuke version of the same plugin, and possibly plug-ins for other major NLEs as well, Filmlight will have created a remarkably smooth grading pipeline for Baselight-using facilities.

Thanks to Sherif Sadek I got to see the new features in Assimilate’s big release of Scratch 6. Among the announcements are Arri Alexa and RED Epic media compatibility, a new Audio Mixer allowing you to grade within context of audio tracks, multi-track video support (After Effects style, where additional tracks go down, not up), multiple shapes per scaffold (letting you do more with fewer scaffolds), blend modes that work with superimposed images, as well as additional blend modes that are useful for combining alpha channels and masks (a cool feature for the compositing minded). Also, proper AAF/XML import (although no XML export).

Scratch 6 offers AAF and XML import.

But that’s not all. Scratch 6 sports a bicubic grid warper that’s animatable (have an actor that needs to look a bit thinner?).

Scratch's Grid Warper in action

They’ve also added a dedicated Luma keyer (a convenience, really, as you could do this before by turning off the H and S of HSL), and a brand new chroma keyer for high quality green and bluescreen keying. Now that there are multiple tracks, Scratch is heading down the path of letting you do more compositing work directly in your grading app (you got your chocolate in my peanut butter!). The new chroma keyer is nice, the plates they were keying were fairly challenging, and the wispy hair detail that was preserved, as well as the built-in spill suppression, were all very impressive.

One interesting new feature for those doing digital dailies is an “auto sync” option that was described to me as a “clap finder.” Once you find the visual clap frame in a clip, you simply move the paired audio clip’s clap close to it in the timeline, and then one command auto-aligns the peak of the clap to the clap frame, saving you a few moments of dragging and adjustment. Of course, the ability to auto sync matching timecode between video and Broadcast Wave files is included for productions that are more organized.
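The description suggests logic along these lines; this is purely my guess at the mechanics, with a made-up helper name, not Assimilate's code:

```python
# Hypothetical sketch of a "clap finder": search a window of audio
# samples around where you roughly dropped the clip, locate the loudest
# transient, and return the shift needed to snap it to the clap frame.

def clap_offset(samples, rough_index, clap_index, window=2000):
    """Samples to shift the audio so its local peak lands on clap_index."""
    lo = max(0, rough_index - window)
    hi = min(len(samples), rough_index + window)
    peak = max(range(lo, hi), key=lambda i: abs(samples[i]))
    return clap_index - peak

audio = [0.0] * 100
audio[62] = 0.9  # simulated clap transient
print(clap_offset(audio, rough_index=60, clap_index=50, window=20))  # -12
```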

Assimilate was also showing their new version of Scratch for Mac OS X (finally!), available for $18K. In addition to full feature parity with Scratch 6 for Windows, there is full support for ProRes output (Scratch on Windows can import ProRes files, but not export them). That’s the good news; the bad news is that there’s no video output, although like Pablo PA you could always use an HP DreamColor monitor to view the image and UI in a calibrated manner. I also saw a Scratch configuration in the Panasonic booth, outputting both image and UI to a calibrated plasma display, which is an interesting way to work. You’re also limited to the NVidia Quadro 4000 (when oh when will NVidia and Apple make the rest of the Quadro line available for Mac users?). Still, this is a good start, and may be a useful option for Scratch shops that need an assistant station or two.

Supported media formats in Scratch 6 for Mac OS X.

Finally, Assimilate announced Scratch Lab, their on-set grading application. I didn’t see this in person, but was told it consists of Primary/Source/LUT/Curve controls only, no secondaries, no compositing tools. It’s designed to run on a MacBook using NVidia graphics (yes, you’ll need to buy a used one), and costs $5K. This actually interests me quite a lot, as I’ve been intrigued by the role of on-set colorists. Nice to see another tool available.

And yes, I went to the Apple event at the Supermeet. No hard feelings about being bumped as a speaker, honestly this was far more interesting. Many new things were shown, and I look forward to hearing what folks think once they get their hands on the newy newness of Final Cut Pro X. Although I’ll take a raincheck on endless speculation about what the new app will and won’t do until release, thank you very much.

Randy Ubillos showing off Final Cut Pro X.

I had a great chat with Mike Woodworth at Divergent Media, who showed me the new video analysis tools of Scopebox 3. While this software is also capable of digital capture for field recording, all I had eyes for were the new scope features, which are impressive. New gamut displays let you see out of bounds errors with composite and RGB analyses. A unique new “envelopes” feature highlights the high and low boundaries of excursion in the waveform monitors, making it a snap to see peaks (including a peak and hold display) without having to crank your WFM brightness up to 100%. I’ve longed for something like this in other scopes for years; it’s great to see it here. Alarm logs are available for QC environments, and it’s also worth mentioning that Scopebox does 444 analysis if your video interface supports it. All for the new low price of $99. Mike laughed when I told him that, for that price, I’d buy one for my living room TV. I wasn’t joking. I look forward to stacking Scopebox up against my Harris Videotek, as well as against Blackmagic’s Ultrascope, to see which I prefer.

The new version of Scopebox previewed.

On a lark, I had a conversation with George Sheckel at Christie (the projector company). I was curious about the total changeover of projector models that almost made my Color Correction Handbook obsolete before it went to print (I updated it). It turns out that on Dec. 31st, 2009, all projector companies made the shift from Series 1 projectors to Series 2, primarily to add additional layers of hardware security demanded by the major film distribution companies. This is serious security, using National Institute of Standards and Technology (NIST) Federal Information Processing Standard (FIPS) requirements for physical protection of the encrypted video stream. At this point, for DCI playback, an encrypted stream is sent by the playback server, over dual, quad, or even octuple-link (is that even a term?) HD-SDI, to the projector. This stream is not decrypted until it’s inside the projector, just before being sent to the TI DLP chipset that literally reflects the light to the screen. Any attempt to physically tamper with the internals of the projector results in the loss of the DCI key that makes decryption possible. This is serious encryption.

On a lighter note, Christie took the opportunity to add 4K resolution, as well as some other small improvements. I inquired which projector models were being recommended for 2K projection in a postproduction environment, and was told that Christie’s current best post projector is the CP2210, or the CP2220 if you need a color wheel for Dolby 3D. Both require 220 volts AC, with 20 amp circuits.

Alas, I hadn’t enough time to spend at the Panasonic booth to get all my questions answered; however, I did get to stand in awe of their insanely ginormous 152″ 4K plasma television. That’s 3D ready. It’s the TH-152UX1, if you’re planning on going to Best Buy to pick one up. However, like the Christie projector, you’re going to need to feed this beast 220 volts and 20 amps.

Is that a Plasma display or a tanning booth?

I also had a nice chat with Steve Shaw of Light Illusion. On Tuesday of the show I did a seminar on “Color Management in the Digital Facility,” during which I demonstrated monitor calibration with 3D LUTs using Steve’s Lightspace software, driving a Klein K-10 colorimeter (thanks to Luhr Jensen, CEO of Klein Instruments, for loaning me the K-10 for my class). It was a great three-hour session (I only went over by 9 minutes, a personal best), and was so fun to do I hope to have the opportunity again sometime.

Talking about color management for film and video at NAB.

Anyhow, the Light Illusion software works well, and Steve mentioned a new utility he and his team have developed called Alexicc, that essentially lets you batch convert Alexa media using Log-C gamma into Rec. 709 QuickTime media (ProRes, if you do that sort of thing), cloning timecode and reel number for later conforming to the original media. You can also convert into DNxHD if you’re an Avid sort of person. You’re not limited to a Log-C to Rec. 709 LUT, you can do other LUT conversions using additional tools. It’s a streamlined utility, available for £220 ($363 USD as of this morning), that you might find useful.
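While the real Log-C to Rec. 709 transform is a vendor-supplied LUT, the lookup mechanics at the heart of any such conversion are simple to sketch. This is generic interpolation code of my own, not Alexicc's implementation:

```python
# Illustrative sketch of 1D LUT application with linear interpolation,
# the core operation behind any log-to-video gamma conversion.

def apply_1d_lut(value, lut):
    """Map a normalized 0-1 value through a 1D LUT, interpolating linearly."""
    pos = value * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# An identity LUT passes values through unchanged; a real conversion
# would load its entries from a .cube or similar LUT file.
identity = [n / 255 for n in range(256)]
print(apply_1d_lut(0.5, identity))
```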

After years of correspondence with Graeme Nattress (I’d interviewed him for my Encyclopedia of Color Correction, and he was my technical editor when I wrote for Edit Well), I finally had the pleasure of chatting in person at the RED booth. While there, Ted Schilowitz was showing off an honest to goodness working Scarlet camera.

The mythical camera made real.

RED definitely won the “over the top camera demo” award for the live tattooing of a model on stage. I’ve seen models in camera booths working out (on a treadmill all day?), lounging, getting their hair cut, all kinds of wacky things, but this was definitely a first.

That's one way of testing resolution.

Shane Ross has already blogged about Cache A as a solution for LTO tape archival, so I’d direct you to his blog.

Cache A LTO integration with media applications.

One thing Shane didn’t mention that I thought was really interesting is the SDK that Cache A has developed, which enables their tape storage hardware to be used directly by software such as Media Composer, Final Cut Server, and Assimilate Scratch (though I’m not sure if Scratch support is coming or already available). With this kind of support, applications can directly request specific media for retrieval from tape backup. One example that was described to me was the ability, during a reconform, to use an EDL to request only the media files used by that EDL for retrieval, rather than having to retrieve everything associated with that project. This is fantastic functionality that I hope more grading applications jump on board with in today’s world of terabytes of tapeless media.

Lastly, I was told that the entire Cache A product line is now compatible with the open source Linear Tape File System (LTFS) format. This means that each tape is self-contained, and can be unarchived by any application and operating system with LTFS compatibility. More information can be found in a handy Cache A press release.

Color Correction Handbook 2nd Edition: Grading theory and technique for any application.
Color Correction Look Book: Stylized and creative grading techniques for any application.
What's New in DaVinci Resolve 15: Covering every new feature in Resolve 15 from Ripple Training.
DaVinci Resolve Tutorials: Far ranging DaVinci Resolve instruction from Ripple Training.

More Colorful—A Closer Look

NBC’s catchphrase gave me a chuckle, not least because the Peacock logo was originally intended to promote the “new” color programming available back in 1956. This title appears at the end of a Law and Order promo that I saw on Vimeo (thanks to Graeme Nattress for pointing it out on Twitter).

First, I just want to get the fact that it was shot using the RED Epic out of the way. Yes, the Epic is a new and wonderful camera, and yes, it captures fantastic images with lots of latitude for grading etcetera etcetera and so on…

That’s not what I want to talk about.

What I want to talk about are the decisions that the DP (Rhet Bear) and colorist (I don’t know who) made while crafting the look of this piece. As with many promos, a bold look was created, but there are a lot of elements at play that immediately struck me as a good opportunity for discussion.

So, watch the video, and then read on.

It’s a very nice promo, visually interesting, with great lighting and effective grading, which is what I’d like to talk about. Now, as is typical for a visually beautiful piece, it’s difficult to know where the DP’s work ends and the Colorist’s work begins, so I’m just going to discuss the look of the piece as a whole. If by some happy coincidence the original DP and/or Colorist stumbles across this post and wants to comment, I’d be very happy to learn more.

Let’s take a look at an early frame, medium on the actor with a wide expanse of background.

There are some bright, soft highlights going on here, both in the blown-out background, and a hard rim light on the man’s face. I dig it, and in fact I’ve never been one to be afraid of softly blown-out highlights (for the right situation). However, the key word is softly.

Battlestar Galactica (the new series, that is) also indulged in hot, blown-out highlights, but again, the overexposure was a smooth “blooming” effect, rather than the harsh, digitally aliased crap that you end up with if you simply overexpose a digital signal. When you overexpose film, light bounces among the different emulsion layers and causes halation, which lends a soft blurry glow to blown-out highlights that can look quite nice (who doesn’t like a bit of glow?). This is the quality we associate with “good” overexposure.

One aspect of this shot that also appears in the following one is the willingness to allow a bit of overexposure on the face. Granted, this is somewhat inevitable given the shininess of a bald scalp, but since it’s motivated by an already high contrast ratio, and since the majority of the face is still well-exposed, letting a bit of rim light or forehead shine blow out won’t kill anyone.

I don’t know how many times I’m asked to “patch up” a bit of overexposure on the face, when a) it may not be necessary, since it looks just fine, and b) the fix can sometimes end up looking worse than the original bit of overexposure. Here’s another shot with high-contrast light on the face and bright highlights.

Even though the actor’s complexion is clearly darker, the highlight on his cheek is really quite hot, but that’s okay, because the lighting in the shot justifies it (more or less, it’s a promo after all), and there’s still plenty of detail in the midtones and shadows of the actor’s face. Now, I’m not saying this is how you should always grade faces, I’m just saying there are times when strong face highlights are perfectly fine.

The key is to make sure that the edges of the blown-out areas roll off smoothly and softly into the rest of the midtones, which depends on two things. First, the original shot needs to have been exposed carefully so that the bright highlights aren’t clipped, because that will make the job ten times harder, or virtually impossible (RED footage seems to have a softer knee at the highlights than lower-end digital camcorders, so that helps).

Second, you need to control your overexposure adjustments so that YOU don’t end up introducing harsh clipping. Granted, you’re going to need to push your highlights up beyond 100 percent to get the blowout, but you need to make sure that you compress the highlights as you do so, rather than simply clipping them past 100. There are a few ways you can do this.

  1. You can roll off the top of your YRGB curves (if you have them) so that the very top end of exposure is squeezed before clipping, which will give you some softness.
  2. You can use something like DaVinci Resolve’s Soft Clipping controls to compress the clipping at the highlights.
  3. You can selectively blow out the highlights by using HSL Qualification to isolate the top highlights, blur out the resulting matte, and push the entire keyed region up to, but not beyond, 100 percent to simulate a soft roll-off.
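
To make the roll-off idea concrete, here’s a minimal sketch of a soft-clip knee function (an illustration of the general technique, not any particular application’s implementation): values below the knee pass through untouched, while values above it approach a ceiling smoothly, with no break in slope at the knee.

```python
import math

def soft_clip(y, knee=0.8, ceiling=1.1):
    """Compress values above `knee` so they approach `ceiling` asymptotically
    instead of clipping hard. `y` is a normalized level (1.0 = 100 percent).
    A sketch of the general technique, not any specific app's curve."""
    if y <= knee:
        return y
    span = ceiling - knee
    # Exponential approach to the ceiling; value and slope are continuous
    # at the knee, so the transition is invisible.
    return ceiling - span * math.exp(-(y - knee) / span)
```

The nice property here is that no matter how hard you push the highlights, they compress toward the ceiling instead of flatlining into an aliased edge.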

Another interesting thing about these shots is the selective use of saturation. Skin tones retain a fairly high degree of saturation, while the background saturation is a bit muted. This has the function of drawing our eye straight to the actors (the folks they want us to be looking at). However, the following shot shows another interesting use of selective saturation.

Yes, the actor’s face still has visibly higher saturation than the surroundings, but there’s also a fair amount of color ringing the light hitting the brick wall in the background (and it’s even an analogous color; no orange/teal going on here). Even though the overall image has fairly subdued saturation, the presence of an additional pool of color that’s of a distinctly different hue than the flesh color of the man’s face increases the perceived colorfulness of the image, while keeping the viewer’s eye on what we want them to be looking at via the stark color contrast between the face and the rest of the scene. (If you’re wondering what colorfulness and color contrast are, there are a few sections in my book that explain.)

Here’s another fun thing, and this I suspect was a bit of serendipity that the colorist was able to capitalize on. The alley scene has a lot of silhouette. Stark, striking, I love this kind of thing.

However, it can be tempting for clients to wimp out and say “I wish we could see the guy’s face a little.” Well, check out a few frames later.

Some flaring from the police car lights wraps around his face, providing a great excuse to see a bit of facial detail every other second. By carefully adjusting those blown-out highlights, we’re able to have both the stark silhouette, and short glimpses of the character’s tense expression.

Let’s take a look at one more shot; this time a wider, more colorful image.

There are a few things going on here. For one, we can clearly see the upper left/right corner vignette that’s been applied throughout the spot to give a bit of style, and focus our eyes towards the center of the image. Also, here we can see a lot more color, but what I want to point out in particular is the role that the costume department has played.

The clothes of the main players are all dark and muted (except for the purple tie). This choice of attire makes it easier to pull a high contrast look, easier to have the faces stand out amongst a pool of neutrality, and easier for the red and green of the background signage to stand out (although I suspect there was more than a bit of HSL qualification used to tune the tie and sign colors to be just right).

Beginning directors and cinematographers should never underestimate the impact that art department decisions will have on the final image. Sure, we colorists have all kinds of tools and toys for selectively playing around with individual colors in the frame, but a) the color needs to be there to begin with, and b) it’s a lot faster to choose dark suits during a wardrobe meeting than it is to have a colorist rotoscope the actors in a shot to try and selectively make beige suits dark blue if you change your mind later.

Disclaimer—No, I don’t have permission to reproduce any of these images, so hopefully NBC’s lawyers don’t throw me into the pokey. Hopefully, as I’m saying nice things and, frankly, promoting their show, they’ll cut me some slack. And once again, my compliments to the Cinematographer and Colorist who worked on this. Very pretty indeed.

I’m Moving; And Going to NAB

I’m moving! Those of you following me on Twitter saw my announcement a couple of weeks ago, but as my wife Kaylynn and I are closing on our new house next week, it seemed appropriate to mention it here.

Specifically, we’re moving to St. Paul, Minnesota. A place where snow freely falls during long, long winters (as evidenced in the picture above). Fortunately, I grew up in Wisconsin, so winter doesn’t really bother me that much (although I anticipate more March/April vacationing than usual). Also, I love cross-country skiing, so I expect an abundance of opportunity to improve my skills.

The reasons are varied, but mostly relate to work. New York has been a wonderful home for the past 6 years, but new opportunities beckon, and the St. Paul area is really a wonderful midwestern metropolis.

I will obviously be moving my color correction practice there with me, building out a new and improved home-based DaVinci Resolve suite for my personal clients, and doing other freelancing as opportunity permits. This move is also strategically planned to allow me more time for writing (not that I’ve exactly lacked time for writing, but apparently I want to do even more).

So, if you’re a filmmaker/documentarian/video artist working in the midwest, I’ll be available come June (who knows, I may even update correctionforcolor.com by then).

In other news, the very week a truck will be hauling our stuff to the new house, I’ll be attending NAB, and I’ve got all kinds of activities planned there, what with a class on Color Management for the Digital Facility, a short presentation at the NAB Supermeet, and a book signing (likely at the NAB Store), all of which are listed on the sidebar over at the right. I may even be appearing at the Blackmagic Design DaVinci booth; I’ll add that if it ends up happening.

Perhaps I’ll see you there.

DaVinci Resolve Control Surface Unboxing

You may have noticed I’ve not updated in a while, due to an unexpected (yet delightful) spike in my workload at the moment. However, in between gigs, I managed to put in an order for the full-blown DaVinci Resolve control surface. Yeah, the expensive one.

When it arrived (at my home, as I’m getting ready to move), my first thought was to simply get it set up as quickly as possible in order to try it out. However, it occurred to me that I might share the vicarious thrill of the moment by offering that most bloggerly of posts, the “unboxing” photo series.

In the process, you can see how the DaVinci control surface is connected, as well as the surprising amount of thought and care that has gone into packaging and delivering all 70 lbs (32 kilos) of hardware goodness.

So, here we go…

The box, as it arrived on my doorstep. My cat (Sieben) appreciated the sturdiness of the box, as well as how intact it was after the long trip from Australia, where they’re manufactured.

As I mentioned, it’s 70 lbs worth of kit, so I had fun carrying it up to my 4th floor walkup apartment.

Upon opening the outer box, a packaging extravaganza awaited me within.

As a result, I just had to pull the inner box out to fully appreciate the design that had gone into it. While this isn’t the kind of thing I’d expect to see sitting on a store shelf, it’s gratifying (if a bit of overkill) to see such a nice box enclosing something you’ve just dropped $29,995 on. Practically speaking, I was also glad to see something this expensive getting double-boxed.

Opening up the inner packaging revealed the cables, power supply, and software, each with its designated spot. Nothing loose and rattling around here.

This control surface connects via USB, so three USB cables are supplied, two to connect the side panels to the center panel, and one to go to the CPU. However, power is supplied via a set of seriously engineered cables, with distinct three-pronged male/female plugs.

The actual power supply is external, a brick-type supply that plugs into the wall via a standard CPU power cord. Interestingly, the cord wasn’t supplied, but I figure that since they deliver these internationally it would be too much of a hassle to keep track of all the different socket types, and since anyone buying one of these likely has a box of spare cords stashed in their gear closet (I do), this wasn’t a big deal.

Finally, the software itself comes in a DVD-style box, with the DVD-ROM and dongle inside.

Putting the cables and software aside, it was time to pull off the top styrofoam. My dog (Penny), who often spends time with my clients and me in the grading suite, appreciated the new upgrade.

Pulling the center panel out of its plastic bag and inspecting the back, it was immediately obvious how all the plugs are meant to be installed.

  1. The power supply plugs into the middle, with the left and right panels’ power plugging in on either side.
  2. A standard type B USB plug next to the center power supply connection goes to the CPU, while two standard type A USB plugs next to the two side-panel power supply plugs connect the data to those panels.
  3. An additional two type A USB plugs allow the connection of accessories (and the dongle, if that’s where you want to put it).

Next, it was time to lift the center panel’s cradle of styrofoam, revealing the two side-panels cleverly nested underneath, within their own styrofoam cradle. I’m definitely keeping this set of boxes for future moving and transport.

Pulling the side panels out of their plastic bags revealed the logical set of plugs in the back of each; power daisy-chained from the center panel in the middle, and a standard type B USB plug to daisy-chain data from the center panel.

So, with everything unwrapped, it was time to set up the panels. My home office is a bit cramped for a set of panels this big, and the first thing I noticed after placing them on my desk is that the panel displays, angled as they are towards the user, obscured my monitors. This will be a consideration for anyone setting up a Resolve suite using these panels.

Ergonomically, this design makes sense, and it’s nice to be able to clearly see all of the labels without having to constantly look down, but you’ll need to position your other monitors and displays accordingly.

Needing to temporarily elevate them in a hurry, I used the one thing I have in abundance…

Since the power and USB plugs are so clearly positioned and strictly gendered, connecting the panels to one another and to the computer is a snap, and took me all of three minutes.

When I plugged in the power supply, the panel displays lit up with the Blackmagic DaVinci logo against a pleasing cloudscape. A funny thing: these panels lack an on/off switch. If you want to turn them off, you pull the plug. Very “big-facility” (just like my Harris video scopes).

Looking closer at the DaVinci displays on the center panel: interestingly, the three vertical displays corresponding to each set of four knobs are actually a single LCD; the push-for-detente knobs sit on a bracket floating on top. The result is that, in normal use, everything is labeled so that you’re never lost whenever you change modes (and despite the generous number of controls, you still have to change modes from time to time). I’m a big fan of dynamic labeling, so I’m happy with how much panel real estate this functionality is given.

Starting up the Resolve software, here’s my temporary home office setup, a mere half-hour after opening the boxes. Honestly, the thing that took me the most time was clearing space on my desk, and finding something with which to elevate my monitors. Bear in mind, I’ve no actual grading monitor connected at the moment, this is simply a temporary setup to make sure everything on the panel works (and, let’s face it, to have fun with my new toy).

And now, the control surface as it appears in the dark with all the buttons illuminated. Incidentally, the color of button illumination is customizable from within the Resolve settings tab; they’re lit with red, green, and blue LEDs, so you can make them any color you like…

I’ve just started getting used to this surface. Even more so than with other surfaces I’ve used, the abundance of controls now available to me will require some practice to use efficiently; it’s like learning to play the piano, and I’m going to have to do a little bit of grading every day to develop the muscle memory I’ll need to use this to full advantage.

That said, having this at home is incredibly silly. I’ve been likening it to having a Lamborghini in one’s back yard, just for tooling around the patio.

I’m not going to even attempt any kind of formal review in this post, other than to say that, in the three days I’ve been casually using this, the build quality feels exceptional. The contrast wheels and trackballs feel large, easy to manipulate, smooth, and solid to the touch, and the buttons all depress with a satisfying “click,” soft enough to not be irritating, but firm enough to provide positive feedback. Lastly, the displays are bright and clear, and I really like the push-for-detente rotator controls.

Overall, I’m happy with my purchase so far, and looking forward to using these in a client situation after my move is complete (more on that later).

A DaVinci Resolve at home. Who would’ve thought?

What (Inexpensive) Display Should I Buy?

Not a week passes without my getting an email that is some variation on the following:

I’m setting up a new computer for color correction, but I don’t know which monitor to buy for grading, and your book recommends broadcast displays that are out of my price range.

Sometimes folks are asking for recommendations of affordable color-critical monitors because they’re trying to set up a budget suite. Other times the request is for a learning workstation that’s good for getting started.

Whatever the reason, there is a bewildering array of monitoring choices currently available, and many of them are incredibly expensive. However, there are some affordably priced (relatively speaking) solutions that will do the job, and here are three of the ones that have risen to the top over the last couple of years:

HP DreamColor Monitor—In my opinion, the most economical monitor that can do Rec. 709 accurately is currently the HP DreamColor monitor, connected via HD-SDI (out of whatever video output interface you’re using) using Blackmagic’s HDLink DisplayPort adapter (HD-SDI out of your computer, DisplayPort into the DreamColor). The panel is 10-bit, and if you’ve set it up correctly it’s color-critical, with blacks that are decently deep (at least for an LCD-based display). You will want to get the optional calibration probe to keep it on the straight and narrow. I know at least one professional colorist who’s using this as the monitor for his home system, and he quite likes it. Link.

Flanders Scientific LM-2461W—For a couple thousand more, you can also get into a Flanders Scientific broadcast monitor, for even higher quality monitoring. It’s got HD-SDI built in, so no signal conversion is necessary, and these monitors come pre-calibrated from the factory with impeccable settings; it’s the favored monitor of several of my grading colleagues, and I’ve been impressed overall. It also has more settings that make it appropriate for a professional broadcast suite; however, it’s still quite affordable. Link.

Added 3/20/11—Just got wind that this model is about to be upgraded to the LM-2461W, with even better calibration from the factory, built-in 3G HD-SDI, remote control software, and other cool enhancements. Check out Walter Biscardi’s interview.

Panasonic Viera TC-P50VT25 (since superseded by the TC-P55VT30 VIERA)—The other possibility is to use a THX-rated Panasonic Plasma display. In fact, externally-calibrated Panasonic plasmas have been appearing in many professional grading suites. While there are many Panasonic models available (and the comparable models are updated every year), the previous year’s model was a recommendation from my colleague Robbie Carman. Forget about this monitor being 3D capable; what’s important is that it has both a THX mode and ISFccc certification for calibration. This just means all the controls are there for accurate calibration to the Rec. 709 HD standard. If you’re on a budget, you can have it calibrated using the services of a qualified THX video calibrator, running a signal to it via an HD-SDI to HDMI converter (such as the Blackmagic HDLink Pro or the AJA Hi5). Make sure the calibrator has references, though, because an unqualified calibrator will simply make a hash of things. You want measured Rec. 709, not “Uncle Joe’s home theater settings.” The more professional solution to calibrating your plasma would be to buy a probe and calibration software to generate a 3D LUT of your own to load into either an HDLink or Cine-Tal Davio (either of which can apply a LUT transform to the video signal for calibration), but that will cost more. Link to the TV. Link to my article about 3D LUT calibration.
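
If you’re curious what “applying a 3D LUT” actually means: the LUT is a small cube of measured input-to-output color points, and each pixel is corrected by interpolating within that cube. Here’s a minimal pure-Python sketch for a single pixel using trilinear interpolation (hardware LUT boxes often use tetrahedral interpolation instead, but the principle is the same; the N×N×N nested-list layout is my own assumption for illustration):

```python
# Minimal sketch of applying a 3D calibration LUT to one RGB pixel via
# trilinear interpolation. `lut[r][g][b]` is an N*N*N grid of output RGB
# triples, with input channels normalized to 0..1. Illustrative only.

def apply_3d_lut(rgb, lut):
    n = len(lut)
    def sample(i, j, k):
        # Clamp indices so inputs of exactly 1.0 stay on the grid.
        return lut[min(i, n - 1)][min(j, n - 1)][min(k, n - 1)]
    # Fractional grid coordinates for each channel.
    coords = [c * (n - 1) for c in rgb]
    base = [int(x) for x in coords]
    frac = [x - b for x, b in zip(coords, base)]
    out = [0.0, 0.0, 0.0]
    # Blend the 8 lattice points surrounding the input color.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((frac[0] if di else 1 - frac[0]) *
                     (frac[1] if dj else 1 - frac[1]) *
                     (frac[2] if dk else 1 - frac[2]))
                corner = sample(base[0] + di, base[1] + dj, base[2] + dk)
                for ch in range(3):
                    out[ch] += w * corner[ch]
    return out
```

A calibration workflow just fills that cube with the corrections your probe measured, and the box applies it to every pixel of the video signal.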

So these are the most budget-friendly monitoring options that I can wholeheartedly recommend. Please keep in mind that these aren’t all the options available; technology marches on and new monitors appear every year, so I encourage you to continue doing your own research.

Just remember, you get what you pay for. When it comes to color-critical monitoring for color correction and grading for broadcast or cinema, if you can’t accurately see the signal you’re adjusting, you can’t do the job. Do yourself a favor and get a good monitor.

Another Added Note: I’m amazed that folks are still referencing this article, as it’s going on two years old now, which is ancient in the fast-moving world of color-critical displays. Check the comments for some interesting updates and back-and-forth, and check my more recent article about What Display Should I Buy, which, while not making more specific recommendations, suggests how you should go about evaluating what type of display is best for your needs.

Updated 3/24/2013

Keeping it Reel

All that's old becomes new again...

I’ve been grading Persona Films’ debut feature, Cargo, and I originally thought I’d see how well it would work to load the entire 86-minute timeline into DaVinci Resolve in one go.

A classic case of user error.

The film was shot on RED, so I conformed the project to the original R3D media, and I took the shortcut of adding all the media from the shoot to the media pool, thinking it’d make conforming a snap. That was a bad idea. The resulting colossal project database took forever to save (and I do like to save frequently), and was a bear to manage.

At the advice of those wiser than myself, I went back to my previous standard operating procedure of working in reels (something I always do when working in Apple Color). Furthermore, I was more judicious about which media I added to the media pool.

I had the original Final Cut Pro sequence for the feature broken into four sequence “reels” approximately 20 minutes in length (with each reel starting and ending on whole scenes). EDLs were then exported from each.

After creating separate DaVinci Resolve projects for each reel, I did the smart thing and used the “Add Folder and SubFolders Based on EDLs” command in the Browse page to add only the R3D media referenced by each EDL to the media pool of its corresponding project. That saved me a boatload of hassle right there.

Once that was done, it was a simple thing to open each project and import its corresponding EDL in the Conform page. With less media in the media pool, and a shorter list of events in the timeline, saving is once again snappy, and everything is generally faster and easier to manage. Once the grade is finished, I’ll be exporting a set of four .mov files that will be stitched together back in Final Cut Pro, with final mastering to tape from there.

Moral to the story? If you’re grading a feature in DaVinci Resolve, divide the program into separate project reels, and only add the media you need to each one. Guess it just goes to show that reels never go out of style…

Oh Yes, We’re Listening

Thanks, Dictionary.com!

While having dinner with fellow colorist Joe Owens in December, we got to talking about the grunts and interjections that sometimes pass for communication in the suite.

When not in a rush for time, I generally ask a client “so how do you like it?” before moving out of a scene or a shot that I’ve just graded. However, I’m listening to the tone of the reply as much as the words. If a client says, “Great!” then I’m done and we move on. However, if the response is “Uh, fine?” then my impression is that there’s something not quite right, it’s hard to articulate, and the client is trying to convince themselves that it’s all in their head.

My response to this is usually some variation on “so how can we make this shot better?” If I get an answer, then I try and take care of it. If I don’t, then the shot or scene is probably a ripe candidate for revisiting at a later time, when fresher eyes will have a better chance of spotting the necessary improvement. Never underestimate the power of simply walking away.

However, when I’m in the middle of an adjustment, I’m also listening for any little verbal sign of what the client thinks at that moment. My suite is set up with the clients sitting behind me as I work, so if I hear “Ahhh!” then I know I’m doing something right. If I hear “huh…” then I’m inclined to stop and ask what they think of the current state of the image, just to get a sanity check.

I don’t always do this. Some grades are like haircuts, and nothing is going to look good until I make the final adjustment. In these instances, I let folks know when the shot is ready for an opinion. Until then, I encourage them to enjoy the free Wi-Fi.

I remember one gig where the client, a lovely fellow, tended to grunt, noncommittally and often, and usually when I was in the middle of an adjustment. It worried me a bit, and I started checking in with him more and more frequently; “what do you think of this adjustment?” “Oh, it’s fine!” he’d reply enthusiastically, and after the sixth instance of this I simply bit my tongue and hoped for the best.

The session ended up going swimmingly and he was very happy with the result, but it’s worth knowing that, even when our backs are turned, all of us colorists, editors, and post people are paying attention to every syllable you utter.

Two Ways to Highlight Keys in DaVinci Resolve

Here’s a small but useful tip I put up on Twitter, but given how ephemeral Twitter is, I thought I’d elaborate here. It’s about highlighting keys in DaVinci Resolve.

As of Resolve 7.1, there are two keyboard shortcuts for showing a highlight with which to evaluate the isolation you’re doing with either an HSL Qualifier (a key) or with a Power Window (shape), or even to view the interaction of the two. Shift-H for a regular highlight, and Control-H for a high-contrast highlight (both key shortcuts toggle the highlight on and off).

In this example, I want to isolate the highlights of the water in the following shot:

The original, ungraded image.

Assuming a sunny day, and a camera angle that’s near the surface of the water, lakes and ocean scenes tend to be two-tone, with highlights reflecting the color of the sky, and shadows reflecting the quality of the water (I plan on talking more about the color of water in a future post, it’s actually quite interesting).

By isolating the water highlights using an HSL Qualifier, I can manipulate the water color while at the same time keeping some interesting color contrast and interactions with the original color of the water shadows.

While I create and adjust my secondary qualification, there are two ways that I can preview the key I’m generating with a highlight. The default highlight that Resolve uses can be toggled on and off using Shift-H (this is also the default highlight you’ll get if you use the button on a WAVE) and shows the selected portion of the image with the original colors, and the unselected portion of the image with a flat gray:

The DaVinci Resolve default highlight.

While this view took a bit of getting used to at first, it’s grown on me, and I now find it really useful for getting some perspective on how the isolated portion of the image looks while I’m fine-tuning the key.

On the other hand, pressing Control-H shows what’s called a “high-contrast black and white” highlight (so named via a checkbox in the Settings tab of the Config page, which lets you change the default highlight that’s triggered by the Wave panel’s button):

A high-contrast highlight in DaVinci Resolve.

This high-contrast highlight should be familiar to you if you’ve used other color correction applications and plugins; it’s a more typical display wherein the selected portion of the image is white, and the unselected portion of the image is black.

I find this high-contrast highlight is useful in situations where I’m trying to eliminate holes in a key, or evaluate how “chattery” a key is since irregularities are easier to spot when divorced from the original image. For example, the black & white highlight makes it easier to see the unwanted top portion of the man’s head that’s gotten selected along with the water. I’ll want to do something about that…
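
Under the hood, both views are just different renderings of the same key matte. Here’s a minimal NumPy sketch of the idea (my own illustration of the math, not Resolve’s actual implementation; function names are mine):

```python
import numpy as np

def default_highlight(image, matte, gray=0.5):
    # Selected pixels keep their original colors; unselected pixels go flat
    # gray, mimicking the default (Shift-H) highlight view.
    m = matte[..., np.newaxis]              # (H, W) key -> (H, W, 1) for broadcasting
    return image * m + gray * (1.0 - m)

def high_contrast_highlight(matte):
    # Selected pixels render white, unselected black, mimicking the
    # high-contrast (Control-H) highlight view.
    return np.repeat(matte[..., np.newaxis], 3, axis=-1)

# A tiny 2x2 RGB "image" and a key matte that fully selects the top row
image = np.array([[[0.2, 0.4, 0.8], [0.3, 0.5, 0.9]],
                  [[0.1, 0.2, 0.1], [0.6, 0.3, 0.2]]])
matte = np.array([[1.0, 1.0],
                  [0.0, 0.0]])

print(default_highlight(image, matte)[1, 0])    # unselected pixel -> [0.5 0.5 0.5]
print(high_contrast_highlight(matte)[0, 0])     # selected pixel -> [1. 1. 1.]
```

In this sketch, partial matte values (a soft-edged key) blend toward gray in the default view, which mirrors why that view is handy for judging a key’s edge falloff in context.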

The great thing is, via either keyboard shortcuts or macro remapping to a multi-button mouse or other device, you have the option of easily and quickly switching between the two, or turning them off, as you see fit. It’s really handy!

My Book Made It to Korea

Thanks to my colleague Warren Eagles, who sent me a picture of Wonju Park, a Korean colorist and the author of a Korean-language book on Apple Color, who I’m told likes The Handbook.

Guess my book's got some competition in Korea!

All I can say is, awesome! If you happen to read this, thank you Wonju. I’d follow you on Twitter, but alas my Korean is nonexistent, and my attempts at automatic translation were humorously tragic. I hope we cross paths someday!

Eliminating Video Waste

It's Oscar season again...

My wife is an actress and a member of SAG, so every year around this time she gets a handful of SAG screening discs. For those of you not in the know, this is not quite as exciting as it sounds. You see, these discs are watermarked—to prevent piratical distribution—with sentences of text that appear over the picture every fifth scene or so, reminding you it’s a screener. So the excitement of “free movies!” is moderated by the downside of getting kicked out of one’s suspension of disbelief every so often by an irritating subtitle.

The only reason I bring it up is that this year, for the first time that I’ve seen, my wife has received a postcard offering a free iTunes rental of the movies that studio has for Oscar consideration. This is brilliant, primarily because the “free” DVDs sent out in the past weren’t anything we’d want to bother keeping. If it was a movie we’d want in our library, the last thing we’d need is to see those annoying subtitles during every viewing; I’d just buy a clean copy once the Blu-ray version came out.

By using iTunes rental distribution, the studio can keep their bits secure, my wife can watch the movies she might care to vote on, and I don’t have to feel guilty tossing unwanted DVDs into a landfill. I consider this to be very forward-thinking, and I must applaud the studios who are trying it out. The only disadvantage is that, for typical home viewing, one has to get an Apple TV (or have a Mac Mini or other iTunes-capable computer hooked up to one’s TV). At $99 this isn’t a massive imposition, but it’s still a drag if you’re an underemployed actor struggling to make ends meet while fulfilling your dreams. However, there’s always the option of renting on your iTunes-equipped computer.

It’s also been brought to my attention that Withoutabox.com has been allowing uploaded, online screeners (used by select festivals) for some time. I used Withoutabox.com in 2006 when I was submitting my feature Four Weeks, Four Hours to festivals around the world, and at the time I was crowing about being able to send a DVD instead of a VHS tape. However, the thought of how many hundreds of thousands of DVD submissions from indie filmmakers found their way into the trash makes me quail. The waste saved by online video submission ought to be tremendous.

Of course, one can only hope that the festival reviewers who are evaluating these submissions aren’t tempted to catch up on their review queue using their iPhone on the bus…

Fun With Television Framerates

I got a question from a friend of mine, and I thought it might be worth sharing my answer with a wider audience. He asks:

“What’s the short answer for why new 120Hz screens make films look like video? I don’t know if you have had a chance to observe this yet, but it will affect you because it makes everything look like the ‘behind the scenes’ footage on a DVD, or raw dailies. People seem to love it.”

Well, I can attest to the fact that not everyone loves it; in fact, the cinephile/home theater boards are filled with invective regarding how this feature despoils the cinematic experience, and I completely agree with them. I’m all about respecting the filmmaker’s intent regarding how they wanted the film to look, so whatever framerate they created their program at, that’s the framerate I want to watch it at.

The reason for the difference in “look” between 24p video viewed natively and 24p video that’s been converted via 120Hz digital magic is virtually identical to the difference between 24p film and 29.97 video frame rates. We’ve all grown up with juddery 24p frame rates looking “cinematic,” even though the motion sampling is, strictly speaking, pretty crude compared to what is now possible.

On the other hand, since the motion sampling of interlaced 29.97 video is effectively 60 fields per second (59.94, to be precise), “video” motion has traditionally looked much smoother, more “real life,” or more like a TV newscast.

The newer 120Hz displays use motion estimation to generate/interpolate new frames in-between the original frames of the 23.98 image stream on a DVD/Blu-ray, and so the “cinematic” motion of 24p is changed into the “non-cinematic” look we generally associate with video, all because of the introduction of a smoothness of motion where there was none before. The result, to my eye, is that classic motion pictures end up looking like a shot-on-video sitcom.
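
To make the mechanics concrete, here’s a toy sketch in Python. Real 120Hz sets use far more sophisticated motion estimation to synthesize genuinely new in-between images; this crude linear blend (with each number standing in for an entire frame) only illustrates how five output frames get manufactured for every one input frame:

```python
def interpolate_to_120hz(frames, factor=5):
    # Synthesize in-between frames by blending neighbors; a crude stand-in
    # for the motion estimation a real 120Hz display performs.
    out = []
    for a, b in zip(frames, frames[1:]):
        for i in range(factor):
            t = i / factor                  # 0.0, 0.2, 0.4, 0.6, 0.8
            out.append(a * (1 - t) + b * t)
    out.append(frames[-1])                  # last source frame has no successor
    return out

# Two source "frames" become a smooth six-step ramp:
print(interpolate_to_120hz([0.0, 1.0]))     # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

The point of the sketch: the smoothness that makes film look like video comes entirely from these manufactured frames, which is why turning the feature off restores the original 24p cadence.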

Incidentally, speaking for myself, I find that the reverse can also be distracting. I’m increasingly seeing 24p-acquired video used in programs like the PBS NewsHour with Jim Lehrer, the result being a somewhat “cinema” look within traditionally interlaced video programming, which I confess looks a bit odd. I’m just not used to it, and I believe this effect is based solely on what we’re used to.

It’s entirely possible that, someday, the next generation may get so used to 60p that 24p will be looked upon as quaintly as silent film or black & white (at least, if James Cameron has his way). However, there are so many advantages to the low bandwidth of 24p that I suspect, similar to interlacing, 24p motion sampling will be around for a long, long time. (And I’m not even going to get into the debate over the “intrinsic” cinematic value of shooting one’s projects 24p and 24p only; this particular article is about watching movies, not making them.)

My friend went on to reply:

I can see that showing the same thing 5 times would look different than showing me the thing, and a thing, then a half a thing mixed with half of the next thing. [My note: this is a fantastic description of 3:2 pulldown insertion] I just wasn’t expecting it to change the character of the images so much. Seems like the old way is closer to what it looks like in the theater. I wish my dvd-blu-ray player could just do 24 frames without the pulldown. You kids, give me back my vinyl 78s!

I suspect most of you already know what my reply is, but for those who don’t, I’ll enlighten you.

If you’ve got a good flat-panel display (television or projector), and especially if you’re using HDMI (and really, who isn’t anymore), you should be able to set up your player/display combo to play back actual 23.98 right now.

You usually have to enable the settings manually within your gear’s menus, but the DVD specification (and now Blu-ray) has always allowed distributors to author a DVD with an encoded 23.98 video stream—all players are supposed to do 3:2 pulldown insertion when necessary in order to display content on a non-24p-capable TV. If the TV can handle 23.98, then the player can send it directly via Component or HDMI.
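
For the curious, the pulldown cadence itself can be sketched in a few lines of Python (a schematic illustration, not any player’s actual code): each group of four 24p frames is spread across ten 60i fields.

```python
def pulldown_3_2(frames):
    # Spread 24p frames across 60i fields using the repeating 2-3 cadence:
    # each group of four film frames yields 2 + 3 + 2 + 3 = 10 video fields.
    cadence = [2, 3, 2, 3]
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

film = ["A", "B", "C", "D"]                 # four consecutive 24p frames
print(pulldown_3_2(film))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

When those ten fields are paired off into five interlaced frames, the third and fourth frames each mix fields from two adjacent film frames, which is precisely the blending that native 23.98 output over HDMI avoids.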

So there you go. If you get a new TV and your movies look like television news, do yourself a favor and disable that pesky 120Hz interpolation mode. You’ll be surprised at the difference.

Added 1/12/11—There’s an interesting thread in the comments. Nothing is ever simple! Also, it was pointed out to me that Tom Lehrer, the mathematician, songwriter, and satirist, does not in fact host the News Hour. That would be Jim Lehrer. Would have been funny if I could, in fact, use a strikethrough, but alas I for whatever reason cannot, so I’ve resorted to simply making the correction.
