Resolve 8.1–Do the Engineers at DaVinci Ever Sleep?

It seems like just yesterday that DaVinci released version 8.0.1 with its new color balance control interface, and DaVinci Lite, a free 2-node, HD-limited version. But, not content to rest on their laurels, DaVinci has eliminated a decimal place and announced DaVinci Resolve 8.1, with even more feature enhancements. Most are subtle but welcome additions for a variety of users.

Top of the list is enhanced AAF round-trip support. A new Format popup in the Export Session dialog (which appears when you click the Export button in the Conform page) lets you choose whether to export XML or AAF (previously it only exported XML). Choosing AAF generates a file for Media Composer that can be directly relinked to the media you output from Resolve.


Furthermore, when importing AAF files, Resolve now reads a variety of video transitions (dip to color, wipes, and iris transitions), composite modes, and transform parameters into your session. This will be welcome news for colorists wanting greater effects fidelity from project import.

Incidentally, transforms from Final Cut Pro (Position, Rotation, Scale) are now imported from XML projects, allowing you to render these effects using Resolve’s superior transform algorithms.

For those of you using EDLs day in and day out, a new contextual menu command (available from the Conform page’s Timeline) lets you load an EDL directly into a new track. If you’re the kind of person who needs this, you’ve just now come up with three different ways you’ll use this feature.

Next up is support for the new ACES (Academy Color Encoding Specification) standard. While comparatively few facilities need ACES right this second, it’s becoming clear that ACES is the future of media exchange in post, and DaVinci is being forward-looking by including it. ACES support appears initially as a new option in the “Color Science Is” popup of the Project tab.

With “DaVinci ACES” selected, two new popups appear in the LUTs tab, which let you choose an Input Device Transform (a characterization of the source camera) and an Output Device Transform (a characterization of the target display or projector).

Finally, a contextual menu item available from the thumbnails within the Browse and Color pages lets you redefine the Input Device Transform on a per-clip basis, for cases where you’re mixing media from different cameras.
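If you’re curious how those pieces fit together conceptually, here’s a rough sketch of an ACES-style pipeline, with toy placeholder transforms standing in for the real IDT/ODT math (an illustration only, not Resolve’s internals):

```python
# Conceptual sketch of an ACES-style pipeline: a camera-specific Input Device
# Transform (IDT) converts source media into a common scene-linear space,
# grading happens there, and an Output Device Transform (ODT) maps the result
# to the target display. The math below is toy placeholder math, not ACES.

def idt_example_camera(code_value):
    """Placeholder IDT: pretend the camera stored log-encoded values."""
    return 2.0 ** (10.0 * (code_value - 0.5))  # toy log-to-linear conversion

def grade(linear_value, exposure_stops=0.5):
    """Grading operates on scene-linear data, e.g. a simple exposure offset."""
    return linear_value * (2.0 ** exposure_stops)

def odt_rec709(linear_value, display_gamma=2.4):
    """Placeholder ODT: clamp and apply an inverse display gamma."""
    clamped = max(0.0, min(1.0, linear_value))
    return clamped ** (1.0 / display_gamma)

if __name__ == "__main__":
    source = 0.42                           # a code value from the camera file
    display_value = odt_rec709(grade(idt_example_camera(source)))
    print(round(display_value, 3))
```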


While we’re looking at per-clip options, a new contextual menu item for clips in the Media Pool lets you redefine the individual Data Level of clips. In previous versions of Resolve, the Data Level was a project-wide setting that determined which digital values in your source media mapped to the minimum and maximum levels in Resolve. In a nutshell, Y’CbCr source media typically used the “Normally Scaled Legal Video” setting, while RGB source media (think film scans) used the “Unscaled Full Range Data” setting.

However, you were in trouble if you had a mix of both kinds of media. Now, you can use this Media Pool submenu command to individually redefine clip data ranges in this situation.
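If it helps to see the arithmetic, here’s a small sketch of what the two interpretations imply for 10-bit media, using the standard legal-range code values of 64 and 940 (Resolve’s exact internal handling may differ):

```python
# Two ways of interpreting 10-bit code values as normalized 0.0-1.0 levels:
# "Normally Scaled Legal Video" maps 64-940 to black-white (typical Y'CbCr),
# "Unscaled Full Range Data" maps 0-1023 to black-white (typical RGB scans).

def normalize_legal(code_value, black=64, white=940):
    return (code_value - black) / float(white - black)

def normalize_full(code_value, max_code=1023):
    return code_value / float(max_code)

if __name__ == "__main__":
    for cv in (64, 512, 940, 1023):
        print(cv, round(normalize_legal(cv), 3), round(normalize_full(cv), 3))
```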

If you’re still working with DVCPRO HD media (I continue to have documentaries coming in using this format), you’ll be pleased to know that DaVinci has finally added the DVCPRO HD pixel aspect ratio to the PAR dialog box.

All these workflow features are nice, but there are also some solid additions for grading and effects. For example, if you hover your mouse over a node in the node graph, a tooltip appears showing you what adjustments have been made within that node, giving you an organizational heads-up.

Not impressed yet? Well, the Layer node, which until now simply combined multiple input nodes according to order and opacity (set via the Key tab’s Post Mixing Gain slider), now lets you choose a composite mode with which to combine all of the inputs. Now all you composite mode junkies can go nuts right from within the Node Graph.
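In case you’ve never thought about what a composite mode actually does, each mode is just a different per-pixel formula for combining two inputs. Here’s a tiny generic sketch using the standard textbook formulas (not necessarily the exact math Resolve uses):

```python
# Generic composite (blend) mode formulas, applied per pixel to 0.0-1.0 values.

def multiply(a, b):
    return a * b

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

def layer_node(inputs, mode, opacity=1.0):
    """Fold a list of pixel values together with one composite mode, the way
    a Layer node combines its inputs in order with a given opacity."""
    result = inputs[0]
    for nxt in inputs[1:]:
        result = result * (1.0 - opacity) + mode(result, nxt) * opacity
    return result

if __name__ == "__main__":
    print(round(layer_node([0.6, 0.7], screen), 3))    # brightens: 0.88
    print(round(layer_node([0.6, 0.7], multiply), 3))  # darkens: 0.42
```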

Another valuable little feature is the ability to copy one node’s settings (select the node and press Command-C) and paste them into an existing node in another shot (select the other node and press Command-V), along with all dynamics (marks, or keyframes, whichever you prefer to call them). This is similar to Final Cut Pro’s “Paste Attributes” command, except without any options; you simply copy and paste all settings at once.

Lastly, 8.1 continues to refine the Conform page’s Timeline, enabling you to copy and paste clips within it. Select a clip and press Command-C, then move the playhead, click a track number button to determine where the pasted clip should go, and press Command-V. It’s the little things, right?

So those were the features that grabbed my eye. There’s more to the release, but honestly, those engineers deserve a break. And a beer.



The Tangent Devices Element, An Evolution

I posted an article a few weeks ago with information about the new Tangent Element color correction control surface to be unveiled at IBC. Well, IBC is upon us (almost), and Tangent Devices has posted some photos of the new surface.

As you can see, it’s a modular design; the panels “click” together via hidden magnets in their sides and connect via USB, so you can arrange them any way you like.

All of this you can read on their web site, but I thought it would be fun to show you a bit of history of how these panels were first designed. When Tangent Devices contacted me a year ago, they sent me some photos of their first stage of design: foamcore mockups in which they tried to work out the ergonomics and overall desirability of the form factor.


To give an idea of what they were initially thinking, they also provided some theoretical desktop arrangements of the Element panels alongside other input peripherals.


Now, you can compare these to the form factor that’s actually shipping. The biggest difference is that they added 12 buttons to the transport panel, along with a single trackball/ring that’s meant to do double duty as a jog wheel/shuttle, depending on each vendor’s implementation.


I wish I could be at IBC to see folks’ reactions. Personally, I think they’re a great set of budget panels. Granted, the Wave and Avid MC Artist panels are cheaper, but these will satisfy anyone with a yen for more knobs and a higher-quality feel.

Furthermore, notice that every single control is dynamically labeled via monochrome OLED displays at the top. Granted, you can develop muscle memory for an unlabeled set of panels that you use day in and day out, but if you’re like me and you bounce around from task to task to task, with grading sessions sometimes separated by weeks, this is a big help.



The Insecurity of Media Creation

A strip from one of my favorite web comics—Scenes From A Multiverse

Please go to Jonathan Rosenberg’s comic site, Scenes From A Multiverse, and enrich yourself with his scalpel-like sense of humor. This strip made me laugh, and then gave me pause as I considered the possible futures of my own ventures. Then I bought a copy of his book to reward the part he’s played in filling the gnawing void within my soul.



Doing a Bit of Work On the Blog

My apologies if you’ve been trying to read something on my blog today, as it’s been in a somewhat unfortunate state of transition. I’ve decided to go for a more minimalistic look, and I’m trying out some different approaches towards readability. The current layout seems to be an interesting improvement.

What do you think? Leave me some feedback in the comments.



Free Isn’t Always Freeing

As a sometime producer of original narrative content, I’ve had an ongoing interest in online business models for content distribution. In particular, I’ve been trying to figure out how to move forward with an episodic series for the web for some time now. And by move forward, I mean solve the puzzle of how to pay for the help I need in its creation without bankrupting myself.

This entire article presupposes “budgeted” programming, where writers, editors, actors, and others involved with the production are getting paid real money so that they can at least pay their bills while spending six months to a year focused on creating a series of some kind. If you’re doing an all-volunteer project, then you have an entirely different set of fund-raising issues, and monetization is probably not the first thing on your mind.

Getting back to budgeted projects, I’m particularly interested in models whereby content producers can create media of ambition, where artists are paid for their efforts, free from the notes of sponsors, executives, and network ratings systems. Content where the audience itself is the sole judge of whether a show continues or not, by voting with their pocketbooks.

Then there’s that whole Free thing.

I’m not talking about piracy. I’m talking about the ongoing discussion regarding the merits of free distribution for promoting awareness of a piece of media, be it a song, a short movie, or a series. I’ve been reading a lot of chat about the importance, necessity, and some say the inevitability of free distribution in reaching a meaningfully large audience.

Clearly it’s a lot easier to get eyeballs on something if it’s free to access/watch/read. Along with this, today’s audience is settling into an “I want to watch it where I want, how I want” set of expectations.

Fair enough, I can’t really argue with that. Of course making something free makes it easier for a lot of people to watch. Paywall barriers are enormous disincentives if they’re not already associated with a “one-click” account that you’ve already set up (say, the iTunes store or Amazon.com). Heck, even unpaid password barriers are a disincentive; who wants to do all that typing?

However, I often see radio used as an example of how “free” has moved an industry forward. That’s a solid point, but I have to point out that radio has never really been free. You have to listen to ads.

Popularity in and of itself doesn’t pay for anything, unless the audience is buying the content, buying schwag, or buying access to live performances (for an episodic science fiction series, touring is probably not a realistic option). So, if you’re not charging your viewers, then most monetized free-to-the-public models of which I’m aware pay the bills with advertising, and then hope you buy something material like a t-shirt, DVD, book, or poster.

So yes, ads keep things free, and free stuff gets bigger audiences. But advertising necessarily results in selectivity of what will be aired to the public on the part of the advertisers. That’s a price. Let me put it thusly: when was the last time you listened to ad-supported music radio in your market? In another example, if you bemoan the difference in content available from network programmers, cable programmers, and premium cable channels, I think I’ll have made my point. (And don’t start blaming the FCC; they ban swearing and nudity, not bland writing).

Network programming is primarily ad supported and “parent-company” sponsored. No program is going to last long if it irritates the sponsors too much, or if it generates protests among a vocal-enough minority of the audience that they complain to the sponsors.

Even worse, no program is going to last if it doesn’t pull in the ratings necessary to justify how much advertisers are paying for those ads, regardless of its content or the passion of an audience large enough to make any internet series producer drool (ever had any of your favorite programs summarily canceled?).

On the other hand, lots of internet citizens hate ads. Some hate ads enough to endorse a total sponsorship model of content, straight out of the 1940s, where a single company pays for the creation of a program and then gets product placement and promotion that’s intrinsically associated with that program.

Financially, this solves a production’s budget issues nicely. Of course, instead of hoping you don’t lose half of your sponsors if you decide to write something controversial, now you have to focus on not losing your one and only sponsor.

Unless you’ve got the most open-minded corporate backer around, this is an extremely influential level of control over your content. Do you really think that a corporation that’s large enough to speculatively spend tens (or hundreds) of thousands of dollars on a program of ambition isn’t going to have opinions about its development? It’s their money, and they’re perfectly justified in wanting to make sure they’re being represented well, whether the script has anything to do with the company or not. Heck, I’d have plenty of opinions if it was my money.

This may be less relevant if your content is non-fiction, for example a podcast about post-production, or a series of cooking or home-improvement videos (although would any of those sponsored series say anything bad about their sponsor’s products?). But if you’re writing dramatic fiction, there are all kinds of decisions that may suddenly become important to a sponsor. Do you really need swearing in the script? Do you really need that many gay main characters? Isn’t the dramatic arc of the series a bit depressing? Are you sure that fifth episode really strikes the right tone? What if you made the series more all-ages appropriate? How about if the show all takes place inside of our store? How about if we did a different show altogether?

Let me put this another way. Would “True Blood,” “Weeds,” “Rome,” “Dexter,” “Deadwood,” “Nurse Jackie,” “Hung,” or “Game of Thrones” be possible in their current forms if they relied solely on corporate sponsorship?

You, the viewer, might not be paying money to watch sponsored shows, but you’re exchanging the possibility of watching a narrative as it was originally conceived, for a “sponsor-friendly” version that made its budget possible. Despite the freedom and democratization that internet distribution is supposed to engender, advertising and sponsor-driven content promotes an environment where only sponsor-friendly programming is conceived.

If paying for content is “old fashioned,” then ads and sponsorship will be the only way of raising enough money for a show that pays a writer, a director, a DP/camera operator, a sound recordist, some actors, an art director/set dresser/costumer, a couple of PAs, an editor/finisher/colorist, a sound designer/mixer, and maybe one or two VFX artists to do something interesting. And this, presupposing all of these folks are doing double or triple-duty to cover positions I haven’t included, is the minimum if you want to make something that’s even remotely polished.

Of course, you could save a lot of money by dispensing with all those people and producing something like this. And I’m not taking a cheap shot. That show I linked to is popular as hell, and I don’t even want to guess how low his budget is. And it’s free to watch; successfully ad-supported via YouTube’s partner program.

I’m not saying that advertising is evil. Advertising, in one form or another, subsidizes much of the content we all enjoy, content that employs tens of thousands of artists worldwide. However, it’s inarguable that advertising exerts influence. My point is simply to be mindful that you don’t ever get something for nothing, either as a content producer, or as an audience member. If viewers don’t want to pay directly for programming, then they’re losing their vote.



Element — A New Control Surface from Tangent Devices

While I was in the UK last month, principals Andy Knox and Chris Rose of Tangent Devices invited me up to their offices in picturesque Toddington (about an hour outside of London) to check out what they’ve been up to. Touring their offices, with prototypes and stacks of CP-100s (the last 3 they have in stock), CP-200s, Waves, and various parts, it was interesting to see the evolution of their designs from product to product.

They’d already contacted me almost a year prior to share some preliminary designs and mockups of a new control surface they’d been planning, and had shown me their first prototype that they were slyly sharing with select folks at NAB, but this was an opportunity to look behind the curtain and see what goes into the design of a control surface, from the inside out.

Tangent has named their new control surface the “Element,” and it presents some new thinking about creating a smaller, lighter, modular set of grading controls that, despite the size, feels robust and professional. Alas, I have no photos to share at this time, as they’re holding back for the big unveiling at IBC, but these panels not only feel great to the touch, they look fantastic.

They’ve not yet determined the final pricing, but I’m told they’ll sell for between $3,000 and $3,500. That will get you a base set of four panels:

  • One bank of 12 knobs (arranged three across by four down)
  • One bank of 12 buttons (arranged three across by four down)
  • One set of three trackballs, with ring contrast controls
  • One combination panel with a fourth trackball, transport controls, and another bank of 12 buttons

Each panel has a vertically angled OLED alphanumeric display on top that identifies each control, for dynamic labeling, along with shift buttons for stepping through different “menus” of parameters, depending on how they’re used by the software they’re controlling.

The panels themselves are thin, light, and strong, with durable metal construction. They’re small (there’s no wrist rest, reducing their footprint), so they’ll sit comfortably next to a keyboard and mouse or magic trackpad, and they’ve been designed to either sit loosely on your desk in whatever arrangement you prefer, or to “click together” magnetically to form a rectangular block of controls.

I’ve two uncles who are machinists, and my dad used to be a naval aircraft mechanic, so I’ve grown up around mechanical design enough to appreciate the details that go into something like this. Much thought has gone into the Element’s various details.

For example, the mechanisms of the push-for-detent rotary encoders (knobs) have been newly designed from scratch, and feel solid and smooth. These are NOT the Wave knobs; they’re completely different, and they actually feel nicer than the CP-200 encoders. A gel packed inside of each knob creates resistance that provides a smooth, expensive-feeling twist; in fact, they’re using the same gel that’s used by high-end audiophile gear (think of an expensive amplifier’s volume control) and professional telescope focus knobs. There’s a range of gels available, each enabling a different amount of drag, and Tangent tested a variety to determine the best “feel” for continuous use.

Furthermore, the buttons resting underneath each knob (for push to reset) were carefully chosen. Here again, they tested a grid of buttons from their supplier, each requiring different newtons of force, to find the best feel for the “click.” Not too light (or you’d accidentally push to reset all the time), but not too hard. I got to try the different buttons out myself, and the one I chose in my blind test was exactly one level away from the one they’re actually using (their choice was a tiny bit stiffer, to discourage accidental resets).

The push-buttons used by the Element panels will be familiar to anyone using Tangent gear; they’re the same ones used by the CP-100 and CP-200 panels (and the Wave, for that matter). I was told that most CP-200 users have reported they really like the buttons, and the failure rate has been incredibly small, so they’re sticking with a winning formula there.

Finally, the trackball/ring control encoder mechanisms are a completely new design, and are smoother and more precise than any of their previous surfaces. This is due both to improvements in optical sensors and to the evolution of Tangent’s mechanical engineering (in fact, a new rapid-prototyped encoder arrived while we were talking, and I got to see their latest tweak to the ring mechanism). Fans of ring-around-the-ball contrast controls should be thrilled: the rings are solid, with a nice smooth glide and good resistance to provide a controlled feel. The color balance trackballs are similarly refined, with smooth travel, and while they’re the same size as the balls found on the Wave (to keep the overall footprint small), they manipulate well. This is again due to a complete redesign of all components inside and out.

Having seen the development of these panels from the initial foam-core mockups through the first prototype panel, I can say that Tangent has put an enormous amount of thought into the Element panels, with numerous tweaks to rotary encoder and button spacing, knob shape, control arrangement, etc., to try and create the best experience. Ultimately, it’s impossible to know how comfortable they’ll be until they’re plugged into a system, but, in my hands-on with the prototype and various components, the fit and finish of the individual controls and the panels as a whole is top notch, especially at the prices they’re considering.

At the time of this writing I have no idea which software companies will be supporting these new panels, but I’m hoping the usual suspects will step up to the plate. Alongside the panels, Tangent is designing a new software control system that will make it possible to integrate nearly every panel that Tangent makes together under one SDK (including the Tangent iPad software). In theory, companies that are able to support the new Tangent control protocol would be able to support every panel Tangent makes. However, this will require reprogramming on their part.

Furthermore, the modular design of the Element means that it may be possible to purchase individual button or knob panels to add to the Element’s base set, if this is supported by a particular software application. This information is preliminary, so it’ll be interesting to see what is ultimately supported.

Bottom line, if you’re in the market for a compact control surface at an affordable price, you should wait to see the Element when it’s unveiled at IBC (it won’t ship until later). While it won’t replace the sheer scale and joy of a grading application’s custom-made, native control surface (for example Baselight’s Blackboard, DaVinci Resolve’s Control Surface, or the Quantel Neo), these large-scale control surfaces do cost 10-20 times more. The Element may be just the thing for a smaller shop on a tighter budget looking for a thoroughly professional set of controls, or for second room or on-set use where compact size and portability are important.

Keep in mind that this is not a review, as I’ve only seen prototypes on the workbench, not shipping units. As always, wait until you’re able to get your hands on one of these before making any kind of decision. Control surface preference is an intensely personal thing; everyone has their own reasons for liking one over another. That said, I think the Element will fill a need that’s not yet been addressed, at a price and level of quality that will be surprising.



Resolve Lite and the Color Correction Handbook

The new color balance UI implemented in DaVinci Resolve and Resolve Lite

I try to refrain from pimping my book too much here, but now that DaVinci Lite has become freely available to colorists and post professionals one and all, I thought the time was ripe to point out that DaVinci Resolve Lite has finally caught up with the Color Correction Handbook’s primary teaching interface.

Why am I mentioning all this now? Because if you’ve just downloaded Resolve Lite and asked yourself, “now what?” then you’ll really like the Color Correction Handbook’s methodical and in-depth explanations and demonstrations of how to use color correction applications, how to make many of the practical grades you’ll be using every single day, and why they work.

Since the vast majority of color correction interfaces (Baselight, Scratch, Film Master, Colorista, FCP7 and Premiere Pro plugins, etc.) employ color balance controls, more colloquially referred to as “color wheels,” that’s the interface my book uses to demonstrate all principles of color rebalancing and contrast manipulation.

Furthermore, since most serious colorists eventually get themselves a control surface (they’re more affordable than they’ve ever been), every example showing color balancing also shows how you’d make the same correction using the familiar trackballs-and-rings interface of a hardware panel. Here’s an example from the book:

Now, even though color balance controls and control surface UI are all generally similar, there’s enough difference so that I didn’t want to necessarily focus on any one particular application, so I created a generic illustrated UI for showing examples in the book. The problem was, DaVinci’s onscreen UI was vastly different, a bar graph interface designed for use with a dedicated control surface. That meant if you had a mouse-only setup, learning Resolve would be a bit of a pain.

No more. The very latest versions of Resolve and Resolve Lite now sport a conventional color balance control interface that’s completely relatable to the dummy UI I present in my book. Furthermore, while the first half of the book is tailored to beginners who need to focus on first principles, the second half of the book grows into advanced topics that guide you towards effective use of qualifiers, shapes and windows, and other advanced techniques.

While many different grading applications are shown, DaVinci Resolve is featured heavily, with many examples shown within the DaVinci Resolve interface.

So, if you want to learn how to grade using DaVinci Resolve or DaVinci Resolve Lite, you should consider the Color Correction Handbook, which is available right now. With healthy dollops of both theory and practice, you’ll learn how to make the adjustments you need quickly, why you’re making them in the first place, and you’ll be getting a cookbook of intermediate and advanced techniques that you’ll be using for the rest of your career.

For a free look at the book, go to the Peachpit.com store, and open the Sample Content tab for a downloadable version of the chapter covering HSL qualification.

Available on Amazon.com in print

Available on Amazon.com for Kindle



Gamma Mea Culpa

This post is a rewritten version of a Creative Cow post I made on July 25th, 2011.

Since the Color Correction Handbook was published, a single error of one tenth has emerged to cause me some small regret. I’m referring to the display gamma value I cite for digital Rec. 709 displays in chapter 1 of the book. I’m an advocate for standards in all of the postproduction work that we do, and it pains me that in attempting to clarify a confusing issue, I’ve inadvertently added just a dash more confusion.

At the time I was writing chapter 1, I was going by information regarding the inverse gamma encoded by Rec. 709-compliant cameras, and so the display gamma I derived was 2.5, which fit with my understanding at the time. This value wasn’t challenged by anyone, and it was difficult to find a quoted gamma standard for Rec. 709 that was mathematically derived, so this is what I went with. Unfortunately for me, in light of new information it appears I was off by 0.1 in that section.

Since publication, it has been brought to my attention via Charles Poynton’s own open letter to the industry that there has, in fact, never been a formal display gamma standard for Rec. 709, as gamma was an implicit characteristic of CRT displays that was simply “built in.” The actual gamma employed by CRT displays is quoted by Poynton as falling between 2.3 and 2.4, but this was never a published standard.

Of course, now most colorists putting together brand new suites are getting digital displays of one kind or another, and these digital displays lack the inherent gamma characteristics of CRTs. For a variety of reasons that Mr. Poynton explains far better than I ever could, gamma is still a desirable and necessary characteristic in a display (even computer displays have incorporated gamma settings for as long as flat panels have been available). However, the lack of an easily referenced, published standard is confusing.

In his open letter, Poynton advocates for a published display gamma for digital broadcast displays of between 2.35 and 2.4 (he seems to be hoping that SMPTE will pick one), and a peak white of 80–120 cd/m².

It’s worth pointing out that the display gamma for projected digital cinema is yet another value, 2.6 (with a peak white of 48 cd/m²), but that assumes a completely dark viewing environment. My understanding is that higher display gamma settings represent scenes better in darker environments, whereas lower display gamma settings represent scenes better in brighter environments (which explains sRGB’s gamma standard of 2.2 for lit office/computer environments).

With that rationale, 2.4 falls in the middle for a muted “evening living room” environment. This all reinforces the importance of a carefully controlled viewing environment, where your display settings match the characteristics of the display surround, for doing color-critical work. Personally, I hope Poynton is successful and that a single gamma standard is published, as this is a confusing topic that engenders a lot of disagreement and doubt.
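For reference, all of these numbers plug into the same simple power-law display model (a simplification that ignores the finer points of the actual standards); here’s a quick sketch of the effect the exponent has:

```python
# Simple power-law display model: luminance = peak white * (signal ** gamma).
# The gamma and peak-white values below are the ones discussed above; the
# 100 cd/m2 broadcast figure is just a value from Poynton's 80-120 range.

def displayed_luminance(signal, gamma, peak_white_nits):
    return peak_white_nits * (signal ** gamma)

if __name__ == "__main__":
    mid_signal = 0.5                          # a 50% video signal
    for label, gamma, peak in (("broadcast, dim surround (2.4)", 2.4, 100.0),
                               ("digital cinema, dark (2.6)",    2.6, 48.0),
                               ("sRGB, bright office (2.2)",     2.2, 80.0)):
        nits = displayed_luminance(mid_signal, gamma, peak)
        print(f"{label}: {nits:.1f} cd/m² from a 50% signal")
```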



Just When You Think You’re Being Original


Excerpt from a stained glass window in Prague, designed by Alphonse Mucha.

You take a trip and realize that someone beat you to it a hundred years ago…

This last winter my wife Kaylynn and I took a trip that ended in Prague. As we toured the city, we went to the cathedral within Prague Castle, St. Vitus Cathedral (St. Vitus being the patron saint of dancers, actors, and epileptics). It’s a magnificent structure, and I recommend it highly. As I walked around, I came upon a truly stunning stained glass window by Alphonse Mucha, one of my favorite artists (the window is excerpted above).

It’s a stunning work, filled with light and narrative, but one of the things that surprised me was the color scheme he employed. A look at the window in its entirety shows that the color scheme uses two rings of what I will crudely refer to as colored vignettes. Click the image to get a better look.

The reason I bring this up is that colored vignettes are a technique I covered in my Color Correction Handbook, as a way of creating a visually interesting treatment when you need to come up with something surreal. Of course, in the grading suite we cheat by simply drawing a shape to define the vignette, soften the heck out of it for a gentle transition, and then use either color balance controls or curves to create a wash of color over the edges, or within the middle (or both, if you’re really daring). Here are the two examples from my book.

The examples I use are really no comparison, except for the general concept of vignetting with color, rather than light and darkness. Still, it’s sobering to realize that whatever look you’re tinkering around with, some painter or another probably did the same thing a hundred or more years ago. Not that this is a fair comparison, since painters create their scenes from whole cloth, with total control over the art direction, composition, and palette. We film and video colorists get to work with what we’re given from the art and photography departments. Still, there are plenty of lessons to be learned, and ideas to be had.
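For anyone who wants the mechanics of that cheat spelled out numerically, here’s a rough sketch of a softened shape with a color wash over its outside (purely illustrative; in a grading app you’d just draw a soft Power Window and push the color controls):

```python
# Rough sketch of a "colored vignette": a soft elliptical mask, with a color
# wash blended into the image outside the shape.
import numpy as np

def soft_ellipse_mask(height, width, softness=0.35):
    """1.0 inside the ellipse, falling off smoothly to 0.0 toward the edges."""
    y, x = np.mgrid[0:height, 0:width]
    ny = (y - height / 2.0) / (height / 2.0)   # normalized -1..1 coordinates
    nx = (x - width / 2.0) / (width / 2.0)
    r = np.sqrt(nx ** 2 + ny ** 2)
    return np.clip((1.0 - r) / softness, 0.0, 1.0)

def colored_vignette(image, tint=(0.1, 0.2, 0.5), strength=0.5):
    """Blend a tint into the frame, weighted toward the outside of the mask."""
    mask = soft_ellipse_mask(*image.shape[:2])[..., None]
    wash = np.array(tint).reshape(1, 1, 3)
    outside = strength * (1.0 - mask)
    return image * (1.0 - outside) + wash * outside

if __name__ == "__main__":
    frame = np.full((1080, 1920, 3), 0.5)      # stand-in for a real frame
    print(colored_vignette(frame).shape)       # (1080, 1920, 3)
```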

Getting back to Mucha’s window, there’s another way to look at this example. If you click to enlarge the whole window, you’ll notice that, instead of simply washing color over the subjects indiscriminately in order to create the color scheme, the colors are built into the design of the costumes and items within the scene. Dark blue cloaks and shawls give way to light blue robes and props, giving way to green costumes, trees, and trim, which in turn surrounds a central region of yellow, gold, and red robes, chests, and blooming flowers.

Now, this is partially because each colored element needs to be an individual piece of glass, but I think an interesting lesson for the film and video colorist is that there may be circumstances where the art direction and composition of a scene let you create the equivalent of a colored vignette by using HSL Qualification or hue curves to selectively isolate and emphasize rings of individual elements surrounding a central subject, be it foliage, sky and/or water, or architectural elements. The resulting correction will be at once a bit more naturalistic and highly stylized.

That’s the reason I love traveling. You never know what you’re going to see, and how it’ll relate to the work you do. (Oh, and Czech beer is fantastic, too!)



What I Came Home To

I’ve been out of the country, presenting at the SuperMeet, dashing about the UK having meetings and doing training, and then finally crossing the channel into France for a much-needed vacation in Saint-Malo and Paris. More on all that later.

For now, I’m rather pleased that I’ve finally gotten my new Panasonic 55″ VT30-series plasma delivered to my Saint Paul suite. Everything is now mounted, hooked up, and working. In fact, I’m so pleased with myself that I wanted to show off with the following photo.

Gray monitor surround, D65 backlighting, scopes and UI comfortably placed, and my new-ish DaVinci Resolve control surface dominating all available desk space. It’s a grading suite, all right.

To use a blatantly nerdish analogy, building out a new suite is like a Jedi assembling his/her own lightsaber. It’s always a custom job, you always try to make the new one better than the last one, and you’ll likely be doing much of the work yourself. This go-around, I’ve built a really comfortable, utterly professional suite that I’ll be very happy to work in for the foreseeable future. It’ll also serve me well as a color laboratory for my various other writing, training, and consulting projects, and will be a nice hub for controlling remote-grading Resolve gigs, should that come up.

I was a bit concerned that the 55″ display would be too big for the space, but it’s turned out to be perfect. My current room is too small to make projection practical, but factoring in the viewing distance, this is virtually a projection experience for myself and two clients as is. It’s probably worth mentioning that my initial impressions of the VT30 series displays are very favorable; they’re thin, relatively light (for a plasma), and the THX color mode should make lots of home theater aficionados quite happy. I’ve yet to calibrate it fully (via a LUT loaded into the HDLink Pro that’s converting the HD-SDI to HDMI), so I’ll likely have more to say once I’ve gone through that process, but so far I’m really liking what I’m seeing.



Music Videos and EPIC Go Together Like Chocolate and Peanut Butter

The Diamond brothers approached me last month about a music video they were about to shoot for Jackson Harris, and while I always enjoy grading music videos, another big draw was the fact that they had just taken possession of a RED EPIC, and I was eager to see how the media would handle.

When the media drive arrived and I loaded the project into Resolve (I used the Resolve v8 beta for this project, with zero problems), I had a phone chat with Josh Diamond to discuss their visual goals, and then jumped straight into the grade on a handful of shots to see what I could get away with before sending some test stills back for early approval.

The video was shot at 5K, with cinema prime lenses (I was told but I forget which ones), and excellent lighting, so the source material was great to start out with. The brief was for a look that both flattered the singer and kept him front and center during the entire video.

Ordinarily I don’t go cuckoo for secondaries, but this was a project that benefitted from tight saturation control of specific elements throughout the scene, so I let myself go. The potential issue was the fairly subdued palette of the art direction throughout the piece. This was by design, but I really wanted to, as best as I could, create a subtle but distinct separation between the talent and the environment.

This was where the EPIC R3D media really shined. In general, I found that if I could click something, then I could key it, even in shots with the actors’ skin tones against an orange background. And not only could I key such small differences between analogous hues, but I could key them cleanly.

A stack of parallel nodes isolating different elements.

While I’m usually an advocate of trying to take care of as much business in the initial primary grade as possible, I’ll freely admit that for this project I keyed every single skin tone throughout the entire video. And in just about every shot, the result was a solid, controllable matte. On top of that, I did a lot of selective manipulation of luma within secondary grades. Again, this is something I rarely even attempt if I’m working with highly compressed media; the results usually expose too many compression artifacts to be worth trying. With the EPIC R3D footage, despite the fact that it is a compressed source format, I was able to get away with pretty much any isolated adjustment I wanted to. It was tremendously freeing.
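For those who’ve never peeked under the hood of an HSL qualifier, the basic idea is just a soft-edged matte built from per-pixel hue, saturation, and luma ranges. Here’s a generic sketch of that idea, with made-up range numbers (not Resolve’s actual qualifier math):

```python
# Generic sketch of HSL qualification: a matte from soft hue/sat/luma ranges.
import colorsys

def soft_range(value, low, high, softness):
    """1.0 inside [low, high], ramping down to 0.0 over 'softness' outside."""
    if value < low:
        return max(0.0, 1.0 - (low - value) / softness)
    if value > high:
        return max(0.0, 1.0 - (value - high) / softness)
    return 1.0

def skin_tone_matte(r, g, b):
    """Return a 0-1 matte value for one RGB pixel (0-1 floats); the ranges
    below are illustrative guesses at an orange-ish skin-tone key."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return (soft_range(h, 0.02, 0.11, 0.04) *   # roughly orange hues
            soft_range(s, 0.15, 0.80, 0.10) *
            soft_range(l, 0.20, 0.85, 0.10))

if __name__ == "__main__":
    print(round(skin_tone_matte(0.85, 0.62, 0.50), 2))  # skin-like pixel -> 1.0
    print(round(skin_tone_matte(0.20, 0.30, 0.70), 2))  # blue pixel -> 0.0
```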

Image segmentation with a bit of digital relighting.

Other things I could point out include a significantly higher latitude for contrast adjustments; this wasn’t HDR media (I just had the one exposure to work with), but I was able to make pretty much whatever adjustments to image lightness I wanted. Noise in the shadows was low to virtually nonexistent. And the color I was drawing out was consistently rich and free from artifacts.

Feeding a single tracked Power Window to multiple correction nodes to tease out some light.

So that was my EPIC experience. Having a RED ROCKET card certainly made things easier, but I disabled the card to see what the performance would be on my 8-core 2010 Mac Pro, and with quarter-rez debayering selected in the Source tab, my performance was perfectly acceptable. Of course, with the RED ROCKET turned on things went much faster, especially when it came time to render the final result.

One last funny tidbit. I was so eager to take a look at the media when I first got the hard drive via FedEx that I loaded the project straight from the bus-powered LaCie FireWire 800 drive I was sent. Unexpectedly, it turned out there was plenty of bandwidth to work in Resolve in real time via my RED ROCKET, even though it was EPIC media and the R3D files were somewhat larger. Bandwidth was fine as long as I was only reading a single stream from the drive; when I tried to render back onto the same drive, my performance dropped precipitously. I eventually copied the media onto my local array, but it was actually pretty great knowing I could just work off the drive in a pinch.

Without further ado, here’s the video.



Once Upon a Time in London

Preparing for my trip to London on the week of June 20th made me reflect on the fact that my very first post on this blog was about another trip to London; a trip during which I pitched a feature/web series project to the director of development of a storied English production company. At the time I was still waiting to hear back, so I didn’t go into specifics.

However, enough time has passed that it seems time to tell the tale. It was a fantastic experience, and in particular involves a funny story about using the wrong tools for the job, and my unveiling of the animatic I created, posted via Vimeo down near the bottom.

Getting this pitch meeting was a four year odyssey of networking, yearly trips to London, and the various slings and arrows of project development. However, once I was over the initial hump of getting my contact at the company to read my script after sitting on it for the previous year, and was reasonably sure that I might actually get a crack at actually getting a meeting, I commissioned a pile of artwork to support my writer/director pitch. The project is a period, gothic adventure–horror tale, with some nice action set pieces. My short filmography doesn’t exactly include an action film, so I wanted to make it clear that I’m fully capable of directing thrilling sword fighting scenes (being a fencer myself, I have a bit of insight).

I hired illustrator and storyboard artist Ryan Beckwith to turn my chicken-scratching thumbnail storyboards into nicely-illustrated presentation boards for the first three scenes. I commissioned concept art from Bay Area painter Anna Noelle Rockwell, who also did a series of costume illustrations for the main characters. I had a practiced pitch. I was loaded for bear.

And then I waited. For various reasons, my interactions with this company were on a yearly cycle, and I had plenty to work on in-between meetings, so I stored my cache of artwork and attended to other business. Once in a while I’d mull over whether or not to convert my presentation storyboards into a whiz-bang animatic, but I decided to skip it, thinking the comic book style presentation of my boards might be more fun to browse. Besides, Ryan and I were already up to our eyeballs planning Starship Detritus (update—the final coloring is nearly done, and we’re going to begin animating shots again for the pilot), so it was easy to relegate to the back burner.

So, as is the way with these things, I got the nod for the actual meeting at the last minute, pretty much a “fly out here in four days and you might get a chance to present” kind of deal. I threw everything together, bought some little easels for the paintings, and re-rehearsed my pitches. Again, I wondered, “should I put together an animatic?” but there was no time.

So, there I am in London, at an afterparty for the event that brought me out there, having gotten an actual appointment for my pitch meeting. I’m at a pub with another producer as well as my initial contacts at the production company, drinking and chatting about pitches, and they start talking about how great it is to pitch with an animatic. And there I sit, feeling like an idiot, since I had a whole year to put something together and I didn’t.

The day before my meeting, I took a walk in St. James’s Park, mulling. I’d brought JPEG scans of the boards on my MacBook Air; I could maybe whip something together, but I didn’t have Final Cut Pro installed because the Air was my writing machine. However, I did have Keynote. Could I do this in Keynote? Absurd! But that night, back at my hotel, I started poking at it, and sure enough Keynote had some rudimentary slide-timing tools for autoplaying a presentation. I started putting something together.

Calling my wife, Kaylynn, back in the states, I had her email me a few Yoko Kanno tracks from my iTunes library that I knew, from memory, would fit what I wanted (Yoko Kanno was even part of my pitch, as I would have loved to have her score the project). In a fever, I put the whole thing together and tuned it up by 4am, then caught what little sleep I could manage.

The next morning, while packing up for the meeting, I took one last look at my hacked-together Keynote animatic. Was I crazy? Would this fly? I watched it, and, well, it was fun! Not perfect, but it would be a heck of a lot more interesting than flipping through my stack of boards.

With a song in my heart, I went to the meeting. What I thought was going to be a 15-minute quick pitch ended up being a fully engaged hour-and-a-half meeting. I was ON FIRE, and the executive I was meeting with seemed interested. He had his hesitations, but there was a genuine back and forth. And he watched the animatic. All of it (and I did have my finger on the stop key, looking for the slightest sign of boredom, which, thankfully, didn’t appear). What follows is an h.264 movie of my Keynote presentation, and while the timing isn’t quite the same, it’s pretty much what I presented in London (vaguely NSFW, I suppose).

After my return, it took four months for them to finally pass on the project, but I’m philosophical about the experience. I’m glad I got the opportunity to make the big pitch I’d prepared for, and for the record, this script isn’t dead. It’s back on my stack again, but I’ve a plan to rework it within a new context when the time seems right. I’ve had too much fun to give up on it now.

My apologies to Yoko Kanno for the unauthorized use of her tracks. However, if you like what you hear, and you enjoy soundtrack music, you owe it to yourself to check out her work, which is eclectic and wonderful.



How I Stopped Worrying and Learned to Love Grade-Linking

Here’s one for my Resolve-using brethren. I’ve been trying for some time to come to terms with Resolve’s wonderful, yet at times terrifying auto-linking feature. I’m talking specifically about the fact that, when you first conform an EDL to a Media Pool full of files, any session’s clips that are linked to the same Media Pool file are automatically linked together in the Timeline, such that changes to one clip are automatically rippled to the other linked clips.

It’s all fun and games if you’re grading well-managed media, with strictly delineated coverage and no lighting changes in the middle of an angle (or, ugh, a take). For well-shot, well-organized projects, auto-linking is a GIGANTIC time-saver, and I love me some saved time.

However, if you’re editing documentary footage, or working on a project where multiple angles of coverage are combined into mixed content files that defy logic, then this mechanism can be more trouble than it’s worth. I’ve talked to more than one colorist who, upon conforming any EDL, immediately uses the Batch Unlink command to force all clips in a session to use Local grades, which are never linked among clips. In this way, it’s possible to work strictly clip by clip, linking only via manually created groups of your own choosing.

Another approach that I’ve used is to continue using the auto-linked Remote grades, creating new Versions of automatically linked clips that I need to individually tweak. You see, when you create a new Version, it’s no longer linked to the other clips and you can grade independently. The only problem with versions is that media management mechanisms such as the ColorTrace tool automatically link to the “Default Version,” regardless of the version you had set for that clip. If you never copy grades from a session in one project to a session in another project, then this is irrelevant. Until later you decide you want to…

However, I like auto-linking, and I want to use it until I exhaust its timesaving possibilities.

Which, it turns out, is easy to do.

Simply put, when I first conform an EDL, I go ahead and use the auto-linked Remote Versions to start grading. Each grade I make ripples happily out among sensibly-linked clips, and if there are any linked clips that I want to make an individual adjustment to, I create a new Version for that clip and do what I need to do. In this way, I rough out all of the grades on my first pass. At this point, I’m not looking for seamless shot-matching, I just want to get the major grades assembled and distributed out amongst as much of the timeline as possible.

THEN, when I’ve hit the wall in terms of what I can do with the linked grades in that session, I do the magic thing—Batch Copy.

Right-clicking the Thumbnail Timeline and choosing Batch Copy; it's like having my gravy and drinking it too.

Unlike Batch Unlink, which switches every clip in the session to a blank Local grade (you can switch back to your grades using Batch Link, by the way), Batch Copy copies every clip’s current version to a new Local version, which is now (by definition) unlinked. Every clip has exactly the same grade that it had before, except that now none of them are linked, which means I can start digging into the nitty-gritty of making all the tiny manual adjustments that will make each scene play smoothly.
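If it helps to picture the bookkeeping, here’s a toy model of remote versus local grades and what Batch Copy does to them (my own simplified illustration, not Resolve’s actual data model):

```python
# Toy model: remote grades are shared per source file and ripple to every
# linked timeline clip; local grades belong to a single timeline clip.

class ToyProject:
    def __init__(self):
        self.remote_grades = {}   # media pool file -> grade shared by all uses
        self.local_grades = {}    # timeline clip id -> independent grade

    def set_remote(self, source_file, grade):
        self.remote_grades[source_file] = grade      # ripples to linked clips

    def grade_for(self, clip_id, source_file):
        # a local grade, if present, overrides the shared remote grade
        return self.local_grades.get(clip_id, self.remote_grades.get(source_file))

    def batch_copy(self, timeline):
        # copy each clip's current grade into its own unlinked local grade
        for clip_id, source_file in timeline:
            self.local_grades[clip_id] = self.grade_for(clip_id, source_file)

if __name__ == "__main__":
    p = ToyProject()
    timeline = [("clip1", "A001.mov"), ("clip2", "A001.mov")]
    p.set_remote("A001.mov", "warm look")            # both clips pick this up
    p.batch_copy(timeline)                           # now each clip owns a copy
    p.local_grades["clip2"] = "warm look, lifted shadows"   # tweak one clip only
    print(p.grade_for("clip1", "A001.mov"), "|", p.grade_for("clip2", "A001.mov"))
```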

To summarize—I start my first pass using Resolve’s auto-linked Remote grades, and then I Batch Copy to a new set of Local grades that I can individually tweak without worrying about rippled grades causing problems.

For the last few projects I’ve done, this has worked really well, and it seemed worth sharing.



Don’t Make My Tools Easy. Make Them More Fun.

An Open Letter to Developers of
Software Tools for Specialists

I’ve worked with a number of software companies over the years, on various applications invariably relating to media creation. I’ve never been able to settle on which is more fun, participating in the development of new tools, or using those tools to actually create media. In my case, I like both too much. It’s just so satisfying to make a suggestion to an engineer, and watch it get implemented knowing that the lives of all who use that software will get a little less tiresome.

The great thing about software is that, hardware limitations aside, you can make things work pretty much however you want, if only you’re clever enough to figure out how. Whenever you hear someone from a software company say “we can’t do that,” what they’re really saying is:

  • “We don’t have the time to figure out how to do that.”
  • “We don’t want to do that.”
  • “We could do that, but your performance would be
    so horrible that you’d wish we hadn’t.”
  • “We’re planning on doing that, but that’s number 287
    on a list of 500 planned feature requests.”

To be fair, these are all legitimate issues. However, when a team of developers decides it’s time to try and add a particular new feature, there’s tremendous freedom of implementation. You can design a feature to take as many or as few steps as you want, use exotic input devices or simple keystrokes, take advantage of pre-built interface widgets or design new ones yourself, in the eternal struggle to second-guess the user’s preferences for how to do things. Which brings me to the reason for bringing this up in the first place.

You see, developers have a dilemma. To make an application more easily “discoverable” to new users, you generally have to create a more obvious user interface, with highly visible controls that are obviously labeled, that lead to activities that are easy to figure out and accomplish through trial and error, and constrained enough to keep the user from doing anything too insane that might result in catastrophic failure. The result is often software that’s drop-dead simple to use (one hopes). This is a good thing for folks who want to sit down and get something simple done without a lot of hand-wringing, and there are ample examples of this type of software UI that I like and use.

On the other hand, if you want to make a software application capable of complex functionality that’s fast and efficient, that’s doable as well. Design an interface that does away with needless mouse clicks, create a fast way to trigger functions (often keyboard shortcuts) that invoke specialized, streamlined workflows with open-ended functionality that impose as few restrictions on the user as possible, with as much variation and flexibility packed into as few interface widgets as is feasible. The result can be software that’s incredibly powerful and fast to use, but inscrutable to the point of presenting a dauntingly steep learning curve. This is a good thing for power users who are willing to put the work in, intent on rocking their software like Jimmy Page playing the guitar, blasting through creative work while thinking up new ways of problem-solving.

My point is that “easy” UI and power-user UI are almost mutually exclusive. Finding a way to merge the two is, I believe, the great challenge of our age of software design. That’s because simple software, while fast to learn and easy to use, becomes frustrating once you’ve plumbed its depths and reached its limits, forcing an excessive number of mouse-clicks and other laundry-lists of steps that, while making those tasks easy to learn, now turn every simple thing you want to do into a chore.

On the other hand, software for power users can be maddeningly frustrating to learn. Without someone to show you the ropes, enrollment in a class, or the patience to plow through user documentation (please god let it be well written), you may sit there with your expensive new software application clicking and pressing keys until the sun goes down without any clue of how to proceed. Even worse, it could take you days or weeks of this kind of torture until you finally become proficient enough to get done what you need to do without too much hair-pulling and google searching. That said, after a painful apprenticeship, you learn the magic keyboard shortcuts and mouse gestures, at which point you spend the rest of your career with that software flying through your tasks with joyful precision, with others gaping slack-jawed at your wizardry.

In terms of user interface design, it’s really, really hard to accomplish both: discoverable software that also allows you to fly through tasks as a power user once you’re ready to ditch the training wheels. I sometimes fear this Sisyphean task may be well nigh impossible, but I don’t want to believe that because, well, it’s software. We can design things any way we like, and maybe if someone clever enough and wise enough comes along, the feat could be accomplished.

In the meantime, software for postproduction finds itself split between these camps. Simple to use software that’s frustrating to do big projects with, and power-user software that’s time-consuming to learn. This is the great challenge of software development as I see it, and is an honorable undertaking for any developer.

However, somehow another notion has crept into the collective unconscious of the software development community. This notion is that people want their tasks to be made easier.

Allow me to clarify, because this is important. It’s one thing to say, “I’m creating a music application, and I want the software to be easy to use for writing music.” It’s entirely another thing to say “I want to write a software application that makes writing music easy.”

To use another example I’m most intimately acquainted with, color correction is difficult. You have to learn obscure things. As a result, it’s tempting to want to develop software so that, with the click of a button, any shot is auto-magically corrected to look amazing, and then the lucky amateur can move on to something much more worthwhile, like drinking beer. And before you accuse me of beating up on any particular application, I’m not; every contemporary grading, editing, and photo manipulation tool I can think of has some manner of auto color-correction functionality. It’s a universal aspiration.

Now, I’m not knocking simple tools for non-specialists. That’s like complaining about automatic transmissions in cars (and I’ll freely admit that I prefer driving automatics, I’m no race car driver). One-click auto correct, template-driven video editing and compositing, audio auto-leveling and auto-mixing functions, auto-tune and quantization for music, and many other auto-magical features are wonderful things and allow folks that lack specialized skills to create interesting media. I use many of these myself, and I’m glad to have them when the time is right.

However, we specialists want more than one-click solutions to our problems. Frankly, those of us who are looking for the experience of “driving stick” within a particular domain of software are so inclined because we don’t want anyone telling us what to do. We want to find original solutions to our creative problems, or at least to imagine that we’re coming up with our own secret-sauce version of whatever it is we’re trying to accomplish. In the process, we want to exercise multiple iterative variations, we want to make tweaks both gross and minute, and we want to do all of this as quickly as possible with the least amount of wrist strain, since we’re doing whatever it is we do all day, all week, all month, and if we’re financially lucky, all year.

As a specialist myself, I speak for other software specialists, and not for casual users, when I say that I don’t want software to make things easy. Because the only way to make a task easy is to tell someone how to do it, or do it for them.

Using a pencil is easy. Any two-year-old can use the hell out of a pencil. Using a paint brush is easy, you just dip it in paint and start smearing. Swinging a hammer is easy. When I was seven and my dad was building a house, he gave me a hammer, a pile of nails, and a piece of two by four and I was a hammering fool.

You know what’s not easy? Writing a novel. Painting a portrait. Chiseling a sculpture. For all of these activities, the user interface is unbelievably easy, but the task itself is hard. It requires knowledge. It requires practice. It requires skill. Perhaps a bit of talent if you’re so lucky.

An easy-to-learn UI, sure, but does it make the task itself easier? Should it?

And that’s great. The whole point of art, as I see it, is the doing of it. The process, the act of learning, growing, figuring out how to make the results more interesting, more exciting, more effective and unique.

I’m going to wax idealistic for a moment. Nobody ever set out to become an editor of film and/or video because they said to themselves “I think I can make a pile of money editing corporate communications videos.” People become editors when they’re exposed to the process of creating meaning by juxtaposing one shot next to another, and get hooked. They want to figure out how to do it better. They want to try creating the kind of meaning that is most interesting to them, be it documentary, short form, or long form narrative.

I believe this is true of all creative specialists; musicians, writers, colorists, mixers, compositors and animators. We all use software, and we all want to do something new, something great, something interesting. Compositors and sound designers create new worlds of experience by building and combining layers of information in novel and interesting ways. Writers and musicians discover the ability to impart entirely new experiences to audiences using the deceptively simple tools of language and instrumentation. Colorists discover deeper and more effective ways of fine-tuning images to guide the audience’s emotional reception of a scene. All of these occupations can fill a lifetime with a completely engaging struggle for improvement.

Along the way we all take day jobs doing all kinds of work that pays the bills, but the one thing that, I hope, keeps most of us going is that we’re all striving to figure out how to do what it is we do better. The day I feel like I’ve plateaued at a particular task is the day I lose interest, turning to something else where I feel there’s more to explore.

What does this have to do with software? Puzzling out that first creative adjustment to the color and contrast of an image is the best part of my day. I don’t want an auto-correct button, because that robs me of the joy of discovering the best adjustment for the first shot of a brand new project.

Instead, I’d much rather developers found ways to make the software more fun to use. Find the creative obstacles that impede a particular task, and create software that minimizes or eliminates them. Figure out where the software frustrates specialists, limits their abilities, or constrains the expressiveness of the operation they’re trying to perform, and create new tools to overcome those limits that aren’t cumbersome to use. These are the areas that need real improvement.

A good friend of mine, Michael Wohl, is fond of saying that good software gets the hell out of your way and lets you do what you want to do. I think this is a terrific goal for any developer.

Furthermore, every software-driven workflow inevitably involves some amount of digital drudgery. If it’s possible to identify repetitive, non-creative busywork that drains the joy out of a specialist’s day, then that’s another target for improvement. Are there new tools that could be added that would make the process of exploring new ways of processing the image more enjoyable, open-ended, creative, and exciting? Those are the improvements I want, things that let me focus less on house-cleaning, and more on the creative aspects of the process.

Lastly, give me more creative tools. And make them fun to use while keeping them as customizable and open-ended as possible so I can find my own particular tweaks. I want to look forward to coming to work, opening your software, and finding new creative uses for interesting tools that perhaps nobody else has yet discovered.

Don’t make my job easier. I don’t want to paint by numbers. Make my job more fun and expand my toolkit of creative possibilities, and I’ll buy every single software upgrade you make.



Hands-On With an FSI Monitor

I finally got my hands on the new Flanders LM-2461W thanks to Dan Desmet, owner of FSI. Alas, being in the middle of a move and with many obligations at NAB this year, I didn’t have as much time to work with the monitor as I would have liked, but I still managed to fire it up, run a few scenes on it, do a bit of grading, and get a solid sense of the monitor’s performance.

If you’ve been reading my blog for a while, you’ll remember I gave this monitor a shout-out in my “What (Inexpensive) Display Should I Buy?” article. I made the recommendation mainly because of glowing reviews I’d heard in conversation with Robbie Carman, who’s a satisfied owner of the previous model, and from positive written reviews by another FSI monitor owner, Walter Biscardi. Between that, and glimpses of various iterations of this display at past NAB shows, I was comfortable recommending it even though I didn’t have hands-on experience with it.

Well, now I have. But before I go into what I found appealing about the Flanders display, I want to make it clear that in today’s world of competing monitor technologies, display selection is a very personal thing. No two hi-fi enthusiasts are going to like the same speakers, and no two colorists are necessarily going to like the same display. Different display technologies have different characteristics, making them more useful for some applications than others. Some people find high-end LCD appropriate for their needs, while others prefer plasma, and still others are better off with a projector setup. I’ll address what I think the Flanders is most suited for later on.

My most basic advice is to never, ever buy a monitor without evaluating it in person first, in a reasonably suitable environment with appropriate lighting. This year at NAB Flanders had set up a proper, shaded viewing booth, appropriately shielded from ambient glare, and backlit to provide a suitable surround. That’s how you want to view the monitor, as it’s the only way to really see how its black level and contrast stack up in a real-world suite.

I’m going to start with my overview of what I found appealing about the Flanders 2461 display from the outside in.

Connectivity

First off, FSI does the right thing by providing a host of connectivity options. Where other companies have traditionally made you pay a premium for SDI or HD-SDI inputs, Flanders includes SDI, HD-SDI, Dual Link, and 3G (now standard with the 2461) digital inputs, as well as Y’PbPr (for you analog holdouts). There’s even a DVI input if you want to use the FSI as a computer monitor (although keep in mind that DVI is limited to 8 bits per channel). That makes the FSI monitors incredibly flexible for just about any postproduction application.

Given the built-in support for single and dual link HD-SDI (or either via 3G), corresponding settings for 4:2:2 and 4:4:4 monitoring (either Y’CbCr or RGB, depending on your signal path) can be found in the menus. Again, dual link 4:4:4 support was once a quite expensive option, so having it built in lets any shop accurately monitor high-end HD and digital cinema signal paths.
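
If the distinction is fuzzy, here’s a minimal numpy sketch of what 4:2:2 subsampling does to the color-difference channels relative to 4:4:4. It’s a generic illustration of the sampling structure, nothing specific to the FSI’s internals:

    import numpy as np

    def subsample_422(cb, cr):
        # 4:4:4 -> 4:2:2: average each horizontal pair of chroma samples,
        # halving chroma resolution horizontally. Luma is left untouched.
        cb_422 = (cb[:, 0::2] + cb[:, 1::2]) / 2.0
        cr_422 = (cr[:, 0::2] + cr[:, 1::2]) / 2.0
        return cb_422, cr_422

    # A frame's worth of chroma keeps all of its rows but half of its columns
    cb, cr = np.random.rand(4, 8), np.random.rand(4, 8)
    print(subsample_422(cb, cr)[0].shape)  # (4, 4)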

Chroma sampling menu options

If you’re monitoring via Y’PbPr (the analog signal standard, as opposed to Y’CbCr, which is the digital signal standard), there are also all the options you’d want for analog monitoring, including SMPTE/N10 (with now standard 0 IRE setup, or black level), Betacam (with its now non-standard 7.5 IRE setup for North American analog Betacam output), and MII (with a 7.5 IRE setup similar to Betacam, but slightly different saturation levels).

As a side note (and I can’t stress this enough), if you’re monitoring for eventual output to analog NTSC Beta SP, then you’d use the Betacam setting. If you’re monitoring to eventually output via the SDI or HD-SDI outputs to any other format, or if you’re simply using the Y’PbPr connection for monitoring a digital signal, then use SMPTE/N10. Its 0 IRE setup is the standard for any application other than analog NTSC Beta SP.
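
A little illustrative arithmetic shows why the wrong setup assumption shifts your blacks. The mapping of analog black and white to 8-bit codes 16 and 235 below is my own simplification for the sake of the example, not a description of the monitor’s internal processing:

    def ire_to_8bit(ire, setup=0.0):
        # Map an analog IRE level to an 8-bit Y' code, assuming black (at the
        # setup level) lands on code 16 and white (100 IRE) lands on code 235.
        return 16 + (ire - setup) / (100.0 - setup) * 219

    print(round(ire_to_8bit(0.0, setup=0.0)))   # 16 -- SMPTE/N10: true black reads correctly
    print(round(ire_to_8bit(0.0, setup=7.5)))   # -2 -- Betacam setting on a 0-setup signal: black crushed
    print(round(ire_to_8bit(7.5, setup=7.5)))   # 16 -- Betacam material with the Betacam setting: correct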

Different options for analog component input

Commensurate with the wealth of video inputs, this is very much a multi-format display, with support for SD, HD, and 2K formats of all standard frame rates, frame sizes, and interlacing standards. I’ll let their technical specs link speak for itself. Bottom line, you shouldn’t have a problem monitoring any format of digital or analog video with the 2461.

Power

I don’t typically discuss the power connectors of post gear, but in addition to the standard three pin computer power plug, the Flanders display also supports 24V DC portable power for field use. In conjunction with other features I’ll discuss later, this makes the FSI a flexible display for a wide variety of production situations as well.

DC power connector for on-set use

Color Fidelity

One of the main points of buying a monitor like this is that it can be properly calibrated to the required video standards.

Flanders takes a unique approach to monitor calibration, which is to carefully precalibrate each monitor before it leaves their warehouse, so it’s ready for you to use as-is. After that, they offer recalibration on demand; you ship your monitor back to them for the service whenever necessary (recalibration is free for the life of the monitor, except for the cost of shipping). However, I’ve been assured that the panels are very stable, so Flanders only recommends full recalibration every 18–24 months.

However, the luminance of the fluorescent backlighting can diminish very slightly with regular use over a matter of months. Rather than sending your monitor in for recalibration just because of the backlight, there’s a DIY calibration option designed to account for this using an inexpensive colorimeter (the X-Rite i1 and i1 Display 2 probes are approved). You first profile the probe against your FSI monitor while it’s brand new; from then on, you can use that probe to periodically check the luminance level of the monitor (Dan Desmet recommends every two months), using the monitor’s backlight setting to compensate for any aging of the fluorescent tubes.
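
In practice, the periodic check boils down to measuring a 100 percent white patch and nudging the backlight until the probe reads whatever you measured on day one. The sketch below is purely hypothetical; the read_luminance_nits() placeholder and the 100 cd/m² target are my assumptions, not FSI’s documented procedure:

    TARGET_NITS = 100.0  # assumed day-one reference white level -- use whatever you measured
    TOLERANCE = 2.0      # acceptable drift, in cd/m^2, before touching the backlight setting

    def read_luminance_nits():
        # Stand-in for a reading taken with your probe's measurement software.
        return 94.3  # simulated value for illustration

    measured = read_luminance_nits()
    if abs(measured - TARGET_NITS) > TOLERANCE:
        print(f"White has drifted to {measured:.1f} cd/m^2; adjust the backlight and re-measure.")
    else:
        print("Still within tolerance; nothing to do.")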

When the time comes to send your monitor in for an overall recalibration, I’ve been told that turnaround is about 24 hours, depending on the method of shipping you decide to use. Flanders’ own calibration is done using a Minolta CA-210 or 310 (itself calibrated by FSI using an even more precise spectroradiometer), the results of which are used to generate a 3D calibration LUT that’s loaded directly into the display, accounting for any shifts within the panel or backlight that occur over time. Interestingly, the Flanders color engine is designed to use 64x64x64 LUTs. According to what I know about calibration LUTs, this is overkill, but hey, there’s nothing wrong with overkill if it doesn’t cost you anything extra, and the extra precision should make anyone sleep better at night.
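
For context, a 64x64x64 LUT holds 262,144 color entries, and anything falling between those grid points gets interpolated. Here’s a minimal sketch of trilinear interpolation through such a LUT, as a generic illustration of how calibration LUTs are applied rather than anything specific to the Flanders color engine:

    import numpy as np

    def apply_3d_lut(rgb, lut):
        # Trilinearly interpolate one normalized RGB triple through a LUT of
        # shape (N, N, N, 3): blend the 8 grid entries surrounding the input.
        n = lut.shape[0]
        pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        f = pos - lo  # fractional position inside the cell
        out = np.zeros(3)
        for i, wr in ((0, 1 - f[0]), (1, f[0])):
            for j, wg in ((0, 1 - f[1]), (1, f[1])):
                for k, wb in ((0, 1 - f[2]), (1, f[2])):
                    corner = lut[(lo[0], hi[0])[i], (lo[1], hi[1])[j], (lo[2], hi[2])[k]]
                    out += wr * wg * wb * corner
        return out

    # An identity 64-point LUT (262,144 entries) leaves colors unchanged
    grid = np.linspace(0, 1, 64)
    identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
    print(apply_3d_lut([0.25, 0.5, 0.75], identity))  # ~[0.25, 0.5, 0.75]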

Flanders’ calibration service is a significant selling point, as the cost of purchasing your own high-quality colorimeter (to measure the display’s color characteristics), software for measurement and LUT generation, and an outboard video processor to apply the results (were you to use a different type of display lacking onboard LUT processing) could easily run you from $8K to $16K (USD), depending on the vendors you use (I’ve written an overview of the LUT calibration process here). Flanders takes this burden upon themselves so you don’t have to, and that’s a pretty good deal in and of itself.

The panel of the 2461 has a wide enough gamut to support a variety of standardized color spaces. These include:

  • Wide Gamut — The uncalibrated, native gamut of the panel used in the 2461.
  • SMPTE C — The gamut of the phosphors used by broadcast CRTs (such as the Sony BVM series).
  • Rec 709 — The published gamut standard for HD video.
  • EBU — Another gamut defined by CRT phosphors that was standardized by the EBU. It’s similar, but not identical, to the SMPTE C phosphor gamut.
  • DCI P3 — The gamut defined by the Digital Cinema specification, for digital distribution and projection in theaters. Flanders says the 2461 supports 97 percent of the overall P3 gamut, falling short primarily in the most saturated greens. This is similar to the HP DreamColor’s stated 97 percent support of the gamut, and is pretty good. However, I suspect most users of this monitor will be most interested in the Rec 709 setting.

All of these are available via a convenient on-screen menu.

Options in the color space menu

Gamut is only one characteristic of professional display adjustment. Another is gamma, which is a whole other topic. Suffice it to say that the current thinking of some prominent industry experts is that, despite some disagreement between postproduction professionals in different segments of the industry, the preferred standard for the gamma of monitors displaying Rec 709 is 2.4 (when backlit and with subdued lighting). This has apparently been ratified by the International Telecommunication Union (ITU).

The most appropriate gamma to use also depends on the display’s color space. The published gamma standard of a display calibrated to DCI P3 is 2.6 (in a blackout theater environment). Apparently, audience testing has shown that higher gamma values look better in darker viewing environments, while lower gamma values look better in lighter viewing environments.

However, opinions still vary. For example, the EBU seems to have standardized on 2.35 for consumer displays (see the EBU TECH 3321 document). Flanders takes the high road by providing adjustable gamma anywhere from 1.0 to 2.8. Why so much adjustability? Well, if you wanted to use this monitor as a computer display, you could set it to the current 2.2 gamma standard of Windows or OS X.
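
The practical difference between these numbers is easy to see with a couple of lines of arithmetic: the same signal level comes out noticeably darker as the display gamma goes up, which is why higher gammas pair with darker viewing environments.

    signal = 0.5  # a mid-gray code value, normalized 0-1
    for gamma in (2.2, 2.4, 2.6):
        # Display light output is (roughly) the signal raised to the gamma
        print(f"gamma {gamma}: {signal ** gamma:.3f} of peak white")
    # gamma 2.2: 0.218, gamma 2.4: 0.189, gamma 2.6: 0.165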

Options in the gamma menu

I neglected to get a picture of the menu, but there’s also a selectable color temperature option. Most folks will likely be monitoring at 6500K (the North American and European standard), but this is also adjustable if necessary, with settings for 3200K, 5000K, 5600K, 6500K, and 9300K (the broadcast color temperature standard for many Asian countries).

Interlacing

Much as it pains me that interlacing is still firmly entrenched in the HD postproduction world, it’s still very much in use by producers and broadcasters worldwide. As a result, it’s still critical to be able to get a true look at the field order of a video signal, and this is another area where the Flanders excels. Incoming interlaced images are displayed properly, with sequential fields presented in order so it’s apparent if, for example, an effects shot has had the interlacing reversed accidentally. A conventional computer display that deinterlaces everything wouldn’t necessarily show this problem as obviously.

You can control how the 2461 handles the display of an interlaced signal, but the menu settings are not obvious. The Video menu has a Processing submenu. When set to Normal or Fast, an interlaced video signal is displayed with properly resolved, discrete sequential fields on the FSI monitor. When set to Noise Reduction, interlaced video signals are deinterlaced.
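
If it helps to picture what field-sequential display implies, here’s a toy numpy sketch of pulling the two fields out of an interlaced frame in temporal order; a generic illustration, not a description of the monitor’s processing:

    import numpy as np

    def split_fields(frame, top_field_first=True):
        # Separate an interlaced frame (lines x samples) into its two fields.
        # A field-aware display presents these sequentially; reversing the
        # order is exactly the field-dominance error described above.
        top, bottom = frame[0::2], frame[1::2]
        return (top, bottom) if top_field_first else (bottom, top)

    frame = np.arange(6 * 4).reshape(6, 4)  # toy 6-line "frame"
    first, second = split_fields(frame)
    print(first.shape, second.shape)        # (3, 4) (3, 4)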

Manual Controls

Now, while carefully calibrated standards are important, it’s easy to forget that displays in a broadcast environment are simply evaluation instruments, and manual adjustments that deliberately abuse the signal are often useful for finding odd problems that won’t reveal themselves in normal use. For example, a shape-limited correction may look fine with ordinary calibration, but give itself away horribly on a display that’s a bit too bright. By temporarily cranking up the brightness on your monitor, you can see what your graded image will look like on a display that’s miscalibrated (good to do if you’re working on a project destined for the wild west of the film festival circuit).

For this purpose, the Flanders monitors also include a host of manual controls for phase, chroma, bright, and contrast.

Manual monitor control knobs

Unlike the “center-click-for-detent” potentiometers of traditional monitors, the Flanders display uses infinite rotary controls with an onscreen digital readout. It’ll take a bit of getting used to for old-school monitor users, but this is a digital monitor through and through, and it gives you the comfort of seeing when your setting is sitting completely and accurately at 0.

On-screen control values

Furthermore, there are all the traditional controls you’d expect for over/underscan, H/V Delay, Blue Only, and Monochrome modes (found in the menus).

This is also an interesting time to point out that the LAN jack on the back of the monitor will eventually support an application that can control the monitor and its adjustment settings remotely, via your network. Furthermore, Flanders sells a customizable monitor remote unit (with a whiteboard face for easy relabeling) that can be used to control various features of your choosing via the GPI interface (connectable via CAT5 cable).

On the topic of instrument controls, the front panel has convenient pushbuttons for each of the video inputs. It’s a small thing, but I love that I don’t have to go clicking through menus or up/down button sequences to get to the input I want; one click and I’m there. Hooray!

Buttons controlling all inputs

Tools for On-set Use

There’s also a series of user-programmable Function buttons (F1-F5).

User definable function buttons

By default, these are mapped to useful utilities like video scopes, windows, measurement, and false color analysis (more on these later), and you can customize them via an additional set of onscreen menus.

User definable function menu

As a suite colorist, I find these options of limited use; I’ve already got a set of video scopes, and frankly I want a pure, unvarnished look at the image I’m supposed to be making improvements to. The last thing I need is stuff superimposed over the picture, distracting me from what I need to be paying attention to.

However, all of this is incredibly useful for an on-set crew, and to that purpose there are some options I’d like to call your attention to.

First, there’s a comprehensive set of video scopes available, viewable up to two at a time, including a full suite of waveform scopes (Luma, Parade, Y’CbCr parade and overlay), a vectorscope, histograms, audio meters, and more.

Video scope overlays

These are a real convenience on the set, but if you’re in the studio I wouldn’t get too excited. In use, I noticed that the frame rate of analysis was less than real time, which is not useful if I’m sweating the QC on a broadcast show. Furthermore, popping onscreen scopes on and off via the F-buttons is great when you’re adjusting exposure on your camera, but as I mentioned, I don’t want anything superimposed over my image when I’m trying to color correct a scene. This monitor is no replacement for a set of dedicated video scopes in a postproduction environment.
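
If you’ve never thought about what a waveform monitor is actually plotting, the core idea fits in a few lines: every pixel’s level graphed against its horizontal position. A toy sketch of the concept, nothing to do with FSI’s implementation:

    import numpy as np

    def luma_waveform(luma, levels=256):
        # For each column of a normalized luma plane, histogram its values so
        # bright traces show where picture levels pile up at that screen position.
        codes = np.clip((luma * (levels - 1)).astype(int), 0, levels - 1)
        wf = np.zeros((levels, luma.shape[1]), dtype=int)
        for x in range(luma.shape[1]):
            wf[:, x] = np.bincount(codes[:, x], minlength=levels)[::-1]  # white at top
        return wf

    ramp = np.tile(np.linspace(0, 1, 64), (48, 1))  # horizontal gray ramp
    print(luma_waveform(ramp).shape)                 # (256, 64): a rising diagonal trace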

There’s also a really interesting pixel analysis tool. If there’s an element in the scene that you absolutely have to have a numeric analysis for, it’s here.

On screen measurement tool

Another great feature for setting lights and exposure is a false-color mode for showing regions of maximum and minimum Luma.

False color mode for luma limits

The zones are adjustable, but by default are set to show you which areas of the image are falling into the upper and lower 10 percent of image tonality, as an aid to help you avoid overzealous clipping.

Chart showing false color breakdown
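
The underlying logic is simple enough to sketch: flag anything near the bottom of the luma range in one color and anything near clipping in another, and leave the rest alone. The thresholds and colors below are stand-ins for illustration, not FSI’s actual zone assignments:

    import numpy as np

    def false_color(luma, low=0.10, high=0.90):
        # Paint near-black pixels blue and near-clipping pixels red, leaving
        # everything else as grayscale -- a generic false-color exposure aid.
        out = np.repeat(luma[..., None], 3, axis=2)
        out[luma <= low] = [0.0, 0.0, 1.0]   # bottom zone
        out[luma >= high] = [1.0, 0.0, 0.0]  # top zone
        return out

    img = np.random.rand(4, 4)
    print(false_color(img).shape)  # (4, 4, 3)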

There are many more features targeted at the on-location crew, including pixel zoom, focus assist (with a different false-color red overlay showing which areas of the image are analyzed to be in focus), timecode display, alarms, VU meters, and onscreen markers for Title Safe, Background, Center, etc., with settings for whatever frame aspect ratio you may want to view.

Finally, Flanders provides the ability (new in the 2461) to display two inputs side by side.

Side by side dual input display

You can also wipe either horizontally or vertically between inputs to compare cameras, view dual channels of a stereo rig, or whatever else you need to do.

Viewing Angle

But enough of the bells and whistles. Getting back to the grading suite experience with this monitor, I’d like to talk about viewing angle. Most LCD-panel-based displays lack a wide horizontal viewing angle, which is a liability when you’ve got multiple clients in the suite, and the one sitting at the end of the client area is looking at an image that’s darker than the one seen by whoever is sitting in the “sweet spot” behind you. This is one area where plasma displays have an advantage.

The Flanders, in my opinion, has a suitably wide viewing angle for two to three people sitting in a typical triangle around the display (colorist in front, clients to either side, but not too far away from one another). Flanders claims 178˚, which seems about right in my informal “head-swinging” test. In this configuration, everyone will be seeing pretty much the same thing, which is what you want. That said, it’s still a 24″ monitor. While that was considered a luxury for midrange grading suites seven years ago (when a 24″ Sony BVM would run somewhere around $30K), these days clients have been so spoiled by facilities using calibrated 60″ plasma displays, or projector-based mini-theaters, that a 24″ display might seem a bit meager in a multi-client environment (I’m talking five agency clients sitting in a room).

Also, keep in mind that the current recommendation for ideal viewing distance from your eyeball to an HD screen is 3-4 times the vertical height of the display. For the Flanders (with a screen height of 12 3/4″), that means that the ideal viewing distance is approximately 36″-49″ (3′-4′).

Black Level

Another issue to keep in mind is that, being based on backlit LCD technology, the black level is not as deep as that of the CRTs we were once used to, a properly configured plasma, or the newfangled OLED monitors that are coming on the market (though for OLED it’ll cost you). For some, this may be an unconscionable liability. For others, the stable color and noiseless shadows of LCD make it superior to plasma, which is the other cost-conscious alternative.

This, in the end, is purely a matter of user preference, and is one of the reasons I recommend you evaluate any display you want to purchase in person. Certainly it’s true that as far as black level goes, you can do better, though you’ll likely pay considerably more for the privilege (the Dolby PRM-4200 has great blacks, and it’s only around $50K).

However, I would point out that the Flanders monitor does well in a standard, backlit video suite configuration. Perceptually, a dim, backlit environment is going to provide the best appearance of good contrast for any monitor (one of the main reasons for a proper viewing environment when using any display). Once you’re used to the black level of these displays relative to the numerical black on your video scopes, I don’t think you’d have a serious problem grading your shadows predictably and well on the 2461.

Conclusion

As far as image quality goes, when I fired up the 2461 and loaded some projects I had previously graded on my calibrated JVC RS2 projector (calibrated to Rec 709), there were no surprises. Granted, this was not a probe-driven numerical analysis, but everything looked exactly as I’d expected it to, and the range of color and contrast that presented itself was absolutely suitable for professional work. The material looked right, and a series of test images that I loaded revealed nothing improper. Colorimetrically speaking, I’m impressed.

Based on the size and viewing angle, I’d say this is the perfect monitor for a shop with smaller suites designed to accommodate 2-3 people, or situations where colorists are working largely unsupervised and the Flanders is their hero display. It’s also an excellent solution for editorial environments where there is a desire for the editing displays to match the grading displays in the hero suite, preventing the unwanted surprise of a program looking different as it travels from room to room.

So that’s my final analysis. Solid color fidelity, low-cost calibration for the life of the display, and simple, flexible connectivity and video signal support make this an easy monitor to integrate into any postproduction environment (from out of the box to grading with my DaVinci rig took about seven minutes). The size and viewing angle of the Flanders make it most suited for smaller suites, and the pricing brings a high-quality display instrument within reach of smaller shops needing color-critical monitoring. Furthermore, if you’re a “preditor” who engages in production as well as post, the Flanders provides terrific options for field use. If you’re in the market for a new monitor in this size, I’d recommend getting a demo to see if it’s for you.

