Finding The Commands You Need in Resolve

Yes, it’s been MONTHS since I’ve posted an article. Partially I figured it’d take most people this long to get through the last gigantic article I wrote on HDR, and partially it’s been because I’ve been utterly slammed this year producing the pilot for a TV show I’ve been developing to shoot in China. I’d love to tell you more, (you have no idea how much), but there have been many twists and turns, “goes” and “stops,” and I’ve been waiting to see what’s going to happen with that before saying anything more specific (you know how these things go).

However, it’s come to my attention that a lot of folks who are considering using Resolve for editing are having trouble finding the commands they need, which leads them to wonder whether Resolve even HAS those commands. Other than recommending a thorough read of the editing chapters I’ve painstakingly written over the years, the following movie has a nice tip for searching through the many commands Resolve provides. Resolve’s capabilities are deeper than you might think, and this will hopefully help you explore more widely what can be done.

Have fun, and hopefully I’ll have more news to share as I put my producer/director hat on in the coming months.



HDR, Resolve, and Creative Grading

High Dynamic Range (HDR) video describes an emerging group of monitoring, video encoding, and distribution technologies designed to let a new generation of television displays reproduce video with intensely bright highlights and increased maximum saturation. I’ve been keen on this technology ever since I first saw demonstrations at the 2015 NAB conference, and I’ve had the good fortune to sit with some excellent colorists who’ve been grading HDR projects to see what they’ve been doing with it. I’ve also managed to work on a few HDR grading jobs myself, on two different HDR displays, which was the point at which I felt I had something interesting to contribute to the topic.

While I’d started, many weeks ago, to write an overview of HDR for folks who are interested in what’s going on, the article’s ever-growing size meant it was still unfinished when I paused to attend the 2016 NAB conference to see where HDR seems to be heading this year. In the process, I was also invited to participate in a panel moderated by colorist Robbie Carman and hosted by Future Media Concepts, on which Katie Hinsen (Light Iron), Marco Solorio (One River Media), Bram Desmet (Flanders Scientific), Robert Carroll (Dolby), Joel Barsotti (SpectraCal) and I got to chat about HDR. Happily, it seems that most of what I’d written before NAB was in line with the experiences of others in the field, providing both confirmation and a sense of relief that I was on the right track.

In this article, I provide a summary, from a colorist’s perspective, of what HDR is, what the different flavors of HDR distribution look like right now, and how HDR works inside of DaVinci Resolve (this article is a vast expansion of a new section on HDR I added to the DaVinci Resolve 12.5 User Manual). Lastly, I try to provide some food for thought regarding the creative uses of HDR, in an effort to get you to think differently about your grading in the wake of this wonderfully freeing and expanded palette for viewing images.

Before I continue, I want to give thanks to some folks who generously provided information and answered questions in conversation as I developed this piece, including Robert Carroll and Bill Villarreal at Dolby, colorist Shane Ruggieri, Bram Desmet at Flanders Scientific, and Gary Mandle at Sony. I also want to thank Mason Marshall at the Best Buy in Roseville Minnesota, who was able to give me a quite knowledgeable tour of the actual consumer HDR televisions that are for sale in 2016.

What Is It?

Simply put, HDR (High Dynamic Range) is an escape from the tiny box, as currently defined by BT.709 (governing color gamut), BT.1886 (governing EOTF), and ST.2080-1 (governing reference white luminance levels), in which colorists and the video signals/images they manipulate have been kept imprisoned for decades.

HDR for film and video is not the same as “high dynamic range photography,” which is a question I’ve gotten a few times from DPs I know. Whereas High Dynamic Range photography is about finding tricky ways of squeezing both dark shadow details and bright highlight details from wide-latitude image formats into the existing narrow gamuts available for print and/or on-screen display, HDR for film and video is about actually expanding the available display gamut, to make a wider range of dark to light tones and colors available to the video and cinema artist for showing contrast and color to viewers on HDR-capable displays.

It’s impossible to accurately show what HDR looks like in this article, given that the screen you’re likely reading this on is not HDR; the levels I’m discussing simply cannot be visually represented. However, if you look at a side-by-side picture of an HDR-capable display and a regular broadcast-calibrated BT.709 display, with the picture exposed for the HDR highlights, it’s possible to see how the peak highlights and saturation on the two displays compare. In such a picture, the comparative dimness of the BT.709 display’s highlights is painfully obvious. The following (admittedly terrible) photo I took at NAB 2016 gives you somewhat of an idea of what this difference is like. To be clear, were you to see the SDR display on the right by itself, you would say it looks fine, but next to the HDR image being displayed on the left, there’s no comparison.

HDR vs SDR

4000 nit HDR Output on a rear-projection JVC display (left), versus 100 nit SDR Output (right)

Another approach to illustrating the difference between High Dynamic Range and BT.709 displays is to show a simulation of the diminished highlight definition, color volume, and contrast of the BT.709 image in a side by side comparison. Something similar can be seen in the following photo of a comparison of two images from the same scene represented on Canon reference displays. At left is the HDR image, at right is the BT.709 version of the image.

Canon HDR vs SDR Comparison

HDR (left) compared to a BT.709 rendition (right) of the same scene (above), on Canon displays

Again, these sorts of example images give you a vague impression of the benefits of HDR monitoring, but in truth they’re an extremely poor substitute for actually looking at an HDR display in person.

So, HDR displays are capable of representing an expanded range of lightness, and in the process can output a far larger color volume than previous generations of displays can. However, this expanded range of color and lightness is meant to be used in a specific way, at least for now as we transition from an all “Standard Dynamic Range” (SDR) distribution landscape, to a mixture of SDR and HDR televisions, disc players, streaming services, and broadcast infrastructure, using potentially different methods of distributing and displaying HDR signals.

The general idea is that much of the tonal range of an HDR image will be graded similarly to how an SDR image is graded now, with the shadows and midtones being treated similarly between traditionally SDR and HDR-graded images in order to promote wider contrast, maintain a comfortable viewing experience, and ease backward compatibility when re-grading for non-HDR displays. “Diffuse white” highlights (such as someone’s white shirt) are where the expanded range of HDR begins to offer options for providing more vivid levels to the viewer. HDR’s most immediately noticeable benefit, however, is in providing abundant additional headroom for “peak” highlights and more intense color saturation that far exceeds what has been visible (without clipping) in SDR television and cinema up until now.

For example, a reference SDR display should have a peak luminance level of 100 “nits” (cd/m²), above which all video levels are (probably) clipped. Meanwhile, today’s generation of professional HDR displays have peak luminance levels of 1000, 2000, or even 4000 nits (depending on the model and manufacturer), and support at least most of the expanded P3 gamut for color. Eventually, televisions capable of displaying even brighter highlights (Dolby Vision and ST.2084 support levels up to 10,000 nits) and expanded color saturation (reaching out towards the promise of BT.2020) may become available.

And these peak HDR-strength highlights look spectacular.

Why Is This Cool?

Frankly, the only way to answer this question is to finagle yourself into an HDR screening. I can type until my fingers cramp about how wonderful all of this is, but without seeing it for yourself, the benefits of HDR are a bit abstract. Once you’ve seen it, you’ll know why it’s cool, why you’ll want to shoot your next project with HDR in mind (as I am), and why getting your hands on HDR as a colorist will be enormous fun. I’ve now sat in on several different HDR demonstration screenings, grading sessions, and theatrical viewings, and have had a few HDR grading gigs of my own, and everyone I’ve talked to afterwards, both colorists and clients, has been almost immediately enthusiastic.

The core benefits of HDR, as I see them, are twofold.

Firstly, the highlights of your image can exhibit extremely bright specular highlights, glints, and sparkles with far greater visible detail, because the detail within these highlights won’t clip. Practically, this means that instead of clipping all highlights above 100 nits (ST.2080-1 standardizes the peak luminance associated with displays set to output BT.709/BT.1886), you can now see the difference between a 100 nit detail, a 300 nit detail, a 500 nit detail, and an 800 nit detail within such a highlight, assuming you’re looking at an HDR display capable of showing you that range. There’s simply no comparison.

If we look at a linear vertical representation of these values, similar to how we’d plot the scale of a waveform monitor, it becomes immediately obvious what a difference this is. Keep in mind that the tiny green slice at the bottom of the illustration represents the total range of luminance that’s available to colorists in a conventionally graded BT.709/BT.1886/ST.2080-1 image.

Common HDR "nit" levels, compared

Secondly, and to me almost more importantly, richly saturated colorful and bright image details, such as neon lights, emergency vehicle lights, backlit tinted glass, explosion effects, firelight, skin shine and bright highlights, and other saturated reflective areas and direct light sources, as well as the glows and volumetric lighting effects they emit, may carry saturation well above the 100 nit level on an HDR display. This is a creative choice previously forbidden to colorists, who had to compress color saturation somewhat below the 100%/100 IRE/700 mV maximum allowed by most conservative QC specifications for broadcast television, just to be on the safe side. With HDR, you no longer have to crush the life out of vividly bright highlights to squeeze them onto TV. You can actually leave them be, and revel in the abundance of smear-free extra saturation and detail you can allow in the highlights of sunsets, stained-glass windows, Vegas-style signage, and other brightly-lit areas of colorful detail.

Now, the illustration above, while exciting, is not quite accurate, in that the human eye has a logarithmic response to highlights. Practically speaking, this means that our eyes perceive the difference between two very bright levels as smaller than it actually is. This is one reason why we can handle going outside on a sunny day without being blinded, even though there are reflective nit levels all over the place that are off the chart of what we see on an SDR television or in an SDR movie theater. Not coincidentally, HDR signals are logarithmically encoded for distribution, and if we look at an actual logarithmically compressed waveform scope scale for evaluating HDR media, we get a more comprehensible comparison of SDR and HDR signals that’s a bit more actionable from the colorist’s perspective.

HDR nit levels compared logarithmically

Another advantage of HDR displays is perceptual: viewers experience contrast as the difference between the brightest and darkest pixels within an image, and edge contrast is a visual cue for sharpness. Having dramatically brighter pixels, even a few of them in the top highlights, means that the perceived contrast of the image will be dramatically higher, and details will appear to be much crisper. My experience from looking at a few HD-resolution HDR displays at NAB 2015 was that they appeared to be sharper than some of the 4K displays I was seeing, because HDR highlights add contrast that makes the edges by which we evaluate sharpness really pop. Combining HDR with 4K will be an exceptional viewing experience no matter how huge your living room television is.

One last advantage of HDR for distribution is that, with few exceptions, HDR distribution standards require a minimum of 10 bits to accommodate the wide range being distributed (HDR mastering requires 12 bits). Even though those 10 bits will be stretched more thinly than with an SDR signal, given the expanded latitude of HDR, this hopefully means that a side benefit of HDR will be a reduction in the kind of 8-bit banding artifacts, in shadows and areas of shallow color gradation such as blue skies or bare walls, that we’ve been cursed with ever since television first embraced digital signals. That alone is worth the cost of admission.

Another interesting thing about HDR is that, unlike other emerging distribution technologies such as stereo 3D, high-frame-rate exhibition, wide gamuts, and ever-higher resolutions (4K, 8K), which engender quite a bit of debate about whether or not they’re worth it, HDR is something that nearly everyone I’ve spoken with, professional and layperson alike, agrees looks fantastic once they’ve seen it. This, given all the griping about those other technologies I’d mentioned, is amazing to me. Furthermore, it’s easy for almost anyone to see the improvement, no matter what your eyeglass prescription happens to be.

(Updated) However, because it’s an emerging technology, the technical standards being promulgated at the moment exceed what the first few generations of consumer displays are capable of. I had a look at what’s on store shelves at the time of this writing in 2016, and depending on the make and model you get, consumer televisions are “only” capable of outputting a maximum of 300, 500, 800, or 1000-1400 nits peak luminance. Capabilities vary widely. Moreover, because display manufacturers are racing one another to improve each subsequent generation of consumer televisions, HDR standards for peak brightness are a moving target. While HDR this year means peak luminance of 300–1000 nits, maybe next year will bring a 2000 nit model. The year after that, who knows?

Because of this, two of the proposed mastering methods for HDR have been designed to accommodate up to 10,000 nits, while a third will accommodate up to 5,000 nits. Of course, no current television can get anywhere remotely close to either of these maximum levels. However, the Dolby Pulsar, which is the highest-output display in use for mastering HDR (at the time of this writing), is capable of displaying an HDR signal with a peak luminance level of 4,000 nits, making this the de facto reference at facilities lucky enough to be grading programs from movie studios and content distributors that are mastering for Dolby Vision. Many other facilities are using 1000 nits as a more achievable de facto reference, given that’s what the Sony BVM X300 HDR display is capable of doing.

This basically means that many colorists are grading and mastering programs to be future-proofed for later generations of television viewers with better televisions, and in the short term different strategies are employed to deal with how these higher-than-currently-feasible peak HDR-strength highlights will be displayed on the first generations of consumer HDR televisions.

Automatic Brightness Limiting (ABL)

There’s one other wrinkle. Consumer HDR displays have legally mandated limits (regulated by the California Energy Commission and by similar European agencies) on the maximum power that televisions can use in relation to their size and resolution. Consequently, automatic brightness limiting (ABL) circuits are a common solution manufacturers use to keep power consumption to acceptable and safe levels for home use. Practically speaking, an ABL circuit limits the percentage of the picture that may reach peak luminance without automatically dimming the display. This type of ABL limiting is not required on professional displays, but some manner of limiting may still be used to protect a display from the damage that can stem from drawing more current than it can handle in exceptionally bright scenes.

Naturally, on my first HDR grading job I was keenly interested in just how much of the picture could go into very-bright HDR levels before the average consumer HDR-capable TV would interfere, since I didn’t want to push things too far. Unfortunately, nobody could tell me what that threshold was at the time, so I simply proceeded with caution, grading relative to the 30″ Sony BVM X300 display we were using as our HDR reference display (and a beautiful monitor it is). The grade went well, I tried to be judicious about how far I pushed the brightest of the signal levels, and the client went away with a master that made them happy (sadly, it was a secret project…).

Later, I had the good fortune of speaking with Gary Mandle, of Sony Professional Solutions, who illuminated the topic of how ABL affects the HDR image, at least so far as the BVM X300 is concerned. A number of different rules are followed, all of which interact with one another:

  • In general, only 10% of the overall image may reach the X300’s peak brightness of 1000 nits (assuming the rest of the signal is back down at 100 nits or under)
  • The overall image is evaluated to determine the allowable output. An extremely simple (and certainly oversimplified) example is that you could (probably) have 20% of the signal at 500 nits, rather than 10% at 1000 nits (see the sketch after this list). I have no idea if this kind of tradeoff is linear, so the truth undoubtedly varies. The general idea is that if you only had, say, 2% of the image at 1000 nits, and 5% of the image at 500 nits, then you can probably have a reasonable additional percentage of the image at 200 nits, which is by no means at the top of the range, but is still twice as bright as SDR (standard dynamic range) images that peak at 100 nits. I don’t know what the actual numbers are, but the basic idea is that the total percentage of HDR-strength highlight pixels you’re allowed depends on the intensity of those pixels.
  • The dispersion of image brightness over the area of the screen is also evaluated, and output intensity is managed so that areas with a lot of brightness don’t overheat the OLED panel.
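
If you like to think in code, here’s a toy sketch of that linear power-budget idea. To be clear, this is purely my own illustrative model, not Sony’s actual algorithm; the only confirmed number in it is the 10%-at-1000-nits figure mentioned above, and the linearity of the tradeoff is an assumption.

```python
# Toy model of an ABL "power budget." The 10%-at-1000-nits figure comes from
# the discussion above; the linear tradeoff is my own assumption, and real
# ABL circuits also weigh how the highlights are dispersed across the panel.

BUDGET = 0.10 * 1000  # hypothetical budget: 10% of the pixels at 1000 nits

def max_fraction_at(nits: float) -> float:
    """Fraction of the picture this toy model would allow at a given nit level."""
    return min(1.0, BUDGET / nits)

for level in (1000, 500, 200):
    print(f"{level} nits -> up to {max_fraction_at(level):.0%} of the picture")
# 1000 nits -> up to 10% of the picture
# 500 nits  -> up to 20% of the picture
# 200 nits  -> up to 50% of the picture
```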

Long story short, how ABL gets triggered is complicated, and while you can keep track of how much of the image you’re pushing into HDR-specific highlights, how bright those highlights are, and how clustered or scattered the highlights happen to be, there will still be unknowable interactions at work. Fortunately, the Sony BVM X300 has an “Over Range indicator” that illuminates amber whenever ABL is triggered, so you know what’s happening and can back off if necessary. Incidentally, it’s worth noting that the X300, being an OLED display, is susceptible to screen burn-in if you leave bright levels on-screen for too long, so don’t leave a paused HDR image outputting to your display before going home for the evening.

Bram Desmet, CEO of Flanders Scientific, pointed out that VESA publishes a set of test patterns (ICDMtp-HL01) devised by the International Committee for Display Metrology (ICDM) which can be used to analyze a display’s (a) susceptibility to halation, defined as “the contamination of darks with surrounding light areas,” and (b) susceptibility to power loading, which describes screens “that cannot maintain their brightest luminance at full screen because of power loading.” The set consists of two groups of ten test patterns. Black squares against white backgrounds are used to measure halation, while white squares against black backgrounds are used to measure power loading. For the power loading patterns, the ten patterns feature progressively larger white squares against a black background labeled as L05 to L90; the number indicates what diagonal percentage of the screen each box represents (which I’m told is different from a simple percentage of total pixels).

Halation & Loading Patterns

By measuring a display’s actual peak luminance while outputting progressively larger white boxes on black backgrounds, you can determine the maximum percentage of screen pixels that are possible to display at full strength before peak luminance is reduced due to power limiting. Of course, this doesn’t account for all the factors that trigger ABL, but it does provide at least one comprehensible metric for display performance, and some display manufacturers cite one of these test patches as an indication of a particular display’s performance.
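
Incidentally, the difference between a diagonal percentage and a pixel percentage is easy to demonstrate. The following sketch assumes (and this is my assumption for illustration, not the ICDM’s definition) that each loading pattern is a white square whose diagonal is the labeled percentage of a 16:9 screen’s diagonal:

```python
import math

def pixel_fraction(diag_pct: float, aspect_w: float = 16, aspect_h: float = 9) -> float:
    """Fraction of total pixels covered by a square box whose diagonal is the
    given percentage of the screen diagonal, on a screen of the given aspect."""
    box_diag = (diag_pct / 100) * math.hypot(aspect_w, aspect_h)
    box_area = box_diag ** 2 / 2  # for a square, area = diagonal^2 / 2
    screen_area = aspect_w * aspect_h
    return box_area / screen_area

for label in (5, 25, 50, 90):
    print(f"L{label:02d}: {pixel_fraction(label):.1%} of pixels")
# L05: 0.3%, L25: 7.3%, L50: 29.3%, L90: 94.8%
```

Under that assumption, an “L50” box lights up only about 29% of the screen’s pixels, which is why the two numbers shouldn’t be conflated.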

Of course, the ABL on consumer televisions is potentially another thing entirely, as each manufacturer will have their own secret sauce for how to handle excess HDR brightness that exceeds a given television’s power limits. Hopefully, consumer ABL will be close enough to the response of professional ABL that we colorists won’t have to worry about it too much, but this will be an area for more exploration as time goes on and more models of HDR televisions become available.

(Update) In fact, I had just published this article when I had to run over to Best Buy to purchase a video game for a friend who I’ve decided is entirely too productive with their time. While I was there, I had a look at the televisions, and in the course of chatting about all of this (because I can’t stop), associate Mason Marshall pointed out a chart at rtings.com that does the kind of test chart evaluation I mention previously, investigating the peak luminance performance of different displays as they output different percentages of maximum white. The results are, ahem, illuminating. For example, while the Samsung KS9500 outputs a startling 1412 nits when 2% of the picture is at maximum white, peak luminance drops to 924 nits with 25% of the picture at maximum white, and drops further to 617 nits with 50% of the picture at maximum white. Results for different displays vary widely, so check out their chart. Now, this simple kind of loading pattern test isn’t going to account for all the variables that a display’s ABL contends with, but it does show, in action, the core principle that colorists need to beware of.

Dire as all this may sound, don’t be discouraged. Keep in mind that, at least for now, HDR-strength highlights are meant to be flavoring, not the base of the image. My experience so far has been if you’re judicious with your use of very-bright HDR-strength highlights, you’ll probably be relatively safe from the ravages of ABL, at least so far as the average consumer is concerned. Hopefully as technology improves and brighter output is possible with more efficient energy consumption, these issues will become less of a consideration. For now, however, they are.

More About Halation

Because of the intense light output required by HDR displays, different backlighting schemes are being developed to achieve the necessary peak luminance while attempting to keep power consumption economical. This is a period of rapid change in display technologies, but at this point in time some displays may exhibit halation in certain scenes, which can be seen as a fringing or ringing in lighter areas of the picture that surround darker subjects. These artifacts are not in the original signal, but are a consequence of a display whose backlighting technology is susceptible to this issue. This is the reason for the Halation test patterns described above, and it’s something you should keep an eye out for when looking at HDR displays you want to use for professional work.

Terminology in the Age of HDR

The advent of HDR requires some new distinguishing terminology, most of which has already been used in this article. Still, in the interest of clarification: SDR, or Standard Dynamic Range, describes video as it has previously been experienced on conventional consumer televisions, where a display’s EOTF (electro-optical transfer function) is (hopefully) governed by the BT.1886 standard, and your peak luminance level is probably (if you’ve calibrated) 100 nits as defined by the ST.2080-1 standard. Of course, standards compliance is entirely dependent on you and your clients choosing the correct settings on your displays, and maintaining the calibration of said displays on a regular-enough basis.

If you want to be specific, a “nit” is a colloquialism for candelas per meter squared (cd/m²), a unit for measuring emitted light. Nits is easier to type and more fun to say.

At the risk of being redundant, HDR describes video meant to be shown on a display that delivers considerably higher peak reference white levels, but that doesn’t use the BT.1886 EOTF that you’re used to with SDR. Instead, HDR displays use an EOTF that’s described either by the ST.2084 or Hybrid Log-Gamma (HLG) standards (more on these later).

It used to be that “gamma” was colloquially used to describe how image values at different levels of tonality were displayed when output to an SDR television. With the ratification of BT.1886, which recommends a slightly more complicated tonal response with which to standardize modern digital SDR displays, we must now refer more specifically to the EOTF of a display, which describes the same principle of how image values at different levels of tonality are output on a display, but in a more general way that may encompass multiple methods and standards.

So, BT.1886, ST.2084, and HLG each describe a different EOTF. On a brand new professional HDR display, you must make sure that it’s set to the correct EOTF for the type of signal you’re mastering, since it can probably be set to any one of these standards.

HDR is Not Tied to Resolution

Whether a signal is SDR or HDR has nothing to do with display resolution, gamut, or frame rate. These characteristics are all completely independent of one another. Most importantly:

  • HDR is resolution agnostic. You can have a 1080p (HD) HDR image, or you can have a 3840 x 2160 (UHD) SDR image, or you can have a UHD HDR image. Right this moment, a display being capable of HDR doesn’t guarantee anything else about it.
  • HDR is gamut agnostic as well, although the HDR displays I’ve seen so far adhere either to P3, or to whatever portion of the far wider Rec.2020 gamut they can manage. Still, there’s no reason you couldn’t master a BT.709 signal with an HDR EOTF; it’d just be kind of sad.
  • You can deliver HDR in any of the standardized frame rates you care to deliver.

That said, the next generation of professional and consumer displays seems focused on the combination of UHD resolution (3840×2160) and HDR, with at least a P3 gamut. To encourage this, the HDR10 industry recommendation and the “Ultra HD Premium” industry brand name are being attached to consumer displays capable of such a combination of high-end features (more on this later). As a side note, HDR10 is not the same as Dolby Vision, although both standards use the same EOTF as defined by ST.2084.

Higher resolutions are not required to output HDR images. They’re just nice to have in addition.

How Do You Shoot HDR?

You don’t.

By which I mean to say that you’re not required to do anything in particular to shoot material that’s suitable for HDR grading, so long as you’re using one of the numerous digital cinema cameras available today that are capable of capturing and recording 13-15 stops of wide-latitude imagery. The more latitude you have in the source signal, the greater the range of imagery you’ll be able to make available to the colorist for fitting into the above-100 nit overhead that HDR allows. My first client-driven HDR job consisted of RED DRAGON R3D media that wasn’t originally shot for HDR grading. However, there was plenty of extra signal available in the raw highlights to create compelling HDR-strength highlights with naturalistic detail.

Of course, I imagine intrepid DPs will potentially find themselves making all kinds of different decisions about whether or not to let windows blow out, what to do with ND, how to deal with direct sunlight, etcetera. However, since most of the signal (shadows and midtones) in a well-graded image will initially continue to be graded down around 0-100 nits, you’re probably not going to be doing anything radically different in terms of how you shoot faces, shadows, and anything up to the sorts of diffuse white highlights that constitute the bedrock of your images. You just have to know that whatever peak highlights you have in the frame will be preserved, and have the potential to venture into super-bright levels, so you should start planning your highlights within the image accordingly.

I’m guessing DPs will start asking for a lot more flags on set.

Even if you’re shooting with a camera that doesn’t have the widest latitude possible, colorists can always “gin up” HDR-strength highlights in post from low-strength highlights, by isolating whatever highlights there happen to be and stretching them up to reasonably good effect. You probably won’t want to push these kinds of “fake” HDR-strength pixels as high as you would genuinely wide-latitude highlights for fear of banding and artifacts given the thin image data, but you can still do a lot, so you’re not without options.

Bottom line, if you already own a camera with reasonably wide latitude, HDR won’t be an excuse to buy another one, and it seems to me that there’s nothing extra you need to buy for the camera or lighting departments if you want to shoot media for an HDR grade. At least, not unless you really, really want to. As time goes on, I’m sure DPs will find new methodologies for taking advantage of greater dynamic range, and there will be much more to say on the subject. We’re in the very early days of HDR, and I’m sure I’ll have more interesting advice to contribute after working with my DP on my next shoot.

Don’t Lose Your Dynamic Range in Post

It ought to go without saying, but shooting wide-latitude images in the field as raw or log-encoded media files is only useful so long as you preserve this wide latitude during post-production. In terms of mastering, grading with your camera-original raw files such as R3D, ARRIRAW, Sony RAW, and CinemaDNG is an easy way to do this.

If you’re dealing with VFX pipelines, you can transcode wide-latitude raw media into log-encoded 16-bit OpenEXR files to retain latitude in a media format that’s useful in a wide variety of applications. Otherwise, grading with 12-bit log-encoded 4:4:4 sampled media in formats such as ProRes 4444, ProRes 4444 XQ, or DNxHR 444 will also preserve the latitude necessary for high-quality HDR grading. In either case, documentation from Dolby indicates that PQ-, Log C-, and Slog-encoded media is all suitable within a 12- or 16-bit container format.

Happily, all of these formats are compatible with DaVinci Resolve.

The Different Formats of HDR

Now that we’ve discussed in broad terms what HDR is, and what it takes to make it, how is it mastered?

While different HDR technologies use different methods to map the video levels of your program to an HDR display’s capabilities, they all output a “near-logarithmically” encoded signal that requires a compatible television that’s capable of correctly stretching this signal into its “normalized” form for viewing. This means if you look at an HDR signal that’s output from the video interface of your grading workstation on an SDR display, it will look flat, desaturated, and unappealing until it’s plugged into your HDR display of choice.

A log-like HDR image with 4000 nit peak highlights

It should go without saying that most professional grading applications, such as FilmLight Baselight and SGO Mistika, support HDR in color management, grading, and finishing workflows, and everything I describe in this article that’s not app-specific applies equally to HDR being worked on in any software environment that supports the standards you want to use. Since I’m obviously most familiar with DaVinci Resolve, that’s what I describe in this article.

At the time of this writing, there are three approaches to mastering HDR that DaVinci Resolve is capable of supporting: Dolby Vision, HDR10 using ST.2084, and Hybrid Log-Gamma (HLG). Each of these HDR mastering/distribution methods focuses on describing how an HDR signal is encoded for output, and how that signal is later mapped to the output of an HDR display.

Each of these standards is most easily enabled using Resolve Color Management (RCM), via the Color Space options in the Color Management panel of the Project Settings. Alternately, LUTs are available for each of these color space conversions if you want to do things the old-fashioned way, but Resolve Color Management has become so mature in the last year that, from experience, I personally recommend the RCM approach to handling HDR within Resolve.

However, these standards have nothing to say about how these HDR-strength levels are to be used creatively. This means the question of how to utilize the expansive headroom for brightness and saturation that HDR enables falls fully within the domain of the colorist. As you grade, you must make a series of artistic decisions about how to assign the range of highlights available in your source media to the above-100 nit HDR levels you’re mastering to, given the peak reference white you’re mastering with.

Funnily enough, even though HDR workflows are most easily organized using scene-referred color management, at the moment HDR grading decisions are display-referred, by virtue of the fact that the HDR peak luminance level of the display you happen to be using (1000 nit, 4000 nit, more?) will strongly influence the creative decisions you make, despite the underlying HDR distribution standards all having much higher maximums.

Because of all of this, the following sections will describe in general terms how to work with Dolby Vision, HDR10, and Hybrid Log-Gamma in Resolve. However, the creative use of HDR will be addressed separately in a later section.

Dolby Vision

(Updated) Long a pioneer and champion of the concept of HDR for enhancing the consumer video experience, Dolby Labs has developed a proprietary method for encoding HDR called Dolby Vision. Dolby Vision defines a “PQ” color space, with an accompanying PQ electro-optical transfer function (EOTF), designed to accommodate displays capable of a wide luminance range, from 0 to 10,000 cd/m². In short, instead of mastering with the BT.1886 EOTF, you’ll be mastering with the ST.2084 (or PQ) EOTF.

However, to accommodate backwards compatibility with SDR displays, as well as the varying maximum brightness of different makes and models of HDR consumer displays, Dolby Vision has been designed as a two-stream video delivery system consisting of a base layer and an enhancement layer with metadata. On an SDR television, only the base layer is played, which contains a Rec.709-compatible image that’s a colorist-guided approximation of the HDR image. On an HDR television, however, both the base and enhancement layers will be recombined, using additional “artistic guidance” metadata generated by the colorist to determine how the resulting HDR image highlights should be scaled to fit the varied peak luminance levels and highlight performance that’s available on any given Dolby Vision compatible television. Dolby Vision also supports a more bandwidth-friendly single layer delivery stream that is not backwards compatible; mastering is identical for both single and dual layer delivery.

Those, in a nutshell, are the twin advantages of the Dolby Vision system. It’s backward compatible with SDR televisions, and it’s capable of intelligently scaling the HDR highlights, using metadata generated by the colorist as a guide, to provide the best representation of the mastered image for whatever peak luminance a particular television is capable of. All of this is guided by decisions made by the colorist during the grade.

So, who’s using Dolby Vision? At the time of this writing, all seven major Hollywood studios are mastering in Dolby Vision for Cinema. Studios that have pledged support to master content in Dolby Vision for home distribution include Universal, Warner Brothers, Sony Pictures, and MGM. Content providers that have agreed to distribute streaming Dolby Vision content include Netflix, Vudu, and Amazon. If you want to watch Dolby Vision content on television at home, consumer display manufacturers LG, TCL, Vizio, and HiSense have all announced models with Dolby Vision support.

DaVinci Resolve Hardware Setup for Dolby Vision

To make all this work in DaVinci Resolve, you need a somewhat elaborate hardware setup, consisting of the following equipment:

  • Your DaVinci Resolve grading workstation, outputting via either a DeckLink 4K Extreme 12G or an UltraStudio 4K Extreme video interface
  • A Dolby Vision Certified HDR Mastering Monitor
  • An SDR (probably Rec.709-calibrated) display
  • A standalone hardware video processor called the Content Mapping Unit (CMU), which is a standard computer platform with a video I/O card. The CMU is only available from Dolby Authorized System Integrators; contact Dolby to find an Authorized System Integrator near you.
  • A video router, such as the BMD Smart Videohub

This hardware is all connected as seen in the following illustration:

Dolby Vision Mastering Setup

In one possible scenario, you’ll connect your Resolve workstation’s dual SDI outputs to the BMD Smart Videohub, which splits the video signal to two mirrored sets of SDI outputs. One mirrored pair of SDI outputs goes to your HDR display. The other mirrored pair of SDI outputs goes to the CMU (Content Mapping Unit), which is itself connected to your SDR display via SDI. Lastly, the Resolve workstation is connected to the Dolby CMU via Gigabit Ethernet to enable the CMU to communicate back to Resolve.

The CMU is an off-the-shelf video processor that uses a combination of proprietary automatic algorithms and colorist-adjustable metadata within Resolve to define, at least initially, how an HDR-graded video should be transformed into an SDR picture that can be displayed on a standard Rec. 709 display, as well as how the enhancement layer should scale itself to varying peak luminance levels.

Dolby Vision automatic analysis and manual trim controls in DaVinci Resolve send metadata to the CMU that’s encoded into the first line of the SDI output. This metadata guides how the CMU makes this transformation, and the controls for adjusting this metadata are exposed in the Dolby Vision palette. These controls consist of luminance-only Lift/Gamma/Gain controls (that work slightly differently than those found in the Color Wheels palette), Chroma Weight (which darkens parts of the picture to preserve colorfulness that’s clipping in Rec.709), and Chroma Gain.

Dolby Vision Palette

Dolby Vision Palette in the Color page

(Updated) Because the CMU is actually the functional equivalent of the Dolby Vision chip that’s inside each Dolby Vision-enabled television, what you’re really doing when you adjust this metadata is using the CMU to make your SDR display simulate a 100 nit Dolby Vision television.

Additionally, the CMU can be used to output 600 nit, 1000 nit, and 2000 nit versions of your program, if you want to see how your master will scale to those peak luminance levels. This, of course, requires the CMU to be connected to a display that’s capable of being set to those peak luminance output levels.

Though not required, you have the option to visually trim your grade at up to four different peak luminance levels, using 100, 600, 1000, and 2000 nit reference points, so you can optimize a program’s visuals for the peak luminance and color volume performance of many different televisions with a much finer degree of control. If you take this extra step, Dolby Vision compatible televisions will use the artistic guidance metadata you generate in each trim pass to preserve the creative intent as closely as possible, in an attempt to provide the viewer with the best possible representation of the director’s intent.

For example, if a program were graded relative to a 4000 nit display, along with a single 100 nit Rec.709 trim pass, then a Dolby Vision compatible television with 750 nit peak output will reference the 100 nit trim pass artistic guidance metadata in order to come up with the best way of “splitting the difference” to output the signal correctly. On the other hand, were the colorist to do three trim passes, the first at 100 nits, a second at 600 nits, and a third at 1000 nits, then a 750 nit-capable Dolby Vision television would be able to use the 600 and 1000 nit artistic intent metadata to output more accurate HDR-strength highlights that take better advantage of the 750 nit output of that television.
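
To make the “splitting the difference” idea concrete, here’s a toy sketch of how a display that falls between two trim passes might blend between them. Dolby’s actual content mapping is proprietary and far more sophisticated than this simple linear interpolation, and the “gain” values here are invented purely for illustration:

```python
# Toy illustration only: blend a hypothetical trim parameter between the two
# trim passes that bracket a display's peak luminance. Not Dolby's algorithm.

def interpolate_trim(trims: dict[int, float], display_nits: int) -> float:
    """trims maps a trim pass's target peak nits -> some graded trim value."""
    targets = sorted(trims)
    if display_nits <= targets[0]:
        return trims[targets[0]]
    if display_nits >= targets[-1]:
        return trims[targets[-1]]
    for lo, hi in zip(targets, targets[1:]):
        if lo <= display_nits <= hi:
            t = (display_nits - lo) / (hi - lo)
            return trims[lo] * (1 - t) + trims[hi] * t

# Hypothetical "gain" trims from three passes at 100, 600, and 1000 nits:
trims = {100: 0.70, 600: 0.90, 1000: 1.00}
print(interpolate_trim(trims, 750))  # 0.9375, blended from the 600/1000 passes
```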

You should note that to expose the Dolby Vision controls in DaVinci Resolve Studio, you need a Dolby Vision Mastering license from Dolby. More instructions for all of this are available in the DaVinci Resolve User Manual.

Dolby Vision Certified Mastering Monitors

At the time of this writing, only three displays have been certified as Dolby Vision Certified Mastering Monitors. Requirements include a minimum peak brightness of 1000 nits, a 200,000:1 contrast ratio, P3 color gamut, and native support for SMPTE ST.2084 as the EOTF (otherwise known as PQ). When grading Dolby Vision, your monitor should be set to a P3 gamut using a D65 white point. Suitable displays include:

  • The Sony BVM X300 (30″, 1000 nit peak luminance, 4K)
  • The Dolby PRM 32FHD (32″, 2000 nit peak luminance, 1080)
  • The Dolby Pulsar (42″, 4000 nit peak luminance, 1080)

Of these, only the Sony is commercially available; the Dolby monitors are provided only in limited numbers, directly from Dolby.

Setting Up Resolve Color Management For Grading HDR

Once the hardware is set up, setting up Resolve itself to output HDR for Dolby Vision mastering is easy using Resolve Color Management (RCM). In fact, this procedure is pretty much the same no matter which HDR mastering technology you’re using. Only specific Output Color Space settings will differ.

Set Color Science to DaVinci YRGB Color Managed in the Master Project Settings, and Option-click the Save button to apply the change without closing the Project Settings. Then, open the Color Management panel, and set the Output Color Space pop-up to the HDR ST.2084 setting that corresponds to the peak luminance, in nits, of the grading display you’re using. For example, if you’re grading with a Sony BVM X300, choose HDR ST.2084 1000 nits. At the time of this writing, RCM supports six HDR ST.2084 peak luminance settings:

  • HDR ST.2084 300 nits
  • HDR ST.2084 500 nits
  • HDR ST.2084 800 nits
  • HDR ST.2084 1000 nits
  • HDR ST.2084 2000 nits
  • HDR ST.2084 4000 nits

This setting is only the EOTF (a gamma transform, if you will). If “Use Separate Color Space and Gamma” is turned off, the Timeline Color Space setting will define your output gamut. If “Use Separate Color Space and Gamma” is turned on, then you can specify whatever gamut you want in the left Output Color Space pop-up menu, and choose the EOTF from the right pop-up menu.

Be aware that whichever HDR setting you choose will impose a hard clip at the maximum nit value supported by that setting. This is to prevent accidentally overdriving HDR displays, which can possibly have negative consequences depending on which display you happen to be using.

Next, choose a setting in the Timeline Color Space that corresponds to the gamut you want to use for grading, and that will be output. For example, if you want to grade the timeline as a log-encoded signal and “normalize” it yourself, you can choose Arri Log C or Cineon Film Log. If you would rather have Resolve normalize the timeline to P3-D65 and grade that way, you could choose that setting as well.

Be aware that, when it’s being properly output, an HDR ST.2084 signal appears to be very log-like, in order to pack its wide dynamic range into the bandwidth of a standard video signal. It’s the HDR display itself that “normalizes” this log-encoded image to look as it should. For this reason, the image you see in your Color page Viewer is going to appear flat and log-like, even though the image being displayed on your HDR reference display looks vivid and correct. If you want to make the image in the Color page Viewer look “normalized,” at the expense of clipping the HDR highlights, you can use the 3D Color Viewer Lookup Table setting in the Color Management panel of the Project Settings to assign the appropriate “HDR X nits to Gamma 2.4” LUT, with X being the peak nit level of the HDR display you’re using.

Additionally, the “Timeline resolution” and “Pixel aspect ratio” settings (in the Project Settings) that your project is set to use are saved to the Dolby Vision metadata, so make sure your project is set to the final Timeline resolution and PAR before you begin grading.

Resolve Grading Workflow For Dolby Vision

Once the hardware and software is all set up, you’re ready to begin grading Dolby Vision HDR. The general workflow in DaVinci Resolve is fairly straightforward.

  1. First, grade the HDR image on your Dolby Vision Certified Mastering Monitor to look the way you want it to. Dolby recommends setting the look of the HDR image first, to determine the overall intention for your grade.
  2. When using various grading controls in the Color page to grade HDR images, you may find it useful to enable the HDR Mode of the node you’re working on by right-clicking that node in the Node Editor and choosing HDR mode from the contextual menu. This setting adapts that node’s controls to work within an expanded HDR range. Practically speaking, this makes controls that operate by letting you make adjustments at different tonal ranges, such as Custom Curves, Soft Clip, etcetera, work over an expanded range, which makes adjusting wide-latitude images being output to HDR much easier.
  3. When you’re happy with the HDR grade, click the Analysis button in the Dolby Vision palette. This analyzes every pixel of every frame of the current shot, and stores a statistical analysis that is sent to the CMU to guide its automatic conversion of the HDR signal to an SDR signal.
  4. If you’re not happy with the automatic conversion, use the Lift/Gamma/Gain/Chroma Weight/Chroma Gain controls in the Dolby Vision palette to manually “trim” the result to the best possible Rec.709 approximation of the HDR grade you created in step 1. This stores what Dolby refers to as “artistic guidance” metadata.
  5. (Updated) If you obtain a good result, move on to the next shot and continue working. If you can’t obtain a good result, and worry that you may have gone too far with your HDR grade to derive an acceptable SDR downconvert, you can always trim the HDR grade a bit, and then re-trim the SDR grade to try to achieve a better downconversion. Dolby recommends that if you make significant changes to the HDR master, particularly if you modify the blacks or the peak highlights, you should re-analyze the scene. However, if you only make small changes, then reanalyzing is not strictly required.

As you can see, the general idea promoted by Dolby is that a colorist will focus on grading the HDR picture relative to the 1000, 2000, 4000, or higher nit display being used, and will then use the Dolby Vision controls to “trim” this into a 100 nit SDR version, with this artistic guidance turned into metadata and saved for each shot. This “artistic guidance” metadata is saved into the mastered media, and it’s used to more intelligently scale the HDR highlights to fit within any given HDR display’s peak highlights, to handle how to downconvert the image for SDR displays, and also to determine how to respond when a television’s ABL circuit kicks in. In all of these cases, the colorist’s artistic intent is used to guide all dynamic adjustments to the content, so that the resulting picture looks as it should.

Analyzing HDR Signals Using Scopes

When you’re using waveform scopes of any kind, including parade and overlay scopes, the signal will fit within the 10-bit full-range numeric scale quite differently, owing to the way HDR is encoded. The following chart of values will make it easier to understand what you’re seeing (the sketch after this list shows where these numbers come from):

  • 1023 = 10,000 nits (no known display)
  • 920 = 4000 nits (peak luminance on a Dolby Pulsar Monitor)
  • 844 = 2000 nits (peak luminance on a Dolby PRM 32FHD)
  • 767 = 1000 nits (peak luminance on a Sony BVM X300)
  • 528 = 108 nits (Dolby Cinema projector peak luminance)
  • 519 = 100 nits
  • 0 = 0 nits (black, ideally corresponds to less than 0.05 nits on LCD, 0.0005 nits on OLED)
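
For the curious, these code values fall out of the ST.2084 equation itself. The following sketch implements the PQ encoding curve using the constants published in the standard; depending on rounding conventions, the results land within a few code values of the chart above:

```python
import math  # imported for completeness; only ** arithmetic is actually needed

# SMPTE ST.2084 (PQ) constants, as published in the standard
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_code_value(nits: float) -> int:
    """Map absolute luminance in nits to a full-range 10-bit code value."""
    y = (nits / 10000) ** M1  # normalize to the 10,000 nit PQ maximum
    v = ((C1 + C2 * y) / (1 + C3 * y)) ** M2
    return round(v * 1023)

for nits in (100, 108, 1000, 2000, 4000, 10000):
    print(f"{nits:5d} nits -> code value {pq_code_value(nits)}")
# ~520, ~528, ~769, ~848, ~923, and 1023, respectively
```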

If you’re monitoring with the built-in video scopes in DaVinci Resolve Studio, you can turn on the “Enable HDR Scopes for ST.2084” checkbox in the Color panel of the Project Settings, which will replace the 10-bit scale of the video scopes with a scale based on “nit” values (or cd/m²) instead.

If you’re unsatisfied with the amount of detail you’re seeing in the 0 – 519 range (0 – 100 nits) of the video scope graphs, then you can use the 3D Scopes Lookup Table setting in the Color Management panel of the Project Settings to assign the appropriate “HDR X nits to Gamma 2.4 LUT,” with X being the peak nit level of the HDR display you’re using. This converts the way the scopes are drawn so that the 0 – 100 nit range of the signal takes up the entire range of the scopes, from 0 through 1023. This will push the HDR-strength highlights up past the top of the visible area of the scopes, making them invisible, but it will make it easier to see detail in the midtones of the image.
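
As an aside, here’s my conceptual interpretation of what such an “HDR X nits to Gamma 2.4” conversion does to the scale (an illustration of the principle, not Blackmagic’s actual LUT math): decode PQ to absolute nits, treat 100 nits as reference white, and re-encode with a 2.4 gamma, so anything brighter than 100 nits lands above the top of the scope.

```python
# Conceptual sketch only: rescale PQ code values so 0-100 nits fills a
# gamma 2.4 scope, pushing HDR-strength highlights off the top of the graph.

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(code: int) -> float:
    """Decode a full-range 10-bit PQ code value to absolute nits."""
    v = (code / 1023) ** (1 / M2)
    y = max(v - C1, 0) / (C2 - C3 * v)
    return 10000 * y ** (1 / M1)

def gamma24_code(code: int) -> float:
    normalized = pq_to_nits(code) / 100       # 100 nits -> 1.0 (reference white)
    return 1023 * normalized ** (1 / 2.4)     # re-encode for a gamma 2.4 scale

for pq in (260, 520, 769):
    print(f"PQ {pq} ({pq_to_nits(pq):7.1f} nits) -> scope value {gamma24_code(pq):6.1f}")
# PQ 520 (~100 nits) lands near 1023; PQ 769 (~1000 nits) lands far above it.
```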

Rendering a Dolby Vision Master

To deliver a Dolby Vision master after you’ve finished grading, you want to make sure that the Output Color Space in the Color Management panel of the Project Settings is set to the appropriate HDR ST.2084 setting, based on the peak luminance in nits of your HDR display. Then, you want to set your render up to use one of the following Format/Codec combinations:

  • TIFF, RGB 16-bit
  • EXR, RGB-half (no compression)

(Updated) When you render for tapeless delivery, the artistic intent metadata is rendered into a Dolby Vision XML file and delivered with either the TIFF or EXR renders. These two sets of files are then delivered to a facility that’s capable of creating the Dolby Vision Mezzanine File (this cannot be done in Resolve).

Playing Dolby Vision at Home

On distribution, televisions that have licensed Dolby Vision use the base layer and the enhancement layer plus metadata to determine how the HDR image should be rendered given each display’s particular peak luminance capabilities. Distributors, for their part, need to provide a minimum 10-bit signal to accommodate Dolby Vision’s wide range. As a result, Dolby Vision videos will look as they should on displays from 100 nits up through however many nits the program was mastered to take advantage of, up to 10,000 nits. The enhancement layer’s HDR-strength highlights are scaled to whatever peak luminance level is possible on a given display, using the artistic intent metadata as a guide, and recombined with the base layer, so that there’s no unpredictable clipping, and the image looks as it should.

SMPTE ST.2084, Ultra HD Premium, and HDR10

Some display manufacturers who have no interest in licensing Dolby Vision for inclusion in their televisions are instead going with the simpler method of engineering their displays to be compatible with SMPTE ST.2084. It requires only a single stream for distribution, there are no licensing fees, no special hardware is required to master for it (other than an HDR mastering display such as the Sony X300), and there’s no special metadata to write or deal with (at this time).

Interestingly, SMPTE ST.2084 ratifies into a general standard the “PQ” EOTF that was developed by Dolby, is used by Dolby Vision, and accommodates displays capable of peak luminance up to 10,000 cd/m². This standard requires a minimum 10-bit signal for distribution, and the EOTF is described such that the video signal utilizes the available code values of a 10-bit signal as efficiently as possible, while allowing for such a wide range of luminance in the image.

SMPTE ST.2084 is also part of the new “Ultra HD Premium” television manufacturer specification, which stipulates that televisions bearing the Ultra HD Premium logo have the following capabilities:

  • A minimum UHD resolution of 3840 x 2160
  • A minimum gamut of 90% of P3
  • A minimum dynamic range of either 0.05 nits black to 1000 nits peak luminance (a 20,000:1 contrast ratio, to accommodate LCD displays), or 0.0005 nits black to 540 nits peak luminance (1,080,000:1, to accommodate OLED displays)
  • Compatibility with SMPTE ST.2084

Finally, ST.2084 has been included in the HDR10 distribution specification adopted by the Blu-ray Disc Association (BDA) that covers Ultra HD Blu-ray. HDR10 stipulates that Ultra HD Blu-ray discs have the following characteristics:

  • UHD resolution of 3840 x 2160
  • Up to the Rec.2020 gamut
  • SMPTE ST.2084
  • Mastered with a peak luminance of 1000 nits

The downside is that, by itself, this EOTF is not backwards compatible with SDR displays that use BT.1886 (although the emerging metadata standard SMPTE ST.2086 seeks to address this). Furthermore, no provision is made to scale the above-100 nit portion of the image to accommodate different displays with differing peak luminance levels. For example, let’s say you grade and master an image to have peak highlights of 4000 nits, as seen in the following image:

4000 Nit Peak Luminance Image

An image with 4000 nit peak luminance highlights

Then, you play that signal on an ST.2084-compatible television that’s only capable of 800 nits. The result will be that all peaks of the signal above 800 nits will be clipped, while everything below 800 nits will look exactly as it should relative to your grade, as seen in the following image:

Clipped 800 Nit Peak Luminance Image

The same image clipped to 800 nit peak luminance highlights

This is because ST.2084 is referenced to absolute luminance. If you grade an HDR image referencing a 1000 nit peak luminance display, as is recommended by HDR10, then any display using ST.2084 will reproduce, exactly as you graded them, all levels from the HDR signal that it’s capable of reproducing, up to the maximum peak luminance level it can output. For example, the Vizio R Series television can output 800 nits, so all mastered levels from 801 to 1000 nits will be clipped.

How much of a problem this is really depends on how you choose to grade your HDR-strength highlights. If you’re only raising the most extreme peak highlights to maximum HDR-strength levels, then it’s entirely possible that the audience won’t notice that the display is only outputting 800 nits worth of signal and clipping any image details from 801 to 1000 nits, because there weren’t that many details above 800 nits anyway, other than glints and sparks. Or, if you’re grading large explosions filled with fiery detail entirely up above 800 nits because it looks cool, then maybe the audience will notice. The bottom line is, when you’re grading for displays that simply display an ST.2084 signal, you need to think about these sorts of things.

Monitoring and Grading to ST.2084 in DaVinci Resolve

Monitoring an ST.2084 image is as simple as getting a ST.2084-compatible HDR display (such as the Sony X300), and connecting the output of your video interface to the input of the display. In the case of the Sony X300, which is a 4K capable display, you can connect four SDI outputs from a DeckLink 4K Extreme 12G with the optional DeckLink 4K Extreme 12G Quad SDI daughtercard, or an UltraStudio 4K Extreme, directly from your grading workstation to the X300, and you’re ready to go.

Setting up Resolve Color Management to grade for ST.2084 is identical to setting up to grade for Dolby Vision. You’ll also monitor the video scopes identically, and output a master identically, given that both standards rely upon the same EOTF, and require the same high bit depth.

Hybrid Log-Gamma (HLG)

The BBC and NHK jointly developed a different EOTF that presents another method of encoding HDR video, referred to as Hybrid Log-Gamma (HLG). The goal of HLG was to develop a method of mastering HDR video that would support a range of displays of different brightness without additional metadata, that could be broadcast via a single stream of data, that would fit into a 10-bit signal, and that would be easily backward-compatible with SDR televisions without requiring a separate grade.

The basic idea is that the HLG EOTF functions very similarly to BT.1886 from 0 to 0.6 of the signal (with a typical 0 – 1 numeric range), while 0.6 to 1.0 segues into logarithmic encoding for the highlights. This means that, if you just send an HDR Hybrid Log-Gamma signal to an SDR display, you’d be able to see much of the image identically to the way it would appear on an HDR display, and the highlights would be compressed to present what ought to be an acceptable amount of detail for SDR broadcast.
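
For reference, here’s the HLG OETF as it was eventually standardized in ARIB STD-B67 and ITU-R BT.2100; note that in the published formulation, the crossover from the gamma-like segment to the log segment lands at exactly 0.5 of the signal.

```python
import math

# HLG OETF constants from ARIB STD-B67 / ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e):
    """Scene-linear light e in [0, 1] -> nonlinear HLG signal in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # gamma-like segment for the bulk of the image
    return A * math.log(12 * e - B) + C  # log segment carries the HDR highlights

print(hlg_oetf(1 / 12))   # 0.5  <- the crossover point in the signal
print(hlg_oetf(1.0))      # ~1.0
```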

On a Hybrid Log-Gamma compatible HDR display, however, the highlights of the image (not the BT.1886-like bottom portion of the signal, just the highlights) would be stretched back up, relative to whatever peak luminance level a given HDR television is capable of outputting, to return the image to its true HDR glory. This is different from the HDR10 method of distribution described previously, in which the graded signal is referenced to absolute luminance levels dictated by ST.2084, with levels higher than a TV can output being clipped. With HLG, all HDR-strength highlights will be scaled relative to whatever a television is capable of.

And while this facility to support multiple HDR displays with differing peak luminance levels seeks to accomplish the same goal as Dolby Vision, scaling HDR-strength highlights to suit whatever a given television is capable of outputting, HLG requires no additional metadata to guide how the highlights are scaled. Depending on your point of view, this is either a benefit (less work) or a deficiency (no artistic guidance to make sure the highlights are being scaled in the best possible way).

As is true for most things, you don’t get something for nothing. BBC White Paper WHP 309 states that, for a 2000 nit HDR display with a black level of 0.01 nits, up to 17.6 stops of dynamic range are possible without visible quantization artifacts (“banding”). BBC White Paper WHP 286 states that the proposed HLG EOTF should support displays up to about 5000 nits. So the backward compatibility that HLG makes possible comes partly at the cost of discarding long-term support for 10,000 nit displays. However, given that the brightest commercially-available HDR display at the time of this writing is only 1000 nits peak luminance (the Sony X300), and the brightest HDR display I’m aware of only outputs 4000 nits peak luminance (the experimental Dolby Pulsar), it’s an open question whether more than 5000 nits is necessary for consumer enjoyment. Only time will tell.

At the time of this writing, Sony and Canon have demonstrated displays capable of outputting HLG encoded video. DaVinci Resolve, naturally, supports this standard through Resolve Color Management (the RCM setting is labeled HLG).

Monitoring and Grading to Hybrid Log-Gamma in DaVinci Resolve

Monitoring an HLG image is as simple as getting a Hybrid Log-Gamma-compatible HDR display, and connecting the output of your video interface to the input of the display.

Setting up Resolve Color Management to grade for HLG is identical to setting up to grade for Dolby Vision, except that there are two basic settings that are available:

  • HDR HLG-2020
  • HDR HLG-709

Optionally, if you choose to enable “Use Separate Color Space and Gamma,” you can choose either Rec.2020 or Rec.709 as your gamut, and HLG as your EOTF.

The Aesthetics of Shooting and Grading HDR

At the moment, given that we’re in the earliest days of HDR grading and distribution, there are no hard rules when it comes to how to use HDR. The sky’s the limit, which makes it either an exciting or harrowing time to be a colorist, depending on your point of view. For me, it’s exciting, and I’ve been telling everyone that grading HDR is the most fun I’ve had as a colorist since I started doing this crazy job.

Developing the HDR image to best effect is, in my view, the domain of the colorist. The importance of lighting well and shooting a wide-latitude format is indisputable, but the process of actively deciding which highlights of the image are diffuse white, which qualify as HDR-strength, and how bright to make each “plane” of HDR-strength highlights are all artistic decisions and assignments that are most easily and specifically controllable in the grading suite. In this way, I think HDR is going to bind the creative partnership between DPs and Colorists even more tightly.

In this section, I deliberately veer away from the technical in order to explore the creative potential for HDR. Some of this section is based on my experiences, some on my observations of the work of others, but much is also based on my perennial quest to mine the fine arts that have come before us for creative solutions that already exist, but have been neglected due to colorists having been stuck within the narrow confines of BT.709 and BT.1886 for so long. Breaking free of those restraints makes the work of other artistic disciplines even more accessible to us as models for what is artistically possible.

Differentiating Highlights

With images governed by BT.1886, the difference between diffuse and specular highlights can often be as little as 10% of the signal, sometimes less, and these differences are often so subtle as to be lost on most viewers. The difference between highlights of varying intensity can be accentuated by reducing the average levels of your midtones and shadows to create more headroom for differentiated highlights, but then you’re potentially fighting the legibility of the picture in uncertain viewing conditions (read – shitty televisions that are calibrated poorly). Bottom line, with SDR signals you’re in a position where the white shine on someone’s face and a naked light bulb may both be up around 100 nits, which in truth has never really made any sense.

This no longer need be true in an HDR grade, where it’s possible to have skin shine around 100 nits if you want, but you can then push the light bulb up higher in the grade, where it would really peak, maybe at 800 nits. In addition, there will be much more detail available within that light bulb (depending on the latitude of the recording format), so you’ll potentially be able to see the interior of the bulb’s housing, so that the bulb isn’t simply a flat white flare.

Going farther, in an outdoor scene it’s possible to have a bright white t-shirt at one level, colorful highlights on a face at a clearly differentiated level, the rim-lighting of the sun on clouds at a different, higher level, and reflected sun glints off of a lake in the distance at an even higher level, resulting in a much richer distribution of highlight tonality throughout the scene. This is what’s new about grading HDR: you’ve finally got the ability to create dramatically differentiated planes of highlights, which gives the digital colorist the perceptual tools that fine artists working in the medium of painting have had for hundreds of years.

To use the example of a painting I referenced in a blog article some time back, consider Johann Liss’ The Prodigal Son With the Prostitutes, 1623 (image thanks to TimSpfd at flickr).

Johann Liss’ The Prodigal Son With the Prostitutes, 1623

Given the elevated black levels of the photograph as seen on this computer screen, it’s hard to grok the true impact of the way this painting looks in person, where more ideal gallery lighting and the direct reflection of light off the surface of the painting provide brighter levels than can be reproduced in a photograph. In person, the dimmer highlights of the background players emerge from the inky pools of shadow surrounding everyone; the highlights of those background players’ faces are clearly dimmer than the highlights reflected off of the central two characters; and those highlights are themselves at a slightly but noticeably reduced level from the brilliant whites of the foreground sleeves and the metallic glints dappled here and there throughout the image.

This, to me, represents the promise of what HDR grading done creatively can offer, in terms of using multiple planes of differentiated highlights to create a sensual glimmer, to add exciting punch to the image, and to guide the eye on a prioritized tour around the scene; to the arm encircling the woman’s waist, to the hand splashing wine into the Prodigal’s goblet, to the Prodigal’s face lasciviously eyeing the activities before him.

Getting Used to It

One thing that multiple colorists warned me about, and that I definitely experienced, is that it takes a little time to get used to “the look of HDR.” When you’ve spent years getting to know how audiences respond to images with a BT.1886 distribution of tonal values that max out at 100 nits, how to see and allocate highlights within that narrow range of tonality, and how images “should” look when graded for broadcast, the shockingly brilliant highlights and color volume that HDR allows can be confusing at first. It doesn’t look “right.” It shouldn’t even work.

More to the point, it’s tempting to either avoid highlights that seem too bright altogether, or to succumb to the impulse to linearly scale the entire image, midtones and all, to be uniformly brighter. Both impulses are ones you should try to avoid, but to avoid them, you’re going to need some time to get used to seeing what HDR images have to offer. To get used to the idea of comparatively subdued shadows and midtones contrasted against brilliant splashes of color and contrast. To familiarize yourself with tones and colors on an expanded palette that you’ve never had the opportunity to play with before. In conversation, colorist Shane Ruggieri was emphatic about the need to “unlearn 709 thinking” in order to be able to more fully explore the possibilities that HDR presents.

Don’t Just Turn It to Eleven

It cannot be over-emphasized that HDR grading is not about making everything brighter. Never mind the limitations imposed by the ABL (Automatic Brightness Limiter) of consumer televisions; just making everything brighter is like doing a music mix where you simply make everything louder. You’re not really taking advantage of the ability to emphasize specific musical details via increased dynamic range, you’re just making individual details harder to hear amongst all the increased energy bombarding the audience. Maintaining contrast is the key to taking the best advantage of HDR-strength highlights, which will lack punch if you boost all of your midtones too much and neglect the importance and depth of your shadows. HDR images only really look like HDR images when you’re judicious with your highlights.

I honestly think that looking to various eras of painting can be enormously instructive when getting ideas for what to do with HDR. I was in the middle of this article when I happened to go to an event at the Minneapolis Institute of Art. Since I had HDR on the brain as I wandered the collection, a few pieces leapt out at me as terrific examples of the use of selective specular highlights, large shadow areas combined with pools of highlights, and the guidance of the viewer’s eye through an entire scene within a single frame using lighting. Clearly, the reproductions I include in this article are a poor facsimile compared to seeing these paintings in person, where the reflective light from the surface of the painting results in a considerably more vivid experience, but I’ve tried to simulate their punch by applying a simple, slight gamma correction to give you a similar impression to what I felt when viewing the originals. Of course, your computer screen’s accuracy is the limiting factor.

A Little Can Go a Long Way

The following painting (Nicolas Poussin’s The Death of Germanicus, 1627) is a great example of using targeted high-octane highlights to great effect. Notice how the vast majority of the image is relatively dark, employing rich colors in the low midtones and high shadows (which can also be reproduced thanks to the increased color volume of HDR displays), but the artist uses polished strokes of brightness in key areas to add specular highlights that make the image really pop. These highlights are few, small, and carefully targeted, but they punch up an image that otherwise has relatively subdued highlights falling on the skin and cloth of the participants. Also, because of the latitude available to HDR-strength highlights, specular shines such as these can fall off gracefully towards the shadows, so that they’re not harsh “cigarette burns” with an abrupt edge, but areas that transition smoothly and naturalistically out of the lower tones of the image.

Nicolas Poussin’s The Death of Germanicus, 1627

This, to me, is a tremendous illustration of what HDR enables the colorist to now do. In another example (Cornelis Jacobz. Delff’s Allegory of the Four Elements, c. 1600), a still life with metal vessels is brought vividly to life through the use of some carefully placed metallic shine, despite a preponderance of shadows wrapped around every surface. These bright highlights are streaked here and there through the image, adding an impression of considerable sharpness thanks to the resulting contrast.

Cornelis Jacobz. Delff’s Allegory of the Four Elements, c. 1600

Know When to Fold It

Granted, it’s easy to overdo HDR-strength highlights. On one job I was grading, one of the characters of a scene had brass buttons on their jacket, which were natural candidates for putting out some HDR-strength glints. I keyed and boosted them, but I was moving so fast that the first adjustment I made had the buttons glowing like little suns. I paused to take in the effect, and the client and I simultaneously burst out laughing, the result was so completely ridiculous. It goes without saying that HDR-strength highlights should be motivated, but I was surprised by just how instantly hilarious the wrong use of these highlights was.

Balancing Subjects in the Frame, Using Negative Space

Keeping the people inhabiting a scene interesting despite amazing HDR effects happening in the background also becomes a new and interesting challenge. In an SDR image, even the brightest highlights may only be 25 nits higher than the highlights coming off of people, so subjects aren’t so easily overwhelmed by their surroundings. However, in HDR you might have vividly colorful 600 nit highlights in the background competing with 100 nit highlights illuminating people inhabiting the foreground. One example that springs to mind, from a program I saw graded by another colorist, was a scene with sun-drenched stained-glass windows placed behind two actors having a conversation. After a preliminary primary adjustment that followed the natural lighting in the scene, the window was so beautifully spectacular that the people in front of it held practically zero interest. A bit of extra work was required to pull the actors back out in front so they could compete with the scenery.

A useful example can be seen in the following painting (Constant Troyon’s Landscape with Cattle and Sheep, c. 1852-58), where the white cow catches the sunlight in dazzling fashion, relative to the far dimmer tones found throughout the rest of the image. The milk-maid is almost easy to miss, were she not so forcefully present as negative space within the cow’s dazzling highlights.

Constant Troyon’s Landscape with Cattle and Sheep, c. 1852-58

A creative use of negative space in the composition of an image can be a powerful way out of this dilemma, which is nice as this is a technique the colorist can harness through careful control of contrasting midtone and shadow values.

Plan for a Wandering Eye

I’ve heard several people express concern about HDR-strength highlights proving distracting, but I think it’s a mistake to be too terrified of losing the audience’s attention to the bold highlights that are possible within an HDR image. In the following image (Giovanni Francesco Barbieri’s Ermina and the Shepherds, 1648-49), the most vivid planes of highlights are on the armored woman’s arm, face, breastplate, and robes, on the man’s sleeve, elbow, and knee, and on the arm of the foremost boy to the right, and the sheep. The man’s face is hilighted, but diminished relative to these other elements, as are (to a greater extent) the faces of the two boys far to the back. However, this lighting scheme adds considerable depth to the image, as the brighter elements jump forward, pushing the darker elements back. And the artist uses contrast of saturation to make sure that the ruddy faces of the boys are still worthy of the viewer’s attention vs. their immediate background. The highlights don’t necessarily drive our gaze directly to each face as the first thing we look at, but the path traced by our eyes moving among each available highlight gets us there nonetheless, as a secondary act of exploration.

Giovanni Francesco Barbieri’s Erminia and the Shepherds, 1648-49

Something I’m keen to try more of as I work with a greater range of HDR programming is the potential for directing the viewer’s gaze by sprinkling HDR highlights strategically across the image. I think we’ve become a bit too obsessed with treating the colorist’s ability to guide the eye using digital relighting and vignetting as a “bulls-eye” targeting technique, giving the viewer only a single clear region of the image to focus on. I suspect that to utilize HDR most effectively, we need to reconsider the notion of guiding the viewer’s eye through the scene, providing a path from one part of the image to another that encourages the viewer to explore the frame, rather than simply having the viewer obsess over just one element within it. In this way, HDR-strength highlights can be used to provide a roadmap through the image.

In this regard, fine artists showed the way hundreds of years ago. I’ve long felt that painted scenes were once the equivalent of an entire short film in terms of the viewer’s experience, and the technique of being guided through an ambitious work’s mise-en-scène by the painter via lighting is an amazing thing to experience in person, if you’re willing to give it the time. In the following image (Francesco Bassano; Jacopo Bassano’s The Element of Water, c. 1576-1577), dappled highlights pluck each of the scene’s participants from the shadows to spectacular effect, and guide the viewer’s eye along the thoroughfare of the scene’s major areas of activity, not just through the street, but farther down the road, to the horizon in the distance.

Francesco Bassano; Jacopo Bassano’s The Element of Water, c. 1576-1577

With the wider and now-standard 16:9 frame available to the home viewer and the considerably wider availability of large-screen televisions from 55-85 inches, the medium is ripe for creating a more ambitious mise-en-scène that challenges the viewer to engage more fully with the narrative image. And even on smaller devices, the so-called “retinal” resolutions now available to the tablet and “phablet” viewer make it possible to peer more deeply into even these diminutive images. So, instead of using grading as an invitation to the viewer to dwell on a single element of the picture, it might be time to compose, light, and grade in such a way as to invite a more sweeping gaze, guided in part by HDR-strength highlights.

Choices For Handling Midtones

So yes, HDR provides endless opportunities for finding creative uses for your highlights. Blah, blah, blah. However, in an HDR grade, what are we to do with our midtones? This is an interesting question that is, in my opinion, ripe for exploration.

The first answer is the “party line” that many discussions of HDR emphasize (mine included), which is to grade your midtones (including skin tones, which fall squarely within the midtones of most images) largely the same as you would before. Not only does this make it easier to create dazzling HDR effects in contrast to restrained midtones and deep shadows, but it makes it considerably easier to maintain backward compatibility with the Rec.709 trim pass that you’re inevitably going to have to produce, given that the vast majority of televisions out in the world are still SDR. At this point in time, grading to make your trim pass easier makes all the sense in the world.

However, I don’t think it’s going to take very long for colorists to begin seeing the potential of using the lower portion of whatever range of HDR highlights you’re mastering with to let the brighter midtones of an image breathe, so long as you can count on a few hundred nits more peak luminance to maintain the separation and punch of your HDR-strength highlights. Of course, if you’re grading relative to a lower peak luminance threshold, then you should probably keep your high midtones lower, otherwise you risk de-emphasizing the glittery effect that’s possible.

However, assuming you’ve got the headroom, an example of what should be possible when allowing oneself to use the brightness and saturation that can be found within the 100-400 nit midtone range might be seen in the following painting (Gerrit van Honthorst’s The Denial of St. Peter, c. 1623). This painting employs a beautiful use of silhouettes and vignetting shadows as negative space against the vividly lit face at the center of the image. Pushing these skin tone highlights up past what’s ordinarily possible in SDR to achieve more luminosity through the combination of brightness and saturation would make this practically jump off the screen, while maintaining an even more profound separation from the shadows, shadows that nonetheless hold considerable detail because it’s not necessary to crush them to flat black in order to maintain contrast given the higher midtones. In such an image, 800 nit highlights wouldn’t even be necessary, though you’d probably find a few pixels of eye glints, metal on the candlestick, or (as in the painting) shine off of the top edge of the foreground soldier’s breastplate, to provide just a tiny bit of flash up around 700-1000 nits.

Gerrit van Honthorst’s The Denial of St. Peter, c. 1623

If you let yourself use higher-nit midtones, you’ll have more of a chore before you as you trim those grades to look as they should on a BT.709/BT.1886 display, but I anticipate as more and more of the viewing audience upgrades to HDR-capable televisions, it’ll be worth it.

Contrast of Saturation Becomes Even More Powerful

Truly, all forms of color contrast will become more potent tools for the colorist given the increased color volume that a P3 or Rec.2020 gamut coupled with ST.2084 or HLG permits. Different hues have the potential to brilliantly ring against one another at the higher levels of saturation that will be allowed. However, the availability of richer saturation also means that you can have multiple planes even of the same hue of blue, for instance, all differentiated from one another by significantly different levels of saturation.

Should I Worry About the Audience’s Eyeballs?

It’s good to be mindful that, should someone at home eventually have a 2000 nit television, that sun in the frame that you decided to put all the way up at the top of your grade will definitely make them squint. I’m not kidding, I graded a sun all the way to peak luminance in the shot on a 2000 nit Dolby display, and everyone in the room was squinting. However, I’m not too personally worried. I’ve had long HDR grading sessions with 1000 nit displays, and while I was initially worried about early eye fatigue, in truth I had not that much more eye fatigue at the end of an 8-hour day than I do with SDR grading sessions. That said, I’m pretty firm about taking regular breaks every 2-3 hours from the grading suite to stretch the legs, get a tasty beverage, and see the sun for a few minutes before diving back into the job, so perhaps my good habits help.

However, spectacularly vivid contrast is something that regularly occurs in our everyday lives. For example, while chatting with Shane, he shared some actual luminance measurements from his office, in which shadowed areas of the wall with visible image detail fell around 1.5 nits, and light reflected from just under a fluorescent fixture measured 3070 nits, making the point that examples from life can inform and reestablish what dynamic range can plausibly be within a scene, even one as subdued as a “dimly lit office.”

Dolby’s “The Art of Better Pixels” document, authored by D.G. Brooks of Dolby Laboratories (available here), cites tests performed to determine preferred viewer experiences for black, diffuse white, and highlight levels. Studies with viewers show that on large-screen displays, diffuse white values around 3,000 nits and peak highlights at 7,000 nits were luminance levels that satisfied 90% of the test subjects (smaller screens engendered even higher preferred levels). I suspect any colorist who’s had a client ask for “more contrast, more contrast, still more contrast” can certainly relate to this data.

I also see the ability to have these sorts of squint-inducing highlights as another creative opportunity, one that’s been available to audiences looking up at the stage lighting of plays, musicals, and concerts for years. If you’re careful not to abuse the privilege, I think the ability to cut to a bright frame, surprise with a sudden flare or shower of sparks, or grade a light-show with similar physiological impact to the real thing can create compelling narrative opportunities in our storytelling.

HDR In Movie Restoration

I’ve seen several examples of older films being remastered for HDR, which I find an interesting task for consideration. In truth, when remastering older films, you’re adding something that wasn’t there. Even during a film’s original theatrical run, the standard for peak luminance in the theater has long been only 48 nits (SMPTE 196M specified 16 fL open gate with a minimum of 11 fL, practically 14 – 9 fL with light running through a strip of clear film), although with a gamma of 2.6 and a lack of surround lighting, that peak luminance seems much brighter than it actually is.

Bottom line, a television displaying HDR-strength highlights at even 500 nits is going to present isolated highlights that are vastly brighter than this (at least when viewed in a darkened room). If you’re interested in preserving the director’s intent, then splashing HDR onto older films is a case of deliberately imposing something new onto an older set of decisions.

On the other hand, for directors and cinematographers who are revisiting their own films, older negatives have ample latitude to be re-scanned and regraded to take advantage of HDR, to present a new look at previously released material.

While this article is largely focused on HDR for television, it’s also worth mentioning that there are emerging theatrical exhibition formats for HDR, such as Dolby Cinema, which allows the projection of images with a peak luminance of 108 nits, over double the brightness of ordinary theatrical projection, and advertises down to 0 nit black for a claimed 1,000,000:1 contrast ratio on Dolby Cinema projectors (a collaboration between Dolby and Christie). This high contrast in a darkened theater yields similarly dazzling results when graded carefully, and I believe many of the creative decisions I describe here will apply to cinema grading as well.

Being Creative During the Shoot

I think HDR really shines when contemplating the creation of new films and television, where you have the opportunity to think about how to use HDR as a deliberate part of the project.

Despite my assertion that HDR will thrive as a domain of colorist creativity, cinematographers obviously have real decisions to make. I suspect that more careful and deliberate lighting schemes will mean more lighting and grip used to shape the pools and ratios of light and shadow. From my experience, it’ll really help colorists save time if you create the preconditions for the sparkly bits that you want, so we don’t have to go digging around the highlights of the signal to find something specific to pry out. It’ll be interesting to see more deliberate planning for a differentiation between diffuse whites and HDR-strength highlights, in order to take advantage of the fact that there’s a difference between 100 nit, 400 nit, and 800 nit highlights.

Additionally, the art department has an enormous contribution to make, as production designers, set dressers, wardrobe, and makeup all have something to add to (or subtract from) the HDR image. Production Designers will be tasked with making sure there are highlights to be had through careful selection of set materials, paint, and glossy vs. flat regions of the environment. Small set dressing and propmaster decisions will have a large impact – selection of items within the frame such as having a couple of shiny desk accessories in the office (or not), using a car with chrome trimming, using reflective fabrics, etcetera, etcetera.

Wardrobe choices offer similar opportunities. Sequins, brass buttons, shimmery or flat fabrics, choice and manner of stitching, selection of wardrobe accessories, all these and more are opportunities to contribute to what HDR can present to the audience. Even the makeup artists can contribute. It only takes a few pixels of highlights adjacent to a few pixels of lower midtones to create some HDR-strength flash, so cosmetics with shimmer or gloss, glitter, or simple control of shine become powerful tools to shape HDR effects on arguably the most important subject within any frame, people’s faces.

All these are tremendously meaningful decisions when shooting for HDR mastering, and the prudent creative team would do well to schedule more on-camera testing with a colorist’s support to see how things are going to work out. This is exactly what I’m contemplating doing for my next film, more on-camera testing prior to the actual shoot to see how different makeup and costume schemes are going to work. It’s a bit more logistics than the typical indie project has to go through, but I think it’ll be worth the hassle. I’ll let you know when it’s done.

If Grading In DaVinci Resolve, What Tools Will Help?

At the end of the day, grading HDR material is simply a matter of manipulating a video signal with a different weight to the distribution of shadows, midtones, and however many levels of highlights you’ll be individuating. Just as a quick tip, here are some Resolve tools that I’ve found help enormously when grading HDR material:

  • Resolve 12.5 has a new HDR Mode in the Node Editor, which has become indispensable for HDR grading when you’re outputting to one of the HDR or HLG profiles using Resolve Color Management (RCM). Right-click any node and turn on HDR Mode to set the controls in that node to act upon a wider signal range than normal, and you’ll find that the controls in the Color Wheels palette, the Custom Curve controls, and soft clip all feel much more natural than they do when HDR Mode is turned off (which is the default).
  • The Highlights control, found in page 2 of the Color Wheels palette, can be a fast way of boosting or attenuating highlights while simultaneously adjusting the high midtones of the image. This control works better with HDR Mode enabled.
  • Using the Highlight master control in the Log mode of the Color Wheels palette is a more targeted way of boosting or attenuating the highlights of your image. Using the default High Range parameter setting and HDR Mode enabled, this control affects only the top HDR-strength highlights of the image. With HDR Mode disabled, this control affects more of the highlights of the image, but is more restrictive than the Highlights control. You can of course change how much of the top end of the signal is affected by adjusting the High Range parameter.
  • Custom Curves, with HDR Mode enabled, are hugely useful when shaping the contrast of the HDR highlights, the midtones, and the shadows. In fact, I can safely say that every HDR grade I’ve done has used the Custom Curves to create just the right tonal separation for each situation.
  • Secondary corrections that use Luma Keying or Chroma Keying to isolate just the range of highlights I want to boost or attenuate are an invaluable technique that I’ve used again and again. Often, I may want to isolate highlights that aren’t actually the brightest thing in the picture and boost them up to become HDR-strength highlights, because the natural highlights of the image (light falling on someone’s face, for example) weren’t good candidates for HDR-strength highlights.

In Conclusion

Due in part to my unbridled enthusiasm for the topic, and the fact that HDR is such a wide-ranging subject, what I had intended to be a quick overview of HDR gradually snowballed into a massive 14,000 word essay on the topic. At the time of its writing, there’s a lively debate about which formats will “win” the hearts and minds of audiences and the industry, whether or not people are ready for HDR-strength brightness, what the correct (and by extension incorrect) uses of HDR should be, and ultimately, whether HDR is worth the hassle.

Clearly, I think it is.

That said, this is a rapidly evolving facet of the industry, and I’ll be curious to find out how long it takes for this article to become woefully out of date and in need of an upgrade. Usually I just write an article here and leave it for the ages, but this one I’ll have to keep an eye on. I hope you’ve found it useful.

(5/11/16 Update – Updated Dolby Vision section with updated information. 5/7/16 Update – I updated a paragraph covering the peak luminance capabilities of current televisions, and added another paragraph describing the ABL performance of consumer televisions. Yes, I made this article even longer.)



So You Want to Buy a Spectroradiometer?

It all started with me wanting to analyze the color of some out-of-calibration projectors with potentially aged bulbs in order to see if I could create a “poorly calibrated projector” LUT to more closely examine the effects of poor projector quality on a graded image. Why is a tale for another time; suffice it to say, it’s a research project.

I have a Klein K-10 Colorimeter which I was originally intending to use for the project, but while discussing my plan with Bram Desmet at Flanders Scientific, who’s an extremely knowledgeable fellow when it comes to display calibration, he pointed out that a Colorimeter would be unsuitable for my purposes, since the potentially aged bulbs of the projectors that I needed to measure would have an unknown spectral distribution, and Colorimeters assume a known spectral distribution for any given device (which is supplied as a profile for each device).

Crap.

Turns out I needed to use a Spectroradiometer, another device for measuring color, one that directly measures the short, medium, and long wavelengths of light that we see as color, making it able to accurately measure the spectral distribution of any light source without any other information.

I’ve avoided Spectroradiometers up until now because (a) they’ve traditionally been pretty expensive, and (b) like I said, I’ve already got a Colorimeter. However, given some projects on the horizon, it had occurred to me that it might not be a bad thing to bite the bullet and invest in another measurement instrument, not only for its value in future color research, but also because I could then use it to recalibrate my Colorimeter, since all Colorimeters benefit from periodic recalibration to make sure that everything is being measured accurately.

The Colorimetry Research CR-250 Spectroradiometer; the model shown has the optional targeting scope.

Of course, it turns out that you ALSO need to get the Spectroradiometer periodically calibrated. However, I discovered that I had no idea how Spectroradiometers got calibrated. And I hate not knowing things.

Bram introduced me to Guillermo Keller, President of Colorimetry Research, who graciously invited me to the lab where the Spectroradiometers they make (the CR-250) are calibrated before being shipped out, so I could see the whole process in person.

I’ve written about display calibration before, both on this blog, and in my Color Correction Handbook. In order to do color-critical work such as grading a movie, episodic show, or music video for the public’s enjoyment, it’s essential to have a display capable of outputting accurate, standards-compliant video. Displays are made accurate via a calibration procedure whereby thousands of color patches are displayed on that monitor and measured by a color probe of some kind, either a Colorimeter or Spectroradiometer.

Using the CR-250 with LightSpace to calibrate a theater screen.

The software that generates the color patches going to the display while simultaneously recording measurements made with the probe (applications include Light Illusion’s LightSpace and SpectraCal’s CalMan) then compares the intended color of each patch with the measured color being emitted by your display, compiling the thousands of measurements into a characterization that describes how that display actually shows color. The calibration software can then mathematically compare a display’s characterization to the video standard that display is supposed to conform to (BT.709, P3, or Rec.2020), and generate a calibration LUT to load back onto the display (or onto a LUT box sending a video signal to the display) that guarantees the display outputs accurate color across the spectrum, according to the appropriate video standard in use.
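
As a deliberately oversimplified sketch of that characterize-then-correct idea, the fragment below fits a simple 3x3 linear correction from a handful of made-up measurements. Real calibration software measures thousands of patches and builds full 3D LUTs, so treat this purely as an illustration of the comparison step:

```python
import numpy as np

# Hypothetical XYZ targets for R, G, B, and W patches (illustrative
# values only), and what a slightly miscalibrated display might emit.
target_xyz = np.array([
    [41.24,  21.26,   1.93],
    [35.76,  71.52,  11.92],
    [18.05,   7.22,  95.05],
    [95.05, 100.00, 108.90],
])
measured_xyz = target_xyz * np.array([1.03, 0.98, 1.05])  # an "off" display

# Solve for the correction that maps measured colors back to the target,
# in the least-squares sense; a real 3D calibration LUT encodes the same
# relationship, just per code value rather than as one global matrix.
correction, *_ = np.linalg.lstsq(measured_xyz, target_xyz, rcond=None)
print(measured_xyz @ correction)   # lands back on (approximately) target_xyz
```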

Display calibration is dependent on the accuracy of your measuring device, and Colorimeters and Spectroradiometers can subtly shift over time, so unfortunately it’s not enough to simply buy an expensive probe and put it on your shelf; you need to have your probe of choice recalibrated over time. Guillermo recommends having both the CR-250 and CR-100 recalibrated once yearly.

Calibration, in fact, is a carefully controlled chain of device measurements. Monitors can be calibrated using Colorimeters. Colorimeters can be calibrated using Spectroradiometers. But how then are Spectroradiometers calibrated?

Very carefully, it turns out. And using equipment that is itself calibrated, extending the chain of calibration all the way back to fundamental components that are manufactured and performance-tracked by companies such as Gooch and Housego, that are themselves compared to light sources that are traceable to devices and methods standardized by NIST, the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce. So, if you’re wondering who, through the long chain of calibration, is ultimately responsible for the color accuracy of every movie, television show, promo, and advertisement you watch, it’s the federal government.

But this is going all the way down the rabbit hole. For the film and video practitioner’s practical purposes, it is the calibration of Spectroradiometers upon which the scaffolding of our industry rests, and there are four fundamental procedures involved. Each of these tests relies on taking spectral measurements of a known light source, and the accuracy of everything else depends entirely on the maintenance and care taken with these light sources.

First, a Helium-gas lamp is used to calibrate the Spectroradiometer sensor’s pixel-to-wavelength transformation.

Calibrating a Spectroradiometer to a Helium light source.

The Helium-gas lamp bulb, which is similar in principle to a Neon sign tube, has a unique and utterly reliable spectral distribution that spikes at specific wavelengths. These spikes are clear to see, do not vary, and provide an easy way to calculate the difference between what the probe is reading, and the reality of physics. This offset is stored on the probe as a transformation.

The spectral distribution of a Helium-gas lamp.

Next, a tungsten light source reflecting diffusely within an integrated sphere is used to calibrate the probe’s reading of spectral distribution.

The integrated sphere is itself calibrated to NIST standards, and the bulb usage is carefully timed and recorded, since the whole sphere is periodically sent in for measurement. In fact, one of the measures taken to extend the life of this device is to only turn it on by slowly increasing the voltage from 0 to full, in order to prevent voltage spikes from causing unnecessary wear to the bulb.

As with the Helium measurement, the difference between the measured spectral radiance in linear pixels (the raw data that is recorded by the probe through the optics) and the known output of the integrated sphere is used to determine the transform from the pixel value recorded by the probe to an accurate reading of spectral radiance. This transform is also stored on the probe.

Spectral output of the diffuse tungsten lighting within the integrated sphere.

Lastly, as an alternate step, the quality of the integrated sphere’s output can be verified by measuring the reflectance of a NIST-traceable tungsten bulb (a $1000 200-watt lamp) shining on a similarly NIST-standardized diffuse “reflectance standard” from a specific distance. To highlight how picky these devices are, the bulb must be sent in to be re-measured every 600 minutes of use, with the new measurements factored into subsequent use of that bulb. Meanwhile, the reflectance target, which is composed of compressed chalk-like particles, must be certified to be close to 100% reflective.

This is only done for spot checking, in order to verify that the integrated sphere is operating correctly. The bulb and reflectance target are mounted a measured distance apart (the intensity of the reflected light is controlled this way, via the inverse square law), with the probe pointed at the target, and another measurement is taken and compared.
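
The inverse square relationship doing the work here is simple enough to show in two lines; the candela figure is hypothetical:

```python
# Doubling the bulb-to-target distance quarters the light arriving there,
# which is how a measured mounting distance pins down the expected level.
def illuminance_lux(intensity_cd, distance_m):
    return intensity_cd / distance_m ** 2

print(illuminance_lux(2000, 1.0))   # 2000.0 lux at 1 m
print(illuminance_lux(2000, 2.0))   # 500.0 lux at 2 m
```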

Spectroradiometer measuring the NIST traceable bulb reflecting off of the reflectance standard target.

And that’s it. Once each Spectroradiometer has been calibrated in this way with the offsets stored on the probe, they’re shipped out to manufacturers, calibrators, and facility people who in turn use them to calibrate the displays we use in the world of film and video.

In the process of learning how Spectroradiometers are calibrated, I also learned much more about how they actually work, and how they fundamentally differ in operation from Colorimeters. These differences are key to understanding each device’s advantages and disadvantages when it comes to choosing which kind of device to use.

Spectroradiometers measure the wavelengths of light directly. Optics gather light through the front lens and focus it through a “diffraction grating,” a grooved filter where each groove works as a tiny prism to split the light apart for measurement. In Spectroradiometers, the quality of these optics determines the quality of the instrument, which is rated in nanometers (for example, the CR-250 is a 4 nm probe, which is considered extremely accurate for purposes of video calibration).

The CR-250 shown connected to an Android phone running portable measurement software.

The light that’s split apart via the diffraction grating then falls upon the 250-pixel grid of the Spectroradiometer’s CMOS sensor, which is set up to measure the 380 to 780 nanometer range of the spectrum that CIE 1931 specifies as the visible range of light. Because Spectroradiometers measure the spectral distribution of light directly, they need no other information about the source being measured.

However, because of the physics of how they function, Spectroradiometers are slow. The diffraction grating is not efficient at transmitting light; two-thirds of the light coming in through the front lens is lost right off the bat. Then, only 1/250th of the remaining light is measured by each pixel of the probe’s sensor. The only way to compensate for this low sensitivity is to increase the exposure time of light falling onto the sensor. This isn’t a problem when measuring bright colors, but it becomes a significant problem when measuring very dark colors; for example, measuring a 3 candela source requires a 30 second exposure with a Spectroradiometer.
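
As a purely illustrative model (not anything Colorimetry Research documents about the CR-250’s actual behavior), exposure time scaling inversely with luminance, anchored to the 3 candela / 30 second figure above, conveys the tradeoff:

```python
# Hypothetical model: photon integration time scales inversely with luminance.
def exposure_seconds(luminance_nits):
    return 30.0 * 3.0 / luminance_nits

print(exposure_seconds(3))     # 30.0 s for a very dark patch
print(exposure_seconds(100))   # 0.9 s for a brighter one
```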

Colorimeters work quite differently. Colorimetry Research also makes a Colorimeter, the CR-100, but the principle is the same for colorimeters made by anyone. For the CR-100, light coming through the front lens is split and directed through three colored glass filters, one each for Red, Green, and Blue, with the filtration specified by the CIE 1931 2 degree standard observer spectral response curves, which model the sensitivity of the cones of human eyes to low, medium, and high wavelengths of light. The output of each filter is then measured, with the quality of the measurement depending entirely on how well the filters match the CIE 1931 standard observer model.

The CR-250 and CR-100 mounted side by side.

Because the sensors reading the output of the Red, Green, and Blue filters are each receiving one-third of the available light, Colorimeters are extremely fast. The same 3 candela source that takes 30 seconds to read with a Spectroradiometer takes only 1 millisecond on a Colorimeter. In truth, the speed of Colorimeter readings also depends on the refresh rate of the display device (in Hz), so assuming a display running at 60 Hz, the measurement actually takes 16.6 milliseconds. Either way, this is considerably faster than a Spectroradiometer.

And this increased sensitivity means that Colorimeters are also better at measuring extremely dark colors, with the CR-100 capable of taking accurate color measurements all the way down to 0.03 cd/m2, and accurate luminance measurements all the way down to 0.003 cd/m2.

However, because Colorimeters are using fixed filters based on CIE 1931, they must be supplied with specific information about the spectral distribution of the particular type of light they’re measuring, as different displays use completely different types of light sources to emit an image. Otherwise, they’ll give inaccurate results. This means that you need to store different profiles on the Colorimeter (which is typical) for Plasma, Fluorescent-backlit LCD, White-LED-backlit LCD, OLED, etcetera. Usually, Colorimeters store generic profiles on the probe itself (which are available via pop-up menus in the calibration software you’re using), for use in measuring each display you have, and typically this works fine.

Different profiles for each of the available display backlight technologies.

However, depending on the quality of your display and the accuracy and age of its backlight, it’s possible that the backlight of your display may diverge from the optimism of the generic profile on your probe, in which case the resulting measurements may be a little off.

So, the basic choice is between a Spectroradiometer, which will be totally accurate for any device but will take a really, really long time to do a full 17 x 17 x 17 sampling of the RGB color cube to profile your display (that’s 4,913 color patches), and a Colorimeter, which will do that same 4,913 color patch calibration in an hour, but might be a tiny bit off if there’s something obscure wrong with your display.
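
For a sense of scale, that 17 x 17 x 17 sampling is just every combination of 17 evenly spaced levels per channel:

```python
from itertools import product

levels = [round(i * 1023 / 16) for i in range(17)]   # 17 evenly spaced 10-bit values
patches = list(product(levels, repeat=3))            # every R, G, B combination
print(len(patches))   # 4913
```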

I’m not trying to scare you. To put this into perspective, many companies get great results when using a calibrated Colorimeter’s generic presets to measure a high-quality display device. This is yet another reason to not try and use a cheap television or computer display, since displays that are designed to be color-critical also happen to be easier to calibrate.

However, if you demand total accuracy and total efficiency in any situation, there is another path, and that is to use a Spectroradiometer in addition to a Colorimeter in what calibration applications refer to as offset mode. Both LightSpace and CalMan can do this, and it involves using the Spectroradiometer to take four readings from your monitor, Red, Green, Blue, and White. Those readings are then used to calculate an offset for the Colorimeter’s measurements, so that the Colorimeter’s 4,913 readings are totally accurate for that display at that moment in time.
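
In miniature, the offset calculation looks something like the following, with all the measurement numbers hypothetical: both probes read the display’s R, G, and B primaries, and a matrix is solved that maps the fast probe’s readings onto the accurate ones.

```python
import numpy as np

colorimeter = np.array([   # XYZ the colorimeter reports for R, G, B patches
    [40.10, 20.90,  2.10],
    [36.90, 72.80, 12.40],
    [17.20,  6.90, 93.00],
])
spectro = np.array([       # XYZ the spectroradiometer reports for the same patches
    [41.24, 21.26,  1.93],
    [35.76, 71.52, 11.92],
    [18.05,  7.22, 95.05],
])

# Solve M so that colorimeter @ M = spectro for the three primaries
M = np.linalg.solve(colorimeter, spectro)

# Every subsequent colorimeter reading gets corrected through M, so the
# fast probe inherits the slow probe's accuracy for this display
white = np.array([94.20, 100.60, 107.50])
print(white @ M)
```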

So, if you were wondering why high-quality color probes are so expensive, this glimpse behind the curtain of the technologies involved hopefully provides some, ahem, illumination. Although I would be remiss were I not to point out that prices are lower than they’ve ever been, what with Colorimetry Research’s CR-250 Spectroradiometer going for $6,990, and their CR-100 Colorimeter going for $4,990 (prices taken from Flanders Scientific). Furthermore, there are many other vendors to consider, including Klein Instruments, Photo Research, Konica Minolta, and Xrite, to name the ones with which I’m familiar.

And hopefully this has clarified the concrete differences between the two kinds of probes, giving you some background for further research in the process of trying to figure out which will be more useful for your application.

As usual, the easy answer is the most expensive one. Buy one of each.



On Violence, Terrorism, and War

Rage is the engine, and retribution is the fuel that keeps the carousel of violence on which we find ourselves spinning. More rage and more retribution won’t solve or end anything, but it will result in more death, and it will keep the carousel spinning.



Generating Optimized Media That Won’t Clip

Here’s an important tip when using “Optimized Media” in DaVinci Resolve 12 (or higher) to spare yourself the processing overhead of debayering raw media. For those of you who don’t know, you can right-click a selection of clips in the Media Pool that are in one or more formats that are processor intensive to work with (camera raw clips, H.264, other intensive-to-decode media types), and choose “Generate Optimized Media” to have Resolve automatically create an alternate set of media files that let you work faster.

Generate Optimized Media

All Optimized Media you generate is compressed using whatever setting is currently selected in the General Options panel of the Project Settings. The default media format is ProRes 422 HQ.

Optimized Media Format

Once you’ve generated optimized media for a set of clips in a project, the Playback > Use Optimized Media if Available setting determines whether or not you’re using Optimized Media, or the original media files that you had imported into the Media Pool.

Use Optimized Media if Available

When using Optimized Media, you can also reveal an additional column in the Media Pool’s list view, which lets you see which clips have been optimized, and which clips haven’t.

Optimized Media Media Pool Column

However, there’s a potential problem with using Optimized Media, which can be seen in clips with high dynamic range; the highlights of any image data with levels above 1023 become clipped. In the following screenshots, you can see the winter exterior has plenty of levels above 1023, as evidenced by the waveform below.

Original Image

Original Waveform

However, after optimizing these CinemaDNG raw clips, any attempt to retrieve the highlights above 1023 by lowering the Gain or Offset controls results in flat, clipped highlights, which can also be seen as a flattening in the waveform.

Clipped Image

Clipped Waveform

This, of course, defeats the whole purpose of shooting camera raw media in the first place. However, there’s a way you can generate optimized media that actually preserves these highlights, and that’s by changing the format used for optimization in the General Options panel of the Project Settings to “Uncompressed 16-bit float.”

Changing Optimized Media Format

Uncompressed 16-bit float is a proprietary DaVinci image format designed to preserve out-of-range floating point image data. The only downside is that by using Uncompressed 16-bit float to generate optimized media, you create larger optimized media files. However, you still spare yourself the processor overhead of having to debayer your camera raw media, and you preserve high dynamic range image data for grading. So, you might need to make sure you have fast hard drive storage, but you’ll still work faster.
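
The difference is easy to demonstrate in miniature. This sketch isn’t Resolve’s actual codepath, just the general principle: normalized values above 1.0 survive a 16-bit float intermediate but are flattened by a 10-bit integer one.

```python
import numpy as np

# Highlight values normalized so that 1.0 corresponds to code 1023
linear_highlights = np.array([0.5, 0.9, 1.4, 2.8], dtype=np.float32)

as_10bit_int = np.clip(np.round(linear_highlights * 1023), 0, 1023) / 1023
as_16bit_float = linear_highlights.astype(np.float16)

print(as_10bit_int)     # ~[0.5, 0.9, 1.0, 1.0]  <- super-whites clipped
print(as_16bit_float)   # ~[0.5, 0.9, 1.4, 2.8]  <- super-whites preserved
```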

Preserved Highlights Image

Preserved Highlights Waveform

Incidentally, the exact same issue occurs when using the Smart Cache, which generates cache media for timeline and grading effects that are too processor intensive to play back in real time, except you’ll need to change the “Cache frames in” pop-up in the General Options panel of the Project Settings to Uncompressed 16-bit float, instead.

Cache Frames Format

Optimized Media and the Smart Cache are two of Resolve’s best features for letting you grade higher quality media on systems with lower processing power. If you’re careful about what media format you use, you can preserve the quality of high dynamic range media, and you can even use Optimized Media for finishing and final output.



Brand New DaVinci Resolve 12 Editing Tutorials

Editing and Finishing in DaVinci Resolve 12

I’m very happy to announce that, after a huge amount of recording, and even more time spent editing and organizing, my new Editing & Finishing in DaVinci Resolve 12 video training is now available from Ripple Training, for $99 USD. I’m really happy with how these lessons turned out, so if you want to understand how editing in Resolve works, then this is the title for you.

It’s an exhaustive look at editing in DaVinci Resolve, detailing every nook and cranny of the Media and Edit pages. There are nine hours and thirty minutes of videos, spanning 90 meticulously organized lessons complete with chapter markers that let you jump to whatever topic you want to focus on next, making this useful as a reference as well as a class.

The Metadata Editor

And every relevant topic is covered, from choosing whether to use the free or studio version of Resolve and touring the application, to setting up and organizing projects, importing and organizing media, improving performance and managing media, drag & drop editing, precision editing, cutting dialog, multicam editing, trimming and rearranging clips, using effects and transitions, and working with audio. Absolutely every available editing technique in DaVinci Resolve is demonstrated in detail.

Track Audio Controls

However, the true power of Resolve is in its seamless marriage between editing and color, so there’s also over an hour of tutorials dedicated to color correction and grading. These start with how you can prep the color of your clips prior to editing, and continue with the basics of the Color page, making automatic and manual color adjustments using Lift/Gamma/Gain and curve controls, copying and matching grades, and adding secondary adjustments.

Split Screen

And since Resolve is such a capable finishing environment, additional lessons cover audio mixing and effects, creating still and animated video effects, compositing, titling, stabilization, green-screen compositing, and the use of third party filters.

Transition Curves Editor

And, in a first for me, this tutorial is accompanied by a complete set of high-quality media and project files so you can follow along as I demonstrate each feature and technique, and then continue to experiment on your own.

At this point, I have several titles available covering DaVinci Resolve from Ripple Training, so here’s how they all fit together.

If you want a complete understanding of how to edit in Resolve, along with some grading basics, then the nine hour Editing & Finishing in DaVinci Resolve 12 is for you.

On the other hand, if you want a faster overview of how both editing and grading work in Resolve, you might want to check out my DaVinci Resolve 12 Quick Start, which is a more approachable 4 hour overview focusing only on the basics.

And of course if you’re interested in learning more about how to grade color, then you should check out my 13 hour Color Grading in DaVinci Resolve 11, along with the 5 hour companion What’s New in DaVinci Resolve 12 (together, these titles cover all of grading in DaVinci Resolve).

And finally, if you want to learn absolutely everything I have to teach about DaVinci Resolve, Ripple Training has put together a five title DaVinci Resolve Essentials Training Bundle (includes Editing & Finishing, Color Grading, What’s New in 12, Color Grading a Scene, and Creative Looks).

So, no matter what aspect of DaVinci Resolve interests you, I’ve got a set of lessons that covers it. I hope you find these useful!



“The Place Where You Live” — A Science Fiction Short

The second poster for “The Place Where You Live”

This is it. After two years of production and post-production, and a year traveling on the film festival circuit, I can finally release my Science Fiction short “The Place Where You Live” free on the web to the general public, available on both YouTube and Vimeo. It’s been a long time coming.

While the shoot itself went fairly quickly, with two-and-a-half days of principal photography, and another day of pickups a year later, post-production took a good long time for everyone involved. It’s tough squeezing in ambitious VFX composites in-between paid gigs, and even I wasn’t immune as this came during the same year I ended up writing and revising a total of five different books (Adobe SpeedGrade Classroom in a Book, Autodesk Smoke Essentials, the DaVinci Resolve 10 manual, Color Correction Handbook 2nd Edition, and Color Correction Look Book), in addition to the color grading gigs I had that year. Squeezing in my portion of the post where I could was hard, and not a day passed where I didn’t wake up and feel guilt over not being able to get to my film (I’m never writing that many books in a year ever, ever again).

In the end, nothing motivates finishing like a deadline, and an early look at the trailer and a teaser convinced the organizer of the Midwest Sci-Fi Film Festival that he wanted my short in their lineup. This prompted my last and most break-neck month of post-production and finishing, to wrap up the project once and for all, and to embark upon what would become a total of 18 festival screenings, plus one promotional screening (in Beijing, no less). In the process, we garnered six awards for everything from “Best Science Fiction Short” (Big Easy International Film Festival) to “Best Leading Actress” (ConCarolinas Short Film Festival), to a “Special Jury Prize” at the Worldfest-Houston International Film Festival. I travelled to what festivals I could, along the way meeting many talented filmmakers, actors, and film enthusiasts at screenings both in the U.S. and abroad.

Film Festivals are always a great experience; films are meant to be seen by an audience, so it’s gratifying to put the work in front of people, which to me is the whole point. Happily, we had great audiences who were, on the whole, enthusiastic about the film. And being in the Science Fiction category of a lot of festivals, I have to say there’s a lot of really fantastic work out there right now. “The Place Where You Live” was in great company in every shorts program in which it played.

Please watch the credits, as I can’t thank the folks who worked with me on this nearly enough. Additionally, I want to give a huge shout-out of thanks to Autodesk, who sponsored the project and develops the software that made it possible (the entire short was edited and composited in Autodesk Smoke). Their support was key to this film’s creation, and helped me to get up to speed with an incredibly capable and deep application. Smoke’s fantastic integration of node-based compositing and editing made it easy to tweak every shot in this movie until the day it was finished. Autodesk 3D Studio Max was also used by artist B.J. West to create the CG effects, so Autodesk software touches every single frame of this film (along with Adobe Illustrator, Photoshop, and After Effects to create animated graphics elements, DaVinci Resolve Studio to create dailies and do the final grade, GenArts Sapphire plugins to help all along the way, and Avid ProTools to do the sound design and mix). If you’re interested in learning more about the workflow I and the other artists who worked on this project used, you can see a presentation I gave at the 2013 Amsterdam SuperMeet here. In the coming weeks, I’ll be posting a couple more “making of” videos showing preproduction and workflow.

And now, my only appeal. If you like this short movie, please help spread the word among your friends, colleagues, or anyone you know who likes thoughtful Science Fiction. Promotion is one of the great challenges facing independent filmmakers, and word of mouth on social media and in person is one of the best ways you can reward this project if you like what you see.

And so, without further ado, it’s showtime!

Thank you for watching! If you want to read more about our adventures making this film and following the film festival circuit, please check out The Place Where You Live website.



Having Fun With Post – Grading, Compositing, and Editing in Resolve

The shoot for my goofy little rant, “The Importance of Color Correction,” came on the heels of some promos that Steve Martin wanted me to record for my newest Ripple Training titles for DaVinci Resolve 12. I figured, since I’m there on a stage, why not have a bit of fun with it?

A confession – I suffer from incurable impatience between a shoot and the beginning of the cut, so once home I immediately fired up Resolve 12 and got to work. I was determined to do the entire thing inside of Resolve, to test the workflow of grading, compositing, cutting, and finishing a green-screen intensive project, all within Resolve 12. Since I knew I wanted to edit a series of dynamically changing backgrounds that reacted to what was being said, my first order of business was to grade the clip, and create transparency from the green background for compositing within the timeline.

I shot with the BMD Production 4K camera, but I made the decision to record to ProRes HQ, instead of raw, as I wasn’t sure how many takes I’d burn through, or how much space I’d ultimately need. This meant that, although I recorded a log-encoded image, my camera settings were burned into the files. The result, owing to a combination of camera color temperature settings and shooting through the glass of the teleprompter I was using, was the following image (after normalizing to Rec. 709 using Resolve Color Management):

Before the Grade

After a relatively straightforward grade, this was easily turned into:

After the Grade

This took two nodes. It could’ve been one, but I like keeping my HSL curves separate for organization.

My Original Grade

This was the original grade, but since I rendered out self-contained graded clips to hand off to Ripple, I ended up re-importing the graded media and using it as the basis of my next few adjustments and the edit. This wasn’t necessary at all; it just seemed like the thing to do, since I had the media and all.

With the grade accomplished, it was time to create transparency, which I did using the blue-labeled Alpha Output in the Color page’s Node Editor, connecting a matte I created using a combination of techniques (nodes 3, 4, and the Key node), while the color adjustment nodes (1 and 2) connected to the RGB output.

The Grade and Composite

In particular, since some idiot (that would be me) rolled out of bed and threw on a green jacket with a green pocket square without thinking before rushing over to the stage, I needed to be a bit clever with how I created the matte. Then again, being faced with this kind of issue, I was kind of glad to have an interesting test of the new 3D Keyer’s capabilities for green-screen compositing in a slightly awkward situation.

Turns out, the 3D Keyer (in node 3) did a fantastic job of specifically keying the green screen background while omitting the slightly different green of my jacket, while retaining nice edges without too much crunchiness, so big props to the 3D Keyer; it only took one sample of the background green and a second subtractive sample of the foreground jacket to do it (along with very slight application of the Clean Black and Clean White controls).

3D Keyer
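To give a sense of how a second, subtractive sample can separate two similar greens, here’s a toy distance-based key function. This is emphatically not the 3D Keyer’s actual algorithm, and the sampled colors are hypothetical, but the principle is the same:

```python
import numpy as np

def toy_key(pixel, bg_sample, fg_sample):
    """Return 1.0 (keyed out) when the pixel is closer to the additive
    background sample than to the subtractive foreground sample."""
    d_bg = np.linalg.norm(pixel - bg_sample)
    d_fg = np.linalg.norm(pixel - fg_sample)
    return 1.0 if d_bg < d_fg else 0.0

bg_green     = np.array([0.10, 0.80, 0.15])  # sampled green screen
jacket_green = np.array([0.25, 0.55, 0.25])  # sampled jacket (subtractive)

print(toy_key(np.array([0.12, 0.78, 0.16]), bg_green, jacket_green))  # 1.0: keyed out
print(toy_key(np.array([0.24, 0.57, 0.26]), bg_green, jacket_green))  # 0.0: jacket kept
```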

However, no combination of samples would also omit the green pocket square, which was just too similar to the background. This required me to divide and conquer, using the Key Mixer to combine the 3D Keyer matte with a second matte generated by a tracked window to cover the pocket square.

Keyer Combination

The window itself was easy to make and track, except for the part where some idiot (the “talent”) decided to wave his arms around.

Bad Tracking Scenario

The hand completely screwed up the track, but my body motion was so irregular that just deleting the disrupted part of the track and letting Resolve automatically interpolate between the areas of the clip that had good tracking data wouldn’t cut it (although that was the first step). So, I ended up using yet another one of Resolve 12’s new features to solve the issue: the new Frame mode of the Tracker palette, which makes it easier to auto-keyframe manual alterations to a window’s shape and position (i.e., a bit of rotoscoping). Five manual adjustments (and keyframes) later, the hole in the tracking data was nicely filled.

Fixing the Track With Rotoscoping
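Deleting bad samples and interpolating across the gap is conceptually simple; here’s a toy version of that first step, with hypothetical frame numbers and positions, and plain linear interpolation rather than whatever Resolve actually does. For irregular motion like mine, the manual keyframes then replace these interpolated values:

```python
import numpy as np

frames  = np.arange(9)
x_track = np.array([100, 102, 104, 330, 510, 95, 112, 114, 116], float)
good    = np.array([1, 1, 1, 0, 0, 0, 1, 1, 1], bool)  # frames 3-5 ruined by the hand

# Keep only the trusted samples, then interpolate across the hole.
x_fixed = np.interp(frames, frames[good], x_track[good])
print(x_fixed)  # frames 3-5 become a smooth ramp: 106, 108, 110
```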

Inverting the 3D Keyer matte in Node 3 (using the Invert button within the Keyer Palette) and letting the Key Mixer node add the two mattes together from nodes 3 and 4 gave me the overall matte I needed, which, when connected to the Alpha Output, punched out the background nicely.
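Numerically, that invert-and-add combination is about as simple as compositing math gets; here’s a sketch with hypothetical alpha samples (0 is transparent, 1 is solid):

```python
import numpy as np

# Samples at four spots: green screen, a soft edge, my body, the pocket square.
keyer_matte  = np.array([1.0, 0.5, 0.0, 0.95])  # 3D Keyer: 1 = keyed as green
window_matte = np.array([0.0, 0.0, 0.0, 1.00])  # tracked window over the square

# Invert the keyer so the foreground is solid, add the window matte to
# fill the pocket-square hole, and clamp, as an additive mix does.
combined = np.clip((1.0 - keyer_matte) + window_matte, 0.0, 1.0)
print(combined)  # [0.  0.5 1.  1. ] <- the pocket square is solid again
```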

Now, however, I needed to deal with the green spill that was figuratively (possibly even literally) hitting me in the head. Sadly, while the Despill checkbox that’s built into the 3D Keyer works wonderfully in situations where the person being keyed isn’t wearing fucking green, in my case I couldn’t use it without leeching all the color out of my jacket. So, time to go back to the old ways, isolating my head using a tracked circular window in node 2, and using the Hue vs. Sat curve to selectively desaturate the greens that I didn’t want contaminating my face.

Manual Despill
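If you’re curious what that curve adjustment amounts to, here’s a sketch of the general idea of selectively desaturating hues near green. It’s my simplification, not the actual math behind Resolve’s Hue vs. Sat curve:

```python
import colorsys

def desaturate_greens(r, g, b, strength=0.8, center=1 / 3, width=0.10):
    """Knock saturation down only for hues near green (H around 120 degrees)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if abs(h - center) < width:
        s *= 1.0 - strength
    return colorsys.hsv_to_rgb(h, s, v)

print(desaturate_greens(0.8, 0.6, 0.5))  # warm skin tone: left alone
print(desaturate_greens(0.4, 0.7, 0.4))  # greenish spill: mostly desaturated
```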

With all that done, I could now go back to the edit page and cut together the varied mix of backgrounds behind the foreground clip. While I was at it, although the entire rant is a single long take (thank you teleprompter), I wanted to chop it up to punch up the rhythm by rippling out a few pauses, masking the jumps with push-ins made using the Zoom controls of the Edit page Inspector. Thus, at the end of the edit, I had a timeline that looked like this:

The Edited Timeline

For the backdrops and audio cues, I used clips from the THAT Studio Effects collection of HD resolution effect clips (licensed from Rampant Design, which offers 2K–5K resolution media). The cut went smoothly, pretty much in real time on my 2010 Mac Pro with Nvidia GTX 770 GPU. (I can’t believe how much life I’ve gotten out of that five-year-old machine.)

However, I had one last problem. Because I had decided to record to ProRes HQ at 1080 resolution, some of my more aggressive push-ins started to look soft, softer than I liked going out the door. Mulling over how to deal with the issue, I thought it would be funny to try to emulate the effect of zooming into a televised image, such that you’d see the pixels of the TV. Red Giant Universe to the rescue: I used their Holomatrix OpenFX filter to add vertical scan lines (hey, why not) to the zoom-ins, stylizing them to the point where the softness is irrelevant.

Adding OpenFX

And that, as they say, was that. A composite-heavy green-screen promotional piece graded, composited, edited, and finished entirely within DaVinci Resolve. I did the mix as well, but that was nothing to brag about as the first version I uploaded to Vimeo had all of my dialog mixed to the left channel (there’s a reason I send final mixes for my projects to dedicated audio professionals). Still, I fixed the problem, tuned the mix, and completed the program, which you can see in the previous blog post.

All in all, it was a great experience, and while I’m the first to say I’m biased since I work with the DaVinci design team, I’m also being completely honest when I say that I’ve been really enjoying editing in Resolve 12, and using the hell out of all the new grading features, to boot.



Do You Need to Grade Your Program?

I suspect you know what I’m going to say, but on the premise that it’s how you say it…



More Resolve 12 Mini-Tutorials on YouTube

Ripple Training is hard at work editing my “New Features in Resolve 12” title, which should be coming out really, really soon. To tide folks over until then, they’ve started posting some free new features videos I’ve made on the “DaVinci Resolve in Under 5 Minutes” section of their YouTube channel. Two came out today, and there are more to come covering both editing and grading features in the public beta of DaVinci Resolve 12.

The first of this week’s pair of new videos covers the new Smooth Cut transition in the Edit page, for eliminating “ums,” stutters, and other speech disfluencies, and patching up the hole. This feature’s effectiveness depends heavily on how much motion there is in the frame, so it won’t work for every jump cut you throw at it, and it works best when there’s a minimum of subject and camera movement. This video shows what it does.

The second video summarizes how to use the new 3D Qualifier, a brand new keyer in Resolve 12 that is often faster, more accurate, and in many cases more pleasant to use than the older HSL qualifier. Bottom line: this keyer should let you work more efficiently for most chroma key isolations.



A Resolve 12 User Manual Reader’s Guide


The beta edition of the Resolve 12 User Manual is included with the installation in the DaVinci Resolve application folder

The day has come. After months of development, the DaVinci Resolve 12 public beta is upon us, with dozens upon dozens of new features to use and explore, encompassing both the evolution of Resolve into a fully satisfying creative editing solution, as well as an extension of Resolve’s already powerful grading tools with fantastic new features and numerous workflow enhancements to make grading and finishing faster and smoother than ever.

(update) If you like video tutorials, Ripple Training has just released my “What’s New in DaVinci Resolve 12” title, in which over the course of five hours I provide an in-depth look at nearly every new feature found in DaVinci Resolve 12. If you hate reading, this is the next best thing to all the chapters I’m about to recommend in the updated user manual.

It’s no secret that I work with the Resolve design team at DaVinci, and also write the User Manual. Given the massive collection of features in this year’s release, the accompanying User Manual update was similarly enormous, and now that the manual has cracked the 1000 page mark (1095 pages in the beta version), with 704 new and updated screenshots at last count, it was clearly time to do a full reorganization of the chapters, in an effort to make it easier to find the information you’re looking for. Consequently, the Resolve 12 User Manual is divided into 44 chapters, with many valuable topics now appearing within their very own chapter for the first time. Check out the table of contents on pages 3-19 and you’ll see what I mean.

So, you ask, where do I start if I’m looking for what’s new?

Chapter 2, “Logging In and The Project Manager,” will give you some new insights into how and why the multi-user login screen is now optional for new installations, and how upgrading Resolve will work on current installations. There’s also updated information on new things you can do using Dynamic Project Switching (it’s now possible to copy and paste clips and timelines among different projects, and Dynamic Project Switching makes this faster), and it covers the new Archive feature, which is great for putting projects with media into long-term storage, or for archiving projects to make it easier to hand them off to other facilities.


The Archive and Restore commands in the Project Manager

Chapter 5, “Improving Performance, Proxies, and the Render Cache,” is required reading. This chapter consolidates everything you can do to make Resolve run faster, which now includes the all-new ability to use “Optimized Media” (an updated spin on the old Pre-Rendered proxies mechanism Resolve had before) to work faster by turning processor-intensive media formats into faster-to-work-with clips using a format and proxy size of your choosing. Once you’ve optimized media, you can switch back and forth between the optimized and original media without needing to reconform or relink; it’s all managed by Resolve. Additionally, optimized media works with the real-time proxy command (which now lets you choose from Half and Quarter proxies), the Smart Cache, and all of Resolve’s other features for improving performance, so this is a chapter worth understanding in its entirety if you want to get the most performance out of Resolve 12.


Customizable options for how Optimized Media is created: you can select the format and the size

Chapter 6, “Data Levels, Color Management, and ACES” covers the brand new DaVinci Resolve Color Management, so head next to page 154 to learn all about how you can use Resolve Color Management (RCM) to deal with the varied color spaces of multiple media formats and log-encoded media without needing to use LUTs. Whether you’re a colorist, a finishing editor, or a creative editor, this new way of managing color just might speed you up.


Resolve Color Management lets you specify the Input Colorspace of your media, the Timeline Colorspace (or working color space), and the Output Colorspace
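Conceptually, color management boils down to decoding each source into a common working space and encoding once on output. The sketch below is only my illustration of that idea, with pure power functions standing in for the real transfer functions RCM uses:

```python
def decode_to_linear(v, input_gamma):
    """Stand-in for an Input Colorspace transform (e.g., a camera encoding)."""
    return v ** input_gamma

def encode_for_output(v, output_gamma):
    """Stand-in for an Output Colorspace transform (e.g., Rec. 709-ish)."""
    return v ** (1.0 / output_gamma)

# Two clips with different encodings meet in the same working space...
clip_a = decode_to_linear(0.5, input_gamma=2.4)
clip_b = decode_to_linear(0.5, input_gamma=2.0)

# ...grading happens there, in one consistent space, and then a single
# output transform encodes everything for delivery.
print(encode_for_output(clip_a, output_gamma=2.4))
print(encode_for_output(clip_b, output_gamma=2.4))
```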

Chapter 8, “Adding and Organizing Media,” has a new section on page 179 covering “Creating and Using Smart Bins,” which is Resolve’s way of letting you use multi-criteria searches employing clip metadata to automatically pull together all clips sharing a particular set of metadata. It’s a really sophisticated implementation that lets you match all of some criteria while matching any of others, enabling you to build really flexible searches.


You can create Smart Bins for automatically gathering media in your project using simple or complex metadata searches
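To make the all/any distinction concrete, here’s a tiny sketch of the matching logic a Smart Bin implies; the clip records and field names are hypothetical, not Resolve’s actual metadata schema:

```python
clips = [
    {"name": "A001_C002", "keywords": ["interview", "day"], "scene": "4"},
    {"name": "A001_C007", "keywords": ["b-roll", "day"],    "scene": "4"},
    {"name": "B002_C001", "keywords": ["interview"],        "scene": "7"},
]

def match_all(clip, tests):
    return all(test(clip) for test in tests)

def match_any(clip, tests):
    return any(test(clip) for test in tests)

# "Scene is 4" AND ("interview" OR "b-roll"): an 'any' group nested
# inside an 'all' group, like nesting criteria in the Smart Bin editor.
criteria = [
    lambda c: c["scene"] == "4",
    lambda c: match_any(c, [lambda c: "interview" in c["keywords"],
                            lambda c: "b-roll" in c["keywords"]]),
]

print([c["name"] for c in clips if match_all(c, criteria)])
# -> ['A001_C002', 'A001_C007']
```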

Chapter 9, “Working With Media,” starts out with information on the new Display Name column of the Media Pool, which lets you create more human-readable clip names that will be displayed in the Timeline. Chapter 9 also includes information on Resolve’s new “Auto-Sync Audio Based On Waveform” commands, which do waveform matching to sync dual-source audio with video recordings that have matching camera audio. Additionally, the section on Changing Clip Attributes has been updated with much more information on how you can use the Clip Attributes window to tailor the clips in your project to suit your needs.


Clip Attributes lets you adjust the settings of one or more clips in the Media Pool
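Back to the Auto-Sync commands for a moment: waveform-based syncing is conceptually a cross-correlation problem, finding the offset where the camera’s scratch audio best matches the dual-system recording. Here’s a toy sketch of that idea (not Resolve’s algorithm, and with a deliberately tiny sample rate):

```python
import numpy as np

rate = 1000                                   # toy sample rate, for speed
rng = np.random.default_rng(1)
field_audio  = rng.standard_normal(4 * rate)  # the dual-system recorder
camera_audio = field_audio[rate:3 * rate]     # camera scratch track, 1 s late

# Slide the camera audio along the field recording and keep the offset
# with the strongest correlation; that's the sync point.
corr   = np.correlate(field_audio, camera_audio, mode="valid")
offset = int(np.argmax(corr))
print(offset / rate, "seconds")               # -> 1.0
```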

One of Resolve’s most powerful new tools is the ability to simply and easily relink media. This feature is explained succinctly on page 201, but it’s explained much more fully in Chapter 22, “Importing Projects and Relinking Media.” In particular, page 490, “How DaVinci Resolve Conforms Clips,” and page 529, “Manually Conforming and Relinking Media,” have been extensively rewritten to explain the difference between “Relinking” and “Conforming” (a new distinction I make to explain how Resolve works with media more clearly), and to discuss the numerous methods Resolve 12 employs to manage the relationship between clips in the Media Pool and media files on disk (linking), versus the relationship between clips in a Timeline and clips in the Media Pool (conforming). If you want to understand what’s happening under the hood, this is an important chapter to read.
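One way to picture the distinction (my illustration, not Resolve’s internals): linking relates Media Pool clips to files on disk, while conforming relates Timeline clips to Media Pool clips, and the two can change independently:

```python
from dataclasses import dataclass

@dataclass
class MediaPoolClip:
    file_path: str              # linking: pool clip -> media file on disk

@dataclass
class TimelineClip:
    source: MediaPoolClip       # conforming: timeline clip -> pool clip
    start_frame: int
    end_frame: int

pool_clip = MediaPoolClip("/Volumes/RAID/A001_C002.mov")
edit      = TimelineClip(pool_clip, start_frame=120, end_frame=240)

# Relinking changes only where the pool clip points on disk; the conform
# (which pool clip the timeline clip uses) is untouched.
pool_clip.file_path = "/Volumes/NewRAID/A001_C002.mov"
print(edit.source.file_path)    # the timeline clip follows automatically
```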

Of course, the vastly improved editing environment is one of the big new aspects of this release. Multicam editing, superior audio playback performance, better JKL responsiveness, expanded multi-selection trim capabilities, better dynamic trimming, media management tools, and hugely increased audio capabilities including audio filter support for both clips and tracks, track level and filter keyframing, mixer automation recording, and ProTools export make Resolve 12 into a great NLE with the tightest grading integration in the industry.


Resolve 12 is now compatible with AudioUnit and VST audio plugins

The chapters that encompass Resolve’s editing capabilities range from Chapter 13, “Using the Edit Page,” through Chapter 21, “Media Management.” That’s nine chapters of editing information, but here are the highlights.

Chapter 15, “Working in the Timeline,” covers Resolve’s new re-syncing contextual menu commands for automatically dealing with audio and video items that have gone out of sync.

Chapter 16, “Multicam Editing, Take Selectors, Compound Clips, and Nested Timelines” covers all of Resolve’s multi-clip editing capabilities, headlining with version 12’s new Multicam editing tools, which are comprehensive and incredible. This chapter also covers how you can nest one timeline inside of another, which is yet one more new feature available in version 12.


The new Multicam editor in Resolve 12

Chapter 17, “Trimming,” has been expanded and rewritten to cover all of the newest trimming capabilities that Resolve offers, specifically the ability to make multiple selections on the same track to simultaneously ripple, roll, slip, and now even slide multiple clips or edits at once. This includes making selections to do asymmetric trims on the same track, which opens up some really useful new shortcuts when you’re hammering a sequence into shape. This chapter also covers the new Dynamic Trimming mode accessed using the “W” keyboard shortcut, which lets you use all of the JKL transport commands to trim whatever clips you have selected, in real time, with audio playback and the ability to choose which edit point you’re monitoring when you’ve selected multiple objects.


A multi-edit selection, Before


Rippling a multi-edit selection in Resolve 12, After

Chapter 18, “Transitions,” covers the new transition curve you can use to customize transition timing, as well as the all-new “Smooth Cut” transition that you can use to make the small jump cuts that result from removing unwanted verbalisms and pauses in interviews disappear.

Chapter 19, “Edit Page Effects,” shows you how to use Resolve’s new motion path keyframing with easing controls right in the Edit Page, on page 432.


The new bezier-editable motion path with easing adjustment

Chapter 20, “Working With Audio,” has several new sections, including one at the beginning of the chapter covering Resolve’s new support for AAC, MP3, and AIF audio formats at sample rates up to 192 kHz. A revised section covers how assigning audio channels in the Media Pool affects your ability to edit multi-channel audio, and is required reading. Then, new sections at the end cover how you can now record clip and track level automation in real time using the mixer, how to expose and keyframe levels using the new Track Level overlay, how you can apply AudioUnit (on OS X) or VST (on OS X or Windows) audio filters to clips right in Resolve, and how you can export to ProTools when you decide it’s time to hand off your audio postproduction to a professional.


Resolve 12 lets you record level automation in real time using the Mixer

Chapter 21, “Media Management,” covers the new Media Management commands in version 12, which let you move, copy, or transcode the media associated with clips in the Media Pool, or within specific timelines, with the ability to automatically relink your project’s timelines to the newly managed media you’ve put in another location.


All new media management in Resolve 12

If you’re a colorist, or an editor who does a lot of color, Resolve 12 has much more for you to love. Chapter 24, “Using the Color Page,” covers the new Smart Filter capabilities, which let you create your very own multi-criteria thumbnail timeline filters for filtering and sorting the clips you’re grading using any combination of metadata available from the Metadata Editor. Chapter 25, “Color Page Basics,” describes the new “Shot Match” command that lets you automatically grade multiple selected clips to match one another, as a prelude to grading a scene. Chapter 26, “Curves,” is a dedicated chapter containing all-new information on using Resolve’s unified Custom Curve UI, which you’re going to love.


The new unified curve editor, with integrated Soft Clip controls

Moving on, Chapter 27, “Secondary Grading Controls,” has been expanded to include the new 3D Keyer mode of the Qualifier palette, which is a brand-new high-quality keyer focused on letting you work faster, and giving you more specific results right off the bat. In conjunction with the new Matte Finesse controls “Clean Black” and “Clean White” (page 696), which let you remove speckles and holes from the background and foreground of a key matte really easily, Resolve 12 makes it even easier to create great secondary corrections. Later in the chapter, page 722 covers the new Perspective 3D option of the tracker, which makes tracking windows to follow features in a scene even more powerful and accurate. Lastly, page 734 covers the new automatic keyframing for rotoscoping capabilities built into Resolve’s Tracker palette. Once you try keyframing windows this new way, you’ll never want to use the Keyframe Editor to do this again.


A Key qualified using the 3D Keyer


Controls of the 3D Keyer in the Qualifier Palette

Chapter 28, “The Gallery and Grade Management,” has sections on version 12’s new ability to let you ripple adjustments made to one node to multiple selected clips (or to all clips in a group), as well as appending a new node to multiple selected clips. This is a great new capability for situations where you don’t want to have to make a group just to ripple a change to a selection of clips, and it’s the kind of feature that will enable you to grade faster than before.

Chapter 30, “Working in the Node Editor,” covers several new updates to node editing that you’ll want to read about. First, the Parallel, Layer, and Key Mixer nodes have been updated with a new look, making your node tree easier to read. Second, the Key Mixer node (page 886) has been made much easier to work with as all node input controls are now simultaneously exposed in the Key Palette. Third, you can now select multiple nodes and turn them into a single Compound node, which contains multiple nodes of adjustment while only exposing a single node in the Node Editor. You can open compound nodes to edit the contents, and even grade compound nodes to “trim” the contents, all of which is covered on page 865. Last, but not least, there’s a new Node Editor contextual command, “Cleanup Node Graph,” which lets you auto-organize messy node graphs with ease.


Mixer nodes have a new look, to make node trees easier to read

Furthermore, if you’re a pro colorist and you’ve always wished you knew what Resolve’s order of operations was under the hood, page 869 has a thorough explanation of which operations happen prior to the node editor, which operations happen within each node, and which operations take place after the node operator.

Resolve Order of Operations

Chapter 35, “Rendering Media,” has been updated to reflect the new ProTools Export easy setup, as well as the reorganization of the Render Settings list, which makes it faster than ever to customize your renders to output what you need. Additionally, while not new, Chapter 38, “Exporting Timelines to Other Applications,” has an expanded section on exporting to ALE for anyone within a Media Composer workflow.

Obviously, there’s much, much more to this release, but these are the highlights that should get you started. (update) As I’d mentioned at the top of this article, Ripple Training has released my “New Features in Resolve 12” video tutorials, which go through all of these features and more with a fine-tooth comb, showing you how all the new toys work. Also, check out Ripple Training’s YouTube channel for my ongoing “Resolve In A Rush” free Resolve tip series, which will soon include new tips for DaVinci Resolve 12.

(Updated Aug 20)



“The Place Where You Live” Updates

 

The Place Where You Live Poster

(Updated 9/8/15) This is the last update, as we’ve played all of the film festivals that we were accepted to, and we’re not expecting to hear from any more. At final count, “The Place Where You Live” is up to eighteen festival acceptances, and one non-festival screening in Beijing, with six awards presented, and one additional award nomination. They were all great experiences, and the resulting laurels are a welcome addition to our poster.

The final list is as follows:

  • BIRTV screening in Beijing (screened August 26th)
  • On the Line Film Festival (screened August 7th)
  • From the Beyond Film Festival (screened August 8th)
  • 15th Annual Sci-Fi-London International Festival of Science Fiction (screened June 1st and June 7th)
  • ConCarolinas Short Film Festival (award for best leading actress) (screened May 30th)
  • 48th Annual Worldfest-Houston Int’l Film Festival (special jury award) (screened April 19th, 2015)
  • 34th Annual Minneapolis/St. Paul Int’l Film Festival (screened April 12th, 2015)
  • Big Muddy Film Festival (screened February 26th, 2015)
  • 40th Annual Boston Sci-Fi Film Festival (screened February 7th, 2015)
  • Beloit Int’l Film Festival (screened February 28th, 2015)
  • Idyllwild Int’l Festival of Cinema (nominee for Best Original Score) (screened January 7th, 2015)
  • Big Easy Int’l Film Festival (award for Best Science Fiction Short) (screened December 13th, 2014)
  • Chicago Paranormal Film Festival (screened November 30th, 2014)
  • Fort Lauderdale Int’l Film Festival (screened November 17th, 2014)
  • Wild Rose Independent Film Festival (awards for editing, production design, and VFX) (screened November 7th, 2014)
  • Fargo Fantastic Film Festival (screened October 26th, 2014)
  • East Lansing Film Festival (screened November 4th, 2014)
  • South Dakota Film Festival (screened September 27th, 2014)
  • Midwest Sci-Fi Film Festival (screened July 4th, 2014)

The last festival acceptances delayed the public release of “The Place Where You Live,” pushing it to September. Everyone’s been incredibly patient during my film festival adventures, and I apologize for this last push out, but I really hadn’t anticipated the last-minute festival response we got. Still, September is set in stone, and this gives me ample time to prepare its rollout.

As I mentioned previously, my last tour on the festival circuit with my indie feature “Four Weeks, Four Hours” garnered six acceptances, so this is what progress looks like, and I couldn’t be happier.



Previewing DaVinci Resolve 12

I gave a DaVinci Resolve 12 demo in June to the Mopictive User Group in New York, and they posted a video of the event for all to see. I take a look at how well integrated the new features of Resolve 12 are, letting you move quickly and easily from creative editing to grading to fine trimming to more grading to audio mixing to even more grading, going back and forth with a single click of the mouse. There are some really fantastic new features to show, including multicam editing, advanced color management, automatic shot matching, expanded trimming and dynamic trimming, automation recording and audio filter support, improved tracking, a new keyer, and way, way more.

As of this year, DaVinci Resolve 12 is truly an integrated editing and grading application in which you can begin an edit, grade it, and finish your program all within a single application. And of course I’ll have new Ripple Training titles available later this summer to help you learn how to use it all.

I start at 43 minutes in.



I Have More Resolve Tips and Techniques on YouTube

I’ve been continuing to post five-minute tip videos about DaVinci Resolve via the Ripple Training YouTube channel; folks have really been liking them, so I’m inclined to keep making them, and I thought it worth reminding everyone that they’re there. Here are the two most recent videos that have gotten the most eyeballs. Enjoy!



Who Says Women Can’t Direct?

There’s a Tumblr making the rounds called “Shit People Say to Women Directors.” It’s worth reading to see what women in our industry are having to put up with. It’s ridiculous, in this day and age, that anyone can make these sorts of comments with a straight face. I’ve spent my entire career, from film school through my various jobs in post, working with a variety of talented directors who happen to be women, and the notion that gender imposes any kind of limitation on the job is ludicrous.

Put more bluntly, there is no shortcoming I’ve seen ascribed to women directors that I’ve not also seen exhibited by male directors. I know from personal experience, as a director of one feature and several shorts, that directing is a grueling gig. At the end of the day, it’s preparation, experience, creativity, and character that separate good directors from terrible ones.

I studied theater arts with an emphasis in film production at U.C. Santa Cruz, and of the professors I considered very influential, two were women. Deborah Fort was a visiting film production professor, whose critical eye and ability to articulate the importance of taking responsibility for the images in your frame stick with me on every project I direct. Marcia Taylor was a formidable directing and acting professor with vast experience, whose practical advice on stagecraft, and direct critiques of my various directing exercises drove me to work harder and prepare more rigorously; when she told the class that “every production you undertake as a director will require everything you’ve ever learned,” she wasn’t kidding, and I find this to be true even 25 years later.

If my memory serves me correctly, our film program’s small classes were somewhere around 75% male and 25% female, and I fell into working alongside many of the women in my class on their projects; the nature of the program was that everyone did a little of everything, so I worked as a student on several woman-directed projects, and they worked with me on mine. Not once did I ever feel that the women were somehow less talented, less in charge, or in any other way less capable. We were all in it together, and good work (and tedious work) was exhibited equally by everyone.

Moving to San Francisco, where I started my postproduction career, I encountered many women directors at the Film Arts Foundation and the Bay Area Video Coalition, both organizations dedicated to enabling work outside of the mainstream. As an editor, and later as a broadcast designer, I worked for many women clients, directors and producers, on many varied productions, and looking back I find no generalizations worth making that relate to gender.

Moving later to Los Angeles and Manhattan, where I completed the metamorphosis of the post-production part of my career into a colorist, I worked with many more women directors. And transgender directors. And of course male directors. The good ones were good because of preparation, experience, and creativity. Gender, in my experience, played no role in who was great to work with, and whose work I thought was solid.

And not that I need any more examples, but I’m married to a woman who’s an extremely talented director, with whom I’ve worked in both production and post on two shorts. With twin backgrounds in acting and art direction (she’s been the production designer on all of my recent films, as I’ve been the editor and colorist on hers), she comes at the craft from a different skill-set than I do, and I often find myself envious of the comfortable way she’s able to work with her actors.

If you think that men make better directors than women, you’re wrong. And if you’re lucky enough to get a job on a film, as tough as this industry is, and you find yourself able to insult or demean the woman who’s directing because of her gender, then you should be fired. It’s the 21st century, and well past time to leave this kind of baggage behind us.

