A Sneak Peek

I’ve been spending a bit of time tidying up the blog, knowing that my Resolve 9 training title from Ripple Training is just around the corner. Well, no sooner did I finish than Steve Martin released a small tidbit on YouTube. This is just one small movie out of many hours of content, but it’ll give you an idea of what’s coming.

This is a total redo, covering all features of DaVinci Resolve, new and old, found in version 9. It also adds many topics I didn’t cover in my version 8 title, as they weren’t originally available within DaVinci Resolve Lite. This time, no limitations! The whole series is due to be available on November 19th.

Color Correction Handbook 2nd Edition: Grading theory and technique for any application.
Color Correction Look Book: Stylized and creative grading techniques for any application.
What's New in DaVinci Resolve 15: Covering every new feature in Resolve 15 from Ripple Training.
DaVinci Resolve Tutorials: Far ranging DaVinci Resolve instruction from Ripple Training.

Could We Split the Difference?

No matter who the client, or what the project, sooner or later you’re going to be asked “can you split the difference?” between your interpretation of what the client wanted, and what they discovered they really wanted once they saw what you were up to.

This will make you quietly, politely crazy, and is one of the reasons you need to cultivate a great reservoir of equanimity to do this job.

At the end of the day your client’s needs are more important than your mad skillz, so you’ll make the change, render out the project, and hopefully leave work on time to go knock back a beer or two during happy hour, recalling fondly how cool that program would have looked had they only let you off the chain. This is one reason why colorists still do music videos, despite the woefully poor budgets the majority of them have: most low-budget music videos seem to want wall-to-wall insanity in the grading.

I was flipping through a fashion catalog (Free People, November) that featured some nice, gentle faded and flared film-ish treatments, and mulling over how I’d achieve those looks in different applications (as I am wont to do over my morning coffee). My wife Kaylynn is a photo stylist who works on these kinds of things, so she gets most of the relevant fashion magazines and catalogs, and we often compare notes on the changing styles of photography from season to season, which is a nice bit of casual research.

Then, of course, I dig into my grading project of the week, and inevitably they don’t want any of that; I’m told they want a nice clean grade, a little warm, with good contrast but no crushing, and FOR GOD’S SAKE DON’T CLIP THE SKIN TONES.

All of which is fine. Your average documentary is not wanting to look like a music video. Still, it makes me treasure all the more those projects that are looking for bolder color treatments. So when I get a project with a flashback or dream sequence, or for which the client is wanting signature looks for specific scenes or acts, and they let me go a little crazy, the pang I feel when I hear “could we split the difference” is just a little more pronounced.

Here’s a pretend example of what I’m talking about. This is an amalgam of different experiences; the example clip did not undergo this, I’m simply using it because it’s at hand (it’s available as one of the clips on the disc of my Color Correction Handbook), and I feel the need to point out that the particular client who brought that project to me was great to work with.

One of the first things I usually do is a simple, non-destructive and neutral grade of the image just to see what I’ve got to work with. In this instance, a very simple set of Lift/Gamma/Gain adjustments and a modest YRGB curves adjustment to compress the toe of the shadows yielded the following image:
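
For the curious, the arithmetic behind that kind of pass is simple to sketch. Here's a minimal NumPy version, assuming normalized 0–1 RGB and one common formulation of Lift/Gamma/Gain (actual implementations vary between applications, so treat this as illustrative rather than Resolve's internal math):

```python
import numpy as np

def lift_gamma_gain(rgb, lift=0.0, gamma=1.0, gain=1.0):
    """One common Lift/Gamma/Gain transfer function on normalized 0-1 data.
    Gain scales the highlights, lift raises the blacks, gamma bends the mids.
    Real grading apps differ in the details; this is a sketch."""
    rgb = np.clip(rgb, 0.0, 1.0)
    out = rgb * gain + lift * (1.0 - rgb)
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)

def compress_toe(x, knee=0.1, strength=0.5):
    """Compress the toe of the shadows: below `knee`, values follow a
    gentler power ramp so the darkest tones roll off instead of plunging."""
    t = np.clip(x / knee, 0.0, 1.0)
    soft = knee * t ** (1.0 + strength)  # slower ramp in the toe region
    return np.where(x < knee, soft, x)
```

A YRGB curves adjustment that compresses the shadow toe is roughly the `compress_toe` shape applied to all channels at once.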

At this point, the client tells me, “Yeah, I saw these great color treatments in the Free People catalog, and I really like that faded color with blue shadows, and a faded light-leak on the side. Could you do that? Let’s go crazy!”

And I say, “Heck yeah.” And proceed to start abusing the image, first using the YRGB curves to create nonlinear, per-channel color adjustments to the highlights and shadows to create a warm/turquoise disparity, with high contrast specifically targeted to the tonality of the image to maintain a smooth falloff, and a blue lift via the blue channel’s YSFX slider (a DaVinci Resolve-specific adjustment).

Then, I use a Luma vs. Sat curve to create a gradual desaturation of the highlights, muting the colors of the skin tone.
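
A Luma vs. Sat curve is, conceptually, a saturation multiplier driven by each pixel's luma. A hypothetical model of that (no grading app necessarily computes it this way internally):

```python
import numpy as np

def luma_vs_sat(rgb, sat_curve):
    """Scale saturation by a multiplier that depends on each pixel's luma,
    in the spirit of a Luma vs. Sat curve. Illustrative model only."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    mult = sat_curve(luma)
    # Desaturate by pulling each channel toward the pixel's luma
    return luma[..., None] + (rgb - luma[..., None]) * mult[..., None]

# A curve that mutes the highlights: full saturation up to 70% luma,
# fading gradually to zero saturation at 100%
highlight_mute = lambda y: 1.0 - np.clip((y - 0.7) / 0.3, 0.0, 1.0)
```

Feeding a bright, warm skin-tone pixel through `luma_vs_sat(rgb, highlight_mute)` pulls its channels toward gray while leaving the shadows untouched.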

Finally, I add a really, really soft window, and use it to limit another curves adjustment to create the light leak effect.

Then, I show the client the result. Predictably, after a period of silence, the client asks, “I’m not sure about the flaring. Could we split the difference?” The remainder of this narrative could go on and on, but to make a long story short, oftentimes situations like this have the following evolution.

My first take based on the reference imagery the client used:

Splitting the middle by fading/dissolving the curves adjustment literally by 50%, and losing the light flare:
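
Pixel-wise, a 50% fade of a grade is just a linear blend between the graded and ungraded frame. In practice you'd dissolve the adjustment itself rather than the output image, but for a simple dissolve the math is the same:

```python
import numpy as np

def split_the_difference(original, graded, mix=0.5):
    """Linear blend between ungraded and graded pixels; mix=0.5 is the
    literal 'split the difference', mix=0.0 returns the original untouched."""
    return (1.0 - mix) * original + mix * graded
```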

What the client eventually signed off on:

The final solution ends up being slightly warmer midtones and very neutral shadows, easily accomplished by deleting all my other adjustments and making two simple color balance tweaks to Lift and Gamma. The reference image turned out to be a MacGuffin that served only to show the general direction of the correction. It was not, in fact, what the client wanted.

This happens all the time, and consequently I find I’m a bit skeptical when someone asks me to do something incredibly brash and bold. I don’t want to spend too much of the client’s time working up an elaborate grade when all they really want is something pretty simple. On the other hand, you want to take the client seriously, and if they really are looking for something bold, you don’t want to seem too meek, lest you’re thought of as a creative simpleton.

At the end of the day, I find it all boils down to getting to know your client as well as you can, and your first two hours are critical. Pay particular attention to your client’s verbal and nonverbal cues as you create the initial, exploratory grades for a new piece. Chances are, you’ll know within three adjustments if you’re on the right track, and you can swiftly change course if you’re not.

And besides, if you create some super-cool look that the client doesn’t ultimately want, you can save it for some other job. That’s what the still store is for.

Download This Example

Added Nov 3rd, 2012—At a reader’s request, I’ve uploaded a saved still and the grade I use as an example for you to download via this link. If you want to apply it to the clip in this example, this very clip comes with the media that accompanies my Color Correction Handbook. To import it into Resolve 9, download the file and uncompress it (it’s a .zip file), then open Resolve, right-click in the Gallery, choose Import, then select the SplittingDifference_1.30.1.dpx file, and click Import. The still and its grade should import.


A Fun Conversation…

Ron Dawson at Dare Dreamer magazine conducted a really fun interview with me, the result of which is an eighty-six minute episode of the “Crossing the 180” podcast. We talk about the utility of film school, filmmaking and creativity, and of course, color correction. It’s a wider-ranging conversation than I usually get to participate in, and you might find it interesting.

Download it here. Or subscribe to the “Crossing the 180” podcast here.

A Good Year to Be In Amsterdam—IBC 2012

Amsterdam was gorgeous this year

This year’s IBC conference was a busy one, and since I was neither speaking nor teaching this time around, I had a lot of time to check out the current state of grading software across most companies. There’s never enough time to see everything, but I got a good overview of most of the things that interested me.

I’ve been asked by a few people what, in my opinion, the “theme” of this year’s IBC was. For me, it was workflow. Everyone seems to be looking to make managing the media and metadata from on-set through post more streamlined, or to integrate effects, grading, and finishing in different ways. Refinement also seems to be the name of the game, with most grading apps I saw having polished their UI, and added small but useful features such as composite mode blending of multiple layers (a feature that every single grading application now seems to boast). Incorporation of compositing and effects tools into grading applications continues to happen at a rapid pace, with features such as high quality motion estimated speed effects finding their way into more and more grading applications.

The first booth I visited was that of Filmlight, makers of Baselight. A preview version of Baselight was shown with new composite mode blending of different layers, but it seemed that Filmlight’s major news was more related to workflow integration. Now that they’re shipping Baselight plugins for both FCP 7 and Avid Media Composer/Symphony, with a plugin for Nuke right around the corner, Filmlight is in a position to offer a unified and consistent means of handing off comprehensive grading adjustments from NLE through compositing to grading and finishing. Their web site touts this as “Filmlight at every stage,” and they’re working hard to achieve this.

Filmlight also introduced the Flip, a sort of “Baselight in a box,” in order to bring Baselight goodness to the on-set crowd. The Flip is a self-contained Baselight system—simply plug in a monitor and a control surface (the Tangent Element panels are supported) and you’re in business. The front of the Flip has buttons with dynamically updating labels (similar to the Blackboard 2), a touch-sensitive display, and a single trackball/ring control that allows simple adjustment even in the absence of a surface.

The Flip, a self-contained Baselight in a box

The idea is that, in addition to being able to use CDL-compatible tools to apply and manage simple primary grades from on-set through post, you can use the Flip to apply full Baselight grades while on-set, previewing them live on your camera’s output, and linking them to your captured media. Once the media/grade relationship has been defined, Baselight’s BLG file format (Baselight Grade File) is used to exchange grading data, with optionally embedded before/after wipe frames of the media files, and the option to even embed the full resolution media itself (as OpenEXR frames of the raw pixel data). At minimum, you can exchange media-less BLG files that contain all of the full multi-layered Baselight grades that correspond to your project’s media among the Flip, FCP 7, Avid, and Nuke Baselight plugins, and a full Baselight workstation. Very cool, from a workflow perspective. (A video about the BLG file format is available here).

Speaking of Tangent, their new Element panels seemed to be everywhere at the show except the Quantel booth. Currently Resolve, Lustre and Flame Premium, the Filmlight Flip, Scratch, Mistika, and REDCineX all support the Elements, along with numerous other applications and utilities. Tangent’s main news was a set of new Pelican case foam inserts for folks needing portability; I could have sworn I took a picture, but alas I did not (they’re black). Pricing information is not yet available, but the foam inserts are designed to fit the relatively slim Pelican iM2370 case, and the larger iM2500 (a wheeled case which is capable of fitting both the panels and a 15″ laptop). I’m looking forward to picking one of these up for my on-location gigs.

After a few years of not having time to check out SGO’s Mistika software, it appears I picked a good year to get a comprehensive demo, as Mistika now sports a brand new color correction interface. Known for being an integrated editing, compositing, and grading environment with an excellent stereoscopic toolset, it’s nice to know that they’re not resting on their laurels, and that they’re continuing to develop and refine their toolset.

Mistika’s new color correction UI

I was unfamiliar with their previous color correction interface, but the new UI has everything you would expect, with lift/gamma/gain three-way controls, multiple layers of primary or secondary correction (called Vectors) embedded into each “timeline layer” of grading, printer light controls, white/black “manually sampled” auto-correction, tracked shapes, qualification, etcetera. All of this is compatible with up to six Tangent Element panels; in fact Tangent tells me that Mistika supports more simultaneously mapped Element panels than any other application right now, making good use of the Element’s expandability.

One unique feature is a mode of “five-way” correction, with five color balance and contrast controls for lift/shadows/gamma/highlights/gain adjustment, simultaneously presented. This is an interesting variation that I can see being quite useful.

Five way color controls in Mistika

Mistika’s grading and compositing is layer-based, and one aspect of this is the concept of a single layer grade being able to encompass multiple clips (similar to track effects in Symphony, or track grades in Speedgrade). What’s unique in Mistika is the ability to “route” key data from one layer to another in the stack, which provides functionality similar to DaVinci’s node-based key and RGB routing. Since Mistika is as much a compositing tool as a grading tool, this provides many node-based advantages while retaining the familiarity of a layer-based interface.

Mistika’s layer-based compositing and color correction

One touch I like in the “trackless” timeline is the resizable playhead, which lets you change its height in order to control how many composited layers you want to preview as you play. A taller playhead that intersects all layers shows the full composite, whereas a shorter playhead that only intersects two of the layers results in only those two layers being composited during playback.

Though not new, I wanted to get a look at the stereoscopic 3D tools that I often hear folks rave about, and I wasn’t disappointed. SGO has implemented a real-time optical flow engine that enables real-time slow motion and format conversion processing. However, it also enables automatic pixel by pixel geometry and color matching between challenging left and right-eye stereoscopic media, making short work of clips where differences in paint reflectivity and sky polarization cause other auto-matching tools to fall short of an overall match. Furthermore, knowing that optical flow processing can produce visual artifacts in clips with overlapping elements in motion, Mistika’s full compositing toolset can be used to isolate and manually repair sections of the odd clip where optical flow artifacts happen to appear.

One of the things that really impressed me, though, is Mistika’s tool for using optical flow processing to alter the interocular distance of elements in the scene at a particular range of depth. SGO refers to this as “depth-discriminated interocular adjustments,” and this lets you stretch or squeeze selected regions of the picture to come forward or move back in postproduction. This is a level of detail that made the filmmaker in me rejoice; I was very impressed.

Speaking of stereoscopic work, Omnitek was showing off their tools for stereoscopic analysis and quality control. Formerly developed to run on PC computers, Omnitek now offers their scopes in familiar, self-contained form factors—a rasterizer pizza box (the OTM 1001) and a more traditionally “back-to-the-70s” handled box with a screen (the OTM 1000). Both form factors contain exactly the same hardware, and in fact cost the same, so it’s simply a matter of convenience.

The Omnitek stereo analysis display

Getting a walk through the different displays, I definitely got the sense that this option is valuable for shops doing lots of stereoscopic work. For example, a multi-planar depth scatter graph shows the overall range of positive and negative parallax in an easy to grasp visual manner, with indicators for the outer acceptable range of depth, and shows which parts of the image correspond to what depth. This display corresponds to a depth histogram at the right.

Below that, discrete RGB channel left/right eye exposure comparison scopes show discrepancies in exposure via a horizontal bend in the offending channel’s otherwise vertically oriented graph.

Omnitek’s stereo exposure comparison

A series of bar graphs show left/right eye discrepancies in depth range, vertical and horizontal position, rotation, zoom, sharpness, and color, with unambiguous center indicators for each property.

Omnitek’s Stereo QC analysis

Now that these scopes are self-contained, and I’m told pricing starts at $4K (the stereo options are extra), Omnitek is definitely worth a look if you’re in the market for a dedicated set of outboard scopes.

While I was looking at outboard gear, I took the time to speak with someone at Snell about the Alchemist video signal converter. I’ve long heard that the Alchemist is one of the premier boxes for format conversion from NTSC to PAL and back again, and was curious to learn more about this somewhat obscure piece of equipment. Unlike other solutions that rely upon optical flow analysis, the Alchemist relies upon a technique called “phase correlation motion estimation” to do its magic. I’m told that the Alchemist’s success at conversion is due in part to years of careful refinement of this method of processing, stemming from customer issues and requests. It’s nice to hear about this kind of evolution in a product.

Interestingly, even though the Alchemist hardware continues to be designed for moving SDI/HD-SDI signals in and out, Snell has come up with a way for both new and existing customers to process video in file-based workflows, using something they call their FileFlow server. It’s basically a computer that can be connected to an existing Alchemist via the HD-SDI inputs and outputs, which itself can be connected to your facility’s network. Video files in supported formats can be uploaded directly to the server, managed via a list-based interface, and converted using the Alchemist hardware.

Interface for the Snell Alchemist FileFlow server

When I inquired about this somewhat indirect add-on approach, I was told they wanted to develop a solution that could be added by the substantial existing customer base, instead of requiring everyone to purchase a new box.

This brought me to the area of the conference center that showcased monitors, among other dedicated bits of hardware. I had a nice chat with Bram Desmet at Flanders Scientific about the new 12-bit XYZ monitoring option which is available as a firmware update to all existing customers. The 24-inch 2461W was already a well-regarded, flexible display, but this makes it even more useful in a wider variety of postproduction situations.

However, what I was really interested in was Flanders’ 10-bit CM170. Even though this 17-inch display isn’t exactly new (it was announced at NAB) this was my first look at it, and I liked what I saw. While these days it’s really too small for a room with five clients in it, given that Flanders has designed it to be a full-resolution, color critical display, at $3,295 it’s probably the best bang for the buck that’s available as a grading monitor for a small one-colorist unsupervised suite.

Flanders Scientific CM170

Additionally, the fact that it’s compact, accurate, and full resolution makes this an especially attractive monitor for on-set work. For that purpose there’s an external DC power input and screw holes for battery attachments. Nice.

Just a few booths away, I also checked out the Penta Studiotechnik HD2 range of color-critical displays. I was previously unaware of this company’s offerings, and while the show floor is a terrible place to do a proper evaluation, Steve Shaw of Light Illusion spoke highly of them, and has worked with the company to make Lightspace available as a calibration option.

The Penta Studiotechnik video wall

Penta Studiotechnik offers a range of LCD displays under the HD2 Pro brand, in a variety of sizes. The 32″ and under displays use ND filters to control light output and improve blacks, while the impressive-looking 55″ panel is claimed to offer a 187-degree viewing angle, with a glossy screen that doesn’t need ND filtering. If you’re researching different displays, it’s another company to look into.

Getting back to grading software, Quantel was showing off their new Pablo Rio. Building upon the features of the Pablo, Rio offers all that and more in a new, hardware-agnostic version. Quantel is relinquishing their dependence on dedicated processing hardware, and embracing GPU-based processing. What they were showing on the floor was a Windows-based solution using two Nvidia Tesla cards to do all of the real-time magic one would expect from Quantel. For video input and output, the Rio still uses a Quantel I/O card, but they have plans to support the Atomic I/O card later. Alongside more flexible hardware support, there’s a new ability to soft-mount all supported media formats (and it’s a long list) from any connected volume.

Quantel Rio, less proprietary, more features

In addition to shedding proprietary hardware, Quantel has updated the grading side of things as well. The UI has seen some tidying up, and a pop-up has been added to switch the three way color balance and contrast controls among shadows, midtones, and highlights, allowing for “nine-way” adjustments that are similar to what Speedgrade and Lustre colorists are used to.

Pop-up for switching the three way controls among master, shadows, midtones, and highlights tonal ranges

Additionally, a new ability to customize the overlapping ranges of influence of the lift/gamma/gain controls has been added, via overlapping curves.

Customizable three way tone curves

In another surprising example of opening up, Quantel has licensed the Mocha planar tracking toolset, incorporating it directly into Rio, where it can be used for tracking shapes in secondary operations.

Mocha’s planar tracking available in Rio

Keeping with one of my unofficial themes of the show, Rio adds the ability to recombine secondary operations (referred to as Cascades) using composite modes. While I ordinarily wouldn’t include a screenshot, the sheer number of composite modes available to choose from is unexpectedly massive.

Loads of composite modes in Quantel Rio

Furthermore, Quantel is supporting third party filters, with the ability to apply one per cascade (they were demoing the peerless Sapphire plugin set on the floor). Rio also boasts a new higher quality sharpen filter; given the endless parade of soft-focus HD material found in run and gun projects, a better sharpen filter is always welcome.

Finally, Quantel was showing SynthIA, which offers optical flow-based image processing for stereoscopic 3D media similar to what Mistika does—altering a range of interaxial depth. However, unlike Mistika, SynthIA does this via a separate application, so it’s not otherwise integrated with the full Rio toolset. I was told this was done to make SynthIA available to folks who are focused only on stereoscopic processing, so they don’t need to spend the money on a full Rio license, but I imagine users would like to see those features get rolled into Rio down the road.

Seemingly eager to shed a reputation as one of the most expensive grading solutions on the market, Quantel was quick to discuss pricing; available as software only, the Rio comes in at $47K (with an additional 40% off at the moment), while a full turnkey solution with software and hardware costs $130K (with an additional 30% off at the moment). Yes, I know, that’s still not exactly cheap, but it’s a heck of a lot less expensive than what was previously available from Quantel, and you get all of the real-time multi-format conversion, integrated editing, compositing, paint, and grading that previously cost a mint, so it’s progress.

Stopping by Dolby’s booth, I was truly impressed at the Philips glasses-free auto-stereo television being shown. It’s a 4K display that shows each eye at a full 1080 HD resolution, the viewing angle was impressive (though front-on was still optimal), and there were plenty of “sweet spots” as far as the lenticular front of the display went; I only needed to move a couple of inches to the left or right to jump from one sweet spot to the next, and each sweet spot had a wide range.

Auto stereo displays are great; guess you had to be there…

“Dolby 3D,” in addition to handling the stereoscopic to auto stereo conversion, also touts “adjustable depth,” basically embedding a per-pixel depth map into the video signal stream so that, with a single control, viewers can adjust and collapse the depth of the stereo being presented from the full default, all the way down to no depth at all, depending on one’s comfort level.

I was also lucky enough to have a chat with Bob Frye, the product manager for the Dolby PRM-4200. With the recent price drop from $50K to $30K, this display solution has gone (for me) from completely unattainable to merely unaffordable, and I wanted to have another look just to make myself jealous. In particular, I was curious to learn where it excels the most over more affordable Plasma and LCD solutions. The real draw seems to be its shadow reproduction. I’m told the tonal reproduction in the blacks is very smooth, with every code word drawn without clipping; it certainly looked good to me. I’m also told that at a recent “shoot-out” of different monitoring technologies and how well they could be made to match cinema projection, the 4200 was one of the best-regarded matches, so there’s that. With support for 12-bit XYZ right now (ACES support is being looked into), this seems to be a high-quality solution for cinema grading in smaller suites, and for a yearly fee, Dolby will send a calibrator to you to make sure it’s always in tip-top calibration for those $200 million tentpole jobs. So now I just have to convince Michael Bay that he needs to grade Transformers 4 here in Saint Paul with me.

Assimilate Scratch is another application that I hadn’t had a chance to check out at NAB, and I was quite pleased to see that they’ve taken pains to refine the onscreen UI, and have been making strides with greater integration of grading and 3D compositing. New features include a “Pre” track for inserting operations at the beginning of the Scratch image-processing pipeline, and nested scaffolds for precomping multiple scaffolds inside of a single scaffold’s worth of operations.

A more polished UI and 3D compositing

Scratch has also incorporated motion estimated speed processing, making high-quality slow motion the rule, rather than the exception, as of this year’s crop of grading applications.

Scratch gets motion estimated speed effects

Stacking these features together with others such as their mesh warper and the rest of the compositing and effects tools really starts to show off the kinds of work that Scratch is becoming capable of.

Combining compositing, mesh warping, and grading in Scratch

Scratch is also adding to their workflow story, with support for watch folders to do automated image processing, Mocha integration via the import of Mocha tracking data, and a new Nuke round trip workflow that supports the exchange of primary grades, framing info, edits, and LUT data via a Scratch node available inside of Nuke. Lastly, ACES support has been added alongside the previously supported colorspaces.

ACES support in Scratch

Last, but certainly not least, Blackmagic Design shipped the final version of DaVinci Resolve 9, as well as announcing their new DeckLink 4K Extreme video interface, with support for 10- and 12-bit RGB or YCbCr signals input and output at 4K resolutions via dual-link 3G-SDI. The UltraStudio Thunderbolt interface offers 10-bit RGB or YCbCr input and output at 4K resolutions, so there are two new Resolve-compatible video interfaces for the emerging 4K crowd.

Fun times at the Blackmagic Design booth

I was also pleased to note that they’ve announced that the reengineered Teranex processors are now shipping, as is the Blackmagic Cinema Camera (an upcoming passive Micro Four Thirds version was also announced).

Catching up with the Resolve engineering team, the shipping version of Resolve 9 had some last minute new features slipped in, including a new RGB output in the Ext Matte node that’s useful for adding grain or distress stock video to a clip’s grade via one of the composite modes in the Layer Mixer node; short clips of grain will even be looped endlessly to match the duration of any clip. You do have to take the added step of opening the Media page and adding your grain or distress layer as a matte to the clips you want to use them with, but this ensures that you can render grain or distress into clips in “Render timeline as: Individual source clips” mode.

The node structure for adding grain or distress to a grade

Another improvement is that the currently selected clip in the Lightbox can now be graded using your control surface, making the Lightbox into a way of quickly browsing and grading clips of a scene.

There’s also a functional alteration of a control that had already been available in the public betas. The contrast control in log mode has been changed so that there’s now a smooth rolloff at the highlights and shadows, an automatic S-curve, so detail won’t be clipped.

Increasing contrast using the Log mode’s Contrast parameter now creates an S-curve adjustment, seen in the waveform of a formerly linear ramp gradient.
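
The shape of that rolloff is easy to model: instead of a straight-line contrast expansion that clips at both ends, the transfer function follows an S-curve that flattens as it approaches 0 and 1. A tanh-based sketch (an assumption for illustration; Resolve's actual log-mode math isn't published):

```python
import math

def s_curve_contrast(x, contrast=2.0):
    """Contrast around a 0.5 pivot with smooth rolloff at both ends.
    Maps 0 -> 0 and 1 -> 1 exactly, so nothing clips; higher `contrast`
    steepens the midtones while the toe and shoulder flatten out."""
    return 0.5 + 0.5 * math.tanh(contrast * (x - 0.5)) / math.tanh(contrast * 0.5)
```

Run a linear ramp through this and you get exactly the kind of waveform shown above: shadows pushed down, highlights pushed up, but both easing smoothly into the limits rather than slamming into them.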

The Keyframe Editor has been updated, with a new look for keyframes (static keyframes are round, dynamic keyframes are diamond-shaped) that makes them easier to select and drag.

The updated Keyframe Editor

Lastly, the video scopes have even been updated, with an optional skin tone indicator in the Vectorscope, and optional minimum and maximum reference level lines (yellow) in the Waveform.

More optional reference indicators in the Video Scopes

So there you go. Honestly, this year’s IBC is probably the most fun I’ve had at a trade show in years, and it was really illuminating to see demos of nearly every major grading workstation on the market, side by side. As I tweeted during the show, each grading application has something that’s particularly special, but no one grading application does everything. That said, we’ve got more tools available to us than ever before in the history of this crazy profession. Now we’ve just got to figure out what to do with them.

Six, (er) Seven New Features in Resolve 9

So, Resolve 9 has finally been made public after much anticipation since its unveiling at NAB. Many of the new features have already been shown and discussed, but there are even more features being shipped than have been talked about previously, and I thought it’d be nice to highlight six, er, seven of those in this post. (The lead engineer reminded me of the updated video scopes. How could I have forgotten? They’re so pretty I had to add a screenshot.)

Mixed Frame Rate Support

For me, this is the single biggest new feature in this release. Bigger even than the new UI. Mixed frame rate media has been a frequent hassle in projects I get from clients. Most NLEs let you edit any kind of footage you want together into a single timeline, regardless of frame rate. And as you may or may not know, mixing frame rates can be rather challenging when it comes to finishing, since you can ultimately only output one frame rate as your finished media file or tape output. Prior versions of Resolve were constrained by only supporting a single frame rate in a particular project, but no more.

Resolve 9 lets you mix and match whatever frame rates are necessary within a single project, so long as you turn on the “Handle mixed frame rate material” checkbox in the Master Project Settings panel of the new Project Settings window (available by clicking the gear icon in the lower left-hand corner).

Mixed Frame Rate Support

You have to turn this checkbox on before you import an AAF or XML mixed frame rate project (to learn why, check the manual). After you import your AAF or XML file with mixed frame rate media, you’ll want to make sure that your “Playback framerate” is identical to the “Calculate timecode at” setting for optimal performance. (Both settings are also in the Master Project Settings panel of the Project Settings window.)

When rendering a Mixed Frame Rate timeline, how the media is output depends on whether you render to Source or Target mode. In Source mode, each clip is rendered at its native frame rate, for handoff to another NLE or finishing application. In Target mode, all frames are converted to the frame rate specified by the “Calculate timecode at” setting of that project, letting you output the entire project as a single media file at the target frame rate.
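
To make the Target mode behavior concrete, here's a quick sketch of the frame-count arithmetic involved (the function name and example values are mine, purely for illustration, and this is not Resolve's internal code): a clip's duration is preserved, while its frames are resampled to the project's target rate.

```python
from fractions import Fraction

def target_mode_frames(src_frames, src_rate, target_rate):
    """Frames a clip occupies once resampled to the target rate.

    Duration is held constant; only the frame count changes.
    (Illustrative arithmetic only.)
    """
    duration = Fraction(src_frames) / Fraction(src_rate)
    return round(duration * Fraction(target_rate))

# Two seconds of 24 fps media in a 30 fps project: 48 frames in, 60 out.
print(target_mode_frames(48, 24, 30))  # prints 60

# NTSC-style rates work the same way: 23.976 fps media, 29.97 fps project.
print(target_mode_frames(240, Fraction(24000, 1001),
                         Fraction(30000, 1001)))  # prints 300
```

Using exact rational rates (24000/1001 rather than 23.976) is the safe way to do this kind of math, since the rounded decimal rates drift over long durations.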

I don’t know about you, but this alone is going to save me, and my clients, hours of project prep.

Light Box View

This is another new feature that was previously unannounced. While working in the Color page, you can click the Lightbox View button:

Lightbox Button

…to view every clip in your timeline using the Resolve Lightbox.

The Lightbox

The Lightbox view makes it easy to scan through your project looking for a particular scene, to make multiple selections in order to create groups, or to use the new Flag command to assign differently colored flags to various clips to note things you want to do. This is a terrifically timesaving feature for projects of any duration.

Clip Attributes

Another interesting new feature is the Clip Attributes window, found in the Media Pool. This window replaces many of the contextual menu commands available for altering various editable properties of clips, for example, changing data levels or pixel aspect ratio settings, or reinterpreting the alpha channel mode now that Resolve 9 supports alpha channels for imported media. It also handles timecode alteration and manual, per-clip reel name changes, as well as stereoscopic 3D media assignments.

The Clip Attributes Dialog

What’s notable is that you can select multiple clips, and use the Clip Attributes window to change them all at once.

Metadata Editor

I had shown the metadata editor in my video presentation (viewable here), but since I last showed it, a shedload of editable metadata attributes has been added. Far too many to show on one page.

More Metadata

Fortunately, they’re organized into groups, which are available from a pop-up menu at the upper right-hand corner of the metadata editor.

Metadata Groups

If you’re working on digital dailies, or you’re an extremely organized colorist, this is going to be a benefit.

Big Ass Curves

One frequent complaint I’ve heard is that the relatively small size of the DaVinci Resolve custom curves made them difficult to use for precision adjustments. I myself had never quite noticed this to be a problem, but fortunately DaVinci heard your anguished cries, and provided a new Large Curve mode for the Custom Curves. Clicking a button at the bottom of the Custom Curves:

The Large Curves Button

…opens up a window presenting a huge version of the same curves, with all the same controls.

Big Ass Curves

Having used the large curves for a while, I can safely say that they’re a huge improvement (ha) and truly do give you more refined control of your curve-driven adjustments. I never knew what I was missing until I started using these, and now there’s no going back for those finicky log-to-linear custom adjustments I now find myself making with more frequency.

Updated Video Scopes

While they were updating the rest of the UI, DaVinci decided to update the video scopes, too.

New Video Scopes

The new one-window scopes look beautiful, and I find them easier to manage than the four individual windows that were available previously. Providing an analysis of every single line of image data, the Waveform, Parade, Vectorscope, and Histogram are all there. However, if you like, you can change the number of scopes displayed to 1-up, 2-up, or the default 4-up, which lets you enlarge individual scopes if you don’t need the whole shooting match. Performance is dependent on how much GPU processing power your workstation has, so single or dual GPU systems may have less than stellar performance. Still, folks who routinely use the Resolve scopes have cause for rejoicing, as these are a distinct improvement over what was there before.

A New Manual

You knew I was going to mention this. I’ve been hard at work (which explains the paucity of blogging around here) for the last three months writing what has ended up being a 600 page, near total rewrite of the DaVinci Resolve 9 User Manual. (To give you some perspective, the previous version of the manual was 435 pages.)

New Version, New Manual

It’s been quite a challenge keeping up with the DaVinci Resolve team as they’ve piled on the improvements and evolved the UI over the months, but it’s been a truly rewarding experience, and I’m rather proud of the result.

Now, bear in mind that, as the product is still in beta, the user manual is also a work in progress, with edits and screenshot changes yet to be put in. However, I’m glad that the team has seen fit to make it available to the public, so that everyone can get a jump on what’s new. There are a lot of subtle refinements, and I’ve tried hard to capture all the little things and interoperabilities.

There are a few things of which, however, I’m particularly proud. “Before You Conform,” on page 111, contains detailed information about project preparation, effects support from NLEs, an explanation of the rules for media conforms, details about image processing and clip data levels, a summary of ACES support in Resolve, and an overview of digital dailies workflow. I tried to answer a lot of the questions that folks have had about Resolve’s inner workings in this section, and I think you’ll find it illuminating.

Also, “AAF Workflow Overview” on page 137 provides a detailed overview, from soup to nuts, of how you get projects from Media Composer or Symphony to Resolve and back again. The DaVinci Resolve team has worked extremely hard to make this workflow smoother and easier in version 9, and I executed each workflow personally while writing this section (kudos to Avid for answering my questions and giving me additional support while I developed the content). If you’re dealing with AAF, read this section. It may explain some of the issues you’ve been having, and will guide you through ways of getting the job done.

If you’re completely new to DaVinci Resolve, there’s a new, almost 30 page tutorial on page 71. It’s basic, so if you already know Resolve, you can probably skip it. But if you’ve never used Resolve at all, it’ll give you a quick and thorough tour of bringing a project in, doing some grading using a core selection of the Resolve toolset, and then rendering your project out. And, you can follow along using the sample media that comes on the DaVinci installer disk (and is also available by downloading from Blackmagic Design support).

So, I hope you find the new version of Resolve as big an improvement as I do, and I hope the new manual helps you to get the most out of it.

This Isn’t a Blog Post About Mac Pros…


I wasn’t going to chime in on this ongoing conversation, as frankly I don’t know that I have anything meaningful to add, and there’s nothing worse than baseless speculation about Apple. However, friend and colleague Patrick Inhofer noticed a blog entry of mine dating from the summer of 2010, fully two years ago this month, in which I foolishly elected to weigh in on the topic in response to Apple’s second-to-last, somewhat weak refresh of the Mac Pro line.

Since I had just moved the client part of my color correction practice over to DaVinci Resolve, I needed a new machine for the suite, so I went ahead and bought the 2010 Mac Pro later that summer. Little did I realize I’d be parked on that machine for the next two years.

My speculations in that prior post are now woefully dated, and I have no problem admitting that I was completely wrong. Apple was obviously not waiting for next generation FireWire; they went all in on Thunderbolt. And clearly, updating to PCIe 3.0 hasn’t been a priority (yet). And so, all of us who are still Mac Pro based shops continue to wait. So what do I think Apple’s going to do?

I have no fucking idea.

I might venture to guess that Apple is waiting for next gen Thunderbolt, but that’s hardly an original stroke of genius on my part since it’s the only machine in the lineup that’s lacking the new port. I’ve long been saying to friends that it wouldn’t surprise me at all if Apple rethought the overall form factor in some dramatic way, but that’s not exactly an original thought either. If someone put a gun to my head and forced me to make a bet, I would guess that Apple will most likely release something that they think will serve users in the Mac Pro market. And they may call it a Mac Pro, or they may not. Whatever they call it, the users will decide whether it’s a legitimate upgrade, voting with their wallets.

As I said on Twitter last night, if Apple releases a new machine that affordably does what I need, high-bandwidth data transfer among multiple high-end GPUs, with lots of RAM, fast CPUs, and access to suitable pro-video interfaces and accelerated storage, then I don’t care what they call the thing or what it looks like.

I’m not quite willing to believe that the Mac Pro is dead to Apple. After all, Apple isn’t shy about pulling the plug on things. When it was announced that the Xserve was no more, Apple blew out the stock and took the product off their storefront. That’s what I call a dead product. So long as the assembly line is cranking out new Mac Pros, no matter how creaky they are, I’m inclined to believe that there’s something on the horizon.

So me? I’m waiting to see. Granted, I’ve got a relatively “recent” Apple box, so I can afford to wait and see as my current equipment can keep up with the needs of my current clientele. But other folks, like one commenter who’s stuck with a six-plus-year-old machine, face a really tough choice. I don’t blame anyone for going the Windows route; it makes all the sense in the world for someone who needs more power to earn money and get things done to switch platforms.

However, my general philosophy of buying new technology is to wait until two paying clients in a row come to me to do something that I can’t do with my current hardware. As my general goal is to avoid saying “no” three times in a row, I’ll gladly spend money for something that pays for itself in jobs I’d otherwise be unable to get. However, I’m not going to buy anything new just to have a new hotrod. Much as I’d love to, I’ve got other financial priorities.

So, while I can do what I need with what I have, I’m staying put. But as soon as I find myself in the awkward position of having a job suffer because my hardware isn’t up to the task, then I too will be evaluating my options, and I’m not at all opposed to switching to Windows, or even Linux, if that gives me better bang for the buck, and better capabilities, than Apple’s offerings on that date. My software is no longer a limiting factor (although ProRes encoding, distressingly, still is), so switching platforms, even multiple times, is not as much of a pain in the ass as it once might have been.

I figure I’ve got another 12-odd months with my current workstation before I too start feeling the pinch. If Apple makes something that’s expandable and useful by then, cool. But I’m not taking any bets. We’ll see.

Variations on a Theme

I stumbled upon this thanks to io9, and just had to share it. The tumblr site humanæ is Brazilian artist Angelica Dass’ ongoing project to sample the skin tone of as wide a variety of people as possible, matching an average sampled value from each to a corresponding Pantone swatch.

It’s an ambitious effort to chart the range of possibilities of this most memorable of memory colors, one that I tackled in a tiny way (with the help of photographer Sasha Nialla) in my Color Correction Handbook. However, where my small sample size was meant merely to illustrate a point, the much larger sample size of this project makes the survey that much more compelling. I’m also interested by how they choose the single representative value. In a Google translation from the original Spanish, the About page shares the following:

The development of the project is conducting a series of portraits whose background is dyed the exact shade extracted from a sample of 11×11 pixels the very face of the people portrayed. The ultimate aim is to record and catalog, through a scientific measurement, all possible human skin tones.

I’m curious which part of the face she chose to sample, given the variation in hue and lightness that comes from sun exposure, and the highlights and shadows of ambient lighting (this is something that came up in my simple illustrations). The single Pantone representation doesn’t strike me as all that interesting in terms of representing one person’s skin tone, but the aggregate of all of these sampled patches is much more interesting when seen as data points on a scatter graph that could illustrate a cloud of possibility, where human skin hue and lightness are concerned.
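
For the curious, the patch-averaging technique described in that quote is simple to reproduce. Here's a minimal numpy sketch, under stated assumptions: the synthetic image data and the function name are mine, standing in for the actual portrait photographs the project presumably samples from.

```python
import numpy as np

# Stand-in for a portrait photo; in practice you'd load a real image.
rng = np.random.default_rng(0)
image = rng.integers(120, 210, size=(100, 100, 3))

def sample_patch_average(img, cx, cy, size=11):
    """Average an 11x11 pixel patch centered at (cx, cy) down to a
    single RGB value, as described on the humanae About page."""
    half = size // 2
    patch = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)

swatch = sample_patch_average(image, 50, 50)  # one representative RGB triple
```

Matching that averaged triple to its nearest Pantone swatch would then be a simple nearest-neighbor lookup against a table of Pantone RGB values.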

Another interesting aspect of skin tone analysis that can be seen in these images, although it has nothing to do with what’s being presented, is how much variation in skin tone there is from one region of the body to another; the faces are often markedly different from the torso, owing (I imagine) to varying sun exposure.

This is a fantastic project, and I look forward to seeing the sample size continue to grow and expand to illustrate more and more of the subtle hues that can be found in humanity. Bravo.

One of the Originals, Still One of the Best

Steve Hullfish was nice enough to send along the new second edition of his “The Art and Technique of Digital Color Correction” for me to peruse, which was most generous. I’d bought his first edition on my own dime and enjoyed it thoroughly, and we’re all lucky that his publisher (Focal Press) has seen fit to give him the opportunity to update this book, and add even more breadth and depth to the information within.

Steve has been writing about color correction in a digestible way for longer than anyone else I’m aware of. Once upon a time, his original “Color Correction for Digital Video” sat alongside Stuart Blake Jones’ “Video Color Correction for Nonlinear Editors” as the only two books on my shelf that covered this complex subject for the layman, and I benefitted alongside countless others from Steve’s clear presentation, and his inclusion of many voices from the field.

As useful as his original book is, however, “Art and Technique” goes so much further, especially in this new edition. Jumping from 370 to almost 500 pages, Steve has organized a wealth of interviews with some of the top colorists working today, discussing practical issues that colorists of any level of experience will benefit from.

If you’re interested in grading and you don’t have this book, go to your favorite vendor and simply order one right now. You need this on your shelf.

Having a Look at Resolve 9

I was pleased to be in Montreal, presenting to the Final Cut MTL group’s PostNAB 2012 gathering, and they’ve released the video of my Resolve 9 preview on YouTube. If you’re a current Resolve user, you have a lot to look forward to. If you’re not, but you’re thinking of getting started, the next version will make it a much more enjoyable experience.

Alas, I only had so long to present, but there are plenty of other new features to look forward to, like node labels, renamable still store albums, additional metadata columns in the Media Pool, the list goes on and on.

Thanks again to Matt Pellowski at Red Line Studios for the footage I was able to use.

The Tangent Element, a One Month Review

Folks who’ve been reading this blog for a while will know that I’ve been following Tangent’s development of the Element color correction control surface for a long while. They’ve now been shipping the Element for some time, and it’s been so unexpectedly popular that they’ve had some trouble keeping up with orders, which is a nice problem for Tangent to have, and I congratulate them.

At NAB, I met up with the principals, and they were good enough to provide me with a set to try out. So now, months after playing with their initial prototypes, I’ve finally had the chance to see how the shipping version works.

If you’re busy, here’s my quick takeaway. They feel fantastic, the build quality is everything one might want in a surface of any price, and their compact size makes them at home in anybody’s suite while the clever design doesn’t compromise features. At $3500 USD for the set (actually, $3,199.99 at B&H), it’s the best bang for the buck you’re going to get, in my opinion.

Now, for those of you wanting a bit more detail, let’s look a little closer. In fact, let’s start with an unboxing. When you order the set, consisting of one each of the button panel, the knob panel, the trackball panel, and the button/transport panel, you get a box containing four other boxes (five including a box for other hardware).

Since these panels are also available individually, Tangent made the decision to package them individually, so folks could custom order whatever combination they wanted.

For those of you looking for a convenient carrying case, these boxes aren’t it. However, I’m told that Tangent is considering creating a set of custom foam inserts for a Pelican case. You’d buy the foam inserts from Tangent, buy the appropriate case from Pelican, and then you’ll be in business. I look forward to this becoming available, since the durability and compact size of these panels makes them an excellent choice for portable use.

Each of the panels connects via USB, and Tangent recommends a specific USB hub for use. Depending on your suite’s configuration, it either conveniently or inconveniently has a built-in extension cable, so you can run the hub quite far away from your CPU, should you so desire.

Each panel connects to the hub via its own Micro-B to Type A cable. This means that a set of four panels will run four USB cables to the hub, which in turn connects to your CPU.

This may sound like a potential rat’s nest of cabling, but I found that by looping each cable underneath each panel’s back riser, they could be brought together into a single snake you can run to the hub.

Speaking of connections, the four panels themselves can be arranged on your desk any way you like, for the ultimate in configurable customizability. However, if you want to line them up in a straight row as a single unit, there’s a clever magnetic pin arrangement you can use to “click” them all together.

The pins come in a separate little bag, and you use two pins to join each pair of panels that you want to sit side by side. If you insert these pins incorrectly, you can always pull them out, but you’ll need a pair of pliers to do so, unless you’ve a preternaturally strong grip.

Power is delivered via the USB hub. The power supply that comes with the recommended hub is international; you remove the plastic shipping insert and then use whichever of the accompanying international plugs you need.


Once you’ve gotten everything plugged in and assembled, the Element panels have a pretty unassuming footprint on your desktop.

With the whole set on my administrative computer’s desk, I still have room for my Bamboo graphics tablet, my Magic Trackpad, and my yo-yo.

I didn’t want to just plug it all in, use it for an hour, and then post a snap review on the spot, so I gave myself a month or so to use it in everyday situations, to see how I liked its functionality and feel in the long term. At this point, I think it’s great.

The knobs and contrast rings are nice and smooth, but with a pleasing bit of resistance that encourages precision. The buttons are the same ones that Tangent has been using ever since the $30,000 USD CP-100 panel, which I like. Some have commented on the audible click these buttons make, but it’s never bothered me. In fact, I should point out that my considerably more expensive DaVinci Control Surface uses buttons with a similarly audible click, and I’ve never heard anyone complain about those.

Here’s a fun fact. In speaking with the Tangent guys, I’m told that out of approximately 60,000 buttons they’ve used in panels they’ve made over the last 12 years, the only button failures they’ve experienced have been on four or five of the original CP-100 panels that were shipped 12 years ago, all of which have seen intensive use. With that kind of reliability, I’m very happy with Tangent’s choice of hardware.

Each panel has an OLED display at the top, with multiple lines of text designed to dynamically label the functionality of each row of controls on every panel. I was wondering if I’d find this visually confusing, and the truth is I haven’t. Unlike LCD displays, OLED displays aren’t polarized, and the “lens covers” over each panel’s display have been specifically engineered not to interfere with the polarized glasses used by passive stereoscopic monitors, so that’s an additional bonus if you regularly work on stereoscopic projects.

There are many grading and postproduction applications that have announced Tangent Element support, but I’ve only been using these panels with DaVinci Resolve. In general, I’ve found the Resolve mappings quick to learn and easy to operate, and they hit all the basics. However, I agree with those who’ve voiced a desire for a bit more mapped functionality, as there’s plenty of room for more. On the other hand, room for improvement does not mean the current mappings are bad, and I wholeheartedly recommend these panels for Resolve use.

So that’s my overview. If you’d like to learn more about these panels, I heartily recommend Patrick Inhofer’s video review if you’ve not yet seen it, at his excellent Tao of Color website. He demonstrates the panels in action, which lets you see how the mappings work with Resolve. Also, I want to point out that panel touch and feel is subject to very personal preferences. Before buying any panel, I strongly recommend you find a way to actually try it out in person to make sure that it’s your cup of tea. There are many different color correction control surfaces on the market, and each has its fans and detractors; the only way to really know if a panel will work for you is to try before you buy.

What’s in a Name?

The following is a reply I posted originally on Creative Cow’s DaVinci forum regarding the little line in the Vectorscope that serves either as an in-phase indicator for signal alignment, or an approximate guideline for the angle at which human skin tone may fall in a neutrally graded shot.

It was brought up that, since the original engineering purpose of this indicator was for image alignment, and had nothing in fact to do with skin tones, and since in-phase and quadrature indicators have nothing to do with high definition signals, it is perhaps inappropriate to carry this indicator forward into newer implementations of video scopes in the digital world. In particular, Mike Most (for whom I have the highest respect) wrote “an assumption that the I axis is there specifically for flesh tones – in particular Caucasian flesh tones – is incorrect, based on equally incorrect information printed in an Apple document.” You can see the full discussion here. The interesting parts come towards the end.

Mike Most makes some excellent points about the origins of this indicator, and about over-reliance on it being a crutch for inexperienced colorists. However, “incorrect” is a strong word, and since I wrote the Apple documentation under discussion, I thought it might be interesting to shine a light on both some design decisions that were made in FCP and Apple Color, and how terminology gets coined and disseminated.

When FCP 3.0 was in development, the then-new color correction tools and video scopes being added were brand new to the majority of desktop video editors, and the engineering team was working to try and make this unfamiliar paradigm of lift/gamma/gain style controls and the accompanying scopes comprehensible to a new audience. It was a deliberate design decision to include one half of the in-phase axis line all by itself as an indicator of the general hue of skin tone, since to my knowledge the not-so-coincidental dual use of that indicator had been a documented rule of thumb of videoscope use for many years prior. This coincident use was not something the Final Cut Pro engineering team made up.

In an effort to make the purpose of this line more transparent, I (as the writer of the manual) and some others decided to call this the “Flesh tone line,” a decision I now somewhat regret, as it muddies the history of this indicator. And yet, since this was the only one of the in-phase and quadrature lines that the team elected to draw on the FCP vectorscope, I stand by the decision: it made immediate sense to the new user, and the purpose of this indicator, as intended, had nothing to do with signal alignment and everything to do with providing a flesh tone signpost to people new to reading scopes.

Regarding the “I-bar” terminology, this was the term I decided upon when writing the Apple Color documentation, as I expected a more experienced audience would appreciate an acknowledgment of the original purpose of this line. Also, the Color team implemented all four in-phase and quadrature indicators, so it seemed appropriate, “I” for in-phase, bar because it’s a line.

I did not make this term up; after scouring different terminology from various sources, I found and settled on “I-bar” as the shortest term for purposes of documentation (try typing “i-axis indication line” ten times fast). Unfortunately, I can’t cite my source anymore as this was years ago, but nobody in a position to offer a technical review of that manual, nor any colorist who’s done either technical or casual reviews of books I’ve written since, has ever informed me that “I-bar” is wrong, and I’ve been using it consistently ever since. If, in fact, it turns out that my deadline had made me delusional and “I-bar” was the fevered ravings of a caffeine-addled technical writer, then I still stand by it since it’s less to type and is a fine abbreviation, but I cannot in truth take the credit.

I’ve discussed the history of the I and Q axes with many folks over the years, and while it’s true that the engineering reasons behind these indicators have nothing strictly to do with flesh tones, my personal feeling is that the coincidental utility of the in-phase indicator’s position has, over time, come to outweigh its original purpose, and in fact I would consider the “I-bar” we’re currently referring to as a new thing that co-opted the old, sort of like Easter co-opting an earlier collection of various pagan celebrations.

To clarify, I would never and have never suggested that this line is a strict guideline for human hue. In my “Color Correction Handbook” I wrote and illustrated more pages than my editor may have wished about the subtle variations of human skin tone, color interactions between a subject and the illuminant of a scene, and how the in-phase indicator under discussion is merely a general signpost. Like a speed limit, nobody follows it exactly, but it lets you know roughly where you ought to be.

Lastly, I’ve used scopes that have an I-bar, and I have a very expensive scope that doesn’t (in HD mode, as has been pointed out), and while I still think it’s nice to have, its absence has never hampered me from delivering attractive skin tones to my commercial clients. However, given the choice, I’d like to see this indicator as an option for folks who like it; in fact, I’d love to see someone develop the option for multiple programmable vectorscope indicators at user-selectable angles, but then I’m a bit nutty for options. The hue that, in NTSC, is represented by the I-bar can certainly be mathematically translated into the same hue in HD color space, and I see no reason why that wouldn’t be useful or appropriate, if it’s documented clearly that this is no longer in-phase, but in fact an analogous flesh tone guidepost that can be turned on or off.
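
That translation is straightforward to demonstrate. Here's a sketch of my own (the skin-tone RGB value is hypothetical, chosen for illustration) that computes the vectorscope angle of a color under both the SD (BT.601) and HD (BT.709) luma coefficients; the resulting angles land within a few degrees of each other, which is why a flesh tone guidepost carries over to HD perfectly well:

```python
import math

def vectorscope_angle(r, g, b, coeffs):
    """Vectorscope angle in degrees (counterclockwise from the +Cb axis)
    of a 0-1 RGB color, given luma coefficients (kr, kg, kb)."""
    kr, kg, kb = coeffs
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2 * (1 - kb))  # scaled B-Y
    cr = (r - y) / (2 * (1 - kr))  # scaled R-Y
    return math.degrees(math.atan2(cr, cb)) % 360

BT601 = (0.299, 0.587, 0.114)      # SD luma coefficients
BT709 = (0.2126, 0.7152, 0.0722)   # HD luma coefficients

skin = (0.78, 0.57, 0.44)  # a hypothetical light skin tone, 0-1 RGB
sd_angle = vectorscope_angle(*skin, BT601)
hd_angle = vectorscope_angle(*skin, BT709)
```

Both angles fall up and to the left on the scope, in the neighborhood where the traditional flesh tone line is drawn, and they differ by only a couple of degrees.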

I would suggest that video scopes at this point are simply software, and it should be no sin for developers to add new features of utility to users and to label them clearly. I’m fond of pointing out that the days of fixed ground glass graticules are over, and it would be nice to see developers find more things to do with scopes for both basic and advanced users than to simply replicate functionality from analog, trace-drawn CRT technology.

Color Correction Handbook 2nd Edition: Grading theory and technique for any application.
Color Correction Look Book: Stylized and creative grading techniques for any application.
What's New in DaVinci Resolve 15: Covering every new feature in Resolve 15 from Ripple Training.
DaVinci Resolve Tutorials: Far ranging DaVinci Resolve instruction from Ripple Training.

Starship Detritus—The Artwork is Finished!

I received some fantastic news this week from my collaborator, illustrator Ryan Beckwith, regarding our long-term project, Starship Detritus. He’s been laboring for months on the art for our pilot episode of this animated science-fiction series. This being a side-gig for both of us, his work as a commercial storyboard artist kept interrupting (damn you for being so successful, Ryan!), but getting the news that he’s finished is the biggest leap forward since I finished writing all 13 episodes of the first season.

Of course, now it means I need to get off my backside and start scheduling some After Effects character animators to put these images into motion. Ryan’s been creating high-resolution, multi-layered Photoshop comps (in conjunction with his assistant, Ryan Zalis, who aided with flatting and other assorted tasks). Working with our first animator, Steve Rein, the artwork has been constructed to accommodate skeletal and puppet-tool animation in After Effects.

Being an illustrator and not an animator, Ryan has gone in a much different direction with the artwork than in most animations. From the very first color tests he did, I was impressed with the texture and detail he brought to the world I’ve written, and it’ll be exciting to see the scenes come alive.

It’s a bit poignant; I used to work with After Effects every day back in the late ’90s, but having been focused on color correction for so long, at this point I’m so rusty that I’d rather work with faster artists to bring these characters to life. I’ll stick to animating the camera and framing of the final comps for rendering out the finished shots.

Of course, as the writer/director/editor, I’ve a few other things to handle. The very first thing I did, after Ryan and I storyboarded the first episode, and he created the first complete set of roughs, was to record a group of temp actors reading the script, and edit together an animatic in order to get the timing of the show right. This has been our reference going forward, and as soon as I get my hands on the full finished set of artwork, I’m looking forward to updating the animatic with the color art.

Which will take a bit of doing. The original animatic was put together in Final Cut Pro, but given this is such an After Effects-heavy project, I’m planning on moving the entire edit over to Premiere Pro in CS6, to take advantage of its AE integration. I’m hoping this creates some efficiencies. Besides, it’s an excuse to learn a new piece of software by doing something real, which I find is always the best way to learn.

Additionally, since this process has ended up taking far, far longer than it was supposed to (par for the course), I’ve begun novelizing this first season. As fun as the 13-episode, ten-minute-per-episode structure I used for the scripts has been, there’s additional story that my chosen format simply won’t accommodate. Prose has been a perfect outlet for the added bits, and the idea of telling this story across different platforms is tremendously appealing to me. At this point I’m 11,000 words into the novelization, and having tremendous fun with it. Alas, my day job keeps interrupting; work on the new version of the DaVinci Resolve 9 manual is making it challenging to schedule creative time.

Moving forward, there are plans within plans, and I’ll be sure to share more when there’s more to share. It’s easy to get caught up in the day-to-day grading and tech-writing work that I do, but creative projects like this are what brought me into post-production in the first place, and it’s gratifying to be making progress on my biggest creative project to date.


Mayo Clinic Spots


A few months ago I graded three web spots for the Mayo Clinic at Minneapolis’ Splice Here, for whom I’ve been doing some freelance grading. They posted the full spot on a page highlighting some of my work.

It’s a fun, high-style grade that splits the top and bottom halves of image tonality for separate rebalancing, employs selective desaturation using the hue curves, adds some subtle glow via luma keying, and includes some individual work on skin tones to keep them natural amidst all the stylization. That’s one of the great things about working on spots: you get to dig so much deeper into the grade than with most other types of shows I work on.
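As a rough sketch of just the luma-keyed glow step (the threshold, softness, and strength values here are arbitrary assumptions, and a naive box blur stands in for whatever blur a real grading application would use):

```python
import numpy as np

def luma_glow(img, threshold=0.7, softness=0.15, strength=0.3, radius=4):
    """Add a soft glow keyed off the image's highlights.

    img: float RGB array of shape (H, W, 3) with values in 0-1.
    """
    # Rec. 709 luma weights drive the key
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Soft-edged luma key: 0 below threshold, ramping to 1 over `softness`
    key = np.clip((luma - threshold) / softness, 0.0, 1.0)
    keyed = img * key[..., None]
    # Naive box blur of the keyed highlights (placeholder for a proper blur)
    k = 2 * radius + 1
    pad = np.pad(keyed, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    glow = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            glow += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    glow /= k * k
    # Additive blend, clipped back to legal range
    return np.clip(img + strength * glow, 0.0, 1.0)
```

The key only passes pixels above the luma threshold, so the blurred result spills softly out of the highlights while leaving midtones and shadows untouched, which is the gist of the technique.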


It’s Not About Piracy, It’s About Respect

I’ve been mulling over the topic of piracy and media consumption for several years. As a writer in the middle of developing a project with a web component, it’s of great interest to me whether or not it’s possible to make money creating an ambitious video series primarily for a digital-download audience.

Lately, there’s been a lot of back and forth about the rights of the individual versus the rights of copyright holders, consumer convenience, dumb-ass big media companies, etcetera. There’s a lot of high-minded rhetoric flying around on both sides, but in all the debate, I can’t help but feel that the concerns of individual copyright holders, be they artists, writers, filmmakers, or programmers, are being forgotten in all the angst over “big media.”

However, before I continue, I want to make four quick points so you know where I’m coming from.

(1) I’m not going to discuss large corporate media, since that of necessity addresses a whole set of issues that I think dilutes the fundamental issue of creator compensation. Instead, I want to focus on small-time, creator-distributed media, which I would like to think is the future of media. It’s always been my dream that we creators have an environment in which we can sell content directly to the audience. And technology could make that more feasible than ever.

(2) I believe we can all agree that DRM is a giant pain in the ass, and it’s not a credible answer. Also, I’m in favor of liberal fair-use policies. Individuals shouldn’t have to live in fear when creating mash-ups, remixes, and the like. Clear, universal policies with no repercussions for non-commercial activities should be put into place.

(3) However, I firmly support copyright as an artist’s most effective, international, treaty-ratified protection against big media poaching an independent creator’s intellectual property. On the other hand, I think copyright needs to expire with no exceptions, and not be constantly re-extended for well-heeled corporations. If patents expire like clockwork for major pharmaceutical companies’ most expensive medications, then the Disneys of the world can let their copyrights expire, as well.

(4) Making it difficult for people to buy one’s content easily and affordably is probably stupid.

Okay, let’s talk about piracy.

As an author, I have no interest in pursuing criminal charges against folks who consume media I’ve created without paying. Personally, I make a distinction between simply copying a file, and enjoying the media therein. If everyone in the world copied the file of one of my books without reading it, I honestly wouldn’t care. Where I draw the line is when folks watch the movie, read the book, or listen to the song, and then don’t pay. That, I consider to be thoughtless behavior.

My main point is simple math. If a creator’s job is to create, then someone has to pay for that creator to keep doing what they’re doing. If the creator sucks and nobody much buys the thing, then it’s artistic Darwinism and time to go get a day job. However, if the creator is terrific, and lots of folks listen to/watch/read/play the thing without paying, then that deliberately avoids rewarding artists for doing good work, and is a tragedy regardless of your thoughts about free culture.

Big media wants to protect the profits of copyright holders by enforcing draconian laws and technological boondoggles, none of which I support because these schemes go overboard and infringe on genuine civil liberties, and from a technological perspective promise to cause far more problems than they would solve.

Rather, I think the fundamental issue at play is people’s attitudes about media consumption, and about paying the artists’ price for what they read/watch/hear/experience.

Making money off of digital media is a numbers game. Folks expect low prices, so the aggregate is important. The more people decide to download media file X and then pay the creator for it, the more money the creator has. It’s that simple.

If you’re not planning on paying for a piece of media you’ve listened to/watched/played/read, then yes, you can provide free publicity for the creator, spreading the word on Twitter and your blog and Facebook and by texting all your friends. And if you’re dead broke, that’s cool. It’s genuinely helpful. But if you’re not broke, at the end of the day you could have done that and given them five dollars. Or two dollars. Or 99¢.

You can argue that copying without payment is not theft, that nothing’s been taken, that the file being duplicated makes more! And I’ll agree with you. Copying a file and then using it without payment is, to my mind, no more an act of larceny than refusing to toss a buck in the cup of a street musician after standing there listening to their whole song. But it is miserly to do so if you have the money to spend. And rude.

At the end of the day, digital media distribution makes filmmakers, musicians, writers, programmers, and other creators of mass-distributable content the equivalent of buskers standing by the side of the street. You can enjoy what we make for free, and it’s up to you whether or not you pay us. And whether you as a creator love this new reality or hate it, that’s the truth.

However, it’s disingenuous for tech pundits to stand by the side of the road and say that figuring out how to make a profit is the artist’s problem, or to suggest that in the future perhaps it’s simply not possible for creators to make a living doing nothing but creating.

To me, the argument is not whether copying media freely is right or wrong; it’s an issue of manners. Of respect for the creator’s time, and the resources that were put into the making of that thing you’ve decided to copy to your digital device in order to upload into your brain.

If you want a more self-serving reason to fork out cash for digital media you enjoy, consider whether or not you want that creator to keep creating. For anyone planning a media project of any sort of ambition, the math regarding whether or not it can be done is simple.

How much folks will pay me
−
How much it costs to make
=
Whether or not I go broke
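Plugging in some purely hypothetical figures (none of these numbers come from a real project), the break-even form of that equation is a one-liner:

```python
# Hypothetical figures for illustration only
cost_to_make = 8000.00  # production budget in dollars
price_paid = 2.00       # what each paying viewer kicks in

# How many paying viewers it takes before the creator stops going broke
break_even_viewers = cost_to_make / price_paid
print(break_even_viewers)  # → 4000.0
```

At low per-copy prices, the aggregate is everything: every person who enjoys the thing without paying pushes that break-even number further out of reach.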

Keep in mind that not every type of media you might want to download is created solely as a function of one person’s time. In the case of a film, a whole lot of resources can go into even the humblest 5-minute project. Paying other artists and actors, buying materials for sets and props, paying for insurance, renting equipment, buying bags of clothes-pins, getting municipal shooting permits; the list can be quite long.

And when it comes to costs, time must be assigned a value. No matter what kind of media we’re talking about, the artist’s time is worth money, and it’s a mistake to think otherwise.

I also believe that artists do their best work when they have the ability to focus on what it is they’re doing, as opposed to working on their thing at 11pm after spending all day waiting tables, pumping gas, or writing backend database code. Creation is a job, too, and it benefits from a fresh mind and well-rested energy.

So, if you want your favorite artist to be able to focus on what it is they’re creating for you to consume, it would behoove you to toss five or ten bucks into their project. If you’ve got it. And if you don’t have it, keep them in mind when you do.

It’s the nice thing to do.


I Do Like to Talk

And Mark Spencer and Steve Martin do their level best to keep me going in this hour-and-a-half interview on MacBreak Live, wherein I discuss how I got started with color correction in the first place, why I like using Resolve, control surfaces, monitors, grading for the web, how I organize grades, how to move projects from Final Cut Pro X to Resolve, why experience matters, and what I think distinguishes colorists who take the craft seriously. It was a fun chat, and I hope you like it.
