Ron Dawson at Dare Dreamer magazine conducted a really fun interview with me, the result of which is an eighty-six minute episode of the “Crossing the 180” podcast. We talk about the utility of film school, filmmaking and creativity, and of course, color correction. It’s a wider-ranging conversation than I usually get to participate in, and you might find it interesting.
This year’s IBC conference was a busy one, and since I was neither speaking nor teaching this time around, I had plenty of time to survey the current state of grading software across most companies. There’s never enough time to see everything, but I got a good overview of most of the things that interested me.
I’ve been asked by a few people what, in my opinion, the “theme” of this year’s IBC was. For me, it was workflow. Everyone seems to be looking to make managing media and metadata from on-set through post more streamlined, or to integrate effects, grading, and finishing in different ways. Refinement also seems to be the name of the game, with most grading apps I saw having polished their UIs and added small but useful features such as composite mode blending of multiple layers (a feature that every single grading application now seems to boast). Incorporation of compositing and effects tools into grading applications continues at a rapid pace, with features such as high-quality motion-estimated speed effects finding their way into more and more grading applications.
The first booth I visited was that of Filmlight, makers of Baselight. A preview version of Baselight was shown with new composite mode blending of different layers, but it seemed that Filmlight’s major news was more related to workflow integration. Now that they’re shipping Baselight plugins for both FCP 7 and Avid Media Composer/Symphony, with a plugin for Nuke right around the corner, Filmlight is in a position to offer a unified and consistent means of handing off comprehensive grading adjustments from NLE through compositing to grading and finishing. Their web site touts this as “Filmlight at every stage,” and they’re working hard to achieve this.
Filmlight also introduced the Flip, a sort of “Baselight in a box,” in order to bring Baselight goodness to the on-set crowd. The Flip is a self-contained Baselight system—simply plug in a monitor and a control surface (the Tangent Element panels are supported) and you’re in business. The front of the Flip has buttons with dynamically updating labels (similar to the Blackboard 2), a touch-sensitive display, and a single trackball/ring control that allows simple adjustment even in the absence of a surface.
The idea is that, in addition to being able to use CDL-compatible tools to apply and manage simple primary grades from on-set through post, you can use the Flip to apply full Baselight grades while on-set, previewing them live on your camera’s output, and linking them to your captured media. Once the media/grade relationship has been defined, Baselight’s BLG file format (Baselight Grade File) is used to exchange grading data, with optionally embedded before/after wipe frames of the media files, and the option to even embed the full resolution media itself (as OpenEXR frames of the raw pixel data). At minimum, you can exchange media-less BLG files that contain all of the full multi-layered Baselight grades that correspond to your project’s media among the Flip, FCP 7, Avid, and Nuke Baselight plugins, and a full Baselight workstation. Very cool, from a workflow perspective. (A video about the BLG file format is available here).
Speaking of Tangent, their new Element panels seemed to be everywhere at the show except the Quantel booth. Currently Resolve, Lustre and Flame Premium, the Filmlight Flip, Scratch, Mistika, and REDCineX all support the Elements, along with numerous other applications and utilities. Tangent’s main news was a set of new Pelican case foam inserts for folks needing portability; I could have sworn I took a picture, but alas I did not (they’re black). Pricing information is not yet available, but the foam inserts are designed to fit the relatively slim Pelican iM2370 case, and the larger iM2500 (a wheeled case which is capable of fitting both the panels and a 15″ laptop). I’m looking forward to picking one of these up for my on-location gigs.
After a few years of not having time to check out SGO’s Mistika software, it appears I picked a good year to get a comprehensive demo, as Mistika now sports a brand new color correction interface. Known for being an integrated editing, compositing, and grading environment with an excellent stereoscopic toolset, it’s nice to know that they’re not resting on their laurels, and that they’re continuing to develop and refine their toolset.
I was unfamiliar with their previous color correction interface, but the new UI has everything you would expect, with lift/gamma/gain three-way controls, multiple layers of primary or secondary correction (called Vectors) embedded into each “timeline layer” of grading, printer light controls, white/black “manually sampled” auto-correction, tracked shapes, qualification, et cetera. All of this is compatible with up to six Tangent Element panels; in fact, Tangent tells me that Mistika supports more simultaneously mapped Element panels than any other application right now, making good use of the Element’s expandability.
One unique feature is a mode of “five-way” correction, with five color balance and contrast controls for lift/shadows/gamma/highlights/gain adjustment, simultaneously presented. This is an interesting variation that I can see being quite useful.
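If you’ve never thought about what actually distinguishes these controls, a minimal sketch of one common textbook lift/gamma/gain formulation looks like this (to be clear, this is illustrative math of my own, not Mistika’s actual processing):

```python
import numpy as np

def lift_gamma_gain(rgb, lift=0.0, gamma=1.0, gain=1.0):
    """One common textbook formulation of lift/gamma/gain, applied to
    normalized (0.0-1.0) values: lift raises the blacks, gain scales
    the whites, and gamma redistributes the midtones between them."""
    out = rgb * gain + lift * (1.0 - rgb)  # lift is weighted toward the shadows
    out = np.clip(out, 0.0, 1.0)
    return out ** (1.0 / gamma)            # gamma bends the midtones

# Lifted blacks, gain-scaled whites, gamma-bent midtones
print(lift_gamma_gain(np.array([0.0, 0.5, 1.0]), lift=0.1, gamma=1.2, gain=0.9))
```

A five-way scheme presumably layers separate shadow and highlight controls whose influence overlaps these three, but that part is SGO’s secret sauce.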
Mistika’s grading and compositing is layer-based, and one aspect of this is the concept of a single layer grade being able to encompass multiple clips (similar to track effects in Symphony, or track grades in Speedgrade). What’s unique in Mistika is the ability to “route” key data from one layer to another in the stack, which provides functionality similar to DaVinci’s node-based key and RGB routing. Since Mistika is as much a compositing tool as a grading tool, this provides many node-based advantages while retaining the familiarity of a layer-based interface.
One touch I like in the “trackless” timeline is the resizable playhead, which lets you change its height in order to control how many composited layers you want to preview as you play. A taller playhead that intersects all layers shows the full composite, whereas a shorter playhead that only intersects two of the layers results in only those two layers being composited during playback.
Though not new, I wanted to get a look at the stereoscopic 3D tools that I often hear folks rave about, and I wasn’t disappointed. SGO has implemented a real-time optical flow engine that enables real-time slow motion and format conversion processing. However, it also enables automatic pixel-by-pixel geometry and color matching between challenging left- and right-eye stereoscopic media, making short work of clips where differences in paint reflectivity and sky polarization cause other auto-matching tools to fall short of an overall match. Furthermore, knowing that optical flow processing can produce visual artifacts in clips with overlapping elements in motion, Mistika’s full compositing toolset can be used to isolate and manually repair sections of the odd clip where optical flow artifacts happen to appear.
One of the things that really impressed me, though, is Mistika’s tool for using optical flow processing to alter the interocular distance of elements in the scene at a particular range of depth. SGO refers to this as “depth-discriminated interocular adjustments,” and this lets you stretch or squeeze selected regions of the picture to come forward or move back in postproduction. This is a level of detail that made the filmmaker in me rejoice; I was very impressed.
Speaking of stereoscopic work, Omnitek was showing off their tools for stereoscopic analysis and quality control. Formerly PC-based software, Omnitek’s scopes now come in familiar, self-contained form factors—a rasterizer pizza box (the OTM 1001) and a more traditionally “back-to-the-70s” handled box with a screen (the OTM 1000). Both form factors contain exactly the same hardware, and in fact cost the same, so it’s simply a matter of convenience.
Getting a walk through the different displays, I definitely got the sense that this option is valuable for shops doing lots of stereoscopic work. For example, a multi-planar depth scatter graph shows the overall range of positive and negative parallax in an easy-to-grasp visual manner, with indicators for the outer acceptable range of depth, and shows which parts of the image correspond to what depth. This display corresponds to a depth histogram at the right.
Below that, discrete RGB channel left/right eye exposure comparison scopes show discrepancies in exposure via a horizontal bend in the offending channel’s otherwise vertically oriented graph.
A series of bar graphs show left/right eye discrepancies in depth range, vertical and horizontal position, rotation, zoom, sharpness, and color, with unambiguous center indicators for each property.
Now that these scopes are self-contained, and I’m told pricing starts at $4K (the stereo options are extra), Omnitek is definitely worth a look if you’re in the market for a dedicated set of outboard scopes.
While I was looking at outboard gear, I took the time to speak with someone at Snell about the Alchemist video signal converter. I’ve long heard that the Alchemist is one of the premier boxes for format conversion from NTSC to PAL and back again, and was curious to learn more about this somewhat obscure piece of equipment. Unlike other solutions that rely upon optical flow analysis, the Alchemist relies upon a technique called “phase correlation motion estimation” to do its magic. I’m told that the Alchemist’s success at conversion is due in part to years of careful refinement of this method of processing, stemming from customer issues and requests. It’s nice to hear about this kind of evolution in a product.
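For the curious, phase correlation itself is a classic signal-processing technique: the phase difference between the Fourier transforms of two frames encodes the motion between them. Here’s a minimal sketch of the core idea (purely illustrative, and surely nothing like the years of refinement inside the Alchemist):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the translation between two equal-sized grayscale frames.
    The normalized cross-power spectrum keeps only phase information,
    and its inverse FFT produces a sharp peak at the offset."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12  # discard magnitude, keep phase
    peak = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    # Peaks past the midpoint wrap around to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# A frame circularly shifted by (3, 5) pixels correlates back to that offset
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
print(phase_correlate(np.roll(frame, (3, 5), axis=(0, 1)), frame))  # → (3, 5)
```

In a real converter this sort of measurement would be made per-region and per-field to build the motion vectors used to interpolate new frames, which is where the hard engineering lives.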
Interestingly, even though the Alchemist hardware continues to be designed for moving SDI/HD-SDI signals in and out, Snell has come up with a way for both new and existing customers to process video in file-based workflows, using something they call their FileFlow server. It’s basically a computer that can be connected to an existing Alchemist via the HD-SDI inputs and outputs, which itself can be connected to your facility’s network. Video files in supported formats can be uploaded directly to the server, managed via a list-based interface, and converted using the Alchemist hardware.
When I inquired about this somewhat indirect add-on approach, I was told that they wanted to develop a solution that could be added by the substantial existing customer base, instead of requiring everyone to purchase a new box.
This brought me to the area of the conference center that showcased monitors, among other dedicated bits of hardware. I had a nice chat with Bram Desmet at Flanders Scientific about the new 12-bit XYZ monitoring option which is available as a firmware update to all existing customers. The 24-inch 2461W was already a well-regarded, flexible display, but this makes it even more useful in a wider variety of postproduction situations.
However, what I was really interested in was Flanders’ 10-bit CM170. Even though this 17-inch display isn’t exactly new (it was announced at NAB), this was my first look at it, and I liked what I saw. While these days it’s really too small for a room with five clients in it, Flanders has designed it to be a full-resolution, color-critical display, and at $3,295 it’s probably the best bang for the buck available as a grading monitor for a small, one-colorist, unsupervised suite.
Additionally, the fact that it’s compact, accurate, and full resolution makes this an especially attractive monitor for on-set work. For that purpose there’s an external DC power input and screw holes for battery attachments. Nice.
Just a few booths away, I also checked out the Penta Studiotechnik HD2 range of color-critical displays. I was previously unaware of this company’s offerings, and while the show floor is a terrible place to do a proper evaluation, Steve Shaw of Light Illusion spoke highly of them, and has worked with the company to make Lightspace available as a calibration option.
Penta Studiotechnik offers a range of LCD displays under the HD2 Pro brand, in a variety of sizes. The 32″ and under displays use ND filters to control light output and improve blacks, while the impressive-looking 55″ panel is claimed to offer 187 degrees of viewing, with a glossy screen that doesn’t need ND filtering. If you’re researching different displays, it’s another company to look into.
Getting back to grading software, Quantel was showing off their new Pablo Rio. Building upon the features of the Pablo, Rio offers all that and more in a new, hardware-agnostic version. Quantel is relinquishing their dependence on dedicated processing hardware, and embracing GPU-based processing. What they were showing on the floor was a Windows-based solution using two Nvidia Tesla cards to do all of the real-time magic one would expect from Quantel. For video input and output, the Rio still uses a Quantel I/O card, but they have plans to support the Atomic I/O card later. Alongside more flexible hardware support, there’s a new ability to soft-mount all supported media formats (and it’s a long list) from any connected volume.
In addition to shedding proprietary hardware, Quantel has updated the grading side of things as well. The UI has seen some tidying up, and a pop-up has been added to switch the three way color balance and contrast controls among shadows, mid tones, and highlights, allowing for “nine-way” adjustments that are similar to what Speedgrade and Lustre colorists are used to.
Additionally, a new ability to customize the overlapping ranges of influence of the lift/gamma/gain controls has been added, via overlapping curves.
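To illustrate what “overlapping ranges of influence” means in practice, here’s a rough sketch of how zone weighting curves can work; the crossover points and linear falloffs are invented stand-ins of my own, not Quantel’s actual curves:

```python
import numpy as np

def zone_weights(x, shadow_end=0.33, highlight_start=0.66):
    """Compute shadow/midtone/highlight weights for normalized
    (0.0-1.0) image values. The three weights always sum to 1.0;
    moving shadow_end or highlight_start widens or narrows each
    control's range of influence, which is the sort of
    customization described above."""
    shadows = np.clip(1.0 - x / shadow_end, 0.0, 1.0)
    highlights = np.clip((x - highlight_start) / (1.0 - highlight_start), 0.0, 1.0)
    midtones = 1.0 - shadows - highlights  # whatever's left belongs to the mids
    return shadows, midtones, highlights

# A shadow-zone pixel, a midtone pixel, and a highlight pixel
s, m, h = zone_weights(np.array([0.1, 0.5, 0.9]))
```

Each correction is then scaled by its zone’s weight before everything is summed back together, so adjustments blend smoothly instead of banding at hard zone boundaries.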
In another surprising example of opening up, Quantel has licensed the Mocha planar tracking toolset, incorporating it directly into Rio, where it can be used for tracking shapes in secondary operations.
Keeping with one of my unofficial themes of the show, Rio adds the ability to recombine secondary operations (referred to as Cascades) using composite modes. While I ordinarily wouldn’t include a screenshot, the sheer number of composite modes available to choose from is unexpectedly massive.
Furthermore, Quantel is supporting third party filters, with the ability to apply one per cascade (they were demoing the peerless Sapphire plugin set on the floor). Rio also boasts a new higher quality sharpen filter; given the endless parade of soft-focus HD material found in run-and-gun projects, a better sharpen filter is always welcome.
Finally, Quantel was showing Synthia, which offers optical flow-based image processing for stereoscopic 3D media similar to what Mistika does—altering a range of interaxial depth. However, unlike Mistika, Synthia does this via a separate application, so it’s not otherwise integrated with the full Rio toolset. I was told this was done to make Synthia available to folks who are focused only on stereoscopic processing, so they don’t need to spend the money on a full Rio license, but I imagine users would like to see those features get rolled into Rio down the road.
Seemingly eager to shed a reputation as one of the most expensive grading solutions on the market, Quantel was quick to discuss pricing: available as software only, Rio comes in at $47K (with an additional 40% off at the moment), while a full turnkey solution with software and hardware costs $130K (with an additional 30% off at the moment). Yes, I know, that’s still not exactly cheap, but it’s a heck of a lot less expensive than what was previously available from Quantel, and you get all of the real-time multi-format conversion, integrated editing, compositing, paint, and grading that previously cost a mint, so it’s progress.
Stopping by Dolby’s booth, I was truly impressed at the Philips glasses-free auto-stereo television being shown. It’s a 4K display that shows each eye at a full 1080 HD resolution, the viewing angle was impressive (though front-on was still optimal), and there were plenty of “sweet spots” as far as the lenticular front of the display went; I only needed to move a couple of inches to the left or right to jump from one sweet spot to the next, and each sweet spot had a wide range.
“Dolby 3D,” in addition to handling the stereoscopic to auto stereo conversion, also touts “adjustable depth,” basically embedding a per-pixel depth map into the video signal stream so that, with a single control, viewers can adjust and collapse the depth of the stereo being presented from the full default, all the way down to no depth at all, depending on one’s comfort level.
I was also lucky enough to have a chat with Bob Frye, the product manager for the Dolby PRM-4200. With the recent price drop from $50K to $30K, this display has gone (for me) from completely unattainable to merely unaffordable, and I wanted to have another look just to make myself jealous. In particular, I was curious to learn where it excels the most over more affordable plasma and LCD solutions. The real draw seems to be its shadow reproduction. I’m told the tonal reproduction in the blacks is very smooth, with every code word drawn without clipping; it certainly looked good to me. I’m also told that at a recent “shoot-out” of different monitoring technologies and how well they could be made to match cinema projection, the 4200 was one of the best-regarded matches, so there’s that. With support for 12-bit XYZ right now (ACES support is being looked into), this seems to be a high-quality solution for cinema grading in smaller suites, and for a yearly fee, Dolby will send a calibrator to you to make sure it’s always in tip-top calibration for those $200 million tentpole jobs. So now I just have to convince Michael Bay that he needs to grade Transformers 4 here in Saint Paul with me.
Assimilate Scratch is another application that I hadn’t had a chance to check out at NAB, and I was quite pleased to see that they’ve taken pains to refine the onscreen UI, and have been making strides with greater integration of grading and 3D compositing. New features include a “Pre” track for inserting operations at the beginning of the Scratch image-processing pipeline, and nested scaffolds for precomping multiple scaffolds inside of a single scaffold’s worth of operations.
Scratch has also incorporated motion-estimated speed processing, making high-quality slow motion the rule, rather than the exception, among this year’s crop of grading applications.
Stack these together with their mesh warper and other compositing and effects tools, and you really start to see the kinds of work that Scratch is becoming capable of.
Scratch is also adding to their workflow story, with support for watch folders to do automated image processing, Mocha integration via the import of Mocha tracking data, and a new Nuke round trip workflow that supports the exchange of primary grades, framing info, edits, and LUT data via a Scratch node available inside of Nuke. Lastly, ACES support has been added alongside the previously supported colorspaces.
Last, but certainly not least, Blackmagic Design shipped the final version of DaVinci Resolve 9, as well as announcing their new DeckLink 4K Extreme video interface, with support for 10- and 12-bit RGB or YCbCr signal input and output at 4K resolutions via dual-link 3G-SDI. The UltraStudio Thunderbolt interface offers 10-bit RGB or YCbCr input and output at 4K resolutions, so there are two new Resolve-compatible video interfaces for the emerging 4K crowd.
I was also pleased to note that they’ve announced that the reengineered Teranex processors are now shipping, as is the Blackmagic Cinema Camera (an upcoming passive Micro Four Thirds version was also announced).
Catching up with the Resolve engineering team, the shipping version of Resolve 9 had some last minute new features slipped in, including a new RGB output in the Ext Matte node that’s useful for adding grain or distress stock video to a clip’s grade via one of the composite modes in the Layer Mixer node; short clips of grain will even be looped endlessly to match the duration of any clip. You do have to take the added step of opening the Media page and adding your grain or distress layer as a matte to the clips you want to use them with, but this ensures that you can render grain or distress into clips in “Render timeline as: Individual source clips” mode.
Another improvement is that the currently selected clip in the Lightbox can now be graded using your control surface, making the Lightbox into a way of quickly browsing and grading clips of a scene.
There’s also a functional alteration of a control that had already been available in the public betas. The contrast control in log mode has been changed so that there’s now a smooth rolloff at the highlights and shadows (in effect, an automatic S-curve), so detail won’t be clipped.
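Blackmagic hasn’t published the actual transfer function, but the general shape of a contrast adjustment with smooth rolloffs can be sketched with something like a tanh curve; this is a stand-in of my own devising, not Resolve’s math:

```python
import math

def soft_contrast(x, contrast=1.5, pivot=0.435):
    """Increase contrast around a pivot while compressing values
    smoothly toward 0.0 and 1.0 instead of clipping them, which is
    the general behavior of an automatic S-curve."""
    lo = math.tanh(contrast * (0.0 - pivot))
    hi = math.tanh(contrast * (1.0 - pivot))
    # Normalize so the 0-1 input range still maps onto 0-1 output
    return (math.tanh(contrast * (x - pivot)) - lo) / (hi - lo)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(round(soft_contrast(x), 3))
```

The slope through the pivot is steeper than 1 (more contrast), while the curve flattens toward the extremes, so near-black and near-white detail is compressed rather than discarded.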
The Keyframe Editor has been updated, with a new look for keyframes (static keyframes are round, dynamic keyframes are diamond-shaped) that makes them easier to select and drag.
Lastly, the video scopes have even been updated, with an optional skin tone indicator in the Vectorscope, and optional minimum and maximum reference level lines (yellow) in the Waveform.
So there you go. Honestly, this year’s IBC is probably the most fun I’ve had at a trade show in years, and it was really illuminating to see demos of nearly every major grading workstation on the market, side by side. As I tweeted during the show, each grading application has something that’s particularly special, but no one grading application does everything. That said, we’ve got more tools available to us than ever before in the history of this crazy profession. Now we’ve just got to figure out what to do with them.
So, Resolve 9 has finally been made public after much anticipation since its unveiling at NAB. Many of the new features have already been shown and discussed, but there are even more features shipping than have been talked about previously, and I thought it’d be nice to highlight seven of those in this post. (The lead engineer reminded me of, how could I have forgotten, the updated video scopes, which are so pretty I had to add a screenshot.)
Mixed Frame Rate Support
For me, this is the single biggest new feature in this release. Bigger even than the new UI. Mixed frame rate media has been a frequent hassle in projects I get from clients. Most NLEs let you edit any kind of footage you want together into a single timeline, regardless of frame rate. And as you may or may not know, mixing frame rates can be rather challenging when it comes to finishing, since you can ultimately only output one frame rate as your finished media file or tape output. Prior versions of Resolve were constrained to supporting only a single frame rate per project, but no more.
Resolve 9 lets you mix and match whatever frame rates are necessary within a single project, so long as you turn on the “Handle mixed frame rate material” checkbox in the Master Project Settings panel of the new Project Settings window (available by clicking the gear icon in the lower left-hand corner).
You have to turn this checkbox on before you import an AAF or XML mixed frame rate project (to learn why, check the manual). After you import your AAF or XML file with mixed frame rate media, you’ll want to make sure that your “Playback framerate” is identical to the “Calculate timecode at” setting for optimal performance. (Both settings are also in the Master Project Settings panel of the Project Settings window.)
When rendering a Mixed Frame Rate timeline, how the media is output depends on whether you render to Source or Target mode. In Source mode, each clip is rendered at its native frame rate, for handoff to another NLE or finishing application. In Target mode, all frames are converted to the frame rate specified by the “Calculate timecode at” setting of that project, letting you output the entire project as a single media file at the target frame rate.
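As a rough sketch of what Target mode has to do, here’s a simple nearest-earlier-frame mapping; actual conversion may blend or motion-estimate frames rather than repeating them, and this simplistic version is purely my own illustration:

```python
def retime_to_target(num_source_frames, source_fps, target_fps):
    """Map a clip's frames onto a target frame rate: output frame t
    shows whichever source frame covers the time t / target_fps.
    Assumes integer frame rates, for simplicity."""
    num_target_frames = num_source_frames * target_fps // source_fps
    return [min(t * source_fps // target_fps, num_source_frames - 1)
            for t in range(num_target_frames)]

# A one-second 24 fps clip conformed to 30 fps repeats frames to fill the second
mapping = retime_to_target(24, 24, 30)
print(len(mapping), mapping[:8])  # → 30 [0, 0, 1, 2, 3, 4, 4, 5]
```

Source mode skips all of this, simply rendering each clip at its native rate.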
I don’t know about you, but this alone is going to save me, and my clients, hours of project prep.
Lightbox View
This is another new feature that was previously unannounced. While working in the Color page, you can click the Lightbox View button:
…to view every clip in your timeline using the Resolve Lightbox.
The Lightbox view makes it easy to scan through your project looking for a particular scene, to make multiple selections in order to create groups, or to use the new Flag command to assign differently colored flags to various clips to note things you want to do. This is a terrifically timesaving feature for projects of any duration.
Clip Attributes

Another interesting new feature is the Clip Attributes window, found in the Media Pool. This window replaces many of the contextual menu commands available for altering various editable properties of clips, for example, to change data levels, pixel aspect ratio settings, or to reinterpret the alpha channel mode now that Resolve 9 supports alpha channels for imported media. It also handles timecode alteration and manual, per-clip reel name changes, as well as stereoscopic 3D media assignments.
What’s notable is that you can select multiple clips, and use the Clip Attributes window to change them all at once.
Expanded Metadata Editor

I had shown the metadata editor in my video presentation (viewable here), but since I showed it last, a shedload of editable metadata attributes has been added. Far too many to show on one page.
Fortunately, they’re organized into groups, which are available from a pop-up menu at the upper right-hand corner of the metadata editor.
If you’re working on digital dailies, or you’re an extremely organized colorist, this is going to be a benefit.
Big Ass Curves
One frequent complaint I’ve heard is that the relatively small size of the DaVinci Resolve custom curves made them difficult to use for precision adjustments. I myself had never quite noticed this to be a problem, but fortunately DaVinci heard your anguished cries, and provided a new Large Curve mode for the Custom Curves. Clicking a button at the bottom of the Custom Curves:
…opens up a window presenting a huge version of the same curves, with all the same controls.
Having used the large curves for a while, I can safely say that they’re a huge improvement (ha) and truly do give you more refined control of your curve-driven adjustments. I never knew what I was missing until I started using these, and now there’s no going back for those finicky log-to-linear custom adjustments I now find myself making with more frequency.
Updated Video Scopes
While they were updating the rest of the UI, DaVinci decided to update the video scopes, too.
The new one-window scopes look beautiful, and I find them easier to manage than the four individual windows that were available previously. Providing an analysis of every single line of image data, the Waveform, Parade, Vectorscope, and Histogram are all there. However, if you like, you can change the number of scopes displayed to 1-up, 2-up, or the default 4-up, which lets you enlarge individual scopes if you don’t need the whole shooting match. Performance is dependent on how much GPU processing power your workstation has, so single or dual GPU systems may have less than stellar performance. However, folks who routinely use the Resolve scopes have cause for rejoicing, as these are a distinct improvement over what was there before.
A New Manual
You knew I was going to mention this. I’ve been hard at work (which explains the paucity of blogging around here) for the last three months writing what has ended up being a 600-page, near total rewrite of the DaVinci Resolve 9 User Manual. (To give you some perspective, the previous version of the manual was 435 pages.)
It’s been quite a challenge keeping up with the DaVinci Resolve team as they’ve piled on the improvements and evolved the UI over the months, but it’s been a truly rewarding experience, and I’m rather proud of the result.
Now, bear in mind that, as the product is still in beta, the user manual is also a work in progress, with edits and screenshot changes yet to be put in. However, I’m glad that the team has seen fit to make it available to the public, so that everyone can get a jump on what’s new. There are a lot of subtle refinements, and I’ve tried hard to capture all the little things and interoperabilities.
There are a few things, however, of which I’m particularly proud. “Before You Conform,” on page 111, contains detailed information about project preparation, effects support from NLEs, an explanation of the rules for media conforms, details about image processing and clip data levels, a summary of ACES support in Resolve, and an overview of digital dailies workflow. I tried to answer a lot of the questions that folks have had about Resolve’s inner workings in this section, and I think you’ll find it illuminating.
Also, “AAF Workflow Overview” on page 137 provides a detailed overview, from soup to nuts, of how you get projects from Media Composer or Symphony to Resolve and back again. The DaVinci Resolve team has worked extremely hard to make this workflow smoother and easier in version 9, and I executed each workflow personally while writing this section (kudos to Avid for answering my questions and giving me additional support while I developed the content). If you’re dealing with AAF, read this section. It may explain some of the issues you’ve been having, and will guide you through ways of getting the job done.
If you’re completely new to DaVinci Resolve, there’s a new, almost 30 page tutorial on page 71. It’s basic, so if you already know Resolve, you can probably skip it. But if you’ve never used Resolve at all, it’ll give you a quick and thorough tour of bringing a project in, doing some grading using a core selection of the Resolve toolset, and then rendering your project out. And, you can follow along using the sample media that comes on the DaVinci installer disk (and is also available by downloading from Blackmagic Design support).
So, I hope you find the new version of Resolve as big an improvement as I do, and I hope the new manual helps you to get the most out of it.
I wasn’t going to chime in on this ongoing conversation, as frankly I don’t know that I have anything meaningful to add, and there’s nothing worse than baseless speculation about Apple. However, friend and colleague Patrick Inhofer noticed a blog entry of mine dating from the summer of 2010, fully two years ago this month, in which I foolishly elected to weigh in on the topic in response to Apple’s second-to-last, somewhat weak refresh of the Mac Pro line.
Since I had just moved the client part of my color correction practice over to DaVinci Resolve, I needed a new machine for the suite, so I went ahead and bought the 2010 Mac Pro later that summer. Little did I realize I’d be parked on that machine for the next two years.
My speculations in that prior post are now woefully dated, and I have no problem admitting that I was completely wrong. Apple was obviously not waiting for next-generation FireWire; they went all-in on Thunderbolt. And clearly, updating to PCIe 3.0 hasn’t been a priority (yet). And so, all of us who are still Mac Pro-based shops continue to wait. So what do I think Apple’s going to do?
I have no fucking idea.
I might venture to guess that Apple is waiting for next gen Thunderbolt, but that’s hardly an original stroke of genius on my part since it’s the only machine in the lineup that’s lacking the new port. I’ve long been saying to friends that it wouldn’t surprise me at all if Apple rethought the overall form factor in some dramatic way, but that’s not exactly an original thought either. If someone put a gun to my head and forced me to make a bet, I would guess that Apple will most likely release something that they think will serve users in the Mac Pro market. And they may call it a Mac Pro, or they may not. Whatever they call it, the users will decide whether it’s a legitimate upgrade, voting with their wallets.
As I said on Twitter last night, if Apple releases a new machine that affordably does what I need, high-bandwidth data transfer among multiple high-end GPUs, with lots of RAM, fast CPUs, and access to suitable pro-video interfaces and accelerated storage, then I don’t care what they call the thing or what it looks like.
I’m not quite willing to believe that the Mac Pro is dead to Apple. After all, Apple isn’t shy about pulling the plug on things. When it was announced that the Xserve was no more, Apple blew out the stock and took the product off their storefront. That’s what I call a dead product. So long as the assembly line is cranking out new Mac Pros, no matter how creaky they are, I’m inclined to believe that there’s something on the horizon.
So me? I’m waiting to see. Granted, I’ve got a relatively “recent” Apple box, so I can afford to wait and see as my current equipment can keep up with the needs of my current clientele. But other folks, like one commenter who’s stuck with a machine that’s six-plus years old, have the really tough choice. I don’t blame anyone for going the Windows route; I think it makes all the sense in the world for someone who needs more power, to earn money and get things done, to switch platforms.
However, my general philosophy of buying new technology is to wait until two paying clients in a row come to me to do something that I can’t do with my current hardware. As my general goal is to avoid saying “no” three times in a row, I’ll gladly spend money for something that pays for itself in jobs I’d otherwise be unable to get. However, I’m not going to buy anything new just to have a new hotrod. Much as I’d love to, I’ve got other financial priorities.
So, while I can do what I need with what I have, as soon as I find myself in the awkward position of having a job suffer because my hardware isn’t up to the task, then I too will be evaluating my options. And I’m not at all opposed to switching to Windows, or even Linux, if that gives me better bang for the buck, and better capabilities, than Apple’s offerings on that date. My software is no longer a limiting factor (although ProRes encoding, distressingly, still is), so switching platforms, even multiple times, is not as much of a pain in the ass as it once might have been.
I figure I’ve got another 12-odd months with my current workstation before I too start feeling the pinch. If Apple makes something that’s expandable and useful by then, cool. But I’m not taking any bets. We’ll see.
Steve Hullfish was nice enough to send along the new second edition of his “The Art and Technique of Digital Color Correction” for me to peruse, which was most generous. I’d bought his first edition on my own dime and enjoyed it thoroughly, and we’re all lucky that his publisher (Focal Press) has seen fit to give him the opportunity to update this book, and add even more breadth and depth to the information within.
Steve has been writing about color correction in a digestible way for longer than anyone else I’m aware of. Once upon a time, his original “Color Correction for Digital Video” sat alongside Stuart Blake Jones’ “Video Color Correction for Nonlinear Editors” as the only two books on my shelf that covered this complex subject for the layman, and I benefitted alongside countless others from Steve’s clear presentation, and his inclusion of many voices from the field.
As useful as his original book is, however, “Art and Technique” goes so much further, especially in this new edition. Jumping from 370 to almost 500 pages, Steve has organized a wealth of interviews with some of the top colorists working today, discussing practical issues that colorists of any level of experience will benefit from.
If you’re interested in grading and you don’t have this book, go to your favorite vendor and simply order one right now. You need this on your shelf.
I was pleased to be in Montreal, presenting to the Final Cut MTL group’s PostNAB 2012 gathering, and they’ve released the video of my Resolve 9 preview on YouTube. If you’re a current Resolve user, you have a lot to look forward to. If you’re not, but you’re thinking of getting started, the next version will make it a much more enjoyable experience.
Alas, I only had so long to present, but there are plenty of other new features to look forward to, like node labels, renamable still store albums, additional metadata columns in the Media Pool, the list goes on and on.
Thanks again to Matt Pellowski at Red Line Studios for the footage I was able to use.
Folks who’ve been reading this blog for a while will know that I’ve been following Tangent’s development of the Element color correction control surface for a long while. They’ve now been shipping the Element for some time, and it’s been so unexpectedly popular that they’ve had some trouble keeping up with orders, which is a nice problem for Tangent to have, and I congratulate them.
At NAB, I met up with the principals, and they were good enough to provide me with a set to try out. So now, months after playing with their initial prototypes, I’ve finally had the chance to see how the shipping version works.
If you’re busy, here’s my quick takeaway. They feel fantastic, the build quality is everything one might want in a surface of any price, and their compact size makes them at home in anybody’s suite while the clever design doesn’t compromise features. At $3500 USD for the set (actually, $3,199.99 at B&H), it’s the best bang for the buck you’re going to get, in my opinion.
Now, for those of you wanting a bit more detail, let’s look a little closer. In fact, let’s start with an unboxing. When you order the set, consisting of one each of the button panel, the knob panel, the trackball panel, and the button/transport panel, you get a box containing four other boxes (five including a box for other hardware).
Since these panels are also available individually, Tangent made the decision to package them individually, so folks could custom order whatever combination they wanted.
For those of you looking for a convenient carrying case, these boxes aren’t it. However, I’m told that Tangent is considering creating a set of custom foam inserts for a Pelican case. You’d buy the foam inserts from Tangent, buy the appropriate case from Pelican, and then you’ll be in business. I look forward to this becoming available, since the durability and compact size of these panels makes them an excellent choice for portable use.
Each of the panels connects via USB, and Tangent recommends a specific USB hub for use. Depending on your suite’s configuration, it either conveniently or inconveniently has a built-in extension cable, so you can run the hub quite far away from your CPU, should you so desire.
Each panel connects to the hub via its own Micro-B to Type A cable. This means that a set of four panels will run four USB cables to the hub, which in turn connects to your CPU.
This may sound like a potential rat’s nest of cabling, but I found that by looping each cable underneath each panel’s back riser, they could be brought together into a single snake you can run to the hub.
Speaking of connections, the four panels themselves can be arranged on your desk any way you like, for the ultimate in customizability. However, if you want to line them up in a straight row as a single unit, there’s a clever magnetic pin arrangement you can use to “click” them all together.
The pins come in a separate little bag, and you use two pins to join each pair of panels that you want to sit side by side. If you insert these pins incorrectly, you can always pull them out, but you’ll need pliers to do so, unless you’ve a preternaturally strong grip.
Power is delivered via the USB hub. The power supply that comes with the recommended hub is international; you remove the plastic shipping insert and then use whichever of the accompanying international plugs you need.
Once you’ve gotten everything plugged in and assembled, the Element panels have a pretty unassuming footprint on your desktop.
With the whole set on my administrative computer’s desk, I still have room for my Bamboo graphics tablet, my Magic Trackpad, and my yo-yo.
I didn’t want to just plug it all in, use it for an hour, and then post a snap review on the spot, so I gave myself a month or so to use it in everyday situations, to see how I liked its functionality and feel in the long term. At this point, I think it’s great.
The knobs and contrast rings are nice and smooth, but with a pleasing bit of resistance that encourages precision. The buttons are the same ones that Tangent has been using ever since the $30,000 USD CP-100 panel, which I like. Some have commented on the audible click these buttons make, but it’s never bothered me. In fact, I should point out that my considerably more expensive DaVinci Control Surface uses buttons with a similarly audible click, and I’ve never heard anyone complain about those.
Here’s a fun fact. In speaking with the Tangent guys, I’m told that out of approximately 60,000 buttons they’ve used in panels they’ve made over the last 12 years, the only button failures they’ve experienced have been on four or five of the original CP-100 panels that were shipped 12 years ago, all of which have seen intensive use. With that kind of reliability, I’m very happy with Tangent’s choice of hardware.
Each panel has an OLED display at the top, with multiple lines of text designed to dynamically label the functionality of each row of controls on every panel. I was wondering if I’d find this visually confusing, and the truth is I haven’t. Unlike LCD displays, OLED displays aren’t polarized, and the “lens covers” over each panel’s display have been specifically engineered not to interfere with the polarized glasses used by passive stereoscopic monitors, so that’s an additional bonus if you regularly work on stereoscopic projects.
There are many grading and postproduction applications that have announced Tangent Element support, but I’ve only been using these panels with DaVinci Resolve. In general, I’ve found the Resolve mappings quick to learn, easy to operate, and they hit all the basics. However, I agree with those who’ve voiced a desire for a bit more mapped functionality, as there’s plenty of room for more. On the other hand, room for improvement does not mean the current mappings are bad, and I wholeheartedly recommend these panels for Resolve use.
So that’s my overview. If you’d like to learn more about these panels, I heartily recommend Patrick Inhofer’s video review if you’ve not yet seen it, at his excellent Tao of Color website. He demonstrates the panels in action, which lets you see how the mappings work with Resolve. Also, I want to point out that panel touch and feel is subject to very personal preferences. Before buying any panel, I strongly recommend you find a way to actually try it out in person to make sure that it’s your cup of tea. There are many different color correction control surfaces on the market, and each has its fans and detractors; the only way to really know if a panel will work for you is to try before you buy.
The following is a reply I posted originally on Creative Cow’s DaVinci forum regarding the little line in the Vectorscope that serves either as an in-phase indicator for signal alignment, or an approximate guideline for the angle at which human skin tone may fall in a neutrally graded shot.
It was brought up that, since the original engineering purpose of this indicator was image alignment and had nothing in fact to do with skin tones, and since in-phase and quadrature indicators have nothing to do with high-definition signals, it is perhaps inappropriate to carry this indicator forward into newer implementations of video scopes in the digital world. In particular, Mike Most (for whom I have the highest respect) wrote “an assumption that the I axis is there specifically for flesh tones – in particular Caucasian flesh tones – is incorrect, based on equally incorrect information printed in an Apple document.” You can see the full discussion here. The interesting parts come towards the end.
Mike Most makes some excellent points about the origins of this indicator, and about over-reliance on it being a crutch for inexperienced colorists. However, “incorrect” is a strong word, and since I wrote the Apple documentation under discussion, I thought it might be interesting to shine a light on both some design decisions that were made in FCP and Apple Color, and how terminology gets coined and disseminated.
When FCP 3.0 was in development, the then-new color correction tools and video scopes being added were brand new to the majority of desktop video editors, and the engineering team was working to try and make this unfamiliar paradigm of lift/gamma/gain style controls and the accompanying scopes comprehensible to a new audience. It was a deliberate design decision to include one half of the in-phase axis line all by itself as an indicator of the general hue of skin tone, since to my knowledge the not-so-coincidental dual use of that indicator had been a documented rule of thumb of videoscope use for many years prior. This coincident use was not something the Final Cut Pro engineering team made up.
In an effort to make the purpose of this line more transparent, I (as the writer of the manual) and some others decided to call this the “Flesh tone line,” a decision I now somewhat regret, as it muddies the history of this indicator. And yet, as this was the only one of the in-phase and quadrature lines that the team elected to draw on the FCP vectorscope, I stand by the decision; it made immediate sense to the new user, and the purpose of this indicator as intended had nothing to do with signal alignment, and everything to do with providing a flesh tone signpost to people new to reading scopes.
Regarding the “I-bar” terminology, this was the term I decided upon when writing the Apple Color documentation, as I expected a more experienced audience would appreciate an acknowledgment of the original purpose of this line. Also, the Color team implemented all four in-phase and quadrature indicators, so it seemed appropriate: “I” for in-phase, “bar” because it’s a line.
I did not make this term up; after scouring different terminology from various sources, I found and settled on “I-bar” as the shortest term for purposes of documentation (try typing “i-axis indication line” ten times fast). Unfortunately, I can’t cite my source anymore as this was years ago, but nobody in a position to offer a technical review of that manual, nor any colorist who’s done either technical or casual reviews of books I’ve written since, has ever informed me that “I-bar” is wrong, and I’ve been using it consistently ever since. If, in fact, it turns out that my deadline had made me delusional and “I-bar” was the fevered ravings of a caffeine-addled technical writer, then I still stand by it since it’s less to type and is a fine abbreviation, but I cannot in truth take the credit.
I’ve discussed the history of the I and Q axes with many folks over the years, and while it’s true that the engineering reasons behind these indicators have nothing strictly to do with flesh tones, my personal feeling is that the coincidental utility of the in-phase indicator’s position has, over time, come to outweigh its original purpose, and in fact I would consider the “I-bar” we’re currently referring to as a new thing that co-opted the old, sort of like Easter co-opting an earlier collection of various pagan celebrations.
To clarify, I would never and have never suggested that this line is a strict guideline for human hue. In my “Color Correction Handbook” I wrote and illustrated more pages than my editor may have wished about the subtle variations of human skin tone, color interactions between a subject and the illuminant of a scene, and how the in-phase indicator under discussion is merely a general signpost. Like a speed limit, nobody follows it exactly, but it lets you know roughly what you ought to be doing.
Lastly, I’ve used scopes that have an I-bar, and I have a very expensive scope that doesn’t (in HD mode, as has been pointed out), and while I still think it’s nice to have, its absence has never hampered me from delivering attractive skin tones to my commercial clients. However, given the choice, I’d like to see this indicator as an option for folks who like it; in fact, I’d love to see someone develop the option for multiple programmable vectorscope indicators at user-selectable angles, but then I’m a bit nutty for options. The hue that, in NTSC, is represented by the I-bar can certainly be mathematically translated into the same hue in HD color space, and I see no reason why that wouldn’t be useful or appropriate, if it’s documented clearly that this is no longer in-phase, but in fact an analogous flesh tone guidepost that can be turned on or off.
I would suggest that video scopes at this point are simply software, and it should be no sin for developers to add new features of utility to users and to label them clearly. I’m fond of pointing out that the days of fixed ground glass graticules are over, and it would be nice to see developers find more things to do with scopes for both basic and advanced users than simply replicating functionality from analog, trace-drawn CRT technology.
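For the curious, the geometry behind all of this is easy to check for yourself. Here’s a small sketch (my own illustration, not code from any shipping scope) that uses standard BT.601 color-difference math to find the angle of the I axis on the vectorscope’s Cb/Cr plane, and then checks where a sample skin-tone RGB value (an invented one, purely for demonstration) lands relative to it:

```python
import math

def rgb_to_uv(r, g, b):
    """BT.601 luma plus analog-scaled color-difference components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # scaled B' - Y'
    v = 0.877 * (r - y)   # scaled R' - Y'
    return u, v

def vectorscope_angle(u, v):
    """Angle in degrees, counterclockwise from the +U (Cb) axis."""
    return math.degrees(math.atan2(v, u)) % 360

# The I/Q axes are the U/V axes rotated by 33 degrees, so the I axis
# points along (-sin 33, cos 33) in the (U, V) plane:
i_axis = vectorscope_angle(-math.sin(math.radians(33)),
                           math.cos(math.radians(33)))
print(f"I axis sits at roughly {i_axis:.0f} degrees")  # ~123

# A plausible, made-up skin-tone sample in 8-bit RGB:
u, v = rgb_to_uv(200, 150, 120)
print(f"Sample skin tone falls at {vectorscope_angle(u, v):.0f} degrees")
```

Running this, the sample lands within a couple of degrees of the I axis, which is exactly the not-so-coincidental dual use discussed above; and since the angle is just math, nothing stops a developer from drawing the same guideline on an HD scope, or making it user-adjustable.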
I received some fantastic news this week from my collaborator, illustrator Ryan Beckwith, regarding our long-term project, Starship Detritus. He’s been laboring for months on the art for our pilot episode of this animated science-fiction series. This being a side-gig for both of us, his work as a commercial storyboard artist kept interrupting (damn you for being so successful, Ryan!), but getting the news that he’s finished is the biggest leap forward since I finished writing all 13 episodes of the first season.
Of course, now it means I need to get off my backside and start scheduling some After Effects character animators to put these images into motion. Ryan’s been creating high-resolution, multi-layered Photoshop comps (in conjunction with his assistant Ryan Zalis who aided with flatting and other assorted tasks). Working with our first animator, Steve Rein, the artwork has been constructed to accommodate skeletal and puppet-tool animation in After Effects.
Being an illustrator and not an animator, Ryan has gone in a much different direction with the artwork than in most animations. From the very first color tests he did, I was impressed with the texture and detail he brought to the world I’ve written, and it’ll be exciting to see the scenes come alive.
It’s a bit poignant; I used to work with After Effects every day back in the late ’90s, but having been focused on color correction for so long, at this point I’m so rusty that I’d rather work with faster artists to bring these characters to life. I’ll stick to animating the camera and framing of the final comps for rendering out the finished shots.
Of course, as the writer/director/editor, I’ve a few other things to handle. The very first thing I did, after Ryan and I storyboarded the first episode, and he created the first complete set of roughs, was to record a group of temp actors reading the script, and edit together an animatic in order to get the timing of the show right. This has been our reference going forward, and as soon as I get my hands on the full finished set of artwork, I’m looking forward to updating the animatic with the color art.
Which will take a bit of doing. The original animatic was put together in Final Cut Pro, but given this is such an After Effects-heavy project, I’m planning on moving the entire edit over to Premiere Pro in CS6, to take advantage of its AE integration. I’m hoping this creates some efficiencies. Besides, it’s an excuse to learn a new piece of software by doing something real, which I find is always the best way to learn.
Additionally, since this process has ended up taking far, far longer than it was supposed to (par for the course), I’ve begun novelizing this first season. As fun as the 13-episode, ten-minutes-per-episode structure I used for the scripts has been, there’s additional story that my chosen format simply won’t accommodate. Prose has been a perfect outlet for the added bits, and the idea of telling this story across different platforms is tremendously appealing to me. At this point I’m 11,000 words into the novelization, and having tremendous fun with it. Alas, now my day job is interrupting, since working on the new version of the DaVinci Resolve 9 manual is proving to make scheduling creative time challenging.
Moving forward, there are plans within plans, and I’ll be sure to share more when there’s more to share. It’s easy to get caught up in the day-to-day grading and tech-writing work that I do, but creative projects like this are what brought me into post-production in the first place, and it’s gratifying to be making progress on my biggest creative project to date.
A few months ago I graded three web spots for the Mayo Clinic at Minneapolis’ Splice Here, for whom I’ve been doing some freelance grading. They posted the full spot on a page highlighting some of my work.
It’s a fun high-style grade that splits the top and bottom halves of image tonality for separate rebalancing, employs selective desaturation using the hue curves, adds some subtle glow via luma keying, and includes some individual work on skin tones to keep them natural amidst all the stylizations. That’s one of the great things about working on spots, you get to dig so much deeper into the grade than with most other types of shows I work on.
I’ve been mulling over the topic of piracy and media consumption for several years. As a writer in the middle of developing a project with a web component, it’s of great interest to me whether or not it’s possible to make money creating a video series of ambition primarily for a digital download audience.
Lately, there’s been a lot of back and forth about the rights of the individual versus the rights of copyright holders, consumer convenience, dumb-ass big media companies, etcetera. There’s a lot of high-minded rhetoric on either side flying around, but in all the debate, I can’t help but feel that the concerns of individual copyright holders, be they artists, writers, filmmakers, or programmers, are being forgotten in all the angst over “big media.”
However, before I continue, I want to make four quick points so you know where I’m coming from.
(1) I’m not going to discuss large corporate media, since that of necessity addresses a whole set of issues that I think dilutes the fundamental issue of creator compensation. Instead, I want to focus on small-time, creator-distributed media, which I would like to think is the future of media. It’s always been my dream that we creators have an environment in which we can sell content directly to the audience. And technology could make that more feasible than ever.
(2) I believe we can all agree that DRM is a giant pain in the ass, and it’s not a credible answer. Also, I’m in favor of liberal fair-use policies. Individuals shouldn’t have to live in fear when creating mash-ups, remixes, and the like. Clear, universal policies with no repercussions for non-commercial activities should be put into place.
(3) However, I firmly support copyright as an artist’s most effective, international, treaty-ratified protection against big media poaching an independent creator’s intellectual property. On the other hand, I think copyright needs to expire with no exceptions, and not be constantly re-extended for well-heeled corporations. If patents expire like clockwork for major pharmaceutical companies’ most expensive medications, then the Disneys of the world can let their copyrights expire, as well.
(4) Making it difficult for people to buy one’s content easily and affordably is probably stupid.
Okay, let’s talk about piracy.
As an author, I have no interest in pursuing criminal charges against folks that consume media I’ve created without paying. Personally, I make a distinction between simply copying a file, and enjoying the media therein. If everyone in the world copied the file of one of my books without reading it, I honestly wouldn’t care. Where I draw the line is when folks watch the movie, read the book, or listen to the song, and then don’t pay. That, I consider to be thoughtless behavior.
My main point is simple math. If a creator’s job is to create, then someone has to pay for that creator to keep doing what they’re doing. If the creator sucks and nobody much buys the thing, then it’s artistic Darwinism and time to go get a day job. However, if the creator is terrific, and lots of folks listen to/watch/read/play the thing without paying, then that deliberately avoids rewarding artists for doing good work, and is a tragedy regardless of your thoughts about free culture.
Big media wants to protect the profits of copyright holders by enforcing draconian laws and technological boondoggles, none of which I support because these schemes go overboard and infringe on genuine civil liberties, and from a technological perspective promise to cause far more problems than they would solve.
Rather, I think the fundamental issue at play is people’s attitudes about media consumption, and about paying the artists’ price for what they read/watch/hear/experience.
Making money off of digital media is a numbers game. Folks expect low prices, so the aggregate is important. The more people decide to download media file X and then pay the creator for it, the more money the creator has. It’s that simple.
If you’re not planning on paying for a piece of media you’ve listened to/watched/played/read, then yes, you can provide free publicity for the creator, spreading the word on Twitter and your blog and Facebook and by texting all your friends. And if you’re dead broke, that’s cool. It’s genuinely helpful. But if you’re not broke, at the end of the day you could have done that and given them five dollars. Or two dollars. Or 99¢.
You can argue that copying without payment is not theft, that nothing’s been taken, that the file being duplicated makes more! And I’ll agree with you. Copying a file and then using it without payment is, to my mind, no more an act of larceny than refusing to toss a buck in the cup of a street musician after standing there listening to their whole song. But it is miserly to do so if you have the money to spend. And rude.
At the end of the day, digital media distribution makes filmmakers, musicians, writers, programmers, and other creators of mass distributable content the equivalent of buskers standing by the side of the street. You can enjoy what we make for free, and it’s up to you whether or not you pay us. And whether you as a creator love this new reality or hate it, that’s the truth.
However, it’s disingenuous for tech pundits to stand by the side of the road and say that figuring out how to make a profit is the artist’s problem, or to suggest that in the future perhaps it’s simply not possible for creators to make a living doing nothing but creating.
To me, the argument is not whether copying media freely is right or wrong–it’s an issue of manners. Of respect for the creator’s time, and the resources that were put into the making of that thing you’ve decided to copy to your digital device in order to upload into your brain.
If you want a more self-serving reason to fork out cash for digital media you enjoy, consider whether or not you want that creator to keep creating. For anyone planning a media project of any sort of ambition, the math regarding whether or not it can be done is simple.
(How much folks will pay the artist) - (How much it costs to make) = (Whether or not the artist goes broke)
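Spelled out with numbers (entirely made up by me, purely for illustration), that back-of-the-envelope math looks like this:

```python
# Hypothetical figures for a small, self-distributed media project.
paying_fans = 5000       # folks who download the thing and actually pay
avg_price = 2.99         # what each one pays, in dollars
production_cost = 12000  # cast, crew, gear, permits, the artist's time

revenue = paying_fans * avg_price
surplus = revenue - production_cost

artist_goes_broke = surplus < 0
print(f"Revenue: ${revenue:,.2f}, surplus: ${surplus:,.2f}")
print("Artist goes broke" if artist_goes_broke
      else "Artist gets to keep creating")
```

Shift any one of those numbers, the count of people who pay most of all, and the answer flips; which is exactly why the aggregate matters.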
Keep in mind that not every type of media you might want to download is created solely as a function of one person’s time. In the case of a film, a whole lot of resources can go into even the humblest five-minute project. Paying other artists, actors, buying materials for sets and props, paying for insurance, renting equipment, buying bags of clothes-pins, municipal shooting permits, the list can be quite long.
And when it comes to costs, time must be assigned a value. No matter what kind of media we’re talking about, the artist’s time is worth money, and it’s a mistake to think otherwise.
I also believe that artists do their best work when they have the ability to focus on what it is they’re doing, as opposed to working on their thing at 11pm after spending all day waiting tables, pumping gas, or writing backend database code. Creation is a job, too, one that benefits from a fresh mind and well-rested energy.
So, if you want your favorite artist to be able to focus on what it is they’re creating for you to consume, it would behoove you to toss five or ten bucks into their project. If you’ve got it. And if you don’t have it, keep them in mind when you do.
It’s the nice thing to do.
And Mark Spencer and Steve Martin do their level best to keep me going in this hour and a half interview on MacBreak Live, wherein I discuss how I got started with color correction in the first place, why I like using Resolve, control surfaces, monitors, grading for the web, how I organize grades, how to move projects from Final Cut Pro X to Resolve, why experience matters, and what I think distinguishes colorists who take the craft seriously. It was a fun chat, I hope you like it.
For a variety of reasons, I couldn’t resist taking the opportunity to give Filmlight’s new Baselight plugin for Final Cut Pro 7 a whirl. Baselight has long been one of the industry’s premier grading applications, used on projects both large and small, and among professional colorists I’ve always heard it spoken of glowingly.
When announced at last year’s NAB conference, everyone’s amazement that a high-end company like Filmlight would bring their technology to the Mac as, of all things, a plug-in was overshadowed by Apple’s announcement of Final Cut Pro X, which rendered all FCP7 news somewhat obsolete.
However, as there are many, many shops still using Final Cut Studio 3 regularly, and there are likely to be many who use it into the coming year, I can understand Filmlight’s interest in finishing the project and bringing their plug-in to market, especially given the unique workflow it enables: grading from within Final Cut Pro in such a way that the corrections can be exported directly, with perfect fidelity, via XML to a full-blown Baselight workstation for a dedicated grading session.
What really drew me to work with the plug-in, however, was the desire to get my hands on Baselight’s well-regarded user interface. Having been exposed to Baselight while writing my Color Correction Handbook, I learned to appreciate the numerous tools and modes it provides, as well as some of their more unique takes on common color correction tools.
What is perhaps most impressive is that FilmLight has truly managed to squeeze nearly the entire Baselight UI into this plug-in, which makes this a great way to see what the Baselight interface offers.
So let’s have a look.
After running the installer and opening Final Cut Pro, the FilmLight plugin appears, innocuously, in your Video Filters bin in the Effects tab.
When you drop this plugin onto a clip, the Baselight loading screen appears.
This tells you right away that the Baselight plug-in is no small affair. It’s effectively an application within an application, similar to the approach of other color correction plug-in user interfaces such as Colorista II and Magic Bullet Looks.
For the best previewing performance while using Baselight, it’s recommended that you use Unlimited RT mode (resulting in orange render bars). Otherwise, every clip you add this plugin to appears with the red render bars that force a complete render before previewing.
The plugin’s performance was good with the primary corrections I was making; I was able to stack several layers of primary operations, one on top of another, and maintain good performance. However, after adding a few secondaries, I needed to set both Playback Video Quality and Playback Frame Rate to Dynamic in order to maintain performance.
Incidentally, the accompanying documentation recommends legalizing out-of-bounds (over 100%) signals with a Color Corrector 3-Way filter prior to the FilmLight plugin, to make sure no part of the signal gets clipped when being fed to Baselight.
When you open your clip’s Filters tab, you’ll see the Baselight plugin collapsed vertically, with instructions to expand the Viewer window in order to see the UI within the Filters tab, or double-click the Baselight box to open a dedicated UI in its own window.
If you expand the size of the Viewer and the width of the Parameters column, most of the Baselight controls appear, which is an amazing sight to see inside of Final Cut Pro.
While the Baselight controls are visible in the Viewer, you can view your changes in the Canvas and via video-out on your video interface. However, if you instead double-click to open Baselight into its own window (or click the “pop out” button at the upper right corner), you get all of these controls, plus a viewer that’s useful for other Baselight functions (like drawing curves), as well as LUT and Viewer controls, and Baselight’s own take on the Histogram overlay scope.
This self-contained window can be enlarged to be full-screen, and the divider separating the controls from the image preview and histogram can be resized, giving either half of the interface priority.
Now is probably a good time to point out that the Baselight plug-in is compatible with the Avid Artist Color control panel, allowing you to control much of the UI using that panel’s trackballs, rings, knobs, and buttons.
The general idea behind Baselight is that you can build up a grade using layers. Each layer can use controls from a variety of toolsets that are available, either individually or in combination, to make adjustments of various kinds. These toolsets are the Film Grade, the Video Grade, the Curve Grade, the Hue Shift, and the Six Vector tools.
Each of these tools can be qualified using either keying or shapes, and each tool has parameters for making adjustments both inside and outside of a secondary qualification, simply by clicking the tools button in the appropriate column.
These different toolsets are a unique way in which Baselight organizes what you can do. In particular, the separated “Film Grade” and “Video Grade” tools are an interesting way of exposing two very different kinds of functionality to colorists of different backgrounds.
Examining the Film Grade first, we find two tabs, each exposing three main controls.
The left-most tab, ExpContSat, contains an exposure section which provides you with a global exposure slider (raising or lowering the entire signal equally), as well as a global color control that allows for offset adjustments of color (letting you re-balance color by raising or lowering each color channel in its entirety).
The Contrast sliders let you expand or contract contrast with a single adjustment, about a pivot point that’s defined via the middle dotted cyan lines intersecting the diagonal graph found underneath. The R G and B contrast sliders are ganged by default, but an individual slider can be unganged by turning off its button, directly underneath.
Finally, the Saturation sliders provide global control over saturation, but interestingly you can selectively disable ganging on individual color channels, with the result being a sort of color rebalance that works quite differently.
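As an aside, the pivot-style contrast behavior described above is easy to illustrate. This is a minimal sketch of the general technique only, not FilmLight’s actual math: values are scaled away from (or toward) a chosen pivot, which is the one value the adjustment leaves unchanged.

```python
# Generic pivot-based contrast, for illustration only (not a claim
# about Baselight's internal implementation).

def pivot_contrast(value, contrast, pivot=0.5):
    """Scale a normalized (0.0-1.0) channel value about a pivot point.

    contrast > 1.0 expands contrast; contrast < 1.0 flattens it.
    The pivot value itself is left unchanged.
    """
    return pivot + (value - pivot) * contrast

# e.g. pivot_contrast(0.8, 1.2) brightens 0.8 to roughly 0.86,
# while pivot_contrast(0.5, 2.0) leaves the pivot value at 0.5.
```

With the R, G, and B sliders ganged, the same factor is applied to all three channels; unganging a slider simply lets you apply a different factor to that one channel.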
Exposing the ShadsMidsHighs tab reveals another set of film-oriented controls.
Although this may appear to be a standard three-way color balancing system, it’s not. The names of the three color balance controls may be deceiving if you’re used to other grading applications that use the labels of Shadows, Midtones, and Highlights incorrectly, because these color balance controls influence a completely different set of tonal ranges than do the Lift, Gamma, and Gain controls found in the Video Grade toolset. The Shadows/Midtones/Highlights ranges are more restrictive, allowing far more specificity regarding which parts of the picture are excluded from each color balance control’s effect (I’ll be doing a separate blog entry on film-style grading tools later). Furthermore, in this mode the exposure sliders provide curved control of the knee and toe of the signal.
Colorists coming from more video-oriented toolsets may find these tools strange, but these controls were designed specifically for film colorists who come from a completely different tradition, and the truth is once you get used to this style of working, you’ll discover a range of situations for which they provide fast solutions.
However, the beauty of Baselight is that they don’t just give you some of the tools. They give you all of the tools, hence the Video Grade toolset that’s next in the list.
This set of controls provides the familiar Lift/Gamma/Gain toolset that many of you may be more familiar with, with Shadow and Highlight contrast controls that allow for controlled compression and expansion of the Luma, and color balance controls with broadly overlapping tonal regions of influence, allowing extremely soft and subtle interactions between adjustments made to the darkest and lightest regions of the image.
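For those curious about what’s happening under the hood, here’s the textbook lift/gamma/gain model sketched in Python. This is the generic formulation, assumed for illustration, and not necessarily Baselight’s exact implementation.

```python
# The classic lift/gamma/gain model (textbook form, assumed for
# illustration): gain scales the signal, lift offsets it, and gamma
# applies a power curve whose effect is weighted toward the midtones.

def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """Apply lift/gamma/gain to a normalized (0.0-1.0) value."""
    v = value * gain + lift            # gain scales, lift offsets
    v = min(max(v, 0.0), 1.0)          # clamp before the power curve
    return v ** (1.0 / gamma)          # gamma > 1.0 brightens midtones

# Note that gamma leaves black (0.0) and white (1.0) pinned in place,
# which is why it reads as a midtone adjustment.
```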
Incidentally, if you set the FilmLight pop-up menu to Default, you can then open the Region Graph tab, within which you can redefine the tonal ranges of influence exercised by the lift/gamma/gain controls.
Once you’ve created new curves, you can save the result as a graph that you can recall later.
You may also notice in the image above that an RGB Correction graph shows you the effect your adjustments are having on each of the three color channels of the signal. What you can’t see is the Region Graph tab, which exposes controls for customizing the default tonal overlap of the color balance controls.
Like the Film Grade toolset, the Video Grade has two tabs: RGB and Y’CbCr.
These tabs put the lift/gamma/gain controls into either color space’s mode of operation. In RGB mode, contrast expansion increases saturation; in Y’CbCr mode, contrast expansion decreases it. As I mentioned, Baselight gives you all the tools, with every variation you might like.
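The difference between the two modes can be illustrated numerically. The sketch below assumes Rec. 709 luma weights and a pivot-style contrast adjustment purely for illustration; the point is that scaling all three channels widens the spread between them (raising saturation), while scaling luma alone shifts the channels equally, so chroma stays fixed as the pixel brightens (lowering relative saturation).

```python
# Illustration only: why RGB-mode contrast raises saturation while
# luma-only contrast lowers it. Rec. 709 luma weights assumed.

def rgb_contrast(rgb, contrast, pivot=0.5):
    # Each channel is scaled about the pivot, so the spread between
    # channels (chroma) scales up along with the contrast.
    return tuple(pivot + (c - pivot) * contrast for c in rgb)

def luma_contrast(rgb, contrast, pivot=0.5):
    # Only luma is stretched; the same shift is added to every
    # channel, so chroma (max - min) is unchanged.
    y = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    shift = (pivot + (y - pivot) * contrast) - y
    return tuple(c + shift for c in rgb)

def saturation(rgb):
    return (max(rgb) - min(rgb)) / max(rgb)

pixel = (0.8, 0.6, 0.5)
print(saturation(rgb_contrast(pixel, 1.5)))   # rises above the original
print(saturation(luma_contrast(pixel, 1.5)))  # falls below the original
```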
Baselight also includes a powerful Curve Grade toolset.
Two tabs’ worth of curves are available. HueSaturationLightness provides a complete set of hue curves, while RedGreenBlue provides dedicated luma and color channel curves as a separate set. One of the really neat things about the Baselight curve UI is the automatic “zoomed” view provided to the right of each curve. You can manipulate control points either in the zoomed-out view at left, or you can manipulate the selected control point more finely using the zoomed-in view at right.
This provides a terrific degree of control for those super-fine detail adjustments that sometimes come up when adjusting skin tone or shadow detail. One thing that takes a bit of getting used to, by the way, is the default behavior of a locked X position for control points. This prevents you from shifting the hue or tonal area affected by a control point while you make adjustments to its intensity, but can be vexing until you discover how to disable the “Lock X Positions by Default” option in the Customize pop-up menu.
So, film controls, video controls, and curve controls, all within a plugin. But wait, there’s more… In a nod to tools available in other software and hardware based color correctors, Baselight provides two other toolsets that, while specific, allow quick adjustments of various kinds.
The first of these, the Hue Shift toolset, provides a slider-driven interface for making changes to hue, saturation, or lightness, with each individual slider governing a specific slice of hue.
While at first this might seem a bit primitive, like a “graphic eq” from 1987, the truth is this can be a really fast way to make a specific adjustment, sort of like a slider-driven hue curve. I imagine this is the type of control that’s much nicer to use from a control surface, where a set of knobs provide logical and quick access to these parameters, but the sliders can be handy, too.
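For illustration, here’s a toy version of the idea using Python’s standard colorsys module. The six hard-edged hue slices and the hue_band_saturate name are my own simplifications for this sketch; a real set of hue sliders would presumably blend smoothly between neighboring bands.

```python
# A toy "graphic EQ for hue" sketch (my simplification, not
# FilmLight's algorithm): cut the hue wheel into six slices and give
# each slice its own saturation multiplier.
import colorsys

def hue_band_saturate(rgb, band_gains):
    """band_gains: six multipliers for the R, Y, G, C, B, M hue slices."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    band = int(h * 6) % 6              # which sixth of the hue wheel
    s = min(1.0, s * band_gains[band])
    return colorsys.hls_to_rgb(h, l, s)

# Halve saturation in the red slice only, leaving the other five alone:
muted_red = hue_band_saturate((1.0, 0.2, 0.2), [0.5, 1, 1, 1, 1, 1])
```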
Next up, the Six Vector controls expose a series of tabs that default to pie slices of the color wheel. In essence, this is a qualifier with default settings that, while completely customizable, are designed to quickly target and adjust the primary and secondary ranges of color in the additive RGB model: red, green, blue, cyan, magenta, and yellow.
This is a pretty standard HSL qualifier, but with a nice UI and a fast set of limited controls for making adjustments to the hue, saturation, and lightness of the isolated region, as opposed to creating a key with which to limit adjustments made using other toolsets. Again, this is a dedicated tool designed to do specific things very quickly. But don’t worry, there are other tools for doing proper secondary work.
So those are the main tools for adjustment within the Baselight interface. As for secondaries, those are found in the Matte menu, which exposes the many methods available for creating a matte with which to limit one’s adjustments: drawing a shape, or keying via DKey, MatteRGB, or HueAngle.
MatteRGB and HueAngle are fairly standard methods of RGB and HSL qualification, so I’ll focus on the Shape and DKey controls, which are unique.
Baselight has a fantastic shapes interface, with two pop-up menus providing different shape drawing options. The first presents some standard freehand/rectangle/ellipse choices, along with the terrific addition of “edge.”
The Edge option exposes a single-line UI for creating gradients, as opposed to customizing a rectangle to do the same thing. This alone saves many mouse clicks.
It’s worth mentioning that the stand-alone window UI is the only place you can adjust shapes and draw freehand curves. Curves have a typical bezier handle interface, but it’s notable that there’s now a shape drawing interface available right within Final Cut Pro 7.
However, as nice as all of this is, what really got my attention was the Quickshape menu, which provides an array of frequently used shapes that you can invoke for isolating specific regions of the image without a lot of customization. Very, very cool.
Moving on, the DKey interface is a three-dimensional keyer, designed for “carving out” a region of RGB space in order to create a custom matte for secondary work.
Dragging a bounding box over the thumbnail to sample produces a targeted “blob” within the 3D Color Space View, and various sliders let you expand and contract the offset, radius, and softness of this blob in order to isolate the most useful range of color for your targeted operation. You can turn on one of three kinds of overlays to see the matte you’re creating as you work (the traditional black and white matte is shown above).
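The underlying technique of a 3D keyer like this can be sketched quite simply. The following assumes a Euclidean distance measure in RGB space and a linear falloff, purely to show the general idea; it is not a claim about FilmLight’s actual algorithm.

```python
# General sketch of a 3D keyer matte (illustration only): matte
# strength depends on a pixel's distance from the sampled "blob"
# center in RGB space, solid inside the radius and ramping to zero
# across a softness band.
import math

def soft_key(pixel, center, radius, softness):
    """Return a matte value in 0.0-1.0 for one RGB pixel."""
    d = math.dist(pixel, center)       # 3D distance in RGB space
    if d <= radius:
        return 1.0                     # fully inside the key
    if d >= radius + softness:
        return 0.0                     # fully outside
    return 1.0 - (d - radius) / softness  # linear falloff in between
```

Expanding the radius grows the solid core of the key, while the softness value controls how gradually the matte feathers off around it.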
Once you’ve created a matte using any of these tools, making an adjustment is as easy as clicking the “color” Mona Lisa tab to switch back to your grading tools, and making any adjustment you like using any of the available tools, either singly or in combination.
Since we’re talking about secondary corrections, these are added via additional layers, added using the layers pop-up menu, within which you can add, remove, and reorganize layers in order to control the sequence of operations.
Opening this menu and clicking the green plus icon, I got up to 20 layers before I gave up. It seems clear that there’s no artificial limit on how many layers you can stack up.
The other nice thing about having so many layers available is that you can divide multiple primary adjustments among multiple layers if you so choose. Baselight’s layers mechanism is a nicely flexible tool for managing your corrections.
Incidentally, when creating a matte for a secondary operation, you can click the Reference button while in Matte mode and choose which state of the image, or which layer, you want to use as the source for keying. A very nice bit of flexibility that can be invoked in a hurry.
As this “quick” look is running a bit long, I’m going to jump to a couple of other important features that are worth mentioning. One is support for LUTs.
The Baselight plug-in comes with a few LUTs (look up tables) built-in, or you have the option to import one of your own. The pop-up menu for this is available in the stand-alone window, above the histogram.
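To illustrate the mechanics, here’s a minimal 1D LUT lookup with linear interpolation. Grading LUTs are usually 3D (and Baselight’s pipeline is certainly more sophisticated), but the core table-lookup-and-interpolate idea is the same; the four-entry table below is invented for illustration.

```python
# Minimal 1D LUT application (illustration only; real grading LUTs
# are typically 3D): the input value indexes into a table of output
# values, interpolating between the two nearest entries.

def apply_1d_lut(value, lut):
    """Map a normalized 0.0-1.0 value through a 1D LUT (list of floats)."""
    x = value * (len(lut) - 1)         # scale into table coordinates
    i = int(x)
    if i >= len(lut) - 1:              # clamp at the top entry
        return lut[-1]
    frac = x - i
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

# A hypothetical 4-entry "lift the shadows" LUT:
lut = [0.1, 0.4, 0.7, 1.0]
```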
There’s also support for keyframing.
Each parameter in Baselight has its own keyframe button (shown turned on, in blue), from which keyframing can be enabled or disabled for the current set of parameters. All keyframes, when created, appear on a single keyframe track running along the bottom of the Baselight UI, and individually keyframed parameters can be isolated using the Show All pop-up menu. To navigate while keyframing, you move the playhead in either the Timeline or Canvas; while keyframing is enabled, new keyframes are automatically created whenever a keyframe-enabled parameter is adjusted.
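Conceptually, keyframing boils down to interpolating a parameter between the keyframes that bracket the playhead. The sketch below assumes simple linear interpolation with held values outside the keyframed range; real applications (Baselight included) typically offer smoothed interpolation as well.

```python
# Generic keyframe interpolation (linear, for illustration only).

def value_at(t, keyframes):
    """keyframes: list of (time, value) pairs, sorted by time."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]         # hold before the first keyframe
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:                    # found the bracketing pair
            frac = (t - t0) / (t1 - t0)
            return v0 + (v1 - v0) * frac
    return keyframes[-1][1]            # hold after the last keyframe

# A saturation parameter ramping from 0.0 at frame 0 to 1.0 at frame 100
# would read 0.5 at frame 50.
```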
So, that was my quick tour of the Baselight plug-in for Final Cut Pro. I’ve only just scratched the surface; there are many more features for refining one’s adjustments and customizing the UI. On the plus side, it provides terrific tools for grading. On the other hand, being a plug-in, it relies on Final Cut Pro 7 for all grade management and image comparison functions, and while that’s not the worst thing in the world, the experience still doesn’t come close to using a dedicated grading application (such as the full Baselight). Finally, the performance is fine so long as you’re willing to work at the proxy resolutions that Unlimited RT with Dynamic Video Quality and Frame Rates enables, but if you’re looking for an environment in which to create complex grades while monitoring at full quality, this isn’t necessarily going to be your best choice.
Bottom line, if you’re interested in learning more about Baselight, or you’re a post facility with a Baselight suite or two already, this is a great plugin to have. If you’re looking for a plugin-based environment for grading work inside of Final Cut Pro 7 because you don’t want to have to learn a whole other application for grading, download the trial version and give it a whirl to see how well it integrates into your FCP workflow.
For some reason, everything always happens while I’m traveling.
After a long delay due to many unexpected happenings last fall, I’m happy to announce that my first video training title for DaVinci Resolve is now available from Ripple Training. It’s a seven hour overview covering every aspect of Resolve functionality, from project import, through the myriad grading tools Resolve provides, and finishing with Resolve’s flexible methods for outputting your project.
While I started out intending to do a “quick rundown” of how to use Resolve, the depth and breadth of the application forced me to expand what I was doing. After all, I didn’t want anyone to miss out on any of Resolve’s many features for making a colorist’s life easier.
As a result, the title consists of 53 individual movies, each covering short, specific topics. If you’ve already been using Resolve for a while, this makes it easy to focus on just those features that interest you. Ripple did a great job editing, indexing, and finessing the media to make the workings of the interface clear to see and easy to follow.
Lastly, I designed the lessons so that you can download the free (as in beer) DaVinci Resolve Lite version of the application from Blackmagic Design’s support page, then download the media I use from Ripple (instructions are included), and follow along for no extra money. And the free Resolve Lite now runs on either OS X or Windows, so you can follow along no matter what your platform.
So please, check it out. It’s like hanging out with me all day for 79 US bucks. That’s less than three martinis in Oslo, and there’s no hangover.
There are sample movies, a topic outline, and more at the Ripple Training web site.
I’m on the road at the moment, and up to my eyeballs in work and activities. However, as I’ve been catching glimpses of the news and chatting with friends and colleagues, it’s been impossible not to feel barraged by an inexplicable wave of state and federal legislation around the country, both attempted and successful, seeking to regulate various activities of women involving reproduction, health access, and child rearing. Invariably these regulatory measures are either restrictive or punitive.
I’m not going to go into more detail than that, because I frankly don’t think it matters. Whether you’re talking birth control, associated health care services, abortion services, or single motherhood, the core concept of the various regulatory attempts I’m referring to is to ultimately restrict the activities and options available to women, either directly or indirectly, financially, legally, and logistically.
This is inexcusable. If we, as a people, truly pride ourselves on freedom and self-determination, then intrusive regulation of issues affecting the private lives of women is insulting and degrading. We owe the women in our lives, across our nation, and throughout the world respect, and the acknowledgement that they’re capable of rational decision-making without the need for male mediation, be it medical, political, or bureaucratic, regarding issues of reproduction and health.
How we treat other people says a lot about who we are. Similarly, the quality of a society can be measured by the respect, dignity, and equality of thought and action accorded the women of that society. Without that, we are all of us diminished, left to play foolish roles written for us by frightened authors.