Simulating Physically Accurate Depth of Field in Nuke

Depth of field is a hugely important part of simulating photoreal imagery. Focal length, bokeh, and film back size all greatly affect the look of a photograph.

I recently uploaded a couple of Nuke tools that I created in my spare time to visualize and simulate physically accurate depth of field.

OpticalZDefocus: A ZDefocus replacement which creates physically accurate defocus using the depth of field equation.

DofCalc: A tool to visualize the range of focus, given the specified lens geometry, focus distance, and depth unit.
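For reference, the textbook depth of field math these tools are built around can be sketched in a few lines of Python. This is a sketch of the standard formulas, not the tools' actual source; the function names are mine, and all distances are in millimeters.

```python
def coc_diameter(focal, fstop, focus_dist, obj_dist):
    """Circle of confusion diameter (mm) for an object at obj_dist
    when the lens is focused at focus_dist."""
    aperture = focal / fstop  # physical aperture diameter in mm
    return (aperture * focal * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal)))

def hyperfocal(focal, fstop, coc_limit):
    """Hyperfocal distance (mm): H = f^2 / (N * c) + f.
    Focusing here makes everything to infinity acceptably sharp,
    for a given acceptable CoC diameter (mm)."""
    return focal * focal / (fstop * coc_limit) + focal

def focus_range(focal, fstop, focus_dist, coc_limit):
    """Near and far limits (mm) of acceptable sharpness."""
    h = hyperfocal(focal, fstop, coc_limit)
    near = h * focus_dist / (h + (focus_dist - focal))
    far = (h * focus_dist / (h - (focus_dist - focal))
           if h > focus_dist - focal else float('inf'))
    return near, far
```

For example, a 50mm lens at f/2.8 focused at 5m, with a 0.03mm CoC limit (a common full-frame criterion), gives roughly 4.29m to 6.0m of acceptable focus.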

These tools are also on Nukepedia – OpticalZDefocus and DofCalc.

I also uploaded a tutorial on depth of field, if anyone is interested in hearing me ramble on about nerd stuff!

Simulating Physically Accurate Depth of Field in Nuke from Jed Smith on Vimeo.


  • Terminology – What is depth of field, circles of confusion, hyperfocal distance, etc
  • What variables affect depth of field and how: lens focal length, lens f-stop, camera filmback size, and focus distance
  • How to simulate depth of field in Nuke
    • ZDefocus
    • Explaining z-depth channels and how they work
    • Demonstrating the OpticalZDefocus Tool
    • Visualizing depth of field behavior with changing lens parameters with the DofCalc gizmo
    • Simulating lens bokeh more accurately by sampling a bokeh from a plate and using that as a filter input to our defocus tool

PostageStamp Tools

I sometimes like to use PostageStamp nodes in my Nuke scripts when I need to connect one place in my script to another place that is far away. PostageStamp nodes with hidden inputs act as a visual marker for what is being “imported” from another part of the script.

There are some frustrations with using PostageStamp nodes though.

First, when you Ctrl/Cmd-click a node to select all upstream nodes and move them, the selection travels through the hidden inputs of PostageStamps, so you can accidentally move nodes that are in a totally different place in your script.

Second, when you want to cut or copy and paste a section of your node graph, all PostageStamps with hidden inputs will be disconnected. This tool posted on Nukepedia got me thinking about how to fix this particular frustration, and I finally got around to writing something that works reliably.

This tool monkeypatches the cut/copy/paste behavior in Nuke to handle PostageStamps as a special exception. It stores what node each one is connected to in a knob on the node itself, so that it can reconnect to the right place when you paste it.
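The core of the knob-storage idea can be sketched like this. This is a simplified illustration, not the actual tool's code; the knob name `ps_input_name` and both function names are hypothetical, and the `nuke` import is guarded so the sketch can be read outside of Nuke.

```python
# Sketch: remember each PostageStamp's input before copy, restore after paste.
try:
    import nuke
except ImportError:
    nuke = None  # allows reading/importing this sketch outside of Nuke

LINK_KNOB = 'ps_input_name'  # hypothetical knob name

def store_input_links():
    """Before cut/copy: write each selected PostageStamp's input node
    name into a knob on the PostageStamp itself."""
    for node in nuke.selectedNodes('PostageStamp'):
        inp = node.input(0)
        if inp is None:
            continue
        if LINK_KNOB not in node.knobs():
            node.addKnob(nuke.String_Knob(LINK_KNOB, 'input name'))
        node[LINK_KNOB].setValue(inp.name())

def restore_input_links():
    """After paste: reconnect pasted PostageStamps using the stored name."""
    for node in nuke.selectedNodes('PostageStamp'):
        if LINK_KNOB in node.knobs():
            target = nuke.toNode(node[LINK_KNOB].value())
            if target is not None:
                node.setInput(0, target)
```

The monkeypatched cut/copy would call the first function before invoking Nuke's normal behavior, and paste would call the second afterwards.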

It also adds a shortcut to create a new PostageStamp node, and adds some helpful things like a button to connect it to the selected node (useful if you need to connect it to a node that’s really far away, and don’t want to drag a pipe for hours, or select one and then select the other and hit Y).

Here is the code.

And here’s how to add it to your menu.py:

import postageStampTools
nuke.toolbar('Nodes').addCommand('Other/PostageStamp', 'postageStampTools.create()', 'alt+shift+p')
nuke.menu("Nuke").addCommand('Edit/Cut', lambda: postageStampTools.cut(), 'ctrl+x')
nuke.menu("Nuke").addCommand('Edit/Copy', lambda: postageStampTools.copy(), 'ctrl+c')
nuke.menu("Nuke").addCommand('Edit/Paste', lambda: postageStampTools.paste(), 'ctrl+v')


Tracker Link Tools

Nuke Tracker Link Tools – Code

[Image: Tracker Link Tools interface]
It is often the case that you need to link things to a Tracker node. It could be a Roto, RotoPaint, or SplineWarp layer with linked transforms, or a Tracker node that links its transforms to a parent tracker. Tracker Link Tools is a script to do these things.

The script has two functions. The first is to create a linked Roto node. If you have a tracker node selected, and have installed it as described, press Alt-O, and a Roto node will be created with a layer linked to the selected Tracker node. Sometimes you might want to create a linked layer in an existing Roto, RotoPaint or SplineWarp node. No problem, just select as many target nodes as you want, along with your Tracker node, and press Alt-O to run the script. All selected target nodes will have a linked layer added to them.

The other function is to create a linked Transform node. Sometimes you have a Tracker or a Transform node, and you need to apply the same transformation in many places in your Nuke script. You could create many copies of your original Tracker or Transform node, or you could use this script to create a TransformLink node. Select as many parent Tracker or Transform nodes as you want, and press Alt-L. A linked Transform node will be created for each.

The TransformLink node has some extra features compared to a regular Transform node. By default, when it is created, it will be linked using a relative transform. This means that on the identity frame specified, the transformation will be zeroed out. This identity frame is separate from the parent Tracker node. You can switch the node from Matchmove to Stabilize functionality by checking the ‘invert’ knob.

Sometimes, especially if you are linking to a parent node that is a Transform, you will just want to inherit the exact transformation of the parent node. If this is the case, you can click the Delete Rel button, and it will remove the relative transformation. Once the TransformLink node is created, you can also use the Set Target button to link it to a different Tracker or Transform node. You can also bake the expressions on the transform knobs with the Bake Expressions button.
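Under the hood, a TransformLink is just expression-linked knobs. Here is a minimal sketch of the idea, including the relative-transform trick of subtracting the parent's value at the identity frame; this is an illustration with a hypothetical function name, not the actual script, and it assumes Nuke's Python API (the `nuke` import is guarded so the sketch can be read elsewhere).

```python
try:
    import nuke
except ImportError:
    nuke = None  # lets this sketch be read outside of Nuke

def link_transform(parent, identity_frame=None):
    """Create a Transform whose translate/rotate follow `parent`.
    If identity_frame is given, subtract the parent's value at that
    frame, so the transform is zeroed out there (relative transform)."""
    link = nuke.nodes.Transform()
    name = parent.name()
    # translate is a 2-channel knob: set an expression per channel.
    for idx, ch in enumerate(('x', 'y')):
        expr = '{0}.translate.{1}'.format(name, ch)
        if identity_frame is not None:
            expr += ' - {0}.translate.{1}({2})'.format(name, ch, identity_frame)
        link['translate'].setExpression(expr, idx)
    expr = '{0}.rotate'.format(name)
    if identity_frame is not None:
        expr += ' - {0}.rotate({1})'.format(name, identity_frame)
    link['rotate'].setExpression(expr)
    return link
```

Baking the expressions, as the Bake Expressions button does, amounts to evaluating each knob over the frame range and replacing the expression with keyframes.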

This node might seem redundant given the built-in functionality of Nuke 7’s Tracker node, which can create a Matchmove or Stabilize transform for you. Unfortunately, the Transform nodes created that way are burdened by excessive Python code on the ‘invert’ knob, which evaluates constantly and degrades Nuke’s UI performance. To see this for yourself, turn on “Echo python commands to output window” in the Preferences under Script Editor. In a heavy script with a few of these nodes, you will probably notice stuttery UI responsiveness and freezing.

Put the file somewhere in your Nuke plugin path. You can add the script as commands to your Nodes panel. This code creates a custom menu entry called “Scripts”. I have them set to the shortcuts Alt-O and Alt-L.

import tracker_link
nuke.toolbar('Nodes').addMenu('Scripts').addCommand('Link Roto', 'tracker_link.link_roto()', 'alt+o')
nuke.toolbar('Nodes').addMenu('Scripts').addCommand('Link Transform', 'tracker_link.link_transform()', 'alt+l')


Bake Roto

Bake Roto Nuke Script – Code

[Image: Bake Roto interface]
Bake Roto is a python script that will bake an arbitrary control vertex on a Nuke Rotopaint shape or stroke into position data. This is inspired by Magno Borgo‘s BakeRotoShapesToTrackers script. Baking is nearly instantaneous, and takes into account transforms for shapes parented under layers that have animations or expression links.

Select a Roto or RotoPaint node and run the script. Don’t forget to turn on the “toolbar label points” button to see the cv numbers. It’s on my list to make the interface for this better. I should learn PySide better and make a selectable list of CV points instead of a dropdown. Soon, when there is time!
[gist id=6968830]
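Conceptually, baking a CV means sampling its position on every frame and folding in each parent layer's transform. The per-point math looks something like the following sketch; it assumes a scale-then-rotate-about-center order and ignores skew, so it is an illustration of the idea rather than the script's exact implementation.

```python
import math

def transform_cv(p, translate=(0.0, 0.0), rotate=0.0,
                 scale=(1.0, 1.0), center=(0.0, 0.0)):
    """Apply a layer-style 2D transform to a control vertex position.
    Order assumed: move into pivot space, scale, rotate, move back,
    then translate."""
    x = (p[0] - center[0]) * scale[0]
    y = (p[1] - center[1]) * scale[1]
    a = math.radians(rotate)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr + center[0] + translate[0],
            yr + center[1] + translate[1])
```

Evaluating this per frame, with the layer's animated or expression-linked knob values, yields the baked position data.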


Distort Tracks

Distort Tracks Nuke Gizmo – Code

[Image: Distort Tracks gizmo interface]
This gizmo reformats and/or distorts tracking data based on a uv distortion map input. When you are working with CG elements in your comp that are undistorted and padded resolution, sometimes it is useful to reconcile tracking data from a 3d position through a camera into screen space. This data can then be used to do stuff in 2d: track in lens flares, matchmove roto or splinewarps, etc. The problem is that when this tracking data comes back from our padded undistorted 3d scene into distorted, unpadded resolution comp land, it doesn’t line up.

  • Connect the UV input to a uv distortion map and set the channel that holds it (for example, a LensDistortion node set to output type Displacement, outputting a UV distortion map into the forward.u and forward.v channels).
  • Set the padded resolution format and the destination format: Padded resolution is the overscan resolution that you are distorting from, and Destination format is the comp resolution you end up in. If they are the same, set them both to the same format.
  • Add as many tracking points as you want and copy or link the data in. You can show or hide the input and output tracks for convenience. (It is easier to copy in the data of many tracks if the output track knobs are hidden.)
  • Hit Execute, and all tracks will be distorted. The output tracking data will be copied into each track’s respective trk_out_# knob.

The iDistort input will theoretically let you plug an iDistort map in and have your tracking data distorted by it. Enabling UVMap Animation will severely limit the speed at which the uvmap image data can be sampled, but will allow animated distortion maps.
Note that right now this only works with reformat types set to center, no resize, such as you would use when cropping a padded resolution cg plate back to comp resolution before distorting it. Theoretically this gizmo should work to ‘reformat’ tracking data as well. If you plug in an ‘identity’ uvmap, the tracking data should be undistorted, but reformatted from the source format to the destination format.
Also note that the distorted track output will switch to the reformatted track at the bounds of frame, so that the distorted track does not suddenly pop to 0,0 where the distortion map turns black.
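The per-point operation described above can be sketched as follows. This is a conceptual sketch with a hypothetical function name, assuming the UV sample has already been converted to a pixel offset and a ‘center, no resize’ reformat between formats.

```python
def distort_point(x, y, uv_offset, padded_res, dest_res):
    """Apply a UV distortion sample (as a pixel offset) to a track
    point, then re-center from the padded format into the destination
    format ('center, no resize' reformat)."""
    dx, dy = uv_offset
    x, y = x + dx, y + dy
    # Centered crop: shift by half the padding difference on each axis.
    x -= (padded_res[0] - dest_res[0]) / 2.0
    y -= (padded_res[1] - dest_res[1]) / 2.0
    return x, y
```

With an identity (zero-offset) UV map, this reduces to pure reformatting: a point at the center of a 2048x1152 padded plate lands at the center of a 1920x1080 comp.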

Huge thanks to Ivan Busquets for the ninja-comp technique used to invert the UV Map using a DisplaceGeo.


Nuke Screen Replacement Tutorial

This is a set of rather lengthy video tutorials about the process of using Nuke compositing to replace the content of a cell-phone screen. I have been doing a rather huge quantity of screen replacements at work, and thought it would be useful to share the overall process, some tips, and common pitfalls to avoid.

Some screen replacement techniques have advantages over others, but none is perfect. Most smartphones and tablets these days are very reflective. This can introduce problems. Here is an overview of different approaches, with their strengths and weaknesses.

Black screen
Shooting the phone with the screen not illuminated results in a nearly perfect representation of the reflections and smudges that might be on the screen. Since there is no light being emitted from the screen, putting the reflections back on top of whatever screen content you insert is as simple as adding them together.

Unfortunately, if there are foreground objects occluding the screen, like fingers interacting with it, or even hair dangling in front of it, getting a matte to separate the foreground object from the screen can be quite challenging and time consuming. For fingers, you would have to roto wherever the finger goes in front of the screen; for hair, the situation is the same. If the hair is wispy and you aren’t able to pull an acceptable luminance matte from differences in brightness between the hair and the screen, this can be quite difficult.

Tracking can also be difficult, because it can be hard to tell where the actual corner of the screen is when it is not illuminated. Placing small tracking markers may be necessary if only one or two screen corners are in frame, and removing those markers can prove tricky, especially if there are dynamic screen reflections jumping all over the screen.
In short: Looks the best but is the most difficult.

Green screen
Illuminating the phone screen and setting it to a solid green or blue color allows you to use chroma key techniques to extract a matte for foreground objects that might occlude the screen. Since light is being emitted through the screen, though, it can be difficult to recover reflection information to put back over the new screen. Additional problems can be created by the sickly green of the screen’s light spilling all over your foreground, if it is in shot. The best approach is to set the screen’s brightness as dark as possible while still seeing the color clearly in camera. This spills less, competes with reflections less, and hopefully still allows you to extract a key for foreground objects. Having the screen illuminated also makes it easy to determine the exact corners of the screen. Tracking markers might still be necessary if only one or two corners of the screen are in frame.

Marker screen
With both of the previous approaches, tracking can be quite difficult. If there is a large or even marginal amount of reflection, tracking the surface of the screen with a planar tracking solution like Mocha or NukeX’s PlanarTracker can be difficult. Planar tracking works by analyzing the relative movement of many points on a surface and calculating the movement of that surface as a plane in three-dimensional space. When there are not enough points of detail to track, or if the surface is not static, the track can get wobbly. As you might imagine, the screens of phones don’t often provide many points to track: usually it’s just a few buttons at the bottom, maybe a speaker grill at the top, and then a lot of reflections moving everywhere.
One approach to make tracking easier is to fill the screen with tracking markers, like this:
[Image: a phone screen covered in tracking markers]
This makes a planar tracking solution quite easy! Unfortunately, it makes getting reflections back from the screen impossible. It also makes extracting occluding foreground objects difficult, possibly even more difficult than shooting with a black screen, because if the foreground object is moving quickly and has heavily motion-blurred edges, those edges will likely have to be replaced, since the detail of the tracking markers on the screen will show through them.

Essentially, like in most areas of visual effects, there is no easy solution. The best solution varies from situation to situation, and from shot to shot. Knowing the upsides and pitfalls to each approach can be valuable in choosing which approach you will take. Here is the process for replacing a low-luminance chroma-blue cell phone screen.

Nuke Screen Replacement Tutorial Part 1: Tracking The Screen

Nuke Screen Replacement Tutorial Part 2: Compositing and Integrating the Screen Content

I have decided to include the source assets and the script that I used in this tutorial, since understandably some people do not have access to a camera to shoot their own footage. Download it here:

To help you out with some difficult point tracks, here is a CornerPin gizmo for Nuke that has a keyframeable offset for each corner: CornerPinOffset.


Bloodhail Timelapse

I am not dead, just sleeping.
Here is a dream I made:

Have a Nice Life – Bloodhail from Jed Smith on Vimeo.


Nuke VFX Cleanup Tutorials

NOTE: 2012-09-16
This post is really really old, and I have learned a lot about compositing since this was made. If I were to create this tutorial now, I would do a lot of things differently. I probably should re-do these, or make a new tutorial on related subject matter. I highly recommend that you watch these videos from The Foundry if you are interested in this:
Wire Removal with Nuke RotoPaint

These videos on basic color correction concepts from an old Nuke Master Class by Frank Reuter are also very useful. If you don’t know that there is no difference between Gain and Multiply in the Grade node, watch these.
Nuke Basic Workflows Colour Correction – Part A
Nuke Basic Workflows Colour Correction – Part B

And definitely read all of the Art of Roto article on fxguide.

A big part of visual effects work is removing or altering unwanted items in shots. Wires or rig, blemishes on actors or in the set design, text or signs on buildings, all of these things are prime candidates for visual effects cleanup work.

There are many possible techniques to use for cleaning up a shot, ranging in difficulty from extremely easy, to mind-numbingly complex. How hard it is depends on how complex the background behind the object being removed is, and what might be occluding the object being removed in various parts of the shot.

For example, if there is a large unfortunate piece of rig that happens to be in front of a complex and defined tree-branch blowing wildly in the wind, occluded in the foreground by a healthy wisp of smoke, cue the nightmare scenario. Basically the aim of cleanup work is to re-create the background behind the object needing to be removed, such that a person can’t tell there was ever anything there.

Here are some of the techniques used to do this.
2D or 3D tracking of still “cleanup” images into shots: this works well for background objects that are not deforming, for example, the sides of buildings, trucks, rocks, and other hard things. This technique does not work as well for soft moving things like people, clothes, energetic trees, and water. Another thing that confounds this technique is interactive lighting changes. If there is a flickering light on the side of a building, using a still image to clean up something on the wall of said building will look out of place, unless a keyframed color correction is applied to match the lighting changes.

Cloning one area of an image to another area, in order to cover something up: This works well for shots where the background of the object needing to be removed has a moving texture. For example, for something like ripples in water, still image “patching” will not work because the ripples in the water have to move. Since the texture of the water is ideally relatively consistent in its pattern of ripples, cloning from one area of the frame to the other might not be noticeable. However, if the background’s pattern is non-repeating or complex, this technique might easily be foiled.

Clone Painting: This technique is varied and quite effective with the right tool in skilled hands. It is similar to wielding the “rubber stamp” tool in Photoshop, except that it must be kept in mind at all times that one is working with a moving shot. One can clone areas from adjacent frames to replace the background over a moving wire. In order to do this effectively, the plate has to be motion tracked and stabilized to the object being manipulated, so that the object being removed doesn’t change position from frame to frame. Clone painting from the same frame using an offset to remove something on a moving object can also work well. When cloning with an offset on consecutive frames, one has to be very careful in order to avoid motion artifacts that result from the cloning happening slightly differently on each frame. A first inclination might be to just clone out an object on each frame and be done with it, but when you watch it back in motion, horrible boiling artifacts will appear over the object that seemed so perfectly removed when looking at each frame individually. Generally, offset cloning is easier to get away with on edges and objects in motion, and harder to get away with on static objects that have subtle gradients.
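The stabilize-paint-matchmove round trip that makes temporal clone painting workable can be sketched numerically. This is a toy sketch using only the translate component of a track; real stabilization also handles rotation and scale.

```python
def stabilize(point, track):
    """Remove the tracked motion so the feature stays put on every frame."""
    return (point[0] - track[0], point[1] - track[1])

def matchmove(point, track):
    """Re-apply the tracked motion after painting on the stabilized plate."""
    return (point[0] + track[0], point[1] + track[1])

# Painting happens in stabilized space; the round trip is lossless:
# matchmove(stabilize(p, t), t) == p for any point p and track value t.
```

This is why boiling artifacts come from the paint work itself, not the round trip: the transform in and out of stabilized space cancels exactly.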

Here are a couple of video tutorials on how to accomplish some of the things discussed, using The Foundry’s Nuke 6.0.

Nuke Cleanup Methods Tutorial – 2D Tracking and Cloning from Jed Smith on Vimeo.

A simple tutorial in Nuke on how to clone from one area of a moving image to another, using 2D tracking and stabilization, and basic compositing. Uses a shot from The Hotdog Cycle, produced by The Last Quest in Seattle.
The Hotdog Cycle Trailer

Nuke Cleanup Tutorial – Temporal Clone Painting and Grain Manipulation from Jed Smith on Vimeo.

A demonstration of a method of cleanup using clone-painting from adjacent frames on a stabilized plate, in NukeX 6.0.
The shot used is from the animation “High Strung”, produced by Tommy Thompson at The Evergreen State College.
Tommy Thompson’s Production Blog
A Short Documentary About The Project.


Tips on Timelapse

A collection of timelapses shot over the last year by myself, using my modest photographic equipment: a Canon 350d (Rebel XT), a Canon EF 35-105mm f/3.5-5.6 zoom lens, and a Sigma EF-S 10-20mm f/4.5-5.6 zoom lens. Most were shot in Raw, and especially the cloud sequences have extensive post color correction.

Download 720p Version
Watch on Vimeo
The music is Buralta by Fedaden, off of his new LP Broader ( is the only place that has it in lossless, and it costs a ridiculous $25).

I shot my first timelapse a little more than a year ago. Above is a compilation of the best ones that I’ve created. I have learned a few things about timelapse:

1. Shutter angle in timelapses is very important. In stop motion animation, the strobing look of objects moving without motion blur is part of the visual aesthetic (except when counteracted by techniques such as Go Motion). In timelapse, since the subjects move by themselves, very filmic results can be achieved. The trick is to think about shutter angle, and to adjust your camera’s settings accordingly. Tyler Ginter wrote a more in-depth post about the technical and aesthetic considerations of Shutter Angle, but my description of how it applies to timelapse follows.
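The bookkeeping is simple enough to write down. A sketch of the arithmetic (function names are mine); a 180° equivalent angle, i.e. exposing for half the interval between frames, is the classic filmic target.

```python
def equivalent_shutter_angle(exposure_s, interval_s):
    """Equivalent shutter angle in degrees for a timelapse where each
    frame is exposed for exposure_s seconds, interval_s seconds apart."""
    return 360.0 * exposure_s / interval_s

def exposure_for_angle(angle_deg, interval_s):
    """Exposure time (seconds) needed to hit a target shutter angle."""
    return interval_s * angle_deg / 360.0
```

For example, shooting one frame every 4 seconds with a 2-second exposure gives a 180° equivalent shutter angle.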


I Leapt from my Room

Here is a soundtrack for your ears of wind and rain.
Lepawindarain by jedypod

Yesterday eve, I sprang forth from the pages of the Art and Science of Digital Compositing, and spied beams of sun setting through the crack above the blankets shielding me from the outside, through the pane of glass. Gripped by sudden aesthetic cravings of exterior exposure, I groped for objects of anamnesis both photographic and calligraphic and was on my way.
