Category Archives: Tutorials

Simulating Physically Accurate Depth of Field in Nuke

Depth of field is a hugely important part of simulating photoreal imagery. Focal length, bokeh, and film back size all greatly affect the look of a photograph.

I recently uploaded a couple of Nuke tools that I created in my spare time to visualize and simulate physically accurate depth of field.

OpticalZDefocus: A ZDefocus replacement which creates physically accurate defocus using the depth of field equation.

DofCalc: A tool to visualize the range of focus, given the specified lens geometry, focus distance, and depth unit.

These tools are also on Nukepedia – OpticalZDefocus and DofCalc.
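For readers curious what "the depth of field equation" actually looks like, here is a minimal sketch of the standard thin-lens formulas (this is my own illustration, not the actual source of OpticalZDefocus or DofCalc):

```python
def dof_limits(focal_mm, fstop, focus_dist_mm, coc_mm=0.030):
    """Return (near, far) acceptable-focus distances in mm.

    focal_mm      -- lens focal length
    fstop         -- aperture f-number
    focus_dist_mm -- focus distance
    coc_mm        -- acceptable circle of confusion on the filmback
                     (0.030mm is a common value for 35mm-format)
    """
    # Hyperfocal distance: focusing here renders everything from
    # half this distance out to infinity acceptably sharp.
    H = focal_mm ** 2 / (fstop * coc_mm) + focal_mm
    s = focus_dist_mm
    near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
    far = s * (H - focal_mm) / (H - s) if s < H else float("inf")
    return near, far

# A 50mm lens at f/2.8 focused at 3m keeps roughly 2.73m to 3.33m in focus:
near, far = dof_limits(50.0, 2.8, 3000.0)
```

Note how the far limit blows out to infinity once the focus distance passes the hyperfocal distance, which is exactly the behavior a tool like DofCalc lets you visualize.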

I also uploaded a tutorial on depth of field, if anyone is interested in hearing me ramble on about nerd stuff!

Simulating Physically Accurate Depth of Field in Nuke from Jed Smith on Vimeo.


  • Terminology – what is depth of field, circles of confusion, hyperfocal distance, etc.
  • What variables affect depth of field and how: lens focal length, lens f-stop, camera filmback size, and focus distance
  • How to simulate depth of field in Nuke
    • ZDefocus
    • Explaining z-depth channels and how they work
    • Demonstrating the OpticalZDefocus Tool
    • Visualizing how depth of field behaves as lens parameters change, using the DofCalc gizmo
    • Simulating lens bokeh more accurately by sampling a bokeh from a plate and using that as a filter input to our defocus tool
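The core idea behind driving a defocus from a z-depth channel can be sketched in a few lines: for each depth sample, compute the circle of confusion the lens would produce, then convert that to a blur size in pixels. This is only an illustrative thin-lens sketch, not the gizmo's actual code:

```python
def coc_diameter_mm(focal_mm, fstop, focus_mm, depth_mm):
    """Circle of confusion diameter on the filmback for a point at depth_mm."""
    aperture = focal_mm / fstop  # entrance pupil diameter
    # CoC grows with the point's distance from the focal plane,
    # and is exactly zero at the focus distance.
    return (aperture * (focal_mm / (focus_mm - focal_mm))
            * abs(depth_mm - focus_mm) / depth_mm)

def coc_pixels(coc_mm, filmback_mm=36.0, width_px=1920):
    """Convert a CoC on the filmback into a defocus radius in image pixels."""
    return coc_mm / filmback_mm * width_px

# 50mm f/2.8 focused at 3m: a point at 6m defocuses to roughly 8 pixels
# on a 1920-wide frame with a 36mm filmback.
blur_px = coc_pixels(coc_diameter_mm(50.0, 2.8, 3000.0, 6000.0))
```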
Posted in Tutorials | 1 Response

Nuke Screen Replacement Tutorial

This is a set of rather lengthy video tutorials about the process of replacing the content of a cell-phone screen in Nuke. I have been doing a rather huge quantity of screen replacements at work, and thought it would be useful to share the overall process, some tips, and common pitfalls to avoid.

Some screen replacement techniques have advantages over others, but none is perfect. Most smartphones and tablets these days are very reflective. This can introduce problems. Here is an overview of different approaches, with their strengths and weaknesses.

Black screen
Shooting the phone with the screen not illuminated results in a nearly perfect representation of the reflections and smudges that might be on the screen. Since no light is being emitted from the screen, layering the reflections back over whatever content you put into the phone’s screen is as simple as adding them together. Unfortunately, if there are foreground objects occluding the screen, like fingers interacting with it or hair dangling in front of it, getting a matte to separate the foreground object from the screen can be quite challenging and time consuming. For fingers, you would have to roto wherever the finger goes in front of the screen. For hair, the situation is the same, and if the hair is wispy and you aren’t able to pull an acceptable luminance matte from differences in brightness between the hair and the screen, it can be very difficult indeed. Tracking can also be a problem, because it is hard to tell where the actual corner of the screen is when it is not illuminated. Sometimes placing small tracking markers on the screen is necessary, especially if only one or two screen corners are in frame. Removing these markers can prove tricky, especially if there are dynamic screen reflections jumping all over the screen.
In short: Looks the best but is the most difficult.
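The "as simple as adding them together" part really is just a plus merge. A toy sketch with NumPy arrays standing in for image channels (in Nuke this would be a Merge node set to plus, not actual Python):

```python
import numpy as np

# With an unlit screen, the plate pixels over the screen area are
# pure reflection, so the comp is additive.
new_screen = np.array([[0.2, 0.4],
                       [0.6, 0.8]])    # replacement screen content
reflections = np.array([[0.1, 0.0],
                        [0.05, 0.3]])  # plate with the screen dark

# Plus merge, clamped for display; in a float linear comp you would
# normally skip the clamp and let bright reflections stay >1.
comp = np.clip(new_screen + reflections, 0.0, 1.0)
```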

Green screen
Illuminating the phone screen and setting it to a solid green or blue color allows you to use chroma key techniques to extract a matte for foreground objects that might occlude the screen. Since light is being emitted through the screen, though, it can be difficult to recover reflection information to put back over the new screen. Additional problems may also be created by the sickly green of the screen’s light spilling all over your foreground elements, if they are in shot. The best approach is to set the screen’s brightness as low as possible while still seeing the color clearly in camera. This spills less, competes with the reflections less, and hopefully still allows you to extract a key for foreground objects. Having the screen illuminated also makes it easy to determine the exact corners of the screen. Tracking markers might still be necessary if only one or two corners of the screen are in frame.

Marker screen
With both of the previous approaches, tracking can be quite difficult. If there is a large or even marginal amount of reflection, tracking the surface of the screen with a planar tracking solution like Mocha or NukeX’s PlanarTracker can be difficult. Planar tracking works by analyzing the relative movement of many points on a surface and calculating the movement of that surface as a plane in three-dimensional space. When there are not enough points of detail to track, or if the surface is not static, the track can get wobbly. As you might imagine, the screens of phones don’t often provide many points to track. Usually it’s just a few buttons at the bottom, maybe a speaker grille at the top of the phone, and a lot of reflections moving everywhere.
One approach to make tracking easier is to fill the screen with tracking markers, like this:
[Image: a phone screen covered in tracking markers]
This makes a planar tracking solution quite easy! Unfortunately, it makes getting reflections back from the screen impossible. It also makes extracting occluding foreground objects difficult, possibly even more so than shooting with a black screen: if the foreground object is moving quickly and has heavily motion-blurred edges, those edges will likely have to be replaced, because the detail of the marker pattern on the screen will show through them.

Essentially, like in most areas of visual effects, there is no easy solution. The best solution varies from situation to situation, and from shot to shot. Knowing the upsides and pitfalls to each approach can be valuable in choosing which approach you will take. Here is the process for replacing a low-luminance chroma-blue cell phone screen.

Nuke Screen Replacement Tutorial Part 1: Tracking The Screen

Nuke Screen Replacement Tutorial Part 2: Compositing and Integrating the Screen Content

I have decided to include the source assets and the script that I used in this tutorial, since understandably some people do not have access to a camera to shoot their own footage with. Download it here:

To help you out with some difficult point tracks, here is a CornerPin gizmo for Nuke that has a keyframeable offset for each corner: CornerPinOffset.
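Under the hood, a corner pin is just a homography fitted to four point correspondences, and a per-corner offset simply nudges the destination points before the fit. A minimal sketch of the math (my own illustration, not the CornerPinOffset gizmo's code):

```python
import numpy as np

def corner_pin_matrix(src, dst):
    """Homography mapping the 4 src corners exactly onto the 4 dst corners.
    src, dst: lists of four (x, y) tuples."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear equations per correspondence (8 unknowns total).
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_pin(H, x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Pin the unit square onto four tracked screen corners,
# with a keyframeable nudge applied to the second corner.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
tracked = [(10, 10), (110, 20), (120, 130), (5, 120)]
offsets = [(0, 0), (2, -1), (0, 0), (0, 0)]  # per-corner offset this frame
dst = [(u + du, v + dv) for (u, v), (du, dv) in zip(tracked, offsets)]
H = corner_pin_matrix(src, dst)
```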

Posted in Tutorials | 6 Responses

Nuke VFX Cleanup Tutorials

NOTE: 2012-09-16
This post is really really old, and I have learned a lot about compositing since this was made. If I were to create this tutorial now, I would do a lot of things differently. I probably should re-do these, or make a new tutorial on related subject matter. I highly recommend that you watch these videos from The Foundry if you are interested in this:
Wire Removal with Nuke RotoPaint

These videos on basic color correction concepts from an old Nuke Master Class by Frank Reuter are also very useful. If you don’t know that there is no difference between Gain and Multiply in the Grade node, watch these.
Nuke Basic Workflows Colour Correction – Part A
Nuke Basic Workflows Colour Correction – Part B

And definitely read all of the Art of Roto article on fxguide.

A big part of visual effects work is removing or altering unwanted items in shots. Wires or rig, blemishes on actors or in the set design, text or signs on buildings, all of these things are prime candidates for visual effects cleanup work.

There are many possible techniques to use for cleaning up a shot, ranging in difficulty from extremely easy to mind-numbingly complex. How hard it is depends on how complex the background behind the object being removed is, and on what might be occluding that object in various parts of the shot.

For example, if a large unfortunate piece of rig happens to sit in front of a complex, well-defined tree branch blowing wildly in the wind, occluded in the foreground by a healthy wisp of smoke, cue the nightmare scenario. Basically, the aim of cleanup work is to re-create the background behind the object being removed, such that a person can’t tell there was ever anything there.

Here are some of the techniques used to do this.
2D or 3D tracking of still “cleanup” images into shots: this works well for background objects that are not deforming, for example, the sides of buildings, trucks, rocks, and other hard things. This technique does not work as well for soft moving things like people, clothes, energetic trees, and water. Another thing that confounds this technique is interactive lighting changes. If there is a flickering light on the side of a building, using a still image to clean up something on the wall of said building will look out of place, unless a keyframed color correction is applied to match the lighting changes.

Cloning one area of an image to another area, in order to cover something up: This works well for shots where the background of the object needing to be removed has a moving texture. For example, for something like ripples in water, still image “patching” will not work because the ripples in the water have to move. Since the texture of the water is ideally relatively consistent in its pattern of ripples, cloning from one area of the frame to the other might not be noticeable. However, if the background’s pattern is non-repeating or complex, this technique might easily be foiled.

Clone Painting: This technique is varied and quite effective with the right tool in skilled hands. It is similar to wielding the “rubber stamp” tool in Photoshop, except that you must keep in mind at all times that you are working with a moving shot. You can clone areas from adjacent frames to replace the background over a moving wire. To do this effectively, the plate has to be motion tracked and stabilized to the object being manipulated, so that the object being removed doesn’t change position from frame to frame. Clone painting from the same frame using an offset, to remove something on a moving object, can also work well. When cloning with an offset on consecutive frames, you have to be very careful to avoid motion artifacts that result from the cloning happening slightly differently on each frame. A first inclination might be to just clone out the object on each frame and be done with it, but when you watch it back in motion, horrible boiling artifacts will appear over the object that looked so perfectly removed on each individual frame. Generally, offset cloning is easier to get away with on edges and objects in motion, and harder to get away with on static objects that have subtle gradients.
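The stabilize, paint, invert-stabilize round trip mentioned above can be sketched in a few lines. This toy uses whole-pixel translation only (a real comp uses sub-pixel Transform nodes driven by a Tracker, with the stabilize transform inverted afterwards):

```python
import numpy as np

def stabilize(frame, track_xy):
    """Shift the frame so the tracked feature sits at a fixed position."""
    dx, dy = track_xy
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

def unstabilize(frame, track_xy):
    """Invert the stabilize transform, restoring the original motion."""
    dx, dy = track_xy
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

frame = np.arange(25.0).reshape(5, 5)   # stand-in for one plate frame
track = (2, 1)                          # tracked feature offset this frame
stab = stabilize(frame, track)          # object now static frame to frame
stab[2, 2] = 0.0                        # "paint" on the stabilized plate
restored = unstabilize(stab, track)     # paint rides the shot motion again
```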

Here are a couple of video tutorials on how to accomplish some of the things discussed, using The Foundry’s Nuke 6.0.

Nuke Cleanup Methods Tutorial – 2D Tracking and Cloning from Jed Smith on Vimeo.

A simple tutorial in Nuke on how to clone from one area of a moving image to another, using 2D tracking and stabilization, and basic compositing. Uses a shot from The Hotdog Cycle, produced by The Last Quest in Seattle.
The Hotdog Cycle Trailer

Nuke Cleanup Tutorial – Temporal Clone Painting and Grain Manipulation from Jed Smith on Vimeo.

A demonstration of a method of cleanup using clone-painting from adjacent frames on a stabilized plate, in NukeX 6.0.
The shot used is from the animation “High Strung”, produced by Tommy Thompson at The Evergreen State College.
Tommy Thompson’s Production Blog
A Short Documentary About The Project.

Posted in Tutorials | 11 Responses

Tips on Timelapse

A collection of timelapses shot over the last year by myself, using my modest photographic equipment: a Canon 350d (Rebel XT), a Canon EF 35-105mm f/3.5-5.6 zoom lens, and a Sigma EF-S 10-20mm f/4.5-5.6 zoom lens. Most were shot in Raw, and the cloud sequences in particular have extensive color correction in post.

Download 720p Version
Watch on Vimeo
The music is Buralta by Fedaden, off of his new LP Broader ( is the only place that has it in lossless, and it costs a ridiculous $25).

I shot my first timelapse a little more than a year ago. Above is a compilation of the best ones that I’ve created. I have learned a few things about timelapse:

1. Shutter angle in timelapses is very important. In stop motion animation, the strobing look of objects moving without motion blur is part of the visual aesthetic (except when counteracted by techniques such as Go Motion). In timelapse, since the subjects move by themselves, very filmic results can be achieved. The trick is to think about shutter angle and to adjust your camera’s settings accordingly. Tyler Ginter wrote a more in-depth post about the technical and aesthetic considerations of shutter angle, but my description of it as applied to timelapse follows.
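The shutter angle arithmetic itself is one line: a 360-degree shutter exposes for the full interval between frames, a 180-degree shutter for half of it. A quick sketch:

```python
def exposure_seconds(interval_s, shutter_angle_deg=180.0):
    """Exposure time for a timelapse frame, given the interval
    between frames and the desired shutter angle."""
    return interval_s * shutter_angle_deg / 360.0

# Shooting one frame every 2 seconds with a filmic 180-degree shutter
# means a 1-second exposure per frame.
exp = exposure_seconds(2.0)
```

So for long intervals you may need an ND filter or a narrow aperture to hold the shutter open that long without overexposing.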
Read More »

Also posted in Media Projects | 2 Responses

Roto Tutorial #2

Up until this point, I have only created one “screencast” video tutorial on this blog. I have been meaning to create more of this type of tutorial video, because they not only help my ability to communicate and teach effectively, but they might actually be interesting to some of the few people who read this little weblog.

This post, then, we will consider a step in the right direction, but by no means achievement of this goal. Namely, the 2nd Rotoscoping Video Tutorial that follows is exceedingly rough, rambly, random, unrehearsed, raw, borderline-reprehensible, and reeking of underflowed thought-speech-buffer. If you have 30 spare minutes, however, you can get a first-person experience of not only one of the many things that I have been up to of late, but also some information about what rotoscoping is, and how a novice student performs one of the tasks essential to feature film visual effects.
Read More »

Posted in Tutorials | Tagged , , , , , , , | 4 Responses