Nuke Tips – Kronos, MotionBlur, Oflow, or VectorBlur?

The last draft for this article dates back to Jan 25, 2016… time to revive it!

Kronos, MotionBlur, Oflow, or Vector Blur? I’m blur…

The great thing with Nuke is that there are many ways to skin a cat for any given problem.

I’ll be focusing on adding motion blur to CGI FX elements like fire, blood, and debris. Yes, you can abuse, I mean use, Kronos and Oflow for adding motion blur instead of slowing down footage!

Time to explore the various methods and see which ones make or break depending on the situation.

Do not get these confused with MotionBlur2D and MotionBlur3D, which are not designed to generate motion blur by analysing the image sequence; MotionBlur2D uses the Transform animation while MotionBlur3D uses the camera animation to generate motion blur.
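
If your CG element comes with a rendered motion vector pass, VectorBlur is usually the cheapest starting point. Here is a minimal Nuke Python sketch that wires one up; the file path and the 'motion' channel name are assumptions for illustration, and the knob names follow the classic VectorBlur node (they may differ in VectorBlur2):

    import nuke

    # Read the CG FX element (hypothetical path for illustration).
    read = nuke.nodes.Read(file='renders/fx_debris.%04d.exr')

    # Point VectorBlur at the motion vector channels and scale the blur.
    blur = nuke.nodes.VectorBlur()
    blur.setInput(0, read)
    blur['uv'].setValue('motion')  # channel set holding the vectors (assumed name)
    blur['scale'].setValue(1.0)    # overall strength of the vector blur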


Houdini Tips – Matching Mantra Motion Blur with Vray Motion Blur (3ds Max)

Matching Mantra Motion Blur with Vray Motion Blur? That’s wicked!

So I’ve been working across two 3D packages for my upcoming demoreel, and one of the issues is finding the right settings for Mantra to match the motion blur from Vray in 3ds Max as closely as possible.

Since I’m doing all my FX work in Houdini, I imposed a rule not to transfer any of the FX caches from Houdini into Max for the final render.

Actually, that is a lie as I did render one FX shot (fracture) since I needed to reuse the Vray material in 3ds Max. The rest, though, were rendered using Mantra and composited over the Vray renders in Nuke.

What’s the magic chant to match the motion blur?

If the camera and Vray motion blur in Max are at their default values, proceed as usual.

If not… maybe look at the following steps and see if you can figure out the right value to use for Mantra.

First, make sure the Alembic geometry imported from Max is unpacked.

Don’t forget to add a Trail SOP set to Compute Velocity and also enable Compute Angular Velocity, as in the screenshot above.

This assumes you imported the camera from Max (and that the Alembic geometry and camera are correctly scaled, as covered in my prior Houdini Tips)!

Configure Mantra Renderer with the following settings:

  • Allow Motion Blur: On
  • Geo Time Samples: 2
  • Shutter Offset: -2
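
If you need to apply these settings to several Mantra ROPs, they can be set with a few lines of Houdini Python. A minimal sketch, assuming a ROP at the hypothetical path /out/mantra1; Shutter Offset is normally exposed as the vm_shutteroffset render property, which may need to be added to the ROP first:

    import hou

    # Hypothetical Mantra ROP path; adjust to your scene.
    mantra = hou.node('/out/mantra1')

    mantra.parm('allowmotionblur').set(1)    # Allow Motion Blur: On
    mantra.parm('geo_motionsamples').set(2)  # Geo Time Samples: 2

    # Shutter Offset lives in the vm_shutteroffset render property; add it
    # via Edit Render Properties if it is not already on the ROP.
    offset = mantra.parm('vm_shutteroffset')
    if offset is not None:
        offset.set(-2)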

As usual, do a test render and compare with the Vray render!

The following animated GIFs are self-explanatory.

OK, it is not exactly a 100% match. Maybe 97-99%?

The rest can be fixed in the compositing stage if required.

Any reasons for doing this thingamajig instead of round tripping from Houdini to Max?

Basically, if I were to render in Max using Vray, I would need to match the look of the Pyro FX shader, aka spending a long time tweaking the Vray VolumeGrid shader…

Unless you have already prepared presets for both the Pyro FX and Vray VolumeGrid shaders that achieve the same look, I say working directly in Houdini and adjusting in the compositing stage is faster and more flexible.

The exception is situations that require interaction with elements that live only in Max or Maya (like Fume FX or BiFrost), but a workaround is to convert the caches to VDB and use your preferred renderer.
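
As a rough idea of that VDB workaround, here is a hedged Houdini Python sketch that loads a volume cache, converts it with a Convert VDB SOP, and writes out .vdb files. All paths are placeholders, and the default parameters of each node are assumed to be acceptable:

    import hou

    geo = hou.node('/obj').createNode('geo', 'vdb_export')

    # Load the simulated volume cache (placeholder path).
    cache = geo.createNode('file')
    cache.parm('file').set('$HIP/cache/pyro.$F4.bgeo.sc')

    # Convert native Houdini volumes to VDB grids.
    convert = geo.createNode('convertvdb')
    convert.setFirstInput(cache)

    # Write the converted grids to disk for the other package/renderer.
    rop = geo.createNode('rop_geometry')
    rop.setFirstInput(convert)
    rop.parm('sopoutput').set('$HIP/vdb/pyro.$F4.vdb')
    rop.parm('execute').pressButton()  # Save to Disk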

So it is up to you (or your team) to figure out the best solution to a problem!

Further Reading

Matching Mantra motion blur with Vray motion blur? – https://www.sidefx.com/forum/topic/34051/

Motion blur (Houdini documentation) – http://www.sidefx.com/docs/houdini/render/blur.html

Volumetric Grid | VRayVolumeGrid (V-Ray documentation) – https://docs.chaosgroup.com/display/VRAY3MAX/Volumetric+Grid+%7C+VRayVolumeGrid

Nuke Tips – Hide Node Input

Hide Node Input or Node Hide Input or Input Node Hide

When your Nuke script starts to grow bigger and more complicated, it is time to organise the node graph for better readability and to remove redundancy.

I don’t feel like writing a lengthy explanation of when or why to use it, but here’s a bullet list on the usefulness of hiding node inputs:

  • Reduce redundancy by avoiding duplicate copies of the same node
  • Ensure a cleaner and easier-to-read Node Graph (if only I could keep my personal belongings as tidy as the node graph…)
  • Avoid potential Nuke script corruption when cloning a node, which can happen if you’re not lucky

Potential Nuke Script Corruption?!! Eeeepssss

To expand on the last point, there are situations where you want to clone a node so you can use it in another part of the comp, and you prefer the ease of manipulating either of the cloned nodes to propagate the same changes.

BUT

It comes with the risk of Nuke potentially screwing up the actual render, and one way to avoid it is to use a copy instead (think of cloned nodes like Duplicate Special in Maya or Instance in 3ds Max) or to link the master node directly to where you want to use it.

This is where the Hide Node Input comes in handy!
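
Toggling it by hand is just a matter of ticking hide input in the node’s properties, but here is a small Nuke Python sketch for flipping the hide_input knob on everything you have selected:

    import nuke

    # Hide the input pipes of all currently selected nodes.
    for node in nuke.selectedNodes():
        if 'hide_input' in node.knobs():
            node['hide_input'].setValue(True)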

Quick GIF Demonstration

Nuke Tips – Normalize Depth Pass

Time to Conform to Normality

Another short Nuke Tips entry, so without further ado, let’s normalise/normalize!

If you output a Pz (depth) image plane from Houdini’s Mantra and read it in Nuke, you’ll see it as a solid flat colour and wonder why it doesn’t match the view in the Render tab or MPlay in Houdini.

What happens is that Houdini remaps the Pz data by normalising the min/max values to a viewable range for the artist.

So here’s what we can do to make it viewable (and usable for various compositing tasks) in Nuke.

  • Shuffle the depth channel to RGB and alpha to… Alpha. (This step can be skipped if you prefer to manipulate the depth channel directly.)
  • Optionally, use the CurveTool node with the Max Luma Pixel operation to find the brightest pixel in the depth channel.
  • Use a Grade node to remap the black and white points. Typically you can leave the blackpoint at 0 and set the whitepoint to the brightest pixel.
  • Another optional step: if there is no valid sky dome geometry to represent the sky depth, you can fake it using a Constant node with a value way higher than the brightest pixel, merged using Under (if you’ve been following the above steps).
  • To adjust the midpoint of the depth, tweak the Gamma value.

This is the end result:

Here’s the node graph setup for your reference:
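
If you prefer scripting it, here is a minimal Nuke Python sketch of the same setup; the whitepoint value is a placeholder for whatever the CurveTool reports as the brightest pixel:

    import nuke

    # Shuffle the depth channel into RGB so it is easy to view and grade.
    shuffle = nuke.nodes.Shuffle()
    shuffle['in'].setValue('depth')

    # Remap the depth range: blackpoint stays at 0 while the whitepoint is
    # set to the brightest pixel found with the CurveTool (placeholder value).
    grade = nuke.nodes.Grade()
    grade.setInput(0, shuffle)
    grade['whitepoint'].setValue(100.0)  # replace with your max depth value
    grade['gamma'].setValue(1.0)         # tweak to shift the midpoint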

Why Normalize Depth Pass?

Seeing a solid colour in the viewer is not exactly practical!

This tutorial is geared towards Mantra’s way of handling the depth pass, where depth is rendered as the distance from the camera.

Pixels nearer to the camera have values closer to zero, while pixels further away store their actual distance from the camera.

Imagine a building geometry that is 100 units away from the camera: its pixel value will be approximately 100.

Traditionally, most renderers offer options to set the near and far clip values when rendering the depth pass, but this is destructive as we “bake” the possible depth range to fit into the final depth render, unless you render and export the depth pass in at least 16-bit half float or, better, 32-bit float (which can be overkill unless you need the highest accuracy during compositing).
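
To make the difference concrete, here is the arithmetic in a few lines of Python; the near/far and max depth values are made up for illustration:

    # Baked near/far depth: the range is fixed at render time and clamped.
    near, far = 1.0, 500.0
    def baked(depth):
        return min(max((depth - near) / (far - near), 0.0), 1.0)

    # Raw distance depth (Mantra-style Pz): remap it whenever you like in comp.
    def normalized(depth, max_depth):
        return depth / max_depth

    print(baked(100.0))              # ~0.198, locked to the baked range
    print(normalized(100.0, 650.0))  # ~0.154, max_depth chosen in comp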

Further Reading/Viewing

Learn How to Render and composite Z-Depth in Houdini: https://www.sidefx.com/tutorials/learn-how-to-render-and-composite-z-depth-in-houdini

Houdini Rendering to Z-depth with Mantra (Pz) (Japanese): http://yoong-cut-and.blogspot.com/2015/02/houdini-rendering-to-z-depth-with.html

Houdini – ZDepth Passes (Japanese): http://www.technical-artist.net/?p=92

Tutorial – Demystifying Camera Depth Passes in Maya Mental Ray: https://vimeo.com/113997080

The right way to render a depth pass: http://forums.cgsociety.org/showthread.php?t=901605

VRayZDepth using 3ds Max: https://docs.chaosgroup.com/display/VRAY3MAX/Z-Depth+%7C+VRayZDepth

VRayZDepth using Maya: https://docs.chaosgroup.com/display/VRAY3MAYA/Z-Depth+%7C+vrayRE_Z_depth

Houdini Tips – Fetch Cam Attributes for Imported Cam

Let’s Play Fetch and Blend with the Cameraman!

This is with default Houdini scale units (Meter).

Play around until it matches whatever you’re doing in the original 3D software.

  1. Create a Null and scale it (typically 0.01 for Max/Maya scenes using cm, although I’m unsure if units affect the total scale).
  2. Parent the imported camera to it.
  3. Create a Fetch node and fetch the transform of the imported camera node (e.g. /obj/s001c001_CAM_1203_v001.ABC/s001c001_CAM_trans).
  4. Create a Blend node and parent it to the Fetch node. Only select Transform and Rotation (the whole idea is to not inherit the Scale!).
  5. Create a Camera node and configure it to match your imported camera settings.
  6. Parent the camera to the Blend node and verify it. Done!

If something looks off… repeat until it works (or script it; see the sketch below).
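
If you do this on every shot, most of it can be scripted. Below is a hedged Houdini Python sketch of steps 1 to 6, reusing the example Alembic path from above; the Blend node’s transform mask (Transform and Rotation only) is left to the UI since its parameter layout varies between Houdini versions:

    import hou

    obj = hou.node('/obj')

    # 1-2. Scale null with the imported camera parented under it.
    null = obj.createNode('null', 'cam_scale')
    null.parm('scale').set(0.01)  # cm-based Max/Maya scene to metres
    imported = hou.node('/obj/s001c001_CAM_1203_v001.ABC')
    if imported is not None:
        imported.setFirstInput(null)

    # 3. Fetch the transform of the imported camera node.
    fetch = obj.createNode('fetch')
    fetch.parm('fetchobjpath').set(
        '/obj/s001c001_CAM_1203_v001.ABC/s001c001_CAM_trans')

    # 4. Blend parented to the Fetch; enable only Transform and Rotation
    #    in its mask via the UI (parameter names vary by version).
    blend = obj.createNode('blend')
    blend.setFirstInput(fetch)

    # 5-6. Native Houdini camera parented to the Blend node.
    cam = obj.createNode('cam', 'render_cam')
    cam.setFirstInput(blend)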

And here are the screenshots of the settings with the relevant parameters highlighted:

21 Jan 2018 Update: It seems that leaving Weight 1 at 1 is a bad idea, as it messes up the near and far clipping of the camera. Set it to 0 to fix the issue while still inheriting the translation and rotation of the imported camera.

What’s the Reason?

It is simply a BAD IDEA to scale a camera since it will result in wacky rendering artifacts, especially with volumetrics like Pyro FX.

By scaling the imported camera with a null, all of its attributes will be multiplied by the new value.

Instead, we create a new camera in Houdini and retrieve the relevant attributes while maintaining the original scale when it was created.

Remember NOT to use the scaled imported camera for renders! It looks fine in the viewport but is a disaster when rendering, so always create a native Houdini camera (without scaling it, of course!).