Abusing 3D tracked data for stabilising purposes
Sometimes it is fun to use a tool for something other than its intended purpose, which in this case means stabilising footage with CameraTracker.
While I’m not going to run through a step-by-step guide to using CameraTracker in this post, I’ll lay out the general idea of how to reverse engineer the tracking information from CameraTracker to stabilise footage.
Since we’re on the topic of CameraTracker, I have written a post on tweaking CameraTracker settings for shots that are difficult to track (unless they are beyond redemption).
Why stabilise using CameraTracker?
Just imagine you have a shot where the camera is moving/panning with a talent who is walking around, and you need to roto the talent. The surroundings have enough information to be tracked using CameraTracker, and you notice that the camera translates/dollies/rotates a fair amount throughout the shot. The above side-by-side GIF demonstrates said scenario (the right side is the original shot).
The conventional method using a 2D tracker can be a pain if you need to track a particular marker that is overlapped by the talent several times in the shot, and offset tracking sometimes gives less than decent results. Plus, stabilising a shot using a 2D tracker will introduce a lot of distortion from poorly tracked data, which is not fun when rotoscoping.
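For context, a 2D stabilise boils down to something like the following sketch (a hypothetical helper of my own, not the Tracker node’s actual code): subtract the tracked position at a reference frame from every other frame, which is exactly why any jitter or drift in the track lands straight in the stabilised output.

```python
def stabilise_offsets(track, ref_frame=0):
    """Per-frame translate that pins the tracked 2D point to where it sat
    at ref_frame. Garbage in the track becomes visible warping/jitter out."""
    rx, ry = track[ref_frame]
    return [(rx - x, ry - y) for (x, y) in track]

# A track that drifts 2px right per frame: the stabilise offsets drift left.
track = [(100.0, 50.0), (102.0, 50.0), (104.0, 50.0)]
offsets = stabilise_offsets(track)
```

If the marker is occluded mid-shot and the track slips, those bad samples feed directly into the offsets, which is the distortion problem described above.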
If we can get a decent 3D solved camera from CameraTracker, we can project the original plate onto a card in 3D space, duplicate the camera, and pick a particular frame to lock it down on, stabilising the shot at that moment. In the above GIF, the red camera is the locked-down camera and the green camera is the original solved camera, which is used to project the plate onto the card.
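To make the geometry concrete, here is a toy pure-Python sketch of why this works (my own simplified pinhole model, not Nuke’s API; it assumes an axis-aligned camera with no rotation, which a real solve would also carry): a static card drifts on screen when viewed through the moving solved camera, but stays put when rendered through a single locked-down camera.

```python
def project(world_point, cam_pos, focal=50.0, haperture=24.576):
    """Toy pinhole projection: camera at cam_pos looking down -Z, no rotation.
    Returns normalised screen coordinates, or None if the point is behind
    the camera. Purely illustrative of the lockdown-camera idea."""
    x, y, z = (p - c for p, c in zip(world_point, cam_pos))
    if z >= 0:
        return None
    scale = focal / haperture  # focal-length / aperture scaling, Nuke-style
    return (scale * x / -z, scale * y / -z)

card_corner = (1.0, 1.0, -10.0)  # a point on the projection card

# The solved camera drifts sideways over three frames...
solved_cam = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
moving = [project(card_corner, c) for c in solved_cam]  # corner slides on screen

# ...but rendering through one locked-down camera keeps it fixed.
lockdown_cam = solved_cam[0]
locked = [project(card_corner, lockdown_cam) for _ in solved_cam]
```

In Nuke terms, the green camera drives the Project3D onto the card, and the red (locked-down) camera feeds the ScanlineRender.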
So can this method also smooth out the overall solved camera movement?
Yes! If you ever need a more accurate, 3D-space approach to smoothing camera movement, just filter the camera’s animation curves.
The same principle applies as in the method above: project the footage onto a card using the original solved camera, duplicate the camera, filter its animation to your liking, and make sure the filtered duplicate camera (and NOT THE ORIGINAL SOLVED CAMERA) is attached to the ScanlineRender.
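As a sketch of the filtering step, here is a simple box filter you could apply to a curve of keyframed values (a standalone Python illustration of the idea; in Nuke you would filter the duplicate camera’s translate/rotate curves, e.g. via the curve editor):

```python
def smooth_curve(values, radius=2):
    """Box-filter a list of per-frame values (e.g. a camera's translate.x
    keyframes). The averaging window is clamped at the ends of the curve."""
    out = []
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A one-frame camera bump gets spread out and damped.
bumpy = [0.0, 0.0, 10.0, 0.0, 0.0]
smoothed = smooth_curve(bumpy, radius=1)
```

A larger radius gives a smoother but floatier camera; too large and the filtered camera diverges enough from the solve that the projected card starts to distort noticeably.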
The following GIF shows a before-and-after comparison of a filtered camera animation.
There is a catch to this method, though
This approach will result in slightly softer footage due to the need to project onto a card. As usual, take care with your filtering and transform settings to avoid losing the sharpness of the original plate.
Another thing to consider is that since this approach requires a 3D scene output through Nuke’s ScanlineRender, it will take some time to render the stabilised projected footage.
To be honest, I hate how slow Nuke’s ScanlineRender is, but it seems to be fixed in the upcoming Nuke 9 release, so fingers crossed, as the ScanlineRender step will eat into your project’s allocated time.
Curious folks will notice that I didn’t mention anything about planar tracking. For me, planar tracking still has similar difficulties to a 2D tracker when you need to stabilise a shot. With the CameraTracker approach, we can utilise the tracked 3D camera movement to create a more accurate stabilisation (disregarding any distortion/motion blur already baked into the original plates).