Wednesday, February 5, 2014

Procedural Game Camera Asset

For the BYU 2014 Animated Short Owned, one of the long-term projects I committed to was the Houdini Digital Asset used to emulate the Game Camera for the video game featured in the film.


The Problem: The story of Owned revolves around a fighting-style video game. In order for the game to feel as real as possible, the camera needs to act the way veterans of such games would expect it to. The camera must be able to:
  1. Track the positions of the two characters
  2. Translate in the X, Y, and Z directions when the characters' positions change to keep them in frame
  3. Limit the translation towards the characters so they won't go out of frame
  4. Smooth its movement so that it won't react to movement unnaturally fast
  5. Visualize its path for artistic smoothing
  6. Allow for adjustments such as camera shake 

The Solution: Build a custom Houdini asset with a Houdini camera object in it and manipulate it with Python.

Tracking Character Positions: In order for the camera to know the positions of the two characters, my initial idea was to use the origin of each character model as the point to track. However, two problems quickly emerged: (1) The origin of each character was set at the character's feet, not at the chest, so the point was not always found at the center of the character. (2) More importantly, the characters were animated using Alembic animation data from a motion capture program, so the origin point did not move with the characters at all.

Instead, I discussed the problem with effects artists on the film who were facing the same issue. Together, we decided to create points that are constrained to key points on each character's model (e.g. head, hands, and feet). One of these points was assigned to the torso, and that was the one I ended up using.

Translating the Camera: Once I had the position data available to the asset, I was able to use it to determine the characters' displacement vertically and horizontally. For this film, I could be sure that the characters would stay on the same XY plane, but the method could easily be extended to allow for rotation, producing camera movement like we see in Tekken.

Once I had the displacement in the X and Y directions, the next task was to determine how much the camera needed to move based on that displacement data. The camera's X and Y positions were easily set to the midpoint of the characters' X and Y positions. The Z direction was more challenging: movement in Z depends heavily on the aspect ratio of the camera and on where on screen the director would like the characters to remain. To make that adjustable, I exposed the influence of each displacement as a parameter on the camera. Thus, the camera's displacement in the Z direction from the characters' positions ended up being:

Character X Displacement  *  X Displacement Influence

OR

Character Y Displacement  *  Y Displacement Influence

...whichever was greater.

Maximum Zoom Level: In limiting how far the camera could zoom in, I needed to consider the size of the character models themselves. However, no data was available to me to determine that, so I created a parameter to allow this limit to be adjusted by hand.
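The framing logic above can be sketched in plain Python outside of Houdini. This is a minimal sketch of the idea, not the asset itself; the function and parameter names (x_influence, y_influence, min_distance) are my own, standing in for the parameters promoted on the HDA:

```python
def camera_transform(pos_a, pos_b, x_influence=1.0, y_influence=1.0,
                     min_distance=5.0):
    """Given two character positions (x, y, z tuples) on a shared XY
    plane, return the camera's (x, y, z) position."""
    ax, ay, _ = pos_a
    bx, by, _ = pos_b

    # X and Y: center the camera on the midpoint of the two characters.
    cam_x = (ax + bx) / 2.0
    cam_y = (ay + by) / 2.0

    # Z: pull back by whichever weighted displacement is greater, but
    # never closer than min_distance (the maximum zoom level).
    x_disp = abs(ax - bx) * x_influence
    y_disp = abs(ay - by) * y_influence
    cam_z = max(x_disp, y_disp, min_distance)

    return (cam_x, cam_y, cam_z)
```

Per frame, the characters' tracked torso positions would be fed in and the result written to the Houdini camera's translate channels.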

All of the controls for adjusting the tracking of the characters were grouped together into the "Tracking" tab.

Tracking Controls
Smoothing the Movement: Once the camera was driven directly by this data, it became clear very quickly that a linear interpolation of the tracking data did not give us appealing camera movement. While the camera did track the characters accurately and kept them in frame, at times it moved faster than any human operator could, and it tended to "jerk" around quite often.

Using Python by itself to smooth the camera movement would prove difficult, since in a SOP context Houdini does not store position data from previous or future frames unless it is reading from baked animation data such as an Alembic. So, I decided to use a CHOP network to smooth the movement. There, I used a combination of a Box Filter and a De-Spike Filter to gain control over the movement of the camera, and I promoted their parameters so the influence of the smoothing could be adjusted.
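To illustrate what those two filters do, here is a rough pure-Python analogue. The actual asset used Houdini's CHOP filters; this sketch only approximates their behavior, with the threshold and window-width parameters standing in for the promoted CHOP parameters:

```python
def despike(samples, threshold):
    """Replace any sample that differs from both of its neighbors by
    more than `threshold` with the neighbors' average."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        prev, nxt = samples[i - 1], samples[i + 1]
        if abs(samples[i] - prev) > threshold and abs(samples[i] - nxt) > threshold:
            out[i] = (prev + nxt) / 2.0
    return out

def box_filter(samples, width):
    """Moving average over a window of `width` samples, with the
    window clamped at the ends of the channel."""
    out = []
    half = width // 2
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        window = samples[lo:hi]
        out.append(sum(window) / len(window))
    return out

# One spiked position channel: de-spike first, then box-filter.
raw = [0.0, 0.1, 5.0, 0.3, 0.4, 0.5]
smoothed = box_filter(despike(raw, 1.0), 3)
```

Running the de-spike pass first matters: a single bad frame would otherwise be smeared across the whole box-filter window instead of being removed.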

Visualizing the Camera Path: One challenge that quickly cropped up was that, while tuning the smoothing, it was very hard to tell what influence the smoothing algorithm was having on the camera without actually playing through the animation. This quickly became tedious, as one had to resort to trial and error to find the right settings. If the camera path were somehow visible, directing the camera's movement would be a lot easier.

Now that the camera's movement data was being processed through a CHOP network, though, it wasn't too hard to take the position data from the CHOP and export it back to the SOP context. There I was able to draw a line through those positions, effectively visualizing the path that the camera would take. This path would update instantly as the influence of the smoothing tools was adjusted.

The controls for smoothing and visualizing the camera path were wrapped up into the "Smoothing" tab.

De-Spike Filter Controls
Box Filter Controls
Artistic Camera Shake, etc.: After this was done, we had very convincing camera movement, but we weren't satisfied yet. To help sell the game as real, we wanted to add other movements you might see in a game camera, such as camera shake during intense action or violent movement. These moments would happen under various conditions, so for those special moments I added custom rotation and translation controls that layer on top of the tracking to produce those effects.
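The shake layer can be thought of as an extra offset added on top of the tracked position. In the asset, the rotation and translation offsets were keyed by hand for each moment; as an illustration only, here is a hedged sketch using a decaying, per-frame-seeded random offset (all names here are hypothetical):

```python
import random

def shake_offset(frame, trigger_frame, amplitude=0.5, decay=0.85, seed=0):
    """Return an (x, y) translation offset for a camera shake that
    starts at `trigger_frame` and decays each frame afterward."""
    if frame < trigger_frame:
        return (0.0, 0.0)
    # Seed per frame so scrubbing the timeline is repeatable.
    rng = random.Random(seed + frame)
    strength = amplitude * decay ** (frame - trigger_frame)
    return (rng.uniform(-strength, strength),
            rng.uniform(-strength, strength))
```

The offset is simply summed with the tracked camera position, so the shake never interferes with keeping the characters in frame.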

The controls for these effects, along with other necessary camera controls, were promoted to the "Camera" and "Lens" tabs.

Camera Controls
Lens Controls