Computer-generated fur softly lit with flickering candlelight
December 25, 2008 8:59 PM

How automated is Pixar-caliber computer animation these days?

I just watched WALL-E, and was blown away by the lighting effects, the depth-of-field of the 'camera,' etc. As an admirer of good CGI who knows little about the ins and outs of this technology, I ask you: how would you experts out there describe its current capabilities? What are the current weaknesses and frustrations for animators? What are some pivotal innovations just on the horizon?

To what extent are scenes crafted frame by frame by an animator, as they were back when they were hand-drawn, and to what extent is it a matter of designing a creature, determining how it moves, and then letting it loose?
posted by umbĂș to Technology (14 answers total) 8 users marked this as a favorite
 
Well, for one thing, all the actual "rendering" will be done by computer. Animators specify a 3-D model and describe where it goes and how it moves. Rather than imagining hand-drawn animation, think of claymation or old-school stop-motion animation. The animators design a model and do a sequence of poses called "keyframes." The computer then interpolates new frames between those keyframes to determine where the model should be in the frames that weren't explicitly posed.

For high-end stuff, they'll probably have tons and tons of keyframes. You could think of doing a keyframe as the equivalent of an old-school animator drawing a frame, but it's more like posing a model for a stop-motion animation: the hard work is designing the model in the first place; posing it may not take that long.
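To make the in-betweening concrete, here's a toy Python sketch of the simplest possible version: straight linear interpolation between keyframed values. Production packages use spline curves with easing rather than straight lines, and all the numbers here are made up.

```python
# Minimal keyframe interpolation sketch: linear blend between posed values.
# Real animation packages use spline curves (Bezier, TCB) per channel.

def interpolate(keyframes, frame):
    """keyframes: sorted list of (frame_number, value) pairs.
    Returns the value at `frame`, linearly blending between the
    surrounding keyframes (the in-between frames nobody posed by hand)."""
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # 0.0 at f0, 1.0 at f1
            return v0 + t * (v1 - v0)
    raise ValueError("frame outside keyframed range")

# An elbow joint posed only at frames 1 and 9; frames 2-8 are computed.
elbow_angle = [(1, 10.0), (9, 90.0)]
print([round(interpolate(elbow_angle, f), 1) for f in range(1, 10)])
# [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0]
```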

When you see huge crowds, though, those models are posed by a computer running AI routines. This was pioneered in the Lord of the Rings movie.
posted by delmoi at 9:29 PM on December 25, 2008 [2 favorites]


"the depth-of-field of the 'camera'"

Oddly enough, when a 3-D scene is rendered, there is also a camera object that is "animated" (or at least given movement) throughout the scene. This camera can have numerous settings that correlate directly with a real-world camera (focal length, aperture, focus distance), which the computer then uses in its calculations.
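For the depth-of-field part specifically, a renderer can use the same thin-lens math as a physical camera. Here's a toy Python sketch of the standard circle-of-confusion formula; the formula is textbook optics, but the function name and sample numbers are mine, not from any particular package.

```python
# Thin-lens circle-of-confusion sketch: how a virtual camera's settings
# turn into per-object blur. Standard optics; illustrative naming only.

def circle_of_confusion(focal_len, f_stop, focus_dist, subject_dist):
    """All distances in metres; returns blur-circle diameter in metres.
    Objects exactly at focus_dist return 0 (perfectly sharp); everything
    nearer or farther blurs in proportion."""
    aperture = focal_len / f_stop                      # lens opening diameter
    return (aperture
            * abs(subject_dist - focus_dist) / subject_dist
            * focal_len / (focus_dist - focal_len))

# A 50mm lens at f/2.8 focused 2m away: how blurry is a prop at 10m?
print(circle_of_confusion(0.050, 2.8, 2.0, 10.0))  # ~0.00037 m blur circle
```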
posted by niles at 9:50 PM on December 25, 2008 [1 favorite]


Things like the movement of fur and the draping of cloth are algorithmic. Complex lighting (e.g. flickering candles) is a mix; the artist controls the source, and algorithms figure out how it affects the lighting of every other object in the scene.

The amount of work involved in creating keyframes depends enormously on what's going on. In extreme cases it can be necessary to animate frame by frame, but that rarely lasts for long. In many kinds of sequences, keyframes are every 4-8 frames, and it's not uncommon for them to be even further apart than that.

Also, the amount of work that's involved in creating a keyframe (as an incremental change from the previous keyframe) may not be all that great.

If you haven't messed with any kind of CGI animation program, then your intuition about just what's involved is probably completely wrong. If you're really curious, the thing to do is to pick up such a package and play with it. In particular, if you're curious to know what's involved in animating characters, then you should try Poser. It's pretty accessible to newbies, and while it's not top-of-the-line, it's within rock-throwing distance of being so, and it'll let you see enough to be able to figure out the rest of it.

I just now googled for it and noticed that there's a major sale on it: $99. That's a lot less than I paid for my copy.
posted by Class Goat at 10:04 PM on December 25, 2008 [1 favorite]


delmoi: When you see huge crowds, though, those models are posed by a computer running AI routines. This was pioneered in the Lord of the Rings movie.

I believe the first movie use of AI to generate individual "random" actions & movement paths in crowds was actually for the penguin march scenes in Batman Returns, about 10 years before LoTR.
posted by Pinback at 10:59 PM on December 25, 2008


I believe the first movie use of AI to generate individual "random" actions & movement paths in crowds was actually for the penguin march scenes in Batman Returns, about 10 years before LoTR.

Yes, but LOTR revolutionized the process with Massive.
posted by dirtynumbangelboy at 11:17 PM on December 25, 2008


A Pixar-level production takes millions of man-hours, and those people's fingerprints are all over it.

A computer renders the model, but a human models the shape, colors it, lights it, animates the different parts, and applies the special effects, such as wind or hair or cloth or fire.

Unlike cel animation, there's a lot more pre-production making the individual models, since they're reused throughout the production. Once you have a textured (painted) Wall-E, you don't have to re-texture him every frame. But you do need to do some work whenever he gets dirty, or splashed, or takes some damage, and sometimes in different lighting for different effects. And unlike most cel animation, different departments take care of different areas, much like real film production. There's a whole lighting department, a texturing department, and even a special-effects department. And the person who modeled a character is rarely the person who animates it.

All of these things are a bit hybrid. On one hand, no, an artist doesn't place every hair on Sulley's body and doesn't tell them how to wave in each frame. But they could, and sometimes do, take that kind of control when a performance demands it.

Animating a lead character is like operating a marionette with several hundred strings, while background and minor characters may have only a dozen or fewer. Fortunately you don't have to do the performance in real time. The animator pulls or pushes a string and the computer fills in what it thinks is best; then the human goes back in and fixes what the computer didn't do correctly. I spent several years doing 3D animation, and it's gotten to the point where a budget animator can pretty much just draw a line and say "computer, walk character from point A to point B," and it will do it. It will also look like crap, because the computer isn't smart enough to put any emotion into it. Sure, it can put feet on the ground and make them trace a path, but making a character act with emotion and weight takes skill.
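Here's a toy Python sketch of that "walk from point A to point B" idea: the computer spaces the body along the line with an ease-in/ease-out curve, which is about all the "weight" you get for free. Everything here is invented for illustration; real tools layer walk cycles, foot planting, and hand-tuned performance on top.

```python
# Sketch of the 'walk from A to B' idea: the computer spaces the body
# along a line over N frames. Easing at the endpoints is the bare
# minimum of weight; everything expressive is layered on by hand.

def smoothstep(t):
    """Ease curve: starts and ends with zero velocity."""
    return t * t * (3 - 2 * t)

def walk_positions(a, b, num_frames):
    """Return one (x, y) body position per frame from point a to point b."""
    ax, ay = a
    bx, by = b
    frames = []
    for i in range(num_frames):
        t = smoothstep(i / (num_frames - 1))
        frames.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return frames

for x, y in walk_positions((0, 0), (4, 0), 6):
    print(f"({x:.2f}, {y:.2f})")   # slow start, fast middle, slow stop
```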

And don't forget that someone needed to program all of those computers. Many of those people are freaking geniuses and proper scientists. For example, the way light interacts with hair is still not understood well enough to simulate accurately at any reasonable speed. Pixar is one of the pioneers and leaders in visual rendering, and it still spends millions annually developing the technology further.
posted by Ookseer at 11:23 PM on December 25, 2008 [1 favorite]


The computers used in CG animation have one fatal flaw - they can only do exactly what you tell them to do. Every cool animated gag, lighting effect, and camera move you see in a film like Wall-E gets there because a small army of artists labors to put it there.


Even though the computer does indeed interpolate quite a bit of animation data, there's still a large amount of "cheating" and "art directing" that goes on:

In a sequence of shots that are part of the same scene, each individual shot still needs specific lighting tweaks; you usually can't just share the same lighting data across similar shots (see the sketch after these examples).

For a climactic water splash, which would generally start as a complicated procedural effect, individual water droplets may need to be hand-animated.

Character animation is still a labor-intensive task in CG. The more complicated a character is, the more emotion it can convey, but it will take longer to manage all those settings and controls.
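To picture those per-shot lighting tweaks, here's a toy Python sketch: one base lighting setup shared by a sequence, with small overrides layered per shot. The structure, names, and numbers are invented for illustration, not any studio's actual pipeline.

```python
# Toy illustration of per-shot lighting tweaks: the sequence shares one
# base lighting setup, and each shot layers its own overrides on top.

BASE_LIGHTING = {
    "key_light":  {"intensity": 1.0, "color": (1.0, 0.95, 0.9)},
    "fill_light": {"intensity": 0.3, "color": (0.8, 0.85, 1.0)},
}

SHOT_OVERRIDES = {
    "sq01_sh010": {},  # wide shot: base lighting works as-is
    "sq01_sh020": {"key_light": {"intensity": 1.2}},   # close-up, hotter key
    "sq01_sh030": {"fill_light": {"intensity": 0.15}}, # darker reverse angle
}

def lighting_for_shot(shot):
    """Per-shot values win over the sequence-wide base setup."""
    rig = {name: dict(settings) for name, settings in BASE_LIGHTING.items()}
    for light, overrides in SHOT_OVERRIDES.get(shot, {}).items():
        rig[light].update(overrides)
    return rig

print(lighting_for_shot("sq01_sh020")["key_light"]["intensity"])  # 1.2
```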


So in short - there is a lot of automation going on, but it's greatly finessed by hand. That's why these movies still take about three years and roughly $100 million to make. ;)
posted by shino-boy at 11:26 PM on December 25, 2008


Game developer's view, but there are a lot of similarities -

Depends on when you check in to the production. It starts out very non-automated, but gets smoother as things go.

It's a lot more work-intensive than you would think, though.

Start at the beginning and assume you want to make a chair lit by candlelight.

First, you need a chair model. You need to sculpt the model (ZBrush or Mudbox, or whatever the hell they use for the high-poly modeling).

You need textures. That's the physical material for the chair, so it looks chairlike. Very time-intensive.

Then you need to figure out how it's all supposed to go together. Wood is different than cloth, which is different than plastic. This stuff can come off the shelf, but it's still something that has to be done.

Then lights. You need to create a lighting model that matches your art style. How do you want the photons to bounce off the materials? What post-processing effects do you want? Does the light animate? Want fire? You have to make fire algorithms or get one off the shelf.
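Since the thread started with candlelight, here's a toy Python sketch of an animated light source: a base intensity plus time-varying flicker. Real off-the-shelf fire algorithms use layered noise (e.g. Perlin); a few summed sines stand in for it here, and all the constants are arbitrary.

```python
# Toy candle-flicker sketch: the artist sets the source (base brightness,
# flicker depth), and an algorithm animates it over time.

import math

def candle_intensity(t, base=1.0, depth=0.25):
    """Brightness of the flame at time t (seconds). Several unrelated
    sine frequencies crudely approximate the irregular wobble of a flame."""
    flicker = (math.sin(t * 7.3) + math.sin(t * 13.1) + math.sin(t * 23.7)) / 3
    return base * (1.0 + depth * flicker)

# Sample the flame once per frame at 24 fps for half a second.
for frame in range(12):
    print(f"frame {frame:2d}: {candle_intensity(frame / 24):.3f}")
```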

After you've done all of this, you render it. Depending on the size of the frames, this could take a few days or more.

The cool thing is that once you make a chair, you can make more variants more easily.

Characters are a lot more complex. You need the surface materials and the model, but also a skeleton. How do the bones link? How do the muscles work? You need to make a face, and put in bones to drive all of the face movements.

This is infrastructure stuff, things that enable characterization and nuance. Once you get all of the interior bits hooked up, you can go to town on the characterization.

Walk cycles, emotes, facial animations. You need to hand-animate the character or use motion-capture data. Either way, there's a lot of tweaking to get it to work.

It's not as grueling as it used to be, because there's a logical hierarchy of bones: move a wrist and the rest of the arm moves logically. The linkage of bones and muscles simplifies a lot of things, but giving characters a sense of identity is still extremely time-intensive.
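That "move a wrist and the arm follows" behavior is inverse kinematics. Here's a minimal two-bone solve in Python using textbook law-of-cosines math; the bone lengths and target are made-up numbers, and real rigs add joint limits, pole vectors, and so on.

```python
# Two-bone inverse kinematics sketch: the animator drags the wrist,
# the computer finds the shoulder and elbow angles that reach it.

import math

def two_bone_ik(target_x, target_y, upper=1.0, fore=1.0):
    """Shoulder at the origin. Returns (shoulder_angle, elbow_angle) in
    radians placing the wrist at the target, or None if out of reach."""
    dist = math.hypot(target_x, target_y)
    if dist > upper + fore or dist < abs(upper - fore):
        return None  # target unreachable with these bone lengths
    # Law of cosines gives the interior angle at the elbow...
    cos_elbow = (upper**2 + fore**2 - dist**2) / (2 * upper * fore)
    elbow = math.pi - math.acos(cos_elbow)   # forearm rotation vs. upper arm
    # ...and the shoulder aims at the target, offset by the elbow bend.
    cos_inner = (upper**2 + dist**2 - fore**2) / (2 * upper * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(cos_inner)
    return shoulder, elbow

angles = two_bone_ik(1.2, 0.8)
print([round(math.degrees(a), 1) for a in angles])  # approx [-10.2, 87.7]
```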


It's sorta like being god. You need to make the pieces and then make the tools to use the pieces, but after that it's more straightforward.

So pretty damned intensive.
posted by Lord_Pall at 1:14 AM on December 26, 2008


Shortly after Toy Story 2 came out, I went to see a talk at the University of Washington. One of the profs had gone off to work at Pixar, and he came back to give a tech talk and do a little recruiting. He talked a little bit about his first project at Pixar, which was to help streamline the process of rigging character models for movement; in particular, I think he was focused on faces. They were trying to ramp up from releasing one film every two years to releasing one film a year. They were bringing on a lot of artists, but they were also investing heavily in tools to make those artists more efficient.

Before, it took X hours to rig a face. He was supposed to help cut that time significantly. Apparently his work was well received by the animators and they put it to good use, but it still took X hours to rig a face, because the animators took the opportunity to rig in the ability to create even more finely nuanced facial expressions.

Hearing that reminded me of a project I did to bring up a render farm for a small 3D studio. One of the artists told me that the new performance was great, but that it wouldn't be long before rendering out a 30-second clip took just as long as it had before, because they'd end up adding even more detail.

I'm sure that progress has been made, but I think once people get used to something taking a certain amount of time, they tend to schedule based on their past experience, so a large share of the potential efficiency improvements ends up becoming quality improvements.
posted by Good Brain at 2:01 AM on December 26, 2008 [1 favorite]


On the Wall-E extras disc is a documentary on Pixar. It was great. My husband and I got enthralled watching it, and the 8-year-old got pissed. Be sure to check it out. It's over an hour long but fascinating.
posted by pearlybob at 6:33 AM on December 26, 2008


Things like lighting, liquids, fire, hair, and particle effects (e.g., smoke, dust) are the areas where CGI blows traditional animation out of the water. These things can be modeled very accurately using algorithms, which are obviously no problem for Pixar's computers. With lighting, for example, the algorithms actually simulate the emission, absorption, and re-emission of light among the various surfaces and materials the light interacts with; humans can only crudely approximate these effects by hand. So, for example, people who can appreciate such things were probably blown away by any of the scenes in Ratatouille that involved water. I certainly was.

Basically, animators control the "parameters" of the various objects they're modeling, and the computers generate the final result. For example, the parameters of hair might be density, color, wetness, and length (which might be specified with, e.g., a mean and standard deviation of hair length). For fire, the parameters would be temperature (which determines flame color) and wind strength and direction (which determine how the flame flickers). Putting these two together, an "output" of the flame might be a function describing the brightness and color of the light it emits in all directions, which would be one input to the algorithm that determines how the hair actually looks in the scene. Of course, the flame's flicker and brightness are functions of time (the flame is dynamic), so these calculations are done over a specified length of time at certain intervals (e.g., once every 1/30th of a second).
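As a toy illustration of that parameters-in, function-out idea, here's a Python sketch. The names and numbers are my own inventions, and the color ramp is a very crude stand-in for real black-body radiation curves.

```python
# Toy version of the parameters -> output idea: a flame described only
# by temperature and wind, producing the light it casts at an instant t.

import math

def flame_light(temp_k, wind, t):
    """Returns (r, g, b, intensity) for one instant t (seconds).
    temp_k: flame temperature in kelvin (a parameter the artist sets).
    wind:   0..1 strength; stronger wind means deeper flicker."""
    # Crude color ramp: ~1500 K reddish-orange up to ~2000 K yellow-white.
    hot = min(max((temp_k - 1500.0) / 500.0, 0.0), 1.0)
    r, g, b = 1.0, 0.4 + 0.5 * hot, 0.1 + 0.7 * hot
    # Wind-driven flicker, sampled wherever the renderer needs it.
    intensity = 1.0 + wind * 0.3 * math.sin(t * 11.0)
    return r, g, b, intensity

# Sample the flame once every 1/30th of a second, as described above.
for frame in range(4):
    print(flame_light(1850, 0.4, frame / 30))
```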

Hopefully that wasn't too geeky for you.
posted by mpls2 at 11:37 AM on December 26, 2008


I believe the first movie use of AI to generate individual "random" actions & movement paths in crowds was actually for the penguin march scenes in Batman Returns, about 10 years before LoTR.

July '87: "Flocks, Herds, and Schools: A Distributed Behavioral Model" by Craig W. Reynolds of Symbolics.

"In cooperation with many coworkers at the Symbolics Graphics Division and Whitney / Demos Productions, we made an animated short featuring the boids model called Stanley and Stella in: Breaking the Ice. This film was first shown at the Electronic Theater at SIGGRAPH '87."

and

"The 1992 Tim Burton film Batman Returns ... contained computer simulated bat swarms and penguin flocks which were created with modified versions of the original boids software developed at Symbolics."

(sorry, I'm a LISP nerd..)
posted by mrbill at 12:26 PM on December 26, 2008


Response by poster: Thanks, all. Seriously, that gives me a much better sense of the process. And no, the answers weren't too geeky at all.

I had marked around 95% of the answers as best answers, but then I went back and unmarked them. I figured it isn't very helpful if basically all of them are marked.

Keep 'em coming!
posted by umbĂș at 8:46 PM on December 26, 2008


I found the Stanley and Stella in: Breaking the Ice video on YouTube.
posted by mrbill at 11:41 PM on December 28, 2008

