The animation system is a fairly straightforward affair. It is event driven (expEvent) and is encapsulated by the expHead class; the expHead::event member function is where all events are processed. I'd like to focus on the differences between this animation system and others I've seen.
Animations consist of a static vector of channels called an Expression. The number of channels in an Expression is determined at init time by adding the number of muscles to the number of additional channels (currently the additional channels are hard coded). Each Expression is a facial pose, which can be transitioned to or composited with another facial pose. The animation system is in effect a state machine: every new animation event transitions from the previous one. It has no features for automatically looping animations, because I've heard "walk cycle" all the time but never "smile cycle."
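As a rough sketch, an Expression can be thought of as a fixed-length vector of channel weights, and a transition as a blend between the current pose and a target pose. This is an illustration only, not the actual expHead code; the names Expression and blend, and the linear ramp, are my assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: an Expression is one weight per channel
// (one per muscle, plus the hard-coded extra channels).
using Expression = std::vector<float>;

// Blend from `current` toward `target`; t in [0,1] is the transition phase.
// A real system would drive t from the event's duration each frame.
Expression blend(const Expression& current, const Expression& target, float t)
{
    assert(current.size() == target.size());
    Expression out(current.size());
    for (std::size_t i = 0; i < current.size(); ++i)
        out[i] = current[i] * (1.0f - t) + target[i] * t;
    return out;
}
```

Because each new pose is blended from whatever the current pose is, no special-case code is needed to interrupt a transition in progress.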
The second part of the toolbox is a generalized vertex compositor responsible for deforming the mesh in an order defined within the expHead class: the eyelids are rotated, then the muscles contracted, then the jaw deformed. This functionality is implemented via the ExpMeshAdapter interface. The adapter is an abstract interface that allows an expression to talk to a mesh without knowing how it is structured in memory. The overhead from the interface is not totally negligible: a virtual function call is made for every get- and set-vertex operation, and your mesh class needs to inherit from the interface. (There is a way around this, but it involves closely coupling all the muscles to your mesh representation.)
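A minimal sketch of what such an adapter might look like follows. The real ExpMeshAdapter's method names and signatures may differ; MeshAdapter, getVertex, and setVertex here are assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical adapter in the spirit of ExpMeshAdapter: the deformers
// read and write vertices through this interface without knowing the
// mesh's memory layout.
class MeshAdapter {
public:
    virtual ~MeshAdapter() = default;
    virtual std::size_t vertexCount() const = 0;
    virtual Vec3 getVertex(std::size_t i) const = 0;         // one virtual call per read
    virtual void setVertex(std::size_t i, const Vec3& v) = 0; // and one per write
};

// A concrete mesh only needs to implement the accessors.
class SimpleMesh : public MeshAdapter {
    std::vector<Vec3> verts_;
public:
    explicit SimpleMesh(std::vector<Vec3> v) : verts_(std::move(v)) {}
    std::size_t vertexCount() const override { return verts_.size(); }
    Vec3 getVertex(std::size_t i) const override { return verts_[i]; }
    void setVertex(std::size_t i, const Vec3& v) override { verts_[i] = v; }
};
```

The per-vertex virtual-call cost mentioned above is the price of this decoupling; the tightly coupled alternative trades it away by letting the muscles touch the vertex storage directly.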
With just the animation system and the vertex compositing functionality you could make a fairly straightforward real-time morpher.
The core of the system, and what makes this a facial animation system, is the anatomical model. A virtual base class called Muscle acts as the interface to all the muscles. Muscle defines properties and operations such as a mesh to act on, a name, an activate method, and a draw function.
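A sketch of such an interface, with one example implementation, might look like the following. The exact signatures are assumptions, and MorphMuscle is an illustrative name, not the library's:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical Muscle interface: a name, an activation method,
// and a draw hook, as described above.
class Muscle {
public:
    virtual ~Muscle() = default;
    virtual const std::string& name() const = 0;
    virtual void activate(float amount) = 0;  // deform the target mesh
    virtual void draw() const {}              // optional debug visualization
};

// Example implementation: a morph-channel style muscle that adds stored
// per-vertex offsets scaled by the activation amount.
class MorphMuscle : public Muscle {
    std::string name_;
    std::vector<Vec3>& verts_;   // vertices this muscle deforms
    std::vector<Vec3> offsets_;  // per-vertex morph deltas
public:
    MorphMuscle(std::string name, std::vector<Vec3>& verts, std::vector<Vec3> offsets)
        : name_(std::move(name)), verts_(verts), offsets_(std::move(offsets)) {}
    const std::string& name() const override { return name_; }
    void activate(float amount) override {
        for (std::size_t i = 0; i < verts_.size(); ++i) {
            verts_[i].x += offsets_[i].x * amount;
            verts_[i].y += offsets_[i].y * amount;
            verts_[i].z += offsets_[i].z * amount;
        }
    }
};
```

Because every muscle type sits behind the same interface, the compositor can iterate over a heterogeneous list of muscles and activate each one without caring how it deforms the mesh.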
The following are the current muscle types, grouped by functionality:
- Muscles that squeeze
- Muscles that act like a 3ds max morph channel
In addition to the muscles there are parameterized eyelids and a weighted jaw. These could also have used the same interface as the muscles, but because the export pipeline and the order in which they execute are important, I've separated out their functionality.
One big thing to be aware of: some of the muscles (forehead, limited) and the parameterization of the eyelids expect the face's plane of symmetry to be the yz plane, i.e. the right eye is at [-x, y, z] and the left eye is at [+x, y, z].
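Under that convention, mirroring a point between the two sides of the face is just a sign flip on x. A trivial sketch (mirrorYZ is an illustrative name, not from the library):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Reflect a point across the yz plane: negate x, leave y and z alone.
// With the face built to this convention, left-side data can be derived
// from right-side data (and vice versa) by this reflection.
Vec3 mirrorYZ(const Vec3& p) { return {-p.x, p.y, p.z}; }
```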
The two demos currently being released are the following:
- a playback/reaction program that shows off the animation system, the event system, and using the system with an FSM.
- a text-to-speech program that demonstrates driving the anatomical model in real time without using the animation system. (This program depends on the MS Speech DLLs to run.)