Automated character performance from audio. SmartBody combined with Cerebella. The input is an audio performance and a transcription of the utterance; the output is an automated 3D character performance. Cerebella analyzes the sentence structure and meaning, then selects appropriate gestures and timings for SmartBody to process. SmartBody coarticulates gestures by holding them, combining them, or dropping those that cannot be performed within the available time.
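A minimal sketch of the coarticulation idea described above, assuming a simple schedule model (not SmartBody's actual implementation): overlapping gestures are truncated ("held" short) when there is enough time, and dropped entirely when there is not.

```python
# Illustrative sketch only: the data layout and threshold are assumptions.
MIN_STROKE = 0.4  # assumed minimum time (s) needed to perform a gesture


def coarticulate(gestures):
    """gestures: list of (name, start, end) sorted by start time.

    Returns the schedule after truncating overlapping gestures or
    dropping those that cannot fit in the remaining time.
    """
    result = []
    for name, start, end in gestures:
        if result and start < result[-1][2]:
            prev = result[-1]
            # Not enough time left to perform this gesture: drop it.
            if end - prev[2] < MIN_STROKE:
                continue
            # Otherwise cut the previous gesture short at the overlap.
            result[-1] = (prev[0], prev[1], start)
        result.append((name, start, end))
    return result


# The 0.2 s 'shrug' cannot fit after 'point' ends, so it is dropped.
schedule = coarticulate([("beat", 0.0, 1.0), ("point", 0.8, 2.0), ("shrug", 1.9, 2.1)])
```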
Locomotion from motion capture. SmartBody’s example-based locomotion system using data from a motion capture session. This locomotion system is built from 13 different motions, with 6 of them mirrored from one side (left turns) to the opposite side (right turns), for a total of 19 motions.
Retargeting to arbitrary models. SmartBody’s mocap locomotion system retargeted to a Mixamo character (www.mixamo.com). The SmartBody Python API contains a retarget() method that allows you to use many models and skeletons in SmartBody.
Path Following. SmartBody’s path following capability. A character follows a user-specified path subject to speed constraints, including a minimum and maximum speed. In this case, the character is instructed to maintain a minimum walking speed, which means that sharp turns are not followed exactly. If the character were allowed to slow down enough, the path could be followed exactly.
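The speed-constraint behavior above can be sketched with two small helpers (assumed logic for illustration, not SmartBody's implementation): the commanded speed is clamped to a [min, max] range, and a corner can only be followed exactly if its turning radius is reachable at the minimum speed given a lateral acceleration limit.

```python
def clamp_speed(desired, min_speed, max_speed):
    """Clamp the commanded locomotion speed to the allowed range."""
    return max(min_speed, min(max_speed, desired))


def can_follow_corner(turn_radius, min_speed, max_lateral_accel):
    """A turn of radius r is followable at speed v when r >= v**2 / a.

    If the minimum speed forces a larger turning radius than the path
    corner provides, the character cuts the corner instead.
    """
    return turn_radius >= (min_speed ** 2) / max_lateral_accel
```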
Path Following While Jogging. The character is told to jog around the path and attempts to follow it as accurately as possible given his speed constraints.
Crowds. 50 characters using the SteerSuite steering system (http://www.magix.ucla.edu/steersuite/) integrated into SmartBody. The green circles indicate the locomotion targets, the blue lines indicate the paths to be followed, and the red lines indicate the relationships between characters used for dynamic obstacle avoidance.
Gazing. Gazing examples. Note that the gaze can engage different parts of the character: only the eyes, eyes and neck, eyes/neck and chest, or eyes/neck/chest and waist (as shown here).
Locomotion. Example-based locomotion using 19 examples, including forward movement (walk, jog, run, etc.), turning movement (turning in place, turning while walking, turning while running) and strafing movement (sideways walking, sideways running). Note that no IK is used, and the parametric space is determined automatically by analyzing the data.
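One common way to blend example motions, sketched here under assumed details (SmartBody builds its parameter space automatically from the data; the two-dimensional parameterization and inverse-distance weighting are illustrative): each example is placed in a parameter space such as (forward speed, turn rate), and a requested point is synthesized by weighting nearby examples.

```python
import math


def blend_weights(examples, target, eps=1e-9):
    """examples: {name: (speed, turn_rate)}; returns normalized blend weights."""
    dists = {name: math.dist(point, target) for name, point in examples.items()}
    # An exact match gets all the weight.
    for name, d in dists.items():
        if d < eps:
            return {m: (1.0 if m == name else 0.0) for m in examples}
    # Otherwise weight examples by inverse distance and normalize.
    inv = {name: 1.0 / d for name, d in dists.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```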
Speech. Automated lip synchronization and gesturing. The audio track is analyzed to determine the utterance. The utterance is transformed into phonemes and visemes, then head movement and gesturing are automatically added based on the syntax and semantics of the utterance. Note that the only input is the audio track – the motion was added automatically.
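The phoneme-to-viseme step above can be illustrated with a toy lookup table; the phoneme labels and viseme names here are assumptions for illustration, not SmartBody's actual tables.

```python
# Assumed mapping for illustration: a few ARPABET-style phonemes to
# coarse mouth shapes.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "IY": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "dental", "V": "dental",
}


def visemes_for(phonemes):
    """Map a phoneme sequence to viseme targets; unknowns fall back to neutral."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]
```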
Reaching. Example-based reaching. The blue spheres represent reaching examples. The green dots represent reaching examples interpolated from the blue sphere examples. The closest examples to the target position are interpolated, then IK is used to fine-tune the reaching target location.
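The pipeline above can be sketched as follows, with assumed details (k-nearest selection and inverse-distance weighting are illustrative, not SmartBody's exact method): blend the nearest example end-effector positions, then let an IK pass close the remaining gap.

```python
import math


def interpolate_reach(examples, target, k=3):
    """examples: list of (x, y, z) end-effector positions from example motions.

    Returns the blended position and the residual distance an IK pass
    would still need to correct.
    """
    nearest = sorted(examples, key=lambda p: math.dist(p, target))[:k]
    weights = [1.0 / (math.dist(p, target) + 1e-9) for p in nearest]
    total = sum(weights)
    blended = tuple(
        sum(w * p[i] for w, p in zip(weights, nearest)) / total for i in range(3)
    )
    residual = math.dist(blended, target)  # left for IK to fine-tune
    return blended, residual
```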
Grasping. Example-based reaching with grasping. The character reaches for an object using a set of sitting reaching examples. The hand changes from its current pose to a grasping pose. When contact with the object is detected, individual fingers stop moving toward the grasping pose. The effect is that the character can pick up objects of varying shapes and sizes without penetration.
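A minimal sketch of the finger-closing rule described above (the per-finger "curl" parameter and step size are assumptions for illustration): each finger advances toward the grasp pose and freezes once it is in contact, so the hand conforms to objects of different sizes.

```python
def close_fingers(curl, grasp_curl, in_contact, step=0.1):
    """Advance each finger's curl toward its grasp pose unless it is in contact.

    curl, grasp_curl: per-finger curl amounts in [0, 1].
    in_contact: per-finger flags set by collision detection.
    """
    return [
        c if hit else min(g, c + step)
        for c, g, hit in zip(curl, grasp_curl, in_contact)
    ]
```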
Grasping 2. Hand configuration during grasping is determined with a heuristic based on the shape of the object. The character tries to maintain ‘natural’ hand configurations during grasping.
Interactive Reaching. Interactive example-based reaching. Note that the character automatically chooses the closest hand and configures the hand posture according to the size and shape of the target object.
Reach Constraint. The target of the reach is maintained while the upper body is engaged in a gaze behavior. Note that without the constraint, the character would no longer be able to maintain contact with the target object.
Hands on Table Constraint. The system allows the character to maintain implicit constraints. In this case, the animation requires the character’s hands to remain on the table. However, when using a gaze that engages the entire upper body, the character’s hands violate the constraint and come off the table. SmartBody’s constraint system keeps the hands on the table, even while the upper body is fully engaged in the gaze.
Gesturing. Some SmartBody gestures that are triggered automatically during conversations.
Saccades. Saccade model (fast eye movements). Characters can use ‘listening’ or ‘talking’ saccade modes, or can be instructed to saccade at specific times and locations.
Soft Eyes. The character’s lids track the pitch of the eyes. There is a slight delay between the movement of the eye and the movement of the lid for greater realism.
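The delayed lid tracking above behaves like a first-order lag filter; this sketch assumes that model and an illustrative time constant (SmartBody's actual smoothing is not specified here).

```python
def update_lid(lid_pitch, eye_pitch, dt, time_constant=0.08):
    """Move the lid pitch a fraction of the way toward the eye pitch.

    The lag makes the lid trail the eye slightly each frame.
    """
    alpha = dt / (time_constant + dt)
    return lid_pitch + alpha * (eye_pitch - lid_pitch)
```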
Mobile Development: SmartBody running on iPad 2 with Unity. Android and iPhone versions are also available in the SmartBody code base.
Kinect Integration: Using Kinect with SmartBody. Note that the Kinect can override only selected parts of the character. Thus facial movements, finger animations, and even entire lower-body movements can still be controlled by the animation system, while the user controls only the parts they are interested in controlling.
Behavior Markup Language. Three BML realizers engaging in a conversation. SmartBody (right), EMBR (center), and Elckerlyc (left) participate in a dialog. During each utterance, feedback is sent to each system after each word is spoken, potentially allowing the characters to react on a much finer scale than a turn-based system would allow. Characters could interrupt each other, speak over each other, emote or react based on partial or complete understanding of each other’s dialog, and so forth.
Physical simulation with motion tracking; the character intuitively responds to perturbation. The kinematic motion (the idle standing motion) is tracked under physics. In other words, the character executes the underlying idle pose while under physical simulation, which allows us to interact with it and perturb the motion on contact. We also trigger a collision event and respond to it by automatically gazing at the object that collided with the character. Note that the character appears to maintain balance because the root joint is artificially fixed in the environment.
Physical simulation with motion tracking and response to different forces. The amount of force applied by the colliding object is compared to a threshold, which determines whether or not to disable the fixed joint at the center of the character. Without that joint, the character loses balance and is knocked off his feet.
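The threshold rule above reduces to a small decision function; the threshold value and response labels here are assumptions for illustration.

```python
BALANCE_THRESHOLD = 50.0  # assumed force threshold for illustration


def on_collision(force):
    """Decide the response to a collision based on the applied force.

    Below the threshold the character sways under motion tracking and
    gazes at the object; above it, the root constraint is released and
    the character is knocked down.
    """
    if force > BALANCE_THRESHOLD:
        return "release_root_constraint"
    return "perturb_and_gaze"
```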
Using Multiple Parameterized Reaching Spaces. Two sets of parameterized reaching motions are used. A heuristic based on the height of the desired object is used to determine which reaching/grabbing animation set to use.
Reaching while jumping. A set of parameterized jumping+reaching animations are used for objects well above the character’s head.
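The height heuristic from the two captions above can be sketched as a simple selector; the height cutoffs and set names are assumptions for illustration, not SmartBody's actual values.

```python
def select_reach_set(target_height, shoulder_height=1.4, head_height=1.7):
    """Pick a reaching/grasping animation set from the target's height.

    Objects well above the head use the jumping+reaching set; otherwise
    one of the two standing reach sets is chosen.
    """
    if target_height > head_height:
        return "jump_reach"
    if target_height > shoulder_height:
        return "high_reach"
    return "low_reach"
```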