Below are some videos that demonstrate SmartBody’s capabilities. You can also look at the videos associated with our papers.
Generation of a set of blendshapes from a single commodity RGB-D sensor. The scans are processed through a near-automatic pipeline. The digital faces are puppeteered in real time with tracking software. Rendering is done in SmartBody, using a shader that handles multiple textures and masking to allow separation of facial regions.
Fast Avatar Capture using the Kinect and SmartBody. We capture and simulate an avatar of a person in 4 minutes. This represents a major change in the economics of creating avatars. New scans can be done each day as the person wears different clothing, hairstyle, accessories and so forth.
Automated character performance from audio. SmartBody combined with Cerebella. The input is an audio performance and a transcription of the utterance; the output is an automated 3D character performance. Cerebella analyzes the sentence structure and meaning, then selects appropriate gestures and timings for SmartBody to process. SmartBody coarticulates gestures by holding them, combining them, or dropping those that cannot be performed in the available time.
Automated lip syncing to speech. The input is audio (or text-to-speech), and the synchronized lip movement is generated automatically. This method has the advantage that it can work in multiple languages and use any set of facial poses as input. Each language requires approximately 200-300 short hand-animated animations, and each character needs only 8 static facial poses.
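A minimal sketch of the phoneme-to-viseme mapping step that underlies this kind of lip syncing. The mapping table below is a common simplification for illustration only, not SmartBody's actual data; the real system blends hand-animated diphone animations rather than single poses.

```python
# Illustrative phoneme-to-viseme lookup (assumed names and groupings):
# phonemes that share a mouth shape collapse to one viseme, and any
# unrecognized phoneme falls back to a neutral "rest" shape.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open",          # open-mouth vowels
    "B": "bmp", "M": "bmp", "P": "bmp",  # closed-lip consonants
    "F": "fv", "V": "fv",                # lip-on-teeth consonants
    "OW": "round", "UW": "round",        # rounded vowels
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to a sequence of viseme names."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]
```

Because the mapping is data rather than code, swapping in a different language's phoneme set only requires a new table, which is one reason a viseme-based pipeline ports across languages.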
Autorigging with SmartBody. Humanoid models can be animated in SmartBody via a drag-and-drop interface, then saved out to standard formats. Once acquired, models can be automatically infused with various behaviors and capabilities.
Prototype medically-oriented virtual human demonstration. The virtual human facilitates medical information exchanges. An application such as this could use varying character types, ethnicities, languages and so forth. Uses SmartBody’s automated lip syncing, gestures, head movements and eye saccades via Cerebella.
Hands on Table Constraint. The constraint system allows a character to maintain implicit constraints. In this case, the animation requires the character’s hands to remain on the table. However, when using a gaze that engages the entire upper body, the character’s hands violate the constraint and come off the table. SmartBody’s constraint system keeps the hands on the table, even while the upper body is fully engaged in the gaze.
Using Multiple Parameterized Reaching Spaces. Two sets of parameterized reaching motions are used. A heuristic based on the height of the desired object is used to determine which reaching/grabbing animation set to use.
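The height heuristic described above can be sketched in a few lines. The set names and the threshold below are illustrative assumptions, not SmartBody's actual identifiers or values.

```python
# Hypothetical reach-set selection: pick the reaching/grabbing example
# set whose captured motions best cover the target object's height.
LOW_SET = "ReachLow"    # e.g. examples captured reaching toward low surfaces
HIGH_SET = "ReachHigh"  # e.g. examples captured reaching toward high surfaces

def choose_reach_set(target_height, waist_height=1.0):
    """Return the reach example set appropriate for a target at target_height (meters)."""
    return HIGH_SET if target_height >= waist_height else LOW_SET
```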
Reaching. Example-based reaching. The blue spheres represent reaching examples. The green dots represent reaching examples interpolated from the blue sphere examples. The closest examples to the target position are interpolated, then IK is used to fine tune the reaching target location.
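The interpolation step described above can be sketched as a nearest-neighbor blend. This is an illustrative stand-in, not SmartBody's implementation: the k closest examples to the target are combined with inverse-distance weights, after which an IK pass would refine the hand position.

```python
import math

def interpolate_examples(target, examples, k=3):
    """Blend the k reaching examples nearest to target.

    examples: list of (position, pose_params) pairs, where position is an
    (x, y, z) tuple and pose_params is a list of floats.
    Returns the inverse-distance-weighted blend of the pose parameters.
    """
    ranked = sorted(examples, key=lambda e: math.dist(target, e[0]))[:k]
    weights = [1.0 / (math.dist(target, pos) + 1e-6) for pos, _ in ranked]
    total = sum(weights)
    blended = [0.0] * len(ranked[0][1])
    for w, (_, params) in zip(weights, ranked):
        for i, p in enumerate(params):
            blended[i] += (w / total) * p
    return blended
```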
Physical simulation with motion tracking; the character intuitively responds to perturbation. The kinematic motion (the idle standing motion) is tracked under physics. In other words, the character is executing the underlying idle pose, but the character is under physical simulation, which allows us to interact with it in order to perturb the motion on contact. We also trigger a collision event, and respond to it by automatically gazing at the object that has collided with the character. Note that the character appears to maintain balance because the root joint is artificially fixed in the environment.
Retargeting to arbitrary models. SmartBody’s mocap locomotion system retargeted to a Mixamo character (www.mixamo.com). The SmartBody Python API contains a retarget() method that allows you to use many models and skeletons in SmartBody.
Locomotion from motion capture. SmartBody’s example-based locomotion system using data from a motion capture session. This locomotion system is built from 13 different motions, with 6 of them mirrored from one side (left turns) to the opposite side (right turns), for a total of 19 motions.
Path Following. SmartBody’s path following capability. A character follows a user-specified path with certain speed constraints, including a minimum and maximum speed. In this case, the character is instructed to maintain a minimum walking speed, which means that sharp turns are not exactly followed. If the character is allowed to slow down enough, the path could be followed exactly.
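The speed constraint described above amounts to clamping the character's advance along the path. A minimal sketch, with assumed parameter names, of why a minimum speed cuts sharp turns: the path parameter can never advance slower than min_speed allows, so the character overshoots turns that would require it to slow further.

```python
def step_along_path(distance_on_path, desired_speed, min_speed, max_speed, dt):
    """Advance the arc-length parameter along the path by a clamped speed.

    desired_speed is what the turn geometry would ideally permit; the
    character actually moves at that speed clamped to [min_speed, max_speed].
    """
    speed = max(min_speed, min(max_speed, desired_speed))
    return distance_on_path + speed * dt
```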
Crowds. 25 characters using the SteerSuite steering system (http://www.magix.ucla.edu/steersuite/) integrated into SmartBody. The green circles indicate the locomotion target, the blue lines indicate the path to be followed, and the red lines indicate the relationship between the various characters used for dynamic obstacle avoidance.
Speech. Automated lip synchronization and gesturing. The audio track is analyzed to determine the utterance. The utterance is transformed into phonemes and visemes, then head movement and gesturing are automatically added based on the syntax and semantics of the utterance. Note that the only input is the audio track – the motion was added automatically.
Grasping. Example-based reaching with grasping. The character reaches for an object using a set of sitting reaching examples. The hand changes from its current pose to a grasping pose. When contact is detected with an object, individual fingers stop moving toward the grasping pose. The effect is that the character can pick up objects of varying shape and size without penetration.
Reach Constraint. The target of the reach is maintained while the upper body is engaged in a gaze behavior. Note that without the constraint, the character would no longer be able to maintain contact with the target object.
Kinect Integration: Using the Kinect with SmartBody. Note that the Kinect can override only select parts of the character. Thus facial movements, finger animations, and even entire lower body movements can still be controlled by the animation system, while the user controls only the parts they are interested in controlling.
Behavior Markup Language. Three BML realizers engaging in a conversation. SmartBody (right), EMBR (center), and Elckerlyc (left) participate in a dialog. During each utterance, feedback is sent to each system after each word is spoken, potentially allowing the characters to react on a much finer scale than a turn-based system would allow. Potentially, characters could interrupt each other, speak over each other, emote or react based on partial or complete understanding of each other’s dialog, and so forth.
Physical simulation with motion tracking and response to different forces. The amount of force applied by the colliding object is compared to a threshold, which determines whether or not to disable the fixed joint in the center of the character. Without that joint, the character loses balance and is knocked off his feet.
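The threshold test described above can be sketched in a few lines. The function name and the threshold value are assumptions for illustration; the real system would compare the collision impulse reported by the physics engine.

```python
FORCE_THRESHOLD = 50.0  # illustrative cutoff, in newtons

def root_joint_fixed(collision_force_magnitude, threshold=FORCE_THRESHOLD):
    """Return True if the root joint should stay fixed (character keeps balance).

    Weak impacts leave the stabilizing joint in place so the character
    only perturbs locally; strong impacts release it, letting the
    character be knocked off his feet under full physical simulation.
    """
    return collision_force_magnitude < threshold
```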