
Publications

Selected papers related to the SmartBody platform.

Each entry lists the publication, a brief description, and links to the paper (pdf, bibtex) and a video where available.

“Avatar reshaping and automatic rigging using a deformable model”, A. W. Feng, D. Casas, A. Shapiro, ACM SIGGRAPH Conference on Motion in Games, Paris, France, November, 2015

Automatic method for rigging a scanned human model by transferring attributes from a database of human figures.
pdf, bibtex, video

“Acting the part: the role of gesture on avatar identity”, A. W. Feng, G. Lucas, S. Marsella, E. Suma, C.C. Chiu, D. Casas, A. Shapiro, ACM SIGGRAPH Conference on Motion in Games, Los Angeles, CA, November, 2014

Examination of whether avatars of people are more recognizable (or more ‘like themselves’) when the 3D animated versions also include the gestural style of the original actor. The paper includes a study demonstrating that avatars that retain the original gestural style are judged ‘more like’ the original actors than those that do not.
pdf, bibtex, video

“Rapid avatar capture and simulation using commodity depth sensors”, A Shapiro, A Feng, R Wang, H Li, M Bolas, G Medioni, E Suma, 27th Conference on Computer Animation and Social Agents, Houston, TX, May, 2014

Acquisition of a 3D model using a single Microsoft Kinect. A 3D avatar is constructed in less than 3 minutes using only 4 different poses. The character is then automatically rigged, skinned, and animated with a variety of different behaviors.
pdf, bibtex, video

“Towards Cloth-Manipulating Characters”, E. Miguel, A. W. Feng, A. Shapiro, 27th Conference on Computer Animation and Social Agents, Houston, TX, May, 2014

Cloth manipulation is a common human action that many animated characters in interactive simulations cannot perform due to its complexity. In this paper we focus on dressing up, a common action involving cloth. We identify the steps required to perform the task and describe the systems responsible for each of them. Our results show a character that is able to put on a scarf and react to cloth collision and over-stretching events. Based on our experiments, we recommend a number of changes to a cloth-character model that would expand the capabilities of such interactions.
pdf, bibtex, video

“A Practical and Configurable Lip Sync Method for Games”, Y. Xu, A. W. Feng, S. Marsella, A. Shapiro, ACM SIGGRAPH Conference on Motion in Games, Dublin, Ireland, November, 2013

A lip sync method that can be constructed without machine learning, is portable across different character resolutions, and can be used in multiple languages. We compare our results to a commercial lip sync solution.
pdf, bibtex, video
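
As a rough illustration of the general shape of such a pipeline (an illustrative sketch with hypothetical names, not the method or code from the paper), a configurable lip sync system maps a timed phoneme sequence onto a small set of visemes (mouth shapes) that drive the character's face:

```python
# Illustrative sketch only (not the method from the paper): mapping a timed
# phoneme sequence onto generic visemes (mouth shapes) that could drive a
# character's facial blend shapes. All names here are hypothetical.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed", "F": "dental",
}

def viseme_track(timed_phonemes):
    """Convert (phoneme, start, end) tuples into (viseme, start, end) keys."""
    return [(PHONEME_TO_VISEME.get(p, "neutral"), start, end)
            for p, start, end in timed_phonemes]

# Toy input: phonemes for "ma" followed by a closing "m".
print(viseme_track([("M", 0.00, 0.08), ("AA", 0.08, 0.25), ("M", 0.25, 0.33)]))
```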

“Towards Higher Quality Character Performance in Previz”, S. Marsella, A. Shapiro, A. W. Feng, M. Lhommet, S. Scherer, Digital Production Symposium, Anaheim, CA, July, 2013

By obtaining a full 3D performance that is generated only from an audio clip, producers of 3D content can make important decisions about the content before the project is finished.
pdf, bibtex, video

“Virtual Character Performance From Speech”, S. Marsella, Y. Xu, A. W. Feng, M. Lhommet, S. Scherer, A. Shapiro, ACM SIGGRAPH Symposium on Computer Animation, Anaheim, CA, July, 2013

Our method can synthesize a virtual character performance from only an audio signal and a transcription of its word content. The character will perform semantically appropriate facial expressions and body movements that include gestures, lip synchronization to speech, head movements, saccadic eye movements, blinks and so forth. Our method can be used in various applications, such as previsualization tools, conversational agents, NPCs in video games, and avatars for interactive applications.
pdf, bibtex, video

“Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character”, A. W. Feng, Y. Huang, Y. Xu, A. Shapiro, Symposium on Motion in Games, Rennes, France, November, 2012

Humanoid 3D models can be easily acquired through various sources, including online. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones (see the sketch after this entry), and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.
pdf, bibtex, video
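
The joint-name heuristics mentioned above might look roughly like the following sketch (a hypothetical alias table and matching rule, not code from the paper): arbitrary joint names are normalized and matched against aliases for a canonical skeleton.

```python
# Illustrative sketch (hypothetical aliases, not the paper's implementation):
# associate arbitrary skeleton joint names with canonical joints by
# normalizing the names and matching them against known aliases.
CANONICAL_ALIASES = {
    "hips":       ["hips", "pelvis", "root"],
    "spine":      ["spine", "chest", "torso"],
    "head":       ["head"],
    "l_shoulder": ["leftshoulder", "lshoulder", "lclavicle"],
    "l_elbow":    ["leftforearm", "lelbow"],
    # ...further entries would cover the rest of the canonical skeleton
}

def normalize(name):
    """Lower-case and strip separators, e.g. 'LeftFore_Arm' -> 'leftforearm'."""
    return name.lower().replace("_", "").replace("-", "").replace(" ", "")

def map_joints(skeleton_joints):
    """Return {canonical_name: source_joint} for every joint we can guess."""
    mapping = {}
    for joint in skeleton_joints:
        key = normalize(joint)
        for canonical, aliases in CANONICAL_ALIASES.items():
            if any(alias in key for alias in aliases):
                mapping.setdefault(canonical, joint)
                break
    return mapping

print(map_joints(["Pelvis", "Spine1", "Head", "LeftShoulder", "LeftForeArm"]))
```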

“An Analysis of Motion Blending Techniques”, A. W. Feng, Y. Huang, Y. Xu, A. Shapiro, Symposium on Motion in Games, Rennes, France, November, 2012

Motion blending is a widely used technique for character animation. The main idea is to blend similar motion examples according to blending weights in order to synthesize new motions parameterized by high-level characteristics of interest. We present in this paper an in-depth analysis and comparison of four motion blending techniques: Barycentric interpolation, Radial Basis Function, K-Nearest Neighbors and Inverse Blending optimization. Comparison metrics were designed to measure the performance across different motion categories on criteria including smoothness, parametric error and computation time. We have implemented each method in our character animation platform SmartBody, and we present several visualization renderings that provide a window for gleaning insights into the underlying pros and cons of each method in an intuitive way. A simplified weight-computation sketch follows this entry.
pdf, bibtex, video
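
To make the blending-weight idea concrete, here is a small sketch (toy numbers and my own simplifications, not code or data from the paper) of radial-basis-function weights computed over example motions parameterized by a 2D value, followed by a naive weighted blend of example poses:

```python
# Illustrative sketch (assumptions, not the paper's implementation): compute
# Gaussian RBF blend weights over motion examples parameterized by a 2D value
# (e.g. a reach target), then blend the example poses with those weights.
import math

def rbf_weights(example_params, query, sigma=0.5):
    """Gaussian RBF weight for each example parameter, normalized to sum to 1."""
    raw = []
    for px, py in example_params:
        d2 = (px - query[0]) ** 2 + (py - query[1]) ** 2
        raw.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(raw) or 1.0
    return [w / total for w in raw]

def blend_poses(poses, weights):
    """Weighted sum of per-joint values (a real system would slerp rotations)."""
    blended = [0.0] * len(poses[0])
    for pose, w in zip(poses, weights):
        for i, value in enumerate(pose):
            blended[i] += w * value
    return blended

# Toy example: three reach motions whose end-effector targets sit at these points.
params = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
poses  = [[10.0, 0.0], [40.0, 5.0], [15.0, 30.0]]   # toy joint-angle vectors
w = rbf_weights(params, query=(0.4, 0.2))
print(w, blend_poses(poses, w))
```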

“An Example-Based Motion Synthesis Technique for Locomotion and Object Manipulation”, A. W. Feng, Y. Xu, A. Shapiro, Symposium on Interactive 3D Graphics and Games, Costa Mesa, CA, March 2012

We synthesize natural-looking locomotion, reaching and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles, as well as manipulate arbitrarily shaped objects, regardless of height, location or placement in a virtual environment. Our characters can touch, reach and grasp objects while maintaining a high quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.
pdf, bibtex, video

“Building a Character Animation System”, A. Shapiro, 4th Annual Conference on Motion in Games 2011, Edinburgh, UK, November 2011

Description of the challenges of building an interactive animation system for humanoid characters.
pdf, bibtex

“Demonstrating and Testing the BML Compliance of BML Realizers”, H. van Welbergen, Y. Xu, M. Thiebaux, A. W. Feng, D. Reidsma, A. Shapiro, Intelligent Virtual Agents, Reykjavik, Iceland, September, 2011

Compatibility and compliance testing of BML realizers.
pdf, bibtex

“SmartBody: behavior realization for embodied conversational agents”, M. Thiebaux, S. Marsella, A. N. Marshall, M. Kallmann, Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Estoril, Portugal, May 2008

Describes the Behavior Markup Language (BML) realization of the SmartBody system.
pdf
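
BML is an XML format for specifying coordinated multimodal behavior, which a realizer such as SmartBody schedules and animates. The block below is a rough, hand-written illustration of what a BML request can look like; the element names follow the BML standard, but the specific attributes and sync points are illustrative assumptions, not taken from the paper.

```python
# A rough, hand-written BML request (illustrative only; consult the BML
# specification for the authoritative schema). A realizer schedules the
# gesture stroke and the gaze relative to sync points in the speech.
bml_request = """
<bml id="bml1">
  <speech id="s1">
    <text>Nice to <sync id="tm1"/> meet you.</text>
  </speech>
  <gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>
  <gaze id="z1" target="user" start="s1:start"/>
</bml>
"""
print(bml_request)
```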

“Hierarchical Motion Controllers for Real-Time Autonomous Virtual Humans”, M. Kallmann, S. Marsella, Intelligent Virtual Agents, Kos, Greece, September 2005

Describes the hierarchical controller scheme in SmartBody.
pdf
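
As a minimal illustration of the idea (a generic sketch, not the controller design from the paper or SmartBody's actual API), motion controllers can be stacked so that each one evaluates after the ones below it and overrides only the pose channels it owns:

```python
# Minimal sketch of a controller stack: each controller receives the pose
# produced so far and may overwrite the channels it owns (illustrative only;
# class and channel names here are hypothetical).
class Controller:
    def evaluate(self, time, pose):
        return pose

class IdleController(Controller):
    def evaluate(self, time, pose):
        pose["spine"] = 0.0                 # baseline posture
        return pose

class GazeController(Controller):
    def __init__(self, target_angle):
        self.target_angle = target_angle
    def evaluate(self, time, pose):
        pose["head"] = self.target_angle    # overrides the head channel only
        return pose

def evaluate_stack(controllers, time):
    pose = {"spine": 0.0, "head": 0.0}
    for controller in controllers:          # lower controllers run first
        pose = controller.evaluate(time, pose)
    return pose

print(evaluate_stack([IdleController(), GazeController(0.3)], time=0.0))
```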