Lip sync and mapping
January 31, 2016, 12:33 pm
Member

Hi everyone,

Trying to find out how to make lip sync work for a Mixamo character, I came up with some questions about mapping.
I have done lip sync on Rachel and it works well, but in the provided zebra2map mapping I don't see any facial joints. I mean, how are the jaw and lip joints of Rachel (JtJawFront, JtLipLowerMid, etc.) mapped to the native SmartBody joints (Jaw_front, Lip_bttm_right, etc.)?

I want to figure this out to understand how things work, and then set up lip sync for my Mixamo character. To that end, I made a custom mapping with the character's joints, and after retargeting I was able to animate him with the ChrMarine animations. To use gestures I retargeted again with ChrBrad.sk, and the gestures work fine. So far so good. For facial animation I used the files for ChrBrad (ChrBrad@001_inner_brow_raiser_lf, etc.), since I had already retargeted to ChrBrad, but it didn't work.

Looking at the joint tree in the manual, I saw that the lips have parent joints (Face_bottom_Parent->Jaw_front->lip...), so how does lip sync work for Rachel even though I have not done such a mapping, while it does not work for a custom character? Am I missing something here? I hope to find out how to make this work.

Thanks,
Paul

January 31, 2016, 2:37 pm
Admin

The lip syncing algorithm is based on animating a set of facial shapes. I used facial shapes that represent important facial movements that are necessary for visual speech/lip movement. For example, the facial shape that represents making the 'f' sound, where your bottom lip is tucked underneath your front teeth, and so forth.

The algorithm needs a face definition that is mapped to those facial shapes; if you look at the face definition for each character, there is a facial pose that represents the 'f' (or FV) shape, one that represents an open mouth, and so on. Every character needs these poses to be tuned according to whatever joints (or blendshapes) they use. So if you can move the joints in the face of your Mixamo model so that each facial shape looks like the one defined for the SmartBody characters, save that pose as a 1-frame animation, and specify it in the face definition, then the lip syncing will work.
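
For illustration, a minimal sketch of that setup in SmartBody's Python scripting interface; the character and motion names ("Justin", "Justin@face_neutral", "Justin@FV") are hypothetical placeholders for whatever you export:

    # Hedged sketch: register tuned 1-frame facial poses in a face definition.
    scene = getScene()
    justinFace = scene.createFaceDefinition("Justin")
    justinFace.setFaceNeutral("Justin@face_neutral")  # neutral (resting) face pose
    justinFace.setViseme("FV", "Justin@FV")           # 1-frame 'f/v' mouth shape

    character = scene.getCharacter("Justin")
    character.setFaceDefinition(justinFace)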

 

The real question is this: does your Mixamo character have enough flexibility in its facial movements to match the basic lip sync poses?

If there were any standardized face that Mixamo provided, I'd be happy to keep the configuration for it in SmartBody to make it easier to acquire such characters. However, I'm not aware of any such standardization (but I'm happy to be made aware if there is one).

Just to give you more information than you asked for: the lip syncing algorithm is capable of working with almost any face, even ones that don't match the facial expression set that I've created. But you would need to create custom animations (about 200) for each new face configuration. You can look at the 1:59 mark of the following video for an example of this:

[embedded video]

Ari

February 1, 2016, 7:36 am
Member

Thanks for the reply,

My initial thought was to retarget the Mixamo character to ChrBrad and use Brad's facial poses to perform lip sync. As facial animations are no different from gestures or body postures, I thought I could set them up that way (for example, by retargeting ChrBrad@open to my Mixamo character). So far this is not working, so either my mapping is wrong or my method is incorrect.

Paul

February 1, 2016, 1:26 pm
Admin

Right; the ChrBrad@open animation file contains a particular facial position that uses Brad's facial joints to create the open-mouth facial pose. So unless your Mixamo character has the exact same facial joints, positioned in exactly the same way to produce the same result, it won't work.

You can, however, use whatever facial joints exist on your Mixamo character to make an open-mouth pose, save that pose in an animation file (named ChrBrad@open or whatever you want, as long as the face definition refers to it), and then the lip syncing will work.
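
A short hedged sketch of that in SmartBody's Python interface ("Justin@open" being a hypothetical motion name):

    # The viseme entry simply names a motion, so a custom-named 1-frame
    # open-mouth pose works just as well.
    scene = getScene()
    justinFace = scene.createFaceDefinition("Justin")
    justinFace.setViseme("open", "Justin@open")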

 

What facial rig exists on your Mixamo character? 

 

Ari

February 1, 2016, 2:32 pm
Member

The Mixamo character has some similarities with ChrBrad, and I've mapped the most important facial joints (eyes, lips, jaw).
He has these facial joints:
Justin_RightLipUpper Justin_RightNostril Justin_RightCheek Justin_RightEyelidLower
Justin_RightEyelidUpper Justin_RightIOuterBrow Justin_RightInnerBrow Justin_LeftIOuterBrow
Justin_LeftInnerBrow Justin_LeftEyelidUpper Justin_LeftEyelidLower Justin_LeftCheek
Justin_LeftNostril Justin_LeftLipUpper Justin_LeftLipCorner Justin_RightLipCorner
Justin_RightLipLower Justin_JawEND Justin_LeftLipLower Justin_TongueTip Justin_TongueBack
Justin_Jaw Justin_RightEye Justin_LeftEye

However, I am confused about why the eyes are working (in listen mode) but not the other joints; never mind.

This is the mapping (Mixamo -> SmartBody):
    jointMap->setMapping("Justin_LeftEye", "eyeball_left");
    jointMap->setMapping("Justin_RightEye", "eyeball_right");
    jointMap->setMapping("Justin_LeftLipLower", "Lip_bttm_left");
    jointMap->setMapping("Justin_RightLipLower", "Lip_bttm_rigth");
    jointMap->setMapping("Justin_LeftLipUpper", "Lip_top_left");
    jointMap->setMapping("Justin_RightLipUpper", "Lip_top_rigth");
    jointMap->setMapping("Justin_RightLipCorner", "Lip_out_right");
    jointMap->setMapping("Justin_LeftLipCorner", "Lip_out_left");
    jointMap->setMapping("Justin_LeftEyelidUpper", "upper_eyelid_left");
    jointMap->setMapping("Justin_LeftEyelidLower", "lower_eyelid_left");
    jointMap->setMapping("Justin_RightEyelidUpper", "upper_eyelid_right");
    jointMap->setMapping("Justin_RightEyelidLower", "lower_eyelid_right");   
    jointMap->setMapping("Justin_Jaw", "Jaw_front");   

And the mapping from ChrBrad to SmartBody is the same as the provided zebra2map, but I've added some extra joints for the facial animation that weren't there (some of them below...):
    zebra2Map->setMapping("JtLipCornerLf", "Lip_out_left");
    zebra2Map->setMapping("JtLipCornerRt", "Lip_out_right");
    zebra2Map->setMapping("JtLipLowerLf", "Lip_bttm_left");
    zebra2Map->setMapping("JtLipLowerRt", "Lip_bttm_right");
    zebra2Map->setMapping("JtLipUpperLf", "Lip_top_left");
    zebra2Map->setMapping("JtLipUpperRt", "Lip_top_right");
    zebra2Map->setMapping("JtJaw", "Jaw_front");
    zebra2Map->setMapping("JtLowerFaceParent", "face_bottom_parent");
    zebra2Map->setMapping("JtUpperFaceParent", "face_top_parent");

So, for example, as Justin_Jaw and JtJaw share the same SmartBody joint Jaw_front (after retargeting), any movement of Jaw_front by SmartBody should also animate the Mixamo character's Justin_Jaw.

I have only seen the mouth open a bit when I set setFaceNeutral("ChrBrad@open"); I guess I am missing some joints here.

So I need to set a facial pose (using Maya), save it as a 1-frame animation, and use it in the face definition?

February 1, 2016, 2:53 pm
Admin

Generally, you don't need to joint map the facial joints, except for the eyes (since they are controlled independently). The other facial joints only matter when you define a facial pose that is used in a face definition.
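
In other words, the facial joint map can be as small as the eyes. A hedged sketch of that minimal mapping in SmartBody's Python interface, using the joint names quoted above (the map name "mixamo" is a hypothetical choice):

    # Minimal facial joint map: only the eyes are joint-mapped; the other
    # facial joints are driven through face-definition poses instead.
    scene = getScene()
    jointMapManager = scene.getJointMapManager()
    mixamoMap = jointMapManager.createJointMap("mixamo")
    mixamoMap.setMapping("Justin_LeftEye", "eyeball_left")
    mixamoMap.setMapping("Justin_RightEye", "eyeball_right")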

Yes, you'll need a neutral pose and the following poses for lip syncing to speech:

open
W
ShCh
PBM
FV
wide
tBack
tRoof
tTeeth

You can use a .bvh or .dae file for an animation definition. The algorithm works by subtracting the pose (like open) from the neutral values and using that difference to drive the face. If the face is in a neutral pose when all values are set to zero, that makes things a bit easier (your neutral animation should effectively be all zeroes).
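
Putting the whole pose set together, a hedged sketch of a complete face definition in SmartBody's Python interface (the "Justin@..." motion names are hypothetical 1-frame pose animations exported for the Mixamo character):

    # Register the neutral pose plus the nine lip-sync visemes listed above.
    scene = getScene()
    justinFace = scene.createFaceDefinition("Justin")
    justinFace.setFaceNeutral("Justin@face_neutral")
    for viseme in ["open", "W", "ShCh", "PBM", "FV",
                   "wide", "tBack", "tRoof", "tTeeth"]:
        justinFace.setViseme(viseme, "Justin@" + viseme)
    scene.getCharacter("Justin").setFaceDefinition(justinFace)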

 

Ari