Issues setting up a SmartBody Character and animating it using BML
November 6, 2014, 6:01 pm (Member)

Hello SmartBody enthusiasts,

 

I am trying to build a BML realizer in Python using SmartBody as a library together with the Panda3D game engine. So far, I have been able to set up a SmartBody scene with a character in it, set up the counterpart scene in Panda, and copy the joints' state from the SmartBody character over to my Panda3D character. It seems to work when I play existing animations for the Brad character. However, I have encountered the following issues:

- I cannot load the SmartBody character's mesh from either a .dae or a .fbx file. No matter what I pass as the "type" argument of the "createCharacter" method (fbx, .fbx, dae, .dae) or how I specify the path to the mesh in the "setStringAttribute" method, I get the following error messages:

Character Brad has no dynamic mesh, cannot perform mesh operations.

Problem setting attribute 'mesh' on character Brad

What am I doing wrong? Is this actually a problem, given that I don't really need the mesh since I render with Panda3D? Will it prevent some behaviors from being executed?

 

- I cannot get the standard BML behaviors to work. For instance, if I try the following command: '<head type="NOD" amount="1" repeats="3"/>', nothing happens.

 

- I tried applying the mapping from the zebra2 skeleton to the standard SmartBody skeleton, but it did not fix the problem.

Could anyone help me please?

 

Here is my source code:

 #SmartBody Import
import SmartBody

from SmartBodyInitFunctions import *

#my imports
import os

class SmartBodyScene():

    def __init__(self):

        #create a SmartBody scene
        self.scene = SmartBody.getScene()
        self.scene.startFileLogging("./smartbody.log")

        #set media path to here
        self.mediaPath = os.getcwd()
        self.scene.setMediaPath(self.mediaPath)

        #import assets
        self._motion_path = 'SB_motions/ChrBrad'
        self._script_path = 'SB_scripts'
        self._mesh_path = 'SB_meshes/ChrBrad'
        self._behavior_sets_path = 'SB_behaviorsets'

        self.scene.addAssetPath('motion', self._motion_path)
        self.scene.addAssetPath('script', self._script_path)
        self.scene.addAssetPath('mesh', self._mesh_path)
        self.scene.addAssetPath('script', self._behavior_sets_path)
        self.scene.loadAssets()

        #create character Brad
        self.brad = self.scene.createCharacter('Brad', 'fbx')

        #create Brad's skeleton
        makeZebra2mapping(self.scene)
        self.zebra2Map = self.scene.getJointMapManager().getJointMap('zebra2')
        self.bradSkeleton = self.scene.getSkeleton('SB_motions/ChrBrad/ChrBrad.sk')
        self.zebra2Map.applySkeleton(self.bradSkeleton)
        self.zebra2Map.applyMotionRecurse('SB_motions/ChrBrad')

        #set Brad's skeleton
        self.bradSkeleton = self.scene.createSkeleton('SB_motions/ChrBrad/ChrBrad.sk')
        self.brad.setSkeleton(self.bradSkeleton)

        #give Brad a mesh
        pathToMesh = "C:/Users/Pierre Wargnier/Developements/Test_smartBody/SB_meshes/ChrBrad/ChrBrad.fbx"
        self.brad.setStringAttribute("deformableMesh", pathToMesh)
        self.brad.setDoubleAttribute("deformableMeshScale", 0.2) #scale character

        #give Brad a controller
        self.brad.createStandardControllers()

        #setup Brad's face
        self.bradFace = self.setUpBradFace()
        self.brad.setFaceDefinition(self.bradFace)

        #setup lip-sync
        #self.setupLipSyncBrad()

        self.brad.setPosition(SmartBody.SrVec(0, 0, 35))

        #get the simulation manager
        self.sim = self.scene.getSimulationManager()

        #get the BML processor to send it commands
        self.bmlProcessor = self.scene.getBmlProcessor()

        #send a command to do the idle pose
        #command = '<body posture="ChrBrad@Idle01"/>'
        #self.bmlProcessor.execBML('Brad', command)

        #start the simulation
        self.sim.start()

    def __del__(self):
        self.sim.stop()

    def setUpBradFace(self):
        # Brad's face definition
        bradFace = self.scene.createFaceDefinition('ChrBrad')
        bradFace.setFaceNeutral('ChrBrad@face_neutral')
        bradFace.setAU(1,  "left",  "ChrBrad@001_inner_brow_raiser_lf")
        bradFace.setAU(1,  "right", "ChrBrad@001_inner_brow_raiser_rt")
        bradFace.setAU(2,  "left",  "ChrBrad@002_outer_brow_raiser_lf")
        bradFace.setAU(2,  "right", "ChrBrad@002_outer_brow_raiser_rt")
        bradFace.setAU(4,  "left",  "ChrBrad@004_brow_lowerer_lf")
        bradFace.setAU(4,  "right", "ChrBrad@004_brow_lowerer_rt")
        bradFace.setAU(5,  "both",  "ChrBrad@005_upper_lid_raiser")
        bradFace.setAU(6,  "both",  "ChrBrad@006_cheek_raiser")
        bradFace.setAU(7,  "both",  "ChrBrad@007_lid_tightener")
        bradFace.setAU(10, "both",  "ChrBrad@010_upper_lip_raiser")
        bradFace.setAU(12, "left",  "ChrBrad@012_lip_corner_puller_lf")
        bradFace.setAU(12, "right", "ChrBrad@012_lip_corner_puller_rt")
        bradFace.setAU(25, "both",  "ChrBrad@025_lips_part")
        bradFace.setAU(26, "both",  "ChrBrad@026_jaw_drop")
        bradFace.setAU(45, "left",  "ChrBrad@045_blink_lf")
        bradFace.setAU(45, "right", "ChrBrad@045_blink_rt")

        bradFace.setViseme("open",    "ChrBrad@open")
        bradFace.setViseme("W",       "ChrBrad@W")
        bradFace.setViseme("ShCh",    "ChrBrad@ShCh")
        bradFace.setViseme("PBM",     "ChrBrad@PBM")
        bradFace.setViseme("FV",      "ChrBrad@FV")
        bradFace.setViseme("wide",    "ChrBrad@wide")
        bradFace.setViseme("tBack",   "ChrBrad@tBack")
        bradFace.setViseme("tRoof",   "ChrBrad@tRoof")
        bradFace.setViseme("tTeeth",  "ChrBrad@tTeeth")

        return bradFace

    def setupLipSyncBrad(self):

        diphoneManager = self.scene.getDiphoneManager()
        initDiphoneDefault(diphoneManager)

        self.brad.setBoolAttribute("usePhoneBigram", True)
        self.brad.setBoolAttribute("lipSyncSplineCurve", True)
        self.brad.setDoubleAttribute("lipSyncSmoothWindow", .2)
        self.brad.setStringAttribute("lipSyncSetName", "default")

        return

November 8, 2014, 9:57 am (Member)

Hello,

I seem to have fixed some of the issues I was having:

I changed how I load the skeleton, passing just the argument ChrBrad.sk instead of the path to that file from my current working directory, as suggested in the topic http://smartbody.ict.usc.edu/f.....lib-part-2

Then I had to change the mapping between my SmartBody skeleton and my Panda skeleton, since SmartBody now gives me the standard SB joint names instead of the names from the model.

What is surprising is that the animations for the Brad character keep working without my having to remap them. Could someone give me more details about how this feature works? I might not be able to figure it out on my own when I want to use a custom character.

I am still encountering some issues, though:

  •  The facial expressions won't play, even though I have properly initialized the FACS action units. Blinking does not work either, and I get the following message: "character ChrBrad will use 'blink' viseme to control blinking". It seems that SmartBody is trying to use the deprecated blinking mechanism.
  •  When trying to configure the lip-sync, I get the following error messages:
    • " Warning, Bool Attribute usePhoneBigram does not exist."
    • "Warning, Bool Attribute lipSyncSplineCurve does not exist."
    • "Warning, Double Attribute lipSyncSmoothWindow does not exist."
    • "Warning, String Attribute lipSyncSetName does not exist."
  • Still no luck with setting the mesh.

I have the following questions:

  • I use the version of the SmartBody library for Python that is included with the PandaBMLREmbedded package. Could part or all of my issues come from having a version of SB that isn't up-to-date?
  • Do I need to retarget or remap the face definition animations for Brad?
  • I see new joints, named "au_xx" or corresponding to the visemes, when querying the SmartBody character for the state of its skeleton. Do I have to apply the animation units state to some joints in my Panda character? How do I know the mapping?
November 11, 2014, 12:31 am (Admin)

There's a little bit of history:

The PandaBMLR interface was written by a group in Iceland that works with a very old version of SmartBody and uses the bonebus (network interface).

When I created the Python interface, I created a similar interface, the PandaBMLREmbedded, that uses most of the same code, but uses the embedded version of SmartBody. It is ~not~ well tested, but the basics are there.

You can see in the PandaBMLR.py file that the first file loaded is default-init-empty.py, which in turn has a set of initializations that you can use for your Panda characters.

If you want to use your own Panda character, you will need to create a joint mapping to the SmartBody standard names, so that motions can be retargeted to the character. There is a procedural skeleton hierarchy creator that can be used to take a Panda skeleton and convert it directly into a SmartBody one:

skel = scene.addSkeletonDefinition("myskeleton")
rootJoint = skel.addJoint("root", None)
rootJoint.setOffset(SrVec(0, 1, 0))
childJoint1 = skel.createJoint(rootJoint, "child1")
childJoint1.setOffset(SrVec(1, 2, 0))
...

 

and so forth. You'll need to remove the reference to common.sk in CharacterPawn.py. Then you could retarget the motion through the behavior set interface like this:

scene.run('BehaviorSetMaleMocapLocomotion.py')
setupBehaviorSet()
retargetBehaviorSet("mycharacter")

 

When using Panda, the rendering is entirely handled by that engine, so the mesh is rendered and set up through the Panda interface, and the deformableMesh attributes will have no effect. I'm not a Panda expert, so I don't know what is involved in making a Panda-compatible mesh.

With regards to your facial animations, are you using joints or blendshapes? To make it work you need to:

1) Set up a facial definition with the poses that you will be using

2) If you are using joints, include the animation file that contains the pose of the face. If you are using blendshapes, set the second parameter to an empty string (double quotes).

 

Regarding the joint mapping, there is a JointMapping object that translates back and forth:

jointMap = scene.getJointMapManager().getJointMap("mymap")

targetJointName = jointMap.getMapSource("sourcejoint")

 

In all, the Panda interface needs a little 'love' to work well.

 

Ari

November 12, 2014, 4:31 pm (Member)

Hi Ari,

 

Thanks for your advice. I think the info on creating a SmartBody skeleton from the Panda one and getting the right joint name mapping from SmartBody will be very helpful when I try to animate my own character using SmartBody.

So far, before I move on to using my own character, I am only trying to get SmartBody to work with the Brad character, to see how it's done and check that everything works.

To answer your question, the Brad character uses joints for the face (as you probably already know) and my own characters will also use joints.

Just to avoid any confusion, I am not using the PandaBMLREmbedded code, but I do use the same DLLs to access SmartBody as a Python library (I copy-pasted the DLLs from smartbody/lib/Panda3D/python/Lib/site-packages to the site-packages directory of my Panda install).

I have created the face definition for Brad using the .skm animation files provided. I carefully followed the instructions in the documentation and the example scripts. I have succeeded in getting most behaviors to work: setting a posture, playing animations, gazing, performing head movements, and eye saccades.

But when it comes to facial animations, nothing happens, and I don't get any error messages. The only clue I have is: "Character ChrBrad will use 'blink' viseme to control blinking." This is not consistent with the face definition I provided, which is supposed to use action unit 45 for blinking.

Regarding speech, I cannot get speech commands to go through to the speech relay (I use the one in the binary SmartBody distribution). I get the following error message:

"remote_speech::rVoiceTimeOut ERR: RemoteSpeechReply Message NOT RECIEVED for utterance #1 . Please check if the remote speech process is on and is accessable by SBM.
ERROR: BML::Processor::speechReply() exception:BehaviorRequest "BML_ChrBrad_sbm_test_bml_3_#1_<speech>" SchedulingException: SpeechInterface error: Remote speech process timed out"

Regarding lip sync, I get the following error messages:

"Warning, Bool Attribute usePhoneBigram does not exist.
Warning, Bool Attribute lipSyncSplineCurve does not exist.
Warning, Double Attribute lipSyncSmoothWindow does not exist.
Warning, String Attribute lipSyncSetName does not exist."

This suggests that I cannot use the "new" lip sync method.

I tried switching to a more recent version of SmartBody to see if it would fix some of these issues, but I got stuck: I don't know where to copy the DLLs from, as the smartbody/core/smartbody/sbm/bin directory mentioned in the manual does not exist. I tried using the ones that come with the sbgui application, but no luck. In addition, I get a Python version conflict with the version used in Panda...

Any advice on how to integrate a newer version of SmartBody as a Python library?

I'm really stuck, and speech and facial animations are the features I need the most for my project. Any idea on how I could fix these issues?

Thanks,

Pierre

November 13, 2014, 4:52 pm (Member)

Hi,

I have managed to upgrade the SmartBody library to the current version. I no longer get the warnings that some attributes don't exist, but otherwise I didn't really make any progress.

Here are some questions that could help me go further:

1) When copying the state of all the joints from the SmartBody skeleton over to my character in Panda3D, I copy each joint's quaternion with the appropriate coordinate-space conversion. Is there anything else I should do with the joints of the face?

2) Where can I find a working example of facial animation within a game engine (in C++ or Python)?

3) After setting up the face definition, I get extra joints in the SmartBody skeleton, named either "au_xx_left/right" or after a viseme name. So far I have just ignored them. What are they for? Am I supposed to do something with them?

4) How do I connect my application to the TTS relay? Which TTS relay should I use: the VH Toolkit's, the one in the SmartBody binary distribution, or the one I can build from source?

Could someone please help me?

Thanks,

Pierre

November 14, 2014, 5:55 pm (Admin)

1) The face joints can use either rotations (in which case you copy over the quaternions, as you have done) or translations (e.g., eyebrows going up and down might be translational values), in which case you also need to copy over the translations. Generally, the face joints should be treated like the other joints (or at least like the root joint): copy over all state values (translation + rotation).
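For illustration, here is a minimal per-frame copy sketch (not from this thread): the joint-name map, the scale factor, and the SrVec/SrQuat getData() accessors are assumptions to adapt; Panda3D's Actor.controlJoint is used to obtain controllable joint NodePaths:

# Sketch only: copy translation + rotation from SmartBody joints to Panda3D
# controlled joints every frame. jointMap (SB name -> Panda name) and scale
# are hypothetical and depend on your scene.
from panda3d.core import Quat

class JointStateCopier:
    def __init__(self, sbCharacter, actor, jointMap, scale=100.0):
        self.skeleton = sbCharacter.getSkeleton()
        self.scale = scale
        # obtain a controllable NodePath per joint once, up front
        self.controlled = dict((sbName, actor.controlJoint(None, 'modelRoot', pandaName))
                               for sbName, pandaName in jointMap.items())

    def copyFrame(self):
        for sbName, np in self.controlled.items():
            joint = self.skeleton.getJointByName(sbName)
            if joint is None:
                continue
            pos = joint.getPosition()   # translation: needed for face joints too
            rot = joint.getQuat()
            np.setPos(pos.getData(0) * self.scale,
                      pos.getData(1) * self.scale,
                      pos.getData(2) * self.scale)
            # Panda's Quat takes (w, x, y, z); SrQuat data is assumed to be ordered the same
            np.setQuat(Quat(rot.getData(0), rot.getData(1),
                            rot.getData(2), rot.getData(3)))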

 

2) There are no explicit facial animation examples when using joint-based faces, since getting the facial animation is the same as getting the body animation. For blendshapes, you can get the x-translation value of the bone with the same name as the face shape. So if the face shape is 'bmp', get the bone named 'bmp', then read its x-translation value.
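To illustrate the blendshape case with the 'bmp' shape mentioned above (a sketch; the sbCharacter name and the getData() accessor are assumptions):

# Sketch: the weight of the 'bmp' face shape is the x-translation of the bone 'bmp'.
joint = sbCharacter.getSkeleton().getJointByName('bmp')
if joint is not None:
    weight = joint.getPosition().getData(0)   # x component drives the blendshape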

 

3) The extra joints are the face animation channels. The AU_XX joints are action units (per the FACS definitions: eyebrows up, eyebrows down, etc.), and the viseme-named joints are used for facial animation (W, FV, etc.).

 

4) Any TTS relay will work similarly. Make sure you have ActiveMQ running, run the TTS relay, set the 'voice' attribute on the character to 'remote', and set the 'voiceCode' attribute to one of the voice names that the TTS relay is capable of using. Then create a <speech> BML command, and it should work.
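A minimal sketch of that sequence, using the Brad character from this thread ('Microsoft|Anna' is only an example; the voiceCode must be a voice your relay actually offers):

# Sketch: route a character's speech through the remote TTS relay.
brad = scene.getCharacter('Brad')
brad.setVoice('remote')                # send speech requests to the TTS relay
brad.setVoiceCode('Microsoft|Anna')    # must match a voice the relay supports
scene.getBmlProcessor().execBML('Brad', '<speech type="text/plain">Hello, my name is Brad.</speech>')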

 

Ari

November 16, 2014, 5:32 pm (Member)

Hi Ari,

Thanks a lot for your answers. I eventually fixed the facial animations by also copying the positions of the facial joints. The tricky part was that the scales in Panda3D and SmartBody were different, and I had to multiply everything by 100.

I'm still working on making the TTS work; I don't know why I fail to connect to the TTS relay. ActiveMQ seems to be running properly (it appears in the service list in the Windows task monitor), and it works fine when I run the VH Toolkit or the sbgui application. I get the following error messages:

"remote_speech::rVoiceTimeOut ERR: RemoteSpeechReply Message NOT RECIEVED for utterance #1 . Please check if the remote speech process is on and is accessable by SBM."

"ERROR: BML::Processor::speechReply() exception:BehaviorRequest "BML_ChrBrad_sbm_test_bml_3_#1_<speech>" SchedulingException: SpeechInterface error: Remote speech process timed out"

I monitored ActiveMQ's activity: when I run sbgui or the VH Toolkit, new TCP connections appear, but when I run my SmartBody-enabled application, nothing happens there. In addition, I cannot find any information in the documentation about starting the message broker.

What should I do to tell SmartBody to connect to ActiveMQ?

Thanks,

Pierre

November 16, 2014, 9:44 pm (Admin)

On further thought, I think it's because the Python-based SmartBody is controlled by Panda and never checks the VHMsg system. The TTS sends and receives messages over VHMsg, so try this:

 

1) In the initialization, call:

 

scene.getVHMsgManager().connect()

2) On every simulation step, call:

scene.getVHMsgManager().poll()

3) On exit, call:

scene.getVHMsgManager().disconnect()
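In a Panda3D application, these three calls might be wired up like this (a sketch assuming the usual ShowBase/taskMgr setup):

# Sketch: connect at startup, poll every frame via a Panda task, disconnect on exit.
from direct.task import Task

scene.getVHMsgManager().connect()

def vhmsgPollTask(task):
    scene.getVHMsgManager().poll()
    return Task.cont

base.taskMgr.add(vhmsgPollTask, 'vhmsgPoll')
# on shutdown:
# scene.getVHMsgManager().disconnect()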

Ari

November 17, 2014, 6:54 pm (Member)

Hi Ari,

Thanks for your answer. I thought it was something like that, but the SBVHMsgManager class is not mentioned in the Python API reference. Using this info, I was able to obtain a connection to ActiveMQ. In addition, for some reason, I also had to enable the service by calling scene.getVHMsgManager().setEnable(True).

However, I want to report a bug: the poll() method is not exposed in the Python API.

I was able to fix it by adding the following line to the file "SBPython.cpp" at line 778:

.def("poll", &SBVHMsgManager::poll, "Check for VH messages. Call this function every simulation step.")

I now have a new issue: the voice plays back really fast, and the lip sync doesn't have time to follow... In addition, I sometimes get a strange Windows sound at the beginning. The same speed issue occurs with the Festival TTS, but without the annoying sound. Lip sync does not continue after the sound has finished playing. If I disable SmartBody's internal audio, the sound is not played (obviously), but the lip sync stops very quickly, before it has time to articulate the whole sentence.

Do you know what can cause this? What should I do about it?

I read somewhere that when integrating SmartBody into a game engine, SmartBody's internal audio should be turned off and the sound should be played by the game engine. But then, how are the lip sync instructions passed between the engine and SmartBody?

I wanted to add a scene listener, as suggested in the integration advice, but I got stuck when trying to register it, as the addSceneListener method expects a pointer, which I cannot provide in Python.

Could you please tell me how to add a scene listener in Python?

Thanks,

Pierre

PS: If you want me to send you the source file I changed, to save you time adding the bug fix, please let me know.

Edit: I tried using a real-time timer (calling setupTimer() at init and updateTimer(-1) at each loop iteration), but it didn't solve the issue. I have to report another bug, though: updateTimer() was not available in the Python API.

I fixed it by adding ".def("updateTimer", &SBSimulationManager::updateTimer, "Update the timer when using real time clock")" at line 62 in SBPythonSimulation.cpp.

November 19, 2014, 8:08 pm (Admin)

You did the right thing by adding the poll() Python function. I'll add that to the code.

Are you running Panda with a fixed time step (say, 60 frames/second) or are you using a real-time clock with a variable frame rate? You don't need to run the real-time timer explicitly with SmartBody, you can just send it the current time that you get from the Panda system.

The TTS voice file will be written to the filesystem (typically, in a cache/audio directory). Is that file running the audio too quickly? Are you sending any additional markup to the TTS system to adjust the vocal quality?

Is the lip sync set up properly? To use the high quality lip sync method you'll need to do this:

 

scene.run('init-diphoneDefault.py')

mycharacter = scene.getCharacter("nameofmycharacter")

mycharacter.setStringAttribute('lipSyncSetName', 'default')
mycharacter.setBoolAttribute('usePhoneBigram', True)
mycharacter.setVoice('remote')
mycharacter.setVoiceCode('Microsoft|Anna')

 

If you want to write a scenelistener in python, you can do this:

class MyListener(CharacterListener):

    def OnCharacterCreate(self, name, type):
        print "Character created..."
        
    def OnPawnCreate(self, name):
        print "Pawn created..."

mylistener = MyListener()

scene.addSceneListener(mylistener)

 

Ari

November 20, 2014, 7:49 am (Member)

Hi Ari,

Thanks for answering. I will try what you suggest for the scene listener.

Regarding the sound matter:

Are you running Panda with a fixed time step (say, 60 frames/second) or are you using a real-time clock with a variable frame rate? You don't need to run the real-time timer explicitly with SmartBody, you can just send it the current time that you get from the Panda system.

Actually, I have tried both letting Panda give the time to SmartBody and using a real-time clock in SmartBody; it did not make any difference. I also tried forcing the Panda fps to 60 and the SmartBody "sleep fps" to 60 as well, but no result.

The weirdest thing is that the sound is not played consistently every time I run the program. Sometimes it is so fast that you cannot distinguish the words. Sometimes it is slower, but still too fast and all jagged.

It is also surprising because the timing of the animations I tried is fine.

 

The TTS voice file will be written to the filesystem (typically, in a cache/audio directory). Is that file running the audio too quickly? Are you sending any additional markup to the TTS system to adjust the vocal quality?

I have checked the sound files and they play normally. When I wrote fast, I meant really fast and jagged. I have also tried specifying characteristics using SSML, and the produced files were as desired, so nothing is wrong there. In addition, though I haven't checked in great detail, the timing of the phonemes given by the TTS relay seems reasonable.

 

Is the lip sync set up properly? To use the high quality lip sync method you'll need to do this: [...]

Yes, I have double-checked just in case.

Unless you have other ideas about what is going wrong, I think what's left for me to do now is to try playing the sound in Panda instead of using SmartBody's internal audio. Could you please give me information about how to get a play-sound event, pass the sound file path from SmartBody to Panda, and synchronize the lips?

Thanks,

Pierre

November 22, 2014, 12:13 am (Admin)

If you change smartbody/src/sbm/mcontrol_callbacks.cpp starting at line 1584 from:

 

        if (SmartBody::SBScene::getScene()->getBoolAttribute("internalAudio"))
        {

 

to

        if (SmartBody::SBScene::getScene()->getBoolAttribute("internalAudio"))
        {

            std::stringstream strstr;
            strstr << soundFile << " " << characterName;
            SmartBody::SBEvent* sbevent = SmartBody::SBScene::getScene()->getEventManager()->createEvent("sound", strstr.str().c_str());
            SmartBody::SBScene::getScene()->getEventManager()->handleEvent(sbevent, SmartBody::SBScene::getScene()->getSimulationManager()->getTime());

 

This will create a SmartBody event called 'sound' with two parameters: the location of the sound file and the name of the character. You can then use the OnEvent() function in the SceneListener to respond to that event, instead of using the default 'internalAudio' handling that doesn't seem to work with Panda. You should then be able to call a sound-playing function in Panda using that information from the event handler.
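A sketch of such a listener, modeled on the CharacterListener example earlier in this thread; the exact OnEvent signature is an assumption, and rsplit is used because the sound file path may contain spaces:

# Sketch: play the 'sound' event's file through Panda3D instead of internalAudio.
class SoundListener(CharacterListener):
    def OnEvent(self, eventType, params):
        if eventType == 'sound':
            # params is "<soundFile> <characterName>"; split from the right
            soundFile, characterName = params.rsplit(' ', 1)
            sfx = base.loader.loadSfx(soundFile)
            if sfx:
                sfx.play()

soundListener = SoundListener()
scene.addSceneListener(soundListener)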

Ari

November 22, 2014, 5:02 pm (Member)

Hi Ari,

Thanks a lot, it works. The sound now plays normally, but I still have a lip sync issue:

It seems that only the first viseme gets played; after that, the character's lips stop moving. What can I do about it?

Thanks,

Pierre

November 22, 2014, 6:50 pm (Admin)

1) Which voice are you using?

2) Are you using a non-English voice?

3) The TTSRelay should output an XML block that describes the phonemes obtained from the TTS engine. Can you post that here?

 

Ari

November 23, 2014, 2:08 pm (Member)

Hi,

1) I am using Microsoft Zira, which is the new default voice in Windows 8. But it doesn't matter which voice I use; the result is the same with both Microsoft and Festival voices.

2) No, right now I'm not using non-English voices. Eventually I want to, but for now I'm not trying to complicate things.

3) I'm not sure which file you are talking about. Where should I look for it? I have looked into the source code and did not see any mention of this file. The only XML I see is what is output to the TTS relay's console. Here is what I get in the console for the utterance "Hello. My name is Brad":

Debug: Sending reply: "RemoteSpeechReply ChrBrad 1 OK: <?xml version="1.0" encoding="UTF-8"?>
<speak>
  <soundFile name="C:\Users\Pierre Wargnier\Developements\Test_smartBody\audio\utt_20141123_150007_ChrBrad_1.wav" />
  <viseme start="0" articulation="1" type="_" />
  <viseme start="0,1" articulation="1" type="H" />
  <viseme start="0,18" articulation="1" type="Eh" />
  <viseme start="0,235" articulation="1" type="L" />
  <viseme start="0,31" articulation="1" type="Ow" />
  <viseme start="0,51" articulation="1" type="_" />
  <viseme start="0,91" articulation="1" type="BMP" />
  <viseme start="0,97" articulation="1" type="Aa" />
  <viseme start="1,025" articulation="1" type="Ih" />
  <viseme start="1,08" articulation="1" type="D" />
  <viseme start="1,155" articulation="1" type="Eh" />
  <viseme start="1,212" articulation="1" type="Ih" />
  <viseme start="1,269" articulation="1" type="BMP" />
  <viseme start="1,349" articulation="1" type="Ih" />
  <viseme start="1,409" articulation="1" type="Z" />
  <viseme start="1,474" articulation="1" type="BMP" />
  <viseme start="1,564" articulation="1" type="R" />
  <viseme start="1,654" articulation="1" type="Ah" />
  <viseme start="1,854" articulation="1" type="D" />
  <viseme start="1,934" articulation="1" type="_" />
</speak>"

From what I see, everything seems OK, except for the 'n' of 'name', which is mapped to the phoneme "D".

By the way, what is the cache.xml file for? Is it the one you are talking about? When I change the paths to the sound files and to this file, it doesn't create a new one. I don't know if that's worth mentioning, though, because when I use the TTS in the VH Toolkit, which has the right file paths, I get the same result as with the TTS relay I compiled from source and moved to my project directory.

Regards,

Pierre

November 26, 2014, 6:46 pm (Admin)

Yes, that is the XML block of data I was referring to. It contains timings and a reduced set of phonemes, which are internally mapped in SmartBody to lip and mouth movements.

So now I suspect that the lip sync data isn't being loaded in properly.

Can you try putting this Python code somewhere after the scene.run("init-diphoneDefault.py") line:

 

mgr = scene.getDiphoneManager()

names = mgr.getDiphoneMapNames()

for i in range(0, len(names)):
    print names[i]

 

and you should see 'default' among the diphone set names.

 

Does the character do other things properly (idle, move, head nod)?

 

Ari

November 27, 2014, 10:56 am (Member)

Hi Ari,

Thanks for your answer. I did as instructed. I do get "default" when I print the list of diphone set names.

To answer your question, the character does everything else properly (idle, eye saccades, gazing, head movements, facial animations, and playing body animations). For some reason, blinking doesn't seem to work, but that is not my main concern.

Where else should I look for the source of this issue with the lip sync? Can I force the visemes to play by using the OnViseme callback in the scene listener?

Thanks,

Pierre

November 27, 2014, 11:12 am (Member)

Hi,

I have added the OnViseme callback function to my scene listener to see what is going on. Here is an extract of what I get with the same sentence "Hello, my name is Brad":

Viseme  PBM  received for character  ChrBrad info: weight =  0.0225366875529  blend time =  0.0

Viseme  open  received for character  ChrBrad info: weight =  -0.000602945801802  blend time =  0.0

Viseme  PBM  received for character  ChrBrad info: weight =  0.0743985846639  blend time =  0.0

Viseme  open  received for character  ChrBrad info: weight =  -0.0024343256373  blend time =  0.0

Viseme  PBM  received for character  ChrBrad info: weight =  0.111196488142  blend time =  0.0

Viseme  open  received for character  ChrBrad info: weight =  -0.00380511698313  blend time =  0.0

Viseme  PBM  received for character  ChrBrad info: weight =  0.131301924586  blend time =  0.0

Viseme  open  received for character  ChrBrad info: weight =  -0.00452801445499  blend time =  0.0

Viseme  PBM  received for character  ChrBrad info: weight =  0.155834048986  blend time =  0.0

[...]

The rest is similar: open, then PBM, and so on. It seems that the only visemes that get mapped are these two. Does this give you more clues about what could be wrong with my program?

Bye,

Pierre

November 27, 2014, 3:57 pm (Admin)

Try the following:

Put the FaceDefinition command before the createStandardControllers() command and make sure the lip sync commands are uncommented, so instead of this:

 

#give Brad a controller
self.brad.createStandardControllers()

#setup Brad's face
self.bradFace = self.setUpBradFace()
self.brad.setFaceDefinition(self.bradFace)

#setup lip-sync
#self.setupLipSyncBrad()

do this:

#setup Brad's face
self.bradFace = self.setUpBradFace()
self.brad.setFaceDefinition(self.bradFace)

#give Brad a controller
self.brad.createStandardControllers()

#setup lip-sync
self.setupLipSyncBrad()

Rereading this thread, you mention that the blinking isn't working, and the blinking also relies on a fully-functional face like the lip sync does, so I now suspect a problem with the face setup. Hopefully putting the face definition first fixes that problem.

 

Ari

November 28, 2014, 8:46 am (Member)

Hi Ari,

Thanks for your reply. I put the createStandardControllers() line after the face definition, and it did fix the blinking. However, it did not fix the lip-sync issue; that seems to be caused by something else.

Could you please tell me what else I could do about it?

Thanks,

Pierre

November 29, 2014, 1:11 am (Admin)

Can you send me a screen capture of your current result? Are you seeing Brad or one of the other characters?

November 29, 2014, 10:36 pm (Member)

Hi Ari,

So far, I haven't wanted to complicate things, so I am first testing with the Brad character provided with SmartBody. Here is a link to a video showing the result (the TTS voice is female because I don't have any male voices installed right now): https://www.youtube.com/watch?v=tU5zHq-7L70&feature=youtu.be

As you can see, only the first diphone gets articulated.

What can I do to fix it? Is there a workaround?

Thanks,

Pierre

November 30, 2014, 12:58 am (Admin)

If you like, I can try to remote into your machine to see what is happening. You can email me at shapiro@ict.usc.edu to arrange this.

If you run sbgui from the SmartBody SDK, does the lip sync work properly?

 

Ari

November 30, 2014, 11:34 pm (Member)

Hi Ari,

I have tried loading a scene in sbgui using the addCharacterDemo script and running BML speech commands from the command window. I get the same lip sync error as with my program: the lips start moving and stop before the utterance is finished.

I guess we could try letting you remote into my machine. I'll send you an email tomorrow.

Thanks,

Pierre

December 1, 2014, 12:58 pm (Member)

Hi Ari,

With a little help from my PhD advisor, we found the cause of the lip sync issue, and it is in fact very simple: when the TTS generates the viseme schedule and sends it in XML format, the C# ToString() method used to convert floats to strings uses the system's locale settings. Since I run a French Windows 8 OS, the decimal separator was a comma instead of a dot, and the XML parser at the other end evidently didn't appreciate that. I fixed it by changing my locale settings.

I really want to thank you for all the help you provided. Thanks also to your whole team for developing SmartBody and making it open source. This is really an awesome tool.

I have a few more questions to ask you though:

1) To integrate my own custom character into SmartBody, how do you recommend exporting the facial animations (I use bones for facial animation)?

2) What changes should be made to TTS Relay / SmartBody to map non-English phonemes to visemes?

Thanks,

Pierre

December 1, 2014, 6:52 pm (Admin)

1) SmartBody should be able to read .dae, .bvh, and the native .skm format (which can be converted from .fbx using the fbxtosbconverter program located in tools/ in the SVN distribution). Alternatively, you could procedurally create the face pose:

motion = scene.createMotion()
motion.addChannel("chin", "XPos")
motion.addChannel("left_cheek", "YPos")
motion.addChannel("left_cheek", "Quat")
...

data = SrVec()
data.append(0.5)
data.append(2.7)
data.append(1)
data.append(0)
data.append(0)
data.append(0)
motion.addFrame(data)

 

2) For French lip sync, we would need to:

a) first assemble all the French phonemes,

b) create a reduced set from those (in the English set, 'B', 'M', and 'P' are similar enough that they are all mapped to the same viseme),

c) modify the TTS relay to accept this reduced French set of phonemes/visemes, and

d) create a French animation data set that maps pairs of phonemes to sets of animations. We can reuse any of the English phoneme pairs that are similar to French ones.