
Interpolate animations | General SmartBody Discussion | Forum

June 1, 2016, 7:19 am
Member

Hello.

Is it possible to interpolate animations using blends?
I created some animations and I want to combine them.
In some cases, only the last animation that was appended plays. In others, something in between.

Thank you.
June 1, 2016, 4:29 pm
Admin

I wish it was simpler, but here it is:

1) Use a joint map to map the skeleton of your character to the standard SmartBody naming scheme.

2) Use a joint map to map the motions to the standard SmartBody naming scheme.

3) Use a joint map to map the skeleton associated with the motions to the standard SmartBody naming scheme.

4) Create a retarget instance from the source skeleton (the one associated with the motions) to the target skeleton (the one you want to play the motion on).

5) Create a blend from the motions (1d, 2d, or 3d):

5a) add correspondence points that associate the motions with each other (for example, the place where the left foot strikes the ground in motion1 is one correspondence point, and another correspondence point is where the left foot strikes the ground in motion2)

5b) add parameterization values to each motion (for a 1D example, motion1 is value '1', motion2 is value '2', etc.; for a 2D example, motion1 is value (0,0), motion2 is value (0,1), etc.)
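To make steps 5a and 5b concrete, here is a small standalone sketch of what the parameterization and correspondence points do under the hood. This is plain Python, not the SmartBody API; `blend_weights_1d` and `local_time` are hypothetical helper names for illustration only:

```python
def blend_weights_1d(x, params):
    """Return per-motion weights for a 1D blend parameter x.

    params: sorted parameter values, one per motion (e.g. [1, 2, 3]).
    Only the two motions bracketing x receive non-zero weight.
    """
    weights = [0.0] * len(params)
    if x <= params[0]:
        weights[0] = 1.0
        return weights
    if x >= params[-1]:
        weights[-1] = 1.0
        return weights
    for i in range(len(params) - 1):
        lo, hi = params[i], params[i + 1]
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)   # position of x between the two motions
            weights[i] = 1.0 - t
            weights[i + 1] = t
            break
    return weights

def local_time(u, corr_points):
    """Map a normalized phase u in [0, 1] to one motion's local time.

    corr_points: that motion's correspondence times in increasing order,
    e.g. [0.0, 1.2, 2.9]. Each motion has its own list; the blend keeps
    all motions aligned by warping time between matching points.
    """
    n = len(corr_points) - 1
    seg = min(int(u * n), n - 1)   # which correspondence segment u falls in
    frac = u * n - seg             # position within that segment
    return corr_points[seg] + frac * (corr_points[seg + 1] - corr_points[seg])
```

So setting `x="1.5"` on a blend parameterized at 1, 2, 3 would weight the first two motions 50/50, and the correspondence points keep, say, both motions' foot strikes landing at the same blended moment.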

Ari

June 2, 2016, 1:42 am
Member

Thanks for the reply.
I think I've done that, but it is not working very well.
My animations have the same duration (2.93333292007) and the same sync points:

(ready time: 0.333333
strokeStart time: 0.833333
emphasis time: 1.433333
stroke time: 2.133333
relax time: 2.600000)

Is there a problem with that?

 

If you could take a look, I would appreciate it.
This is the script that I'm using:

 

# Add asset paths
scene.addAssetPath('mesh', 'mesh')
scene.addAssetPath('motion', 'ChrBrad')
scene.addAssetPath("script", "behaviorsets")
scene.addAssetPath('script', 'scripts')
scene.loadAssets()

# Set scene parameters and camera
print 'Configuring scene parameters and camera'
scene.setScale(1.0)
scene.setBoolAttribute('internalAudio', True)
scene.run('default-viewer.py')
camera = getCamera()
camera.setEye(0, 1.71, 1.86)
camera.setCenter(0, 1, 0.01)
camera.setUpVector(SrVec(0, 1, 0))
camera.setScale(1)
camera.setFov(1.0472)
camera.setFarPlane(100)
camera.setNearPlane(0.1)
camera.setAspectRatio(0.966897)
cameraPos = SrVec(0, 1.6, 10)
scene.getPawn('camera').setPosition(cameraPos)

# Set up joint map for Brad
print 'Setting up joint map and configuring Brad\'s skeleton'
scene.run('zebra2-map.py')
zebra2Map = scene.getJointMapManager().getJointMap('zebra2')
bradSkeleton = scene.getSkeleton('ChrBrad.sk')
zebra2Map.applySkeleton(bradSkeleton)
zebra2Map.applyMotionRecurse('ChrBrad')

# Establish lip syncing data set
print 'Establishing lip syncing data set'
scene.run('init-diphoneDefault.py')

# Set up face definition
print 'Setting up face definition'
# Brad's face definition
bradFace = scene.createFaceDefinition('ChrBrad')
bradFace.setFaceNeutral('ChrBrad@face_neutral')
bradFace.setAU(1, "left", "ChrBrad@001_inner_brow_raiser_lf")
bradFace.setAU(1, "right", "ChrBrad@001_inner_brow_raiser_rt")
bradFace.setAU(2, "left", "ChrBrad@002_outer_brow_raiser_lf")
bradFace.setAU(2, "right", "ChrBrad@002_outer_brow_raiser_rt")
bradFace.setAU(4, "left", "ChrBrad@004_brow_lowerer_lf")
bradFace.setAU(4, "right", "ChrBrad@004_brow_lowerer_rt")
bradFace.setAU(5, "both", "ChrBrad@005_upper_lid_raiser")
bradFace.setAU(6, "both", "ChrBrad@006_cheek_raiser")
bradFace.setAU(7, "both", "ChrBrad@007_lid_tightener")
bradFace.setAU(10, "both", "ChrBrad@010_upper_lip_raiser")
bradFace.setAU(12, "left", "ChrBrad@012_lip_corner_puller_lf")
bradFace.setAU(12, "right", "ChrBrad@012_lip_corner_puller_rt")
bradFace.setAU(25, "both", "ChrBrad@025_lips_part")
bradFace.setAU(26, "both", "ChrBrad@026_jaw_drop")
bradFace.setAU(45, "left", "ChrBrad@045_blink_lf")
bradFace.setAU(45, "right", "ChrBrad@045_blink_rt")

bradFace.setViseme("open", "ChrBrad@open")
bradFace.setViseme("W", "ChrBrad@W")
bradFace.setViseme("ShCh", "ChrBrad@ShCh")
bradFace.setViseme("PBM", "ChrBrad@PBM")
bradFace.setViseme("FV", "ChrBrad@FV")
bradFace.setViseme("wide", "ChrBrad@wide")
bradFace.setViseme("tBack", "ChrBrad@tBack")
bradFace.setViseme("tRoof", "ChrBrad@tRoof")
bradFace.setViseme("tTeeth", "ChrBrad@tTeeth")

print 'Adding character into scene'
# Set up Brad
brad = scene.createCharacter('ChrBrad', '')
bradSkeleton = scene.createSkeleton('ChrBrad.sk')
brad.setSkeleton(bradSkeleton)
# Set position
bradPos = SrVec(0, 0, 0)
brad.setPosition(bradPos)
# Set facing direction
bradFacing = SrVec(0, 0, 0)
brad.setHPR(bradFacing)
# Set face definition
brad.setFaceDefinition(bradFace)
# Set standard controller
brad.createStandardControllers()
# Deformable mesh
brad.setDoubleAttribute('deformableMeshScale', .01)
brad.setStringAttribute('deformableMesh', 'ChrBrad.dae')

# Lip syncing diphone setup
brad.setStringAttribute('lipSyncSetName', 'default')
brad.setBoolAttribute('usePhoneBigram', True)
brad.setVoice('remote')

import platform
if platform.system() == "Windows":
    windowsVer = platform.platform()
    if windowsVer.find("Windows-7") == 0:
        brad.setVoiceCode('Microsoft|Anna')
    elif windowsVer.find("Windows-8") == 0 or windowsVer.find("Windows-post2008Server") == 0:
        brad.setVoiceCode('Microsoft|David|Desktop')
else: # non-Windows platform, use Festival voices
    brad.setVoiceCode('voice_kal_diphone')

# setup locomotion
scene.run('BehaviorSetMaleMocapLocomotion.py')
setupBehaviorSet()
retargetBehaviorSet('ChrBrad')

# Turn on GPU deformable geometry
brad.setStringAttribute("displayType", "GPUmesh")

# Set up steering
print 'Setting up steering'
steerManager = scene.getSteerManager()
steerManager.setEnable(False)
brad.setBoolAttribute('steering.pathFollowingMode', False) # disable path following mode so that obstacles will be respected
steerManager.setEnable(True)

#Set up blends
blendManager = scene.getBlendManager()

# 1D Blend
print 'Setting up 1D blend'
blend1D = blendManager.createBlend1D("blend1D")
blend1D.setBlendSkeleton('ChrBackovic.sk')

m1 = scene.getMotion("ChrMarine@WalkTightCircleRt")
m2 = scene.getMotion("ChrMarine@WalkTightCircleR")
m3 = scene.getMotion("ChrMarine@WalkTightCircle")
m4 = scene.getMotion("ChrMarine@WalkTightCircl")
m5 = scene.getMotion("ChrMarine@WalkTightCirc")
m6 = scene.getMotion("ChrMarine@WalkTightCir")

motions = StringVec()
motions.append("ChrMarine@WalkTightCircleRt")
motions.append("ChrMarine@WalkTightCircleR")
motions.append("ChrMarine@WalkTightCircle")
motions.append("ChrMarine@WalkTightCircl")
motions.append("ChrMarine@WalkTightCirc")
motions.append("ChrMarine@WalkTightCir")

paramsX = DoubleVec()
paramsX.append(1)
paramsX.append(2)
paramsX.append(3)
paramsX.append(4)
paramsX.append(5)
paramsX.append(6)

for i in range(0, len(motions)):
    blend1D.addMotion(motions[i], paramsX[i])

points0 = DoubleVec()
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
blend1D.addCorrespondencePoints(motions, points0)
points1 = DoubleVec()
points1.append(2.333333)
points1.append(0.833333)
points1.append(2.333333)
points1.append(0.833333)
points1.append(2.333333)
points1.append(0.833333)
blend1D.addCorrespondencePoints(motions, points1)
points2 = DoubleVec()
points2.append(m1.getDuration())
points2.append(m2.getDuration())
points2.append(m3.getDuration())
points2.append(m4.getDuration())
points2.append(m5.getDuration())
points2.append(m6.getDuration())
blend1D.addCorrespondencePoints(motions, points2)

bml.execBML('ChrBrad', '<blend name="blend1D" x="0.1"/>')

 

# Start the simulation
print 'Starting the simulation'
sim.start()

bml.execBML('ChrBrad', '<body posture="ChrMarine@Idle01"/>')
bml.execBML('ChrBrad', '<saccade mode="listen"/>')

sim.resume()

June 2, 2016, 9:50 am
Admin

The ready/stroke/end times don't matter. Those specify the blend in/out times for motions and the stroke times for gestures. When you are using motion blends, the blends and transitions control those timings.

There are three problems:

1) You need to joint map the ChrBackovic.sk skeleton and ChrMarine@WalkTightCircleRtX motions like this:

marineSkeleton = scene.getSkeleton('ChrBackovic.sk')
zebra2Map.applySkeleton(marineSkeleton)
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt1"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt2"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt3"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt4"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt5"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt6"))
zebra2Map.applyMotion(scene.getMotion("ChrMarine@WalkTightCircleRt7"))

2) You need to match the correspondence points within the timeframe of the motions, so use something like this instead of what you already have:

 

points0 = DoubleVec()
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
points0.append(0)
blend1D.addCorrespondencePoints(motions, points0)

points1 = DoubleVec()
points1.append(m1.getDuration() / 2.0)
points1.append(m1.getDuration() / 2.0)
points1.append(m1.getDuration() / 2.0)
points1.append(m1.getDuration() / 2.0)
points1.append(m1.getDuration() / 2.0)
points1.append(m1.getDuration() / 2.0)
blend1D.addCorrespondencePoints(motions, points1)

points2 = DoubleVec()
points2.append(m1.getDuration())
points2.append(m2.getDuration())
points2.append(m3.getDuration())
points2.append(m4.getDuration())
points2.append(m5.getDuration())
points2.append(m6.getDuration())
blend1D.addCorrespondencePoints(motions, points2)

3) But the main problem is that your motions do not 'correspond' to each other. The idea behind blends is that you have motions that accomplish the same underlying goal, but with different style, timing, or fidelity. Here your motions aren't the same kind of movement, but rather sequential segments of a larger movement. In other words, I could blend a set of animations that each start with the right foot on the ground, lift the left foot, and put that left foot down in different places (front, back, left, right). But I can't blend a motion that starts with the left foot on the ground and lifts the right foot with one that starts with the right foot on the ground and lifts the left foot, just as you couldn't blend a crawling motion with a bicycle-riding motion; they are different movements and can't be interpolated in space and time.

What you seem to have is a set of motions that would work if you strung them together sequentially, and for that you need to create a blend for each individual motion, then a sequential transition between the blends.
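A sequential transition between two blends amounts to cross-fading one blend's weight out while the next fades in, so the clips overlap instead of snapping from one to the other. Here is a minimal standalone sketch of that weight schedule; this is plain Python for illustration, not the SmartBody transition API, and `crossfade_weights` is a hypothetical helper name:

```python
def crossfade_weights(t, end_a, fade):
    """Weights (w_a, w_b) for a sequential transition from clip A to clip B.

    A plays at full weight until `end_a - fade`, then over the final
    `fade` seconds its weight eases out linearly while B's eases in.
    """
    start = end_a - fade
    if t <= start:
        return (1.0, 0.0)   # still fully in clip A
    if t >= end_a:
        return (0.0, 1.0)   # fully handed over to clip B
    w_b = (t - start) / fade
    return (1.0 - w_b, w_b)
```

Chaining six blends this way means each pair of neighbors overlaps only during its own fade window, which is what a sequential transition gives you.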

 

Ari

June 3, 2016, 3:11 am
Member

Thank you one more time.
I understand what you are saying. I changed some things and now have something better, but it's not what I want yet. I can see that the motions are blended, but it seems to average all of them.

I don't know if this will give me the result I want.
The motions I'm working with have the same goal.
As you saw, I have six motions. So I created a hexagon with them and built triangles. My question is: is the point in the middle where all the animations are supposed to meet? And is that where the motions are all blended together?
Is it possible to superimpose the animations?

I also tried transitions. Not bad.
I've made six blends, and then the transitions. If the transitions played at the same time, it would be more realistic. Is it possible to do that?
