SmartBody characters can be designed to automatically respond to speech with nonverbal behavior, such as emotional expressions, head movements, saccadic eye movements and gestures.
To do so, an utterance is processed by a nonverbal behavior generation process that constructs a BML block, which SmartBody can then run to animate the character in synchrony with the spoken words. This requires instantiating an extension of the SBParserListener interface, which contains callbacks for each word and each part of speech (noun phrase, verb phrase, etc.) generated from the utterance.
Below is a simple example of such a class that responds to words such as 'you' or 'me' with deictic (pointing) gestures, using only the words themselves and ignoring any syntactic structure:
# define a class that can respond to the words: me, my, I, mine and you, your, yours
class MyListener(SBParserListener):
    def onWord(self, timing, word):
        if word == "me" or word == "my" or word == "I" or word == "mine":
            self.addBML("<gesture lexeme=\"DEICTIC\" type=\"ME\" start=\"sp1:" + timing + "\"/>")
        elif word == "you" or word == "your" or word == "yours":
            self.addBML("<gesture lexeme=\"DEICTIC\" type=\"YOU\" start=\"sp1:" + timing + "\"/>")

my = MyListener()

# associate the listener with a character
scene.getCharacter("mycharacter").addParserListener(my)

# obtain some BML by sending the utterance through the parser
bmlblock = scene.getParser().parseUtterance("hello, my name is Brad.")

# execute the BML
bml.execXML("mycharacter", bmlblock)
To include syntactic constructs, the Charniak parser, which identifies parts of speech, must first be initialized:
parser = scene.getParser()
parser.initialize(scene.getMediaPath() + "/parser/EN/", "-T40")
The SBParserListener must then implement the onPartOfSpeech callback:
class NonverbalListener(SBParserListener):
    def onPartOfSpeech(self, timing, partOfSpeech):
        print "(" + partOfSpeech + ")"
        if partOfSpeech == "NP":
            self.addBML("<head amount=\"0.05\" type=\"NOD\" start=\"sp1:" + timing + "\"/>")
        elif partOfSpeech == "VP":
            self.addBML("<gesture lexeme=\"BEAT\" stroke=\"sp1:" + timing + "\"/>")
        else:
            return

nonverbal = NonverbalListener()

# obtain some BML by sending it through the listener
bmlblock = scene.getParser().parseUtterance(nonverbal, "hello, my name is Brad.")

# execute the BML
bml.execXML("mycharacter", bmlblock)
Part-of-speech tags include N (noun), V (verb), NP (noun phrase), VP (verb phrase), PRP (personal pronoun), PRP$ (possessive pronoun) and so forth. A more complete list can be found here: http://www.monlp.com/2011/11/08/part-of-speech-tags/
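As the if/elif chains in these listeners grow, the tag-to-BML mapping can be factored into a lookup table. The sketch below is a plain-Python illustration; POS_TO_BML and bmlForPartOfSpeech are hypothetical names, not part of the SmartBody API, and the BML templates are the ones used in the listener above.

```python
# Table-driven alternative to the if/elif chain in onPartOfSpeech.
# POS_TO_BML and bmlForPartOfSpeech are illustrative helpers, not
# part of the SmartBody API; the templates come from the example above.
POS_TO_BML = {
    "NP": '<head amount="0.05" type="NOD" start="sp1:%s"/>',
    "VP": '<gesture lexeme="BEAT" stroke="sp1:%s"/>',
}

def bmlForPartOfSpeech(timing, partOfSpeech):
    # return a BML fragment for known tags, or None for tags we ignore
    template = POS_TO_BML.get(partOfSpeech)
    if template is None:
        return None
    return template % timing
```

Inside onPartOfSpeech, any non-None result would simply be passed to self.addBML, and new tags can be supported by adding table entries rather than branches.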
The following demonstrates slightly more complex handling of nonverbal behavior, which recognizes more words (everything, maybe, nice, ...) and adds some random behavior:
from random import Random

class NonverbalListener(SBParserListener):
    def onWord(self, timing, word):
        if word == "yes" or word == "yeah":
            self.addBML("<head type=\"NOD\" amount=\".3\" start=\"sp1:" + timing + "\"/>")
        elif word == "no" or word == "nah" or word == "nothing" or word == "cannot" or word == "can't" or word == "don't" or word == "didn't" or word == "couldn't" or word == "isn't" or word == "wasn't" or word == "never":
            self.addBML("<gesture lexeme=\"METAPHORIC\" type=\"NEGATION\" stroke=\"sp1:" + timing + "\"/><head amount=\"0.05\" type=\"SHAKE\" start=\"sp1:" + timing + "\"/><face amount=\"1\" au=\"6\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
        elif word == "me" or word == "my" or word == "I" or word == "mine":
            self.addBML("<gesture lexeme=\"DEICTIC\" type=\"ME\" start=\"sp1:" + timing + "\"/>")
        elif word == "you" or word == "your" or word == "yours":
            self.addBML("<gesture lexeme=\"DEICTIC\" type=\"YOU\" start=\"sp1:" + timing + "\"/>")
        elif word == "but" or word == "however":
            self.addBML("<face amount=\"1\" au=\"6\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/><gesture lexeme=\"METAPHORIC\" type=\"CONTRAST\" start=\"sp1:" + timing + "\"/>")
        elif word == "maybe" or word == "perhaps":
            self.addBML("<face amount=\"1\" au=\"12\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
        elif word == "everything" or word == "all" or word == "whole" or word == "full" or word == "completely":
            self.addBML("<gesture lexeme=\"METAPHORIC\" type=\"INCLUSIVITY\" start=\"sp1:" + timing + "\"/>")
        elif word == "really" or word == "very" or word == "quite" or word == "wonderful" or word == "great" or word == "absolutely" or word == "huge" or word == "fantastic" or word == "so" or word == "amazing" or word == "important":
            self.addBML("<gesture lexeme=\"BEAT\" stroke=\"sp1:" + timing + "\"/><face amount=\"1\" au=\"1\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
        elif word == "good" or word == "nice":
            self.addBML("<gesture lexeme=\"BEAT\" stroke=\"sp1:" + timing + "\"/>")
        elif word == "why" or word == "when" or word == "what" or word == "where" or word == "how" or word == "do":
            self.addBML("<face amount=\"1\" au=\"1\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/><face amount=\"1\" au=\"2\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/><gesture lexeme=\"METAPHORIC\" type=\"QUESTION\" stroke=\"sp1:" + timing + "\"/>")
        elif word == "must":
            self.addBML("<gesture lexeme=\"BEAT\" stroke=\"sp1:" + timing + "\"/>")
        else:
            # for unrecognized words, occasionally trigger a random facial action unit
            r = Random()
            num = int(r.random() * 5)
            if num == 1:
                self.addBML("<face amount=\"1\" au=\"12\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
            elif num == 2:
                self.addBML("<face amount=\"1\" au=\"6\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
            elif num == 3:
                self.addBML("<face amount=\"1\" au=\"45\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")
            elif num == 4:
                self.addBML("<face amount=\"1\" au=\"2\" side=\"BOTH\" type=\"facs\" start=\"sp1:" + timing + "\"/>")

    def onPartOfSpeech(self, timing, partOfSpeech):
        if partOfSpeech == "NP":
            self.addBML("<head amount=\"0.05\" type=\"NOD\" start=\"sp1:" + timing + "\"/>")
        else:
            self.addBML("<gesture lexeme=\"BEAT\" stroke=\"sp1:" + timing + "\"/>")

nonverbal = NonverbalListener()

# obtain some BML by sending it through the listener
bmlblock = scene.getParser().parseUtterance(nonverbal, "That was very nice. I couldn't think of everything like you did.")

# execute the BML
bml.execBML("*", bmlblock)
The SBParserListener can also be used to generate nonverbal behavior offline for later use. The 'bmlblock' string result can be saved to a .xml file and modified as needed:
bmlblock = scene.getParser().parseUtterance(nonverbal, "That was very nice. I couldn't think of everything like you did.")

myutterancefile = open('thatwasnice.xml', 'w')
myutterancefile.write(bmlblock)
myutterancefile.close()
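In a later session the saved block can be read back, tweaked, and executed like a freshly generated one. Below is a minimal sketch of that round trip; the BML string stands in for a real parseUtterance result, and the attribute substitution is only an illustration of "modified as needed":

```python
# stand-in for a saved parseUtterance result
bmlblock = '<head amount="0.05" type="NOD" start="sp1:T3"/>'

# save it, as in the example above
f = open('thatwasnice.xml', 'w')
f.write(bmlblock)
f.close()

# later session: load the block back
f = open('thatwasnice.xml', 'r')
loaded = f.read()
f.close()

# "modified as needed": e.g. soften the nod before replaying
loaded = loaded.replace('amount="0.05"', 'amount="0.02"')

# then execute it as before, e.g.:
# bml.execXML("mycharacter", loaded)
```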