We'd like to use a text-to-speech service to control an animatronic. The animatronic has a mouth and needs to move its lips and jaw while it speaks. Amazon Polly has phoneme and viseme support, which is what we were using. However, we're switching to IBM Watson for our upcoming demo and could not find anything related to a "mouth position" that corresponds with the audio. We tried generating the mouth shapes from acoustic models, but the result doesn't look good. We want to retrieve both the audio and the phonetic timing while the robot is speaking so we can drive the mouth directly. Is there any way to do that with IBM Watson's Text to Speech service? See the attached image for the different mouth shapes used by companies like Disney.
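For context, here is a minimal sketch of what we were doing with Polly's viseme speech marks (the newline-delimited JSON format described in the linked AWS docs). The `SAMPLE_MARKS` data and the `VISEME_TO_SHAPE` mapping are placeholders standing in for the real Polly response and our animatronic's actual mouth poses; this is the kind of timeline we're hoping to get out of Watson.

```python
import json

# Sample viseme speech marks in the newline-delimited JSON format that
# Amazon Polly returns when SpeechMarkTypes=["viseme"] is requested
# (times are offsets in milliseconds from the start of the audio).
SAMPLE_MARKS = """\
{"time":0,"type":"viseme","value":"p"}
{"time":125,"type":"viseme","value":"i"}
{"time":255,"type":"viseme","value":"t"}
{"time":310,"type":"viseme","value":"sil"}
"""

# Hypothetical mapping from Polly viseme codes to our servo mouth poses;
# the codes come from Polly's viseme table, the pose names are placeholders.
VISEME_TO_SHAPE = {
    "p": "closed",
    "i": "wide",
    "t": "teeth",
    "sil": "rest",
}

def parse_viseme_marks(raw: str) -> list[tuple[int, str]]:
    """Turn newline-delimited speech-mark JSON into a timeline of
    (offset_ms, mouth_shape) cues for the jaw/lip controller."""
    cues = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        mark = json.loads(line)
        if mark["type"] != "viseme":
            continue
        shape = VISEME_TO_SHAPE.get(mark["value"], "rest")
        cues.append((mark["time"], shape))
    return cues

print(parse_viseme_marks(SAMPLE_MARKS))
# → [(0, 'closed'), (125, 'wide'), (255, 'teeth'), (310, 'rest')]
```

We play the synthesized audio and step through these cues on a timer, moving the servos to each pose at its offset.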
Related links: https://docs.aws.amazon.com/polly/latest/dg/viseme.html