Mouth Software

Once you get the robot's jaw or lips moving, you need to have it say something.

You can use simple on-board software, cloud-based software, or telepresence software, which is what Engineered Arts does for some of its public performance robots.

Engineered Arts uses software called Tinman, which lets a remote operator see the people talking to the robot in a public space such as a museum; the operator can then answer questions and have their voice come out of the robot.

Otherwise the robot needs to either respond from a preprogrammed audio bank or perhaps use a speech-to-text device feeding a chatbot.
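As a rough illustration of the audio-bank approach, here is a minimal Arduino sketch. It assumes a DFPlayer Mini MP3 module (driven through the DFRobotDFPlayerMini library) holding pre-recorded responses on an SD card, wired to pins 10 and 11, with a pushbutton on pin 2 as the trigger; the pins, clip count, and wiring are assumptions for illustration, not requirements.

```cpp
// Rough sketch: play canned responses from an "audio bank" on an SD card.
// Assumes a DFPlayer Mini MP3 module on pins 10 (RX) and 11 (TX) via
// SoftwareSerial, and a pushbutton to ground on pin 2.
#include <Arduino.h>
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>

SoftwareSerial playerSerial(10, 11);   // RX, TX to the DFPlayer
DFRobotDFPlayerMini player;

const int BUTTON_PIN = 2;
const int CLIP_COUNT = 5;              // number of MP3 files on the card
int nextClip = 1;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  playerSerial.begin(9600);
  player.begin(playerSerial);
  player.volume(20);                   // 0..30
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {  // button pressed
    player.play(nextClip);               // plays 0001.mp3, 0002.mp3, ...
    nextClip = (nextClip % CLIP_COUNT) + 1;
    delay(500);                          // crude debounce
  }
}
```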

The simple animatronics from Gemmy, like the Singing Santa or the Halloween characters, play songs from a circuit board and sync them with the moving mouth of the character. They also have a microphone jack at the base; plug a mic in and talk or sing, and the character's jaw will go up and down roughly in sync with what you are saying.

Starting from a simple animatronic like that, you can get better, sound-driven mouth movements, as sketched below.
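The following Arduino sketch shows one way to do it: read the loudness envelope from an electret microphone amplifier (assumed here to be something like a MAX4466 breakout on pin A0) and map it to a jaw servo angle (assumed on pin 9). The pins, angle limits, and loudness range are guesses you would tune for your own build.

```cpp
// Drive a jaw servo from a microphone's loudness envelope.
// Assumes an electret mic amplifier on A0 and a jaw servo on pin 9.
#include <Arduino.h>
#include <Servo.h>

const int MIC_PIN = A0;
const int SERVO_PIN = 9;
const int JAW_CLOSED = 10;   // servo angle with the mouth shut
const int JAW_OPEN   = 60;   // servo angle with the mouth fully open

Servo jaw;

void setup() {
  jaw.attach(SERVO_PIN);
  jaw.write(JAW_CLOSED);
}

void loop() {
  // Sample the mic for ~20 ms and keep the peak-to-peak swing
  // as a rough loudness estimate.
  unsigned long start = millis();
  int lo = 1023, hi = 0;
  while (millis() - start < 20) {
    int s = analogRead(MIC_PIN);
    if (s < lo) lo = s;
    if (s > hi) hi = s;
  }
  int level = hi - lo;

  // Map loudness to jaw angle; the 10..300 range is a guess that
  // needs tuning for your mic gain and room noise.
  int angle = map(constrain(level, 10, 300), 10, 300, JAW_CLOSED, JAW_OPEN);
  jaw.write(angle);
}
```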

There is even Bruce Wilcox's ChatScript chatbot engine, which we could look into for generating the robot's replies.

TTS, or text-to-speech, is a common technology: you type words on a computer, the text gets turned into audio (processed through something like an Arduino and a speech module on the robot), and that audio output can be synced with the moving lips on your robot.
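One common way to wire this up is to let a PC-side script run the TTS engine, measure the loudness of the audio it is playing, and stream loudness frames to the Arduino over USB serial; the Arduino then only has to map each frame to a jaw angle. The one-byte-per-frame protocol, baud rate, and pins below are assumptions for illustration, not a standard.

```cpp
// The Arduino end of a TTS pipeline: a PC program runs text-to-speech,
// measures the loudness of the audio it plays, and streams one byte per
// frame (0 = silence, 255 = loudest) over USB serial.  This sketch maps
// each incoming frame to a jaw servo angle.
#include <Arduino.h>
#include <Servo.h>

const int SERVO_PIN = 9;
const int JAW_CLOSED = 10;
const int JAW_OPEN   = 60;

Servo jaw;

void setup() {
  Serial.begin(115200);
  jaw.attach(SERVO_PIN);
  jaw.write(JAW_CLOSED);
}

void loop() {
  if (Serial.available() > 0) {
    int level = Serial.read();                          // 0..255 loudness frame
    int angle = map(level, 0, 255, JAW_CLOSED, JAW_OPEN);
    jaw.write(angle);
  }
}
```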

Mycroft AI is one technology that works with TTS, and here is a link to its TTS engine documentation: https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/customizations/tts-engine

(under construction Oct 5, 2021)