Face Animation

The aim of this project is to synthesize speech-driven, emotion-expressive avatar facial animation in real time. The input to the system will be speech in textual format, and the output will be facial animation including lip-syncing, head motion, and facial expressions such as happiness, anger, and sadness.
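
To make the intended data flow concrete, here is a minimal sketch of the text-in, animation-out pipeline. All names (`FacialAnimation`, `synthesize_speech`, `animate`) are hypothetical placeholders, not the project's actual API; the TTS and inference steps are stubbed just to show the interface.

```python
from dataclasses import dataclass, field

@dataclass
class FacialAnimation:
    lip_sync: list = field(default_factory=list)      # per-frame viseme/mouth weights
    head_motion: list = field(default_factory=list)   # per-frame head pose (yaw, pitch, roll)
    expression: str = "neutral"                       # e.g. "happiness", "anger", "sadness"

def synthesize_speech(text: str) -> bytes:
    """Stub for a TTS step that turns the textual input into speech audio."""
    return text.encode("utf-8")  # placeholder; a real system would return audio samples

def animate(text: str, emotion: str = "neutral") -> FacialAnimation:
    """Stub for the full pipeline: text -> speech -> per-frame animation."""
    audio = synthesize_speech(text)
    n_frames = max(1, len(audio))  # placeholder frame count derived from audio length
    return FacialAnimation(
        lip_sync=[0.0] * n_frames,
        head_motion=[(0.0, 0.0, 0.0)] * n_frames,
        expression=emotion,
    )

print(animate("Hello there!", emotion="happiness").expression)
```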

The solution will learn the relationship between speech and facial motion/head motion directly from videos. Convolutional neural networks (CNNs) are used to model this speech-to-motion mapping and to synthesize new animations for a given speech input.
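
As a rough illustration of such a model, here is a minimal PyTorch sketch of a 1-D CNN that maps a window of speech features (assumed here to be a mel spectrogram) to per-frame facial animation parameters such as blendshape weights. The architecture, feature choice, and parameter counts are assumptions for illustration, not the project's actual network.

```python
import torch
import torch.nn as nn

class SpeechToFaceCNN(nn.Module):
    def __init__(self, n_mels: int = 80, n_params: int = 52):
        super().__init__()
        # Temporal convolutions over the spectrogram capture local
        # co-articulation context around each animation frame.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Per-frame 1x1 convolution regresses the animation parameters
        # (e.g., blendshape weights for lips, brows, and head pose).
        self.head = nn.Conv1d(64, n_params, kernel_size=1)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames) -> (batch, n_params, frames)
        return self.head(self.encoder(mel))

model = SpeechToFaceCNN()
dummy_mel = torch.randn(1, 80, 100)  # e.g., one second of features at 100 frames/s
params = model(dummy_mel)            # (1, 52, 100): animation curves over time
print(params.shape)
```

Trained on video data, a network of this shape can be supervised with facial parameters extracted from each frame, so the speech-to-animation mapping is learned end to end.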

Potential application areas include:

1. Avatars
2. Smart Assistants
3. Mobile Assistants
4. Facial Animation Synthesis
5. Lip Syncing
6. Dubbing