Real Time Speech Driven Face Animation

The goal of this project is to construct and implement a real-time speech-to-face-animation system. The program is based on the Visage Technologies software. Neural networks are used to classify the incoming speech, and the program shows an animated face that mimics the sound.

The animation is already implemented, so the work done in this thesis is focused on signal processing of an audio signal, and the implementation of speech-to-lip mapping and synchronization…
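As a rough illustration of the signal-processing front end covered in Chapter 3, the sketch below computes Mel Frequency Cepstral Coefficients for a single audio frame using plain NumPy. The parameter values (frame length, filter count, number of coefficients) and the filterbank construction are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale (assumed design)."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(frame, sr, n_filters=26, n_coeffs=12):
    """MFCCs of one frame: window -> power spectrum -> mel energies -> log -> DCT."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    energies = mel_filterbank(n_filters, n_fft, sr) @ spectrum
    log_e = np.log(energies + 1e-10)            # small offset avoids log(0)
    # DCT-II decorrelates the log filterbank energies
    n = np.arange(n_filters)
    return np.array([np.sum(log_e * np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters)))
                     for k in range(n_coeffs)])

# Example: one 32 ms frame of a synthetic 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
coeffs = mfcc(frame, sr)
print(coeffs.shape)  # (12,)
```

In the full system, a vector like this (one per audio frame) would be the input that the neural networks classify before mapping the result to lip shapes.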


1 Introduction
1.1 Thesis outline
1.2 Target group
2 Existing Techniques
2.1 Using Motion Units and Neural Networks
Obtaining the MUs
Real Time Audio-to-MUP Mapping
Training Phase
Estimation Phase
A Similar Approach
2.2 Combining Hidden Markov Models and Sequence Searching Method
HMM-based Method
Sequence Searching Method
2.3 Lip Synchronization Using Linear Predictive Analysis
System Overview
Energy Analysis
Zero Crossing
Facial Animation
3 Our Real Time Speech Driven Face Animation
3.1 Initial Experiments
3.2 Constructing a Phoneme Database
3.3 Signal Processing
Mel Frequency Cepstral Coefficients
Fisher Linear Discriminant Transformation
3.4 Recognition with Neural Networks
The Structure of a Neural Network
Training the Neural Networks
Validation of the Neural Networks
3.5 Classification Using a Gaussian Mixture Model
4 Implementation
5 Discussion
5.1 Results
5.2 Limitations and Future Work

Source: Linköping University
