- Facial Animation
- Currently, most virtual humans simply morph their facial vertices between
pre-defined expressions. That is a reasonable approximation for some applications, but
others (e.g. simulating the chewing of food) obviously require an articulated jaw at the
very minimum (see the sketch at the end of this list).
- Like the shoulder, this could be addressed with a bone/muscle simulation.
- Facial animation page at the UCSC Perceptual Science Laboratory
- Layered Compositing of Facial Expression
- SIGGRAPH 1997 technical sketch, Ken Perlin
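A minimal sketch of the two ideas above: vertex positions are linearly blended between
morph targets, and a single jaw "bone" then rotates jaw-weighted vertices on top of the
blend. All mesh data, weights, and the pivot here are illustrative placeholders, not any
particular system's rig.

```python
import numpy as np

def blend_expressions(neutral, targets, weights):
    """Linearly blend morph targets as offsets from the neutral mesh.

    neutral: (N, 3) array of rest-pose vertex positions
    targets: dict name -> (N, 3) array of vertex positions per expression
    weights: dict name -> float blend weight in [0, 1]
    """
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

def rotate_jaw(vertices, jaw_weights, pivot, angle):
    """Rotate vertices about a lateral (x) axis through the jaw pivot,
    scaled per vertex by a jaw weight: a crude one-bone skinning pass."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[1, 0, 0],
                    [0, c, -s],
                    [0, s, c]])
    rotated = (vertices - pivot) @ rot.T + pivot
    w = jaw_weights[:, None]          # (N, 1); 1.0 = fully jaw-driven
    return (1.0 - w) * vertices + w * rotated

# Toy usage with random data standing in for a real face mesh.
n = 1000
neutral = np.random.rand(n, 3)
targets = {"smile": neutral + 0.01 * np.random.rand(n, 3)}
face = blend_expressions(neutral, targets, {"smile": 0.7})
jaw_w = np.random.rand(n)             # would normally be painted skinning weights
face = rotate_jaw(face, jaw_w, pivot=np.array([0.0, 0.3, 0.1]),
                  angle=np.radians(-12.0))   # open the jaw about 12 degrees
```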
- Lips and Tongue
- We need to go beyond simple morphing of facial skin vertices if we want realistic
actions such as talking and eating (a sketch of one possible approach follows at the
end of this section).
- Sumit Basu - A Three-Dimensional Model of Human Lip Motion (1997)
- "there is a device that is used in speach therapy. It is a plate of sensors that is
attached to the upper palate. It reads the position of the tounge against it during
speach. The display is a 2d map of active sensors. So it would not be an ideal solution
but could be a start. Also the speeds are incredible. The main drawback is that it has to
be custom made for each mouth."
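One common step beyond static expression morphing for talking is to map phonemes to
visemes (mouth poses) and drive morph-target weights plus a jaw angle from that mapping.
The sketch below assumes blend/jaw functions like those in the earlier sketch; every
table entry is a made-up placeholder, not a value from any of the papers cited above.

```python
# Map phonemes to visemes, and visemes to (morph weights, jaw angle).
# All names and numbers here are illustrative assumptions.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "F": "lip_bite",
}

VISEME_POSE = {
    # viseme: (morph-target weights, jaw angle in degrees)
    "open":     ({"lips_open": 0.9},                  -14.0),
    "wide":     ({"lips_wide": 0.8, "lips_open": 0.3}, -4.0),
    "round":    ({"lips_round": 0.9},                  -6.0),
    "closed":   ({"lips_pressed": 0.7},                 0.0),
    "lip_bite": ({"lower_lip_in": 0.8},                -2.0),
}

def pose_for_phoneme(phoneme):
    """Look up the mouth pose for one phoneme, defaulting to closed lips."""
    viseme = PHONEME_TO_VISEME.get(phoneme, "closed")
    return VISEME_POSE[viseme]

weights, jaw_angle = pose_for_phoneme("AA")
```

A real system would blend neighbouring phonemes' poses over time (coarticulation)
rather than snapping from one viseme to the next.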
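The sensor plate described in the quote is commonly known as electropalatography (EPG).
Below is a hedged sketch of how one frame of its 2D contact map might be reduced to a
few values for driving a tongue rig; the 8x8 grid and the front/back split are
assumptions for illustration, not any real device's data format.

```python
import numpy as np

def contact_features(frame):
    """Reduce one EPG frame (rows ordered front-to-back) to coarse features:
    total contact, plus the share of contact at the front (alveolar) versus
    the back (velar) half of the palate."""
    frame = np.asarray(frame, dtype=bool)
    total = frame.sum()
    front = frame[: frame.shape[0] // 2].sum()
    back = total - front
    return {
        "total_contact": int(total),
        "front_ratio": front / total if total else 0.0,
        "back_ratio": back / total if total else 0.0,
    }

# A fabricated frame roughly resembling a /t/ closure: contact along the
# alveolar ridge and the lateral margins of the palate.
frame = np.zeros((8, 8), dtype=bool)
frame[0, :] = True                     # contact across the alveolar ridge
frame[1:, 0] = frame[1:, -1] = True    # contact along both side margins
print(contact_features(frame))
```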