Virtual Humans: Motion and Behavior
A physically simulated human must learn to use its muscles; it cannot rely on
a script or keyframes to dictate how each limb moves.
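To illustrate what replaces keyframes in a physical simulation, here is a minimal sketch (all names and gains are hypothetical, not drawn from any system cited below) of a proportional-derivative (PD) controller: instead of setting a joint angle directly, it computes a torque each physics step that drives a single simulated joint toward a target angle.

```python
# Hypothetical sketch: a PD controller acting as a crude "muscle" for
# one simulated joint. The simulation integrates torque -> velocity ->
# angle; the controller only ever outputs torque, never poses.

def pd_torque(theta, theta_dot, target, kp=40.0, kd=8.0):
    """Torque = stiffness * angle error - damping * angular velocity."""
    return kp * (target - theta) - kd * theta_dot

def simulate(target=1.0, steps=2000, dt=0.005, inertia=1.0):
    theta, theta_dot = 0.0, 0.0
    for _ in range(steps):
        tau = pd_torque(theta, theta_dot, target)
        theta_dot += (tau / inertia) * dt   # integrate acceleration
        theta += theta_dot * dt             # integrate velocity (semi-implicit Euler)
    return theta

print(round(simulate(), 3))  # settles at the target angle, 1.0
```

Real controllers for full bodies must coordinate many such joints under contact and balance constraints, which is why the work listed below turns to learned or biologically inspired control rather than hand-tuned gains.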
- Motion Control
- Intelligent Motion Control with an Artificial Cerebellum, by Russell Smith, University of Auckland
- Dynamic Balance and Walking Control of Planar Bipeds, by James J. Troy, Mechanical Engineering Department, Iowa State University
- Realistic Animation of Legged Running on Rough Terrain, by John Nagle of Animats
- Why Legs Have Three Joints, by John Nagle of Animats
- Chris Bregler, at Interval Research and UCB, has worked on visual motion tracking of full-body movement, which is useful for training motion controllers:
- "During talking, complex configurations and subtle lip motions are generated. During gesturing, walking, and other actions, coarse articulated limb and body movements are generated. Depending on the kind of motion and application, different abstractions and resolutions are required. Some motions are very constrained, like speaking lips or walk styles; these can be learned from data. Other actions only satisfy very general constraints. Such constraints can be coded a-priori. Recognition tasks require extracting and modeling only a few discriminating features. Animation tasks require capturing every subtle detail."
- WavesWorld: A Testbed for Three-Dimensional Semi-Autonomous Animated Characters
The central thesis of this dissertation is that by treating computer graphics and artificial intelligence techniques as peers in a solution, rather than relegating either to the status of an "implementation detail", we can achieve a synergy greater than the sum of the parts.
- Zeltzer, D. and M.B. Johnson, Motor Planning: Specifying and Controlling the Behavior of
Autonomous Animated Agents. Journal of Visualization and Computer Animation, April-June
1991, 2(2), pp. 74-80.
- Fully Autonomous Behavior
- would require some degree of AI
- John E. Laird at the Department of Electrical Engineering and Computer Science,
University of Michigan is currently working on the construction of intelligent agents that
interact with complex, dynamic environments. This work is currently being pursued within
the context of the Soar/IFOR component of the WISSARD/IFOR/CFOR project (funded by
ARPA/ASTO). The goal of Soar/IFOR is the development of autonomous computer agents whose
behavior is tactically indistinguishable from that of humans. These synthetic agents must
be not only lifelike but humanlike, with many of the capabilities we commonly associate
with intelligent human behavior: real-time reactivity, goal-directed problem solving and
planning, large bodies of knowledge, adaptation to changing situations, and interaction
and coordination with other intelligent entities. A long-term goal of this research is to
extend this technology to education, training, and entertainment, where humans can interact
with humanlike intelligent agents in a variety of synthetic environments.
- Use of language to direct or guide the behavior of virtual human agents
- Some people, including several of Norm's associates, have done research in this area: