Elsevier

Computers & Graphics

Volume 28, Issue 6, December 2004, Pages 945-954

Technical section
Behaviourally rich actions for user-controlled characters

https://doi.org/10.1016/j.cag.2004.08.006

Abstract

The increasing use of animated characters and avatars in computer games and 3D online worlds demands increasingly complex behaviour driven by increasingly simple, easy-to-use control systems. This paper presents a system for user-controlled actions that aims at simplicity and ease of use while exploiting modern animation techniques to produce rich and complex behaviour. We use inverse-kinematics-based motion adaptation to apply pre-existing pieces of motion to new targets. The expressiveness of the character is enhanced by adding autonomous behaviour, in this case eye-gaze behaviour. This behaviour is generated autonomously but is still influenced by the actions that the user requests the character to perform. The actions themselves are simple for a designer with no programming experience to create, simple for an end user to customise, and very simple to invoke.

Introduction

With the rise of computer games and 3D interactive environments, real-time computer-animated characters are becoming increasingly important. In traditional computer games and single-user environments, characters were either user-controlled characters with minimal requirements for realistic behaviour or computer-controlled characters that acted entirely autonomously. However, with the advent of multi-player online games and 3D chat and meeting environments, the nature of user-controlled characters changes. They now represent the user to other participants. As such they are, at least partly, responsible for the impression that the user makes. They should foster a sense of co-presence: the sense of actually being with another person rather than with a graphical object on a computer screen. People perform very complex non-verbal behaviour in social situations. This behaviour has a large number of functions, from regulating conversation to expressing emotion and attitudes towards other people [1]. To replicate a social environment with virtual characters, these characters must replicate at least some of this behaviour. However, it is far too complex for users to control in real time: it is multi-modal, so users would have to control many modalities at the same time; much of it is subconscious, so users are unlikely to know what behaviour is appropriate; and the behaviour itself is often subtle and involved. What is more, all this activity has to be performed on top of the user's main task, whether that is playing a game or talking to other people in a chat room. We therefore believe that the control of the user's behaviour should be made as simple as possible while producing as rich a behaviour as possible. This simple real-time control can be enhanced by off-line customisation that makes characters display a personality chosen by the user (such customisation is already very popular for the graphical bodies of characters [2]).

In this paper, we demonstrate that modern computer animation techniques can be used to create such a tool. Our techniques aim to be applicable to any type of inhabited virtual world, whether a commercial computer game, a social environment, a tool for remote meetings, or an educational world. We base our interface on typical actions that might be used in a virtual world, for example picking up objects, opening doors or, in a sporting simulation, catching balls. These behaviours would typically be implemented as pre-existing pieces of motion, either motion captured or hand animated. This has the advantage that the motion can be very expressive. The disadvantage is that only a small number of motions is possible. This means that motion can be repetitive, and actions that act on an object, such as opening a door, cannot be adapted to different positions of that object. We use two main techniques to make this sort of action more flexible while still basing it on pre-existing motion. Firstly, we use motion editing techniques to adapt the motion to new situations, allowing greater re-use of motions. Secondly, we add autonomous behaviour to the action. This behaviour is secondary to the action itself, but it gives a greater sense of life to the character and prevents the action from becoming repetitive when performed many times. We focus on autonomous expressive behaviour, and eye gaze in particular.

We believe that users should be able to customise the behaviour of the character. It is very important that individual characters behave differently from each other and that their behaviour be determined by the user who controls them. It has been noted [2] that users of on-line worlds are very keen to customise the graphical appearance of their avatars, and it is likely they would be just as keen to customise the behaviour if suitable tools were available. We propose two types of customisation: one for expert users, who might be the creators of the virtual world, and one for end users. Skilled users can create new actions. We have provided tools that make this easy: the user starts with a piece of motion and adds a small amount of meta-data to it. This is the sort of customisation that might be performed when creating a new environment or game, though our system aims to make it simpler than current methods. The second type of customisation is aimed at less skilled users and involves altering the details of an action to fit the personality of the character. This might be done when first joining an environment and creating one's avatar.
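To make the motion-plus-meta-data idea concrete, an action description might look like the following sketch. The field names, the file name and the `gaze_interest` parameter are hypothetical illustrations, not the paper's actual schema; end-user customisation would then amount to editing a field such as `gaze_interest` rather than touching the motion itself.

```python
from dataclasses import dataclass

# Hypothetical action descriptor: field names and the file format are
# illustrative assumptions, not the paper's actual schema.
@dataclass
class Action:
    name: str            # label shown to the end user
    motion_clip: str     # pre-existing motion, captured or hand-animated
    effector: str        # joint adapted to the target, e.g. "right_hand"
    reach_frame: int     # frame at which the effector should touch the target
    gaze_interest: float = 0.5  # customisable: how strongly gaze favours the target

# A designer attaches meta-data to an existing clip; an end user might
# later tweak only gaze_interest to suit their avatar's personality.
open_door = Action("open door", "open_door.bvh", "right_hand", 42, gaze_interest=0.8)
```

Keeping the customisable parameters in plain data like this is what lets a designer with no programming experience create or adjust an action.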

Section snippets

Overview

Fig. 1 shows an overview of the action system. It consists of two methods of controlling the character's behaviour: off-line customisation (i.e. customisations that occur at times when the user is not directly interacting with the virtual world) and real-time control (i.e. control during real-time interactions with the virtual world). It also contains two main components of that behaviour, which we call primary and secondary behaviour. Primary behaviour is the behaviour that is directly

Motion adaptation

This section describes the inverse-kinematics-based motion adaptation techniques we use for the character's primary behaviour.

Virtual characters, like real people, must act on objects in their environment. Examples of this sort of action in everyday life are drinking a cup of coffee or opening a door; in sport, examples are kicking a football or hitting a tennis ball. We call the objects that are acted on targets. The actions of virtual characters are generally animated by using pre-existing
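The kind of inverse-kinematics adjustment that retargets a reach to a new object position can be sketched with a cyclic coordinate descent (CCD) solver on a toy 2D chain. This is an illustrative stand-in under our own assumptions, not the paper's actual solver, which operates on a full character skeleton.

```python
import math

# Toy 2D cyclic coordinate descent (CCD) IK solver: an illustrative
# stand-in for IK-based motion adaptation, not the paper's algorithm.
def ccd_ik(joints, target, iters=50, tol=1e-3):
    """Bend a chain of 2D joint positions so its end effector reaches target."""
    pts = [list(p) for p in joints]
    for _ in range(iters):
        # sweep from the joint nearest the effector back to the root
        for i in range(len(pts) - 2, -1, -1):
            jx, jy = pts[i]
            ex, ey = pts[-1]
            # rotate so the effector direction lines up with the target direction
            da = (math.atan2(target[1] - jy, target[0] - jx)
                  - math.atan2(ey - jy, ex - jx))
            c, s = math.cos(da), math.sin(da)
            for k in range(i + 1, len(pts)):  # rotate all downstream joints
                dx, dy = pts[k][0] - jx, pts[k][1] - jy
                pts[k][0] = jx + c * dx - s * dy
                pts[k][1] = jy + s * dx + c * dy
        if math.hypot(pts[-1][0] - target[0], pts[-1][1] - target[1]) < tol:
            break
    return [tuple(p) for p in pts]
```

CCD-style solvers suit real-time characters because each sweep is cheap and the solve can be warm-started from the previous frame's pose, so an adapted reach stays close to the original motion.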

Integrating autonomous behaviour

This section describes the use of autonomous secondary behaviour and how this is integrated with user-controlled primary behaviour. As an illustrative example, we describe the development and function of our autonomous eye-gaze module.

As described in the introduction, to appear lifelike and reactive, characters must display a wide range of behaviours, from expressive body language to instinctive reactions to sudden events. The user should only have to control a small proportion of actions and
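As one hedged illustration of how a user-requested action can influence, without fully determining, autonomous gaze, consider a weighted random choice over points of interest in which the current action's target receives a boost. The weighting scheme, the boost factor and all names below are our assumptions, not values from the paper.

```python
import random

# Hedged sketch of autonomous gaze selection: the weighting scheme, the
# boost factor and all names are illustrative assumptions.
def choose_gaze_target(points_of_interest, action_target=None, boost=3.0, rng=random):
    """Pick where the character looks next.

    points_of_interest maps each candidate (e.g. "door", "user") to a base
    weight; the target of the user's current action is boosted, so the
    action influences gaze without fully determining it.
    """
    weights = dict(points_of_interest)
    if action_target is not None:
        weights[action_target] = weights.get(action_target, 1.0) * boost
    r = rng.uniform(0.0, sum(weights.values()))
    for point, w in weights.items():
        r -= w
        if r <= 0:
            return point
    return point  # guard against floating-point round-off
```

Because the choice stays stochastic, the character still glances elsewhere from time to time, which keeps repeated actions from looking mechanical.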

User interface

One aim of this work is to create easy-to-use tools both for creating actions that can be used by characters and for invoking these actions when controlling a character. Our aim for the action creation tools is that they should be well integrated with current 3D animation methods and not require scripting or programming. The methods of invoking actions should be equivalent in ease of use to current interfaces for computer games and virtual worlds. The creation of actions is the more complex task of
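The invocation side can be as simple as a single selection event that triggers both kinds of behaviour at once. The sketch below is hypothetical: the names `invoke`, `adapt`, `set_gaze` and `play_clip` are ours, standing in for whatever engine hooks a real system would use.

```python
# Hypothetical dispatcher: one user event drives both the adapted primary
# motion and the autonomous secondary (gaze) behaviour. All names here
# (invoke, adapt, set_gaze, play_clip) are illustrative, not the system's API.
def invoke(action_name, target_position, actions, adapt, set_gaze, play_clip):
    action = actions[action_name]
    clip = adapt(action["clip"], target_position)  # IK-based motion adaptation
    set_gaze(target_position)                      # secondary behaviour, no extra input
    play_clip(clip)

# Stub environment standing in for the animation engine.
log = []
invoke("open door", (1.0, 0.0, 2.0),
       actions={"open door": {"clip": "open_door.bvh"}},
       adapt=lambda clip, t: (clip, t),
       set_gaze=lambda t: log.append(("gaze", t)),
       play_clip=lambda c: log.append(("play", c)))
```

From the user's point of view this is no harder than triggering a canned animation: one action name and one target, with adaptation and gaze handled automatically.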

Conclusion

We have demonstrated how modern animation techniques can be used to greatly enhance both the expressiveness and usability of end user-controlled avatars and animated characters. Our approach has a number of advantages:

  • Our actions are able to generate complex behaviour with an interface no more complex than that involved in invoking pre-existing motions in current computer games.

  • Inverse kinematics-based motion adaptation allows the re-use of a single piece of motion in different situations.

  • Easy

References (23)

  • M. Gleicher. Comparing constraint-based motion editing methods. Graphical Models (2001)

  • M. Argyle. Bodily communication (1975)

  • L. Cheng et al. Lessons learned: building and deploying virtual environments

  • T. Polichroniadis. Integrating a multiagent communications architecture with the videotape metaphor for scripting animations

  • A. Witkin et al. Motion warping

  • M. Gleicher, P. Litwinowicz. Constraint-based motion adaptation. Technical Report TR 96-153, Apple Computers,...

  • J. Lee et al. A hierarchical approach to interactive motion editing for human-like figures

  • T. Polichroniadis. High level control of virtual actors. PhD Thesis, University of Cambridge Computer Laboratory,...

  • C.W. Reynolds. Flocks, herds, and schools: a distributed behavioral model

  • B. Blumberg et al. Multi-level direction of autonomous creatures for real-time environments

  • J. Cassell et al. Embodiment in conversational interfaces: Rea

1. Formerly at University of Cambridge Computer Laboratory.
