Proactive anticipatory virtual characters - An ethology approach

Authors: Tang, W., Wan, T.R. and Gatzoulis, C.

Start date: 9 August 2004

Journal: Proceedings of the IASTED International Conference on Applied Simulation and Modelling

Pages: 172-177

ISBN: 9780889864016

This paper describes our investigation into ethology-based approaches for building proactive anticipatory virtual characters. The control architecture of the characters is based on two reinforcement learning algorithms: the zeroth-level classifier system (ZCS) and the Q-learning algorithm. The ZCS control system consists of interconnected functional components, each a container of classifiers that evolve through interactions with the environment; the system as a whole is built from multiple, recursively interconnected sub-units of this kind. In the Q-learning control system, the Q algorithm is combined with eligibility traces, a mechanism for temporarily assigning credit for rewards to recently visited state-action pairs. Both systems are tested within this framework and their results compared, in order to investigate the complexity of reinforcement learning algorithms and their application to creating proactive virtual characters that have internal needs and can discover optimal ways of achieving a predefined goal through an evolutionary learning process.
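The abstract does not give implementation details, but the combination of Q-learning with eligibility traces it mentions is the standard Watkins's Q(lambda) scheme. The following is a minimal, self-contained sketch of that technique on a toy environment; the function names, parameter values, and the corridor task are illustrative assumptions, not the paper's actual character-control setup.

```python
import random

def q_lambda(env_step, n_states, n_actions, episodes=300,
             alpha=0.2, gamma=0.9, lam=0.8, epsilon=0.1, seed=0):
    """Tabular Watkins's Q(lambda): Q-learning with eligibility traces.

    env_step(s, a) -> (next_state, reward, done); episodes start in state 0.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def greedy_action(s):
        best = max(Q[s])
        return rng.choice([a for a in range(n_actions) if Q[s][a] == best])

    for _ in range(episodes):
        e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s = 0
        for _ in range(500):  # step cap per episode
            # epsilon-greedy action selection
            a = rng.randrange(n_actions) if rng.random() < epsilon else greedy_action(s)
            was_greedy = Q[s][a] == max(Q[s])
            s2, r, done = env_step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            delta = target - Q[s][a]
            e[s][a] += 1.0  # accumulating trace on the visited pair
            # Watkins's variant: traces are cut after exploratory (non-greedy) actions
            decay = gamma * lam if was_greedy else 0.0
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    e[si][ai] *= decay
            if done:
                break
            s = s2
    return Q

# Toy corridor of 5 states: action 1 moves right, action 0 moves left;
# reaching state 4 yields reward 1 and ends the episode.
def corridor(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_lambda(corridor, n_states=5, n_actions=2)
```

The eligibility traces let a single reward update all recently visited state-action pairs at once, which is the "temporary reward assignment" role the abstract attributes to them: after learning, moving right should have a higher Q-value than moving left in every corridor state.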