Reference
J. van Ast, R. Babuška, and B. De Schutter, "Generalized pheromone
update for ant colony learning in continuous state spaces,"
Proceedings of the 2010 IEEE Congress on Evolutionary Computation
(CEC 2010), Barcelona, Spain, pp. 2617-2624, July 2010.
Abstract
In this paper, we discuss the Ant Colony Learning (ACL) paradigm for non-linear
systems with continuous state spaces. ACL is a novel control policy learning
methodology, based on Ant Colony Optimization. In ACL, a collection of agents,
called ants, jointly interact with the system at hand in order to find the
optimal mapping between states and actions. Through stigmergic interaction
via pheromones, the ants are guided by each other's experience towards better
control policies. In order to deal with continuous state spaces, we generalize
the concept of pheromones and the local and global pheromone update rules. As a
result of this generalization, we can integrate both crisp and fuzzy
partitioning of the state space into the ACL framework. We compare the
performance of ACL with these two partitioning methods by applying it to the
control problem of swinging-up and stabilizing an under-actuated pendulum.
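The paper itself is not reproduced here, so the exact update rules are not available; the following is only a minimal sketch of the idea the abstract describes: pheromone levels defined over a partition of a continuous state space, with local (evaporation) and global (deposit) updates weighted by membership degrees. All names, the triangular membership function, and the parameter values (`gamma`, `rho`, `tau0`) are illustrative assumptions, not the authors' formulation. With crisp partitioning the weight is 1 for the nearest bin and 0 elsewhere; with fuzzy partitioning it spreads over neighbouring bins.

```python
import numpy as np

def triangular_memberships(x, centers):
    """Fuzzy membership degrees of a scalar state x in triangular sets
    centred at `centers` (assumed sorted and evenly spaced), normalized
    so the degrees sum to 1. Illustrative choice, not from the paper."""
    width = centers[1] - centers[0]
    w = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return w / w.sum()

def crisp_memberships(x, centers):
    """Crisp partitioning: full membership in the nearest bin only."""
    w = np.zeros_like(centers)
    w[np.argmin(np.abs(x - centers))] = 1.0
    return w

def local_pheromone_update(tau, x, a, centers, gamma=0.1, tau0=0.01,
                           membership=triangular_memberships):
    """Local update: evaporate pheromone for action a towards tau0,
    weighted per bin by the membership degree of the visited state x."""
    w = membership(x, centers)
    tau[:, a] = (1.0 - gamma * w) * tau[:, a] + gamma * w * tau0
    return tau

def global_pheromone_update(tau, trajectory, centers, rho=0.1, reward=1.0,
                            membership=triangular_memberships):
    """Global update: deposit pheromone along an ant's trajectory of
    (continuous state, action) pairs, again weighted by memberships."""
    for x, a in trajectory:
        w = membership(x, centers)
        tau[:, a] = (1.0 - rho * w) * tau[:, a] + rho * w * reward
    return tau
```

Swapping `membership=crisp_memberships` for `membership=triangular_memberships` switches between the crisp and fuzzy variants without changing the update rules themselves, which mirrors how the abstract presents the two partitioning methods as instances of one generalized framework.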
BibTeX
@inproceedings{vanBab:10-019,
author = {van Ast, Jelmer and Babu{\v{s}}ka, Robert and De Schutter,
Bart},
title = {Generalized Pheromone Update for Ant Colony Learning in
Continuous State Spaces},
booktitle = {Proceedings of the 2010 IEEE Congress on Evolutionary
Computation (CEC 2010)},
address = {Barcelona, Spain},
pages = {2617--2624},
month = jul,
year = {2010}
}