Reference
J. M. van Ast, R. Babuška, and B. De Schutter, "Ant colony learning algorithm for optimal control," in Interactive Collaborative Information Systems (R. Babuška and F. C. A. Groen, eds.), vol. 281 of Studies in Computational Intelligence, Berlin, Germany: Springer, ISBN 978-3-642-11687-2, pp. 155–182, 2010.
Abstract
Ant Colony Optimization (ACO) is an optimization heuristic for solving
combinatorial optimization problems, inspired by the swarming behavior of
foraging ants. ACO has been applied successfully in various domains, such as
routing and scheduling. Its agents, called ants, are particularly efficient at
sampling the problem space and quickly finding good solutions. Motivated by
these advantages of ACO in combinatorial optimization, we develop a novel
framework for finding optimal control policies that we call Ant Colony
Learning (ACL). In ACL, the ants work together to collectively learn optimal
control policies for a given control problem involving a system with nonlinear
dynamics. In this chapter, we discuss the ACL framework and its implementation
with crisp and fuzzy partitioning of the state space. We demonstrate both
versions on the control problem of two-dimensional navigation in an
environment with variable damping and discuss their performance.
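The chapter itself details the ACL framework; purely as an illustration of the generic ACO mechanics the abstract alludes to (ants stochastically sampling solutions, then reinforcing good ones through pheromone evaporation and deposit), here is a minimal sketch on a toy shortest-path problem. The graph, the parameters (`alpha`, `rho`, `q`), and all names are illustrative assumptions; this is not the authors' ACL algorithm.

```python
import random

# Assumed toy example: find the cheapest path from node 0 to node 3.
# Edge costs of a small directed graph (illustrative data, not from the chapter).
GRAPH = {
    0: {1: 1.0, 2: 5.0},
    1: {3: 1.0},
    2: {3: 1.0},
}

def run_aco(n_ants=10, n_iters=30, alpha=1.0, rho=0.5, q=1.0, seed=0):
    """Generic ACO sketch: construct paths, evaporate and deposit pheromone."""
    rng = random.Random(seed)
    # Initial pheromone on every edge.
    tau = {(u, v): 1.0 for u, nbrs in GRAPH.items() for v in nbrs}
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, cost = 0, [0], 0.0
            while node != 3:
                nbrs = list(GRAPH[node])
                # Edge attractiveness: pheromone^alpha weighted by inverse cost.
                weights = [tau[(node, v)] ** alpha / GRAPH[node][v] for v in nbrs]
                nxt = rng.choices(nbrs, weights)[0]
                cost += GRAPH[node][nxt]
                path.append(nxt)
                node = nxt
            paths.append((path, cost))
            if cost < best_cost:
                best_path, best_cost = path, cost
        # Pheromone evaporation, then deposit proportional to solution quality.
        for e in tau:
            tau[e] *= 1.0 - rho
        for path, cost in paths:
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += q / cost
    return best_path, best_cost
```

On this toy graph the cheaper route 0→1→3 quickly accumulates pheromone and dominates sampling, which is the reinforcement effect the abstract's "quickly finding good solutions" refers to; ACL adapts this idea from discrete solution construction to learning control policies over a partitioned state space.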
BibTeX
@incollection{vanBab:10-029,
author = {van Ast, Jelmer M. and Babu{\v{s}}ka, Robert and De Schutter,
Bart},
title = {Ant Colony Learning Algorithm for Optimal Control},
booktitle = {Interactive Collaborative Information Systems},
series = {Studies in Computational Intelligence},
volume = {281},
editor = {Babu{\v{s}}ka, Robert and Groen, Frans C. A.},
publisher = {Springer},
address = {Berlin, Germany},
pages = {155--182},
isbn = {978-3-642-11687-2},
year = {2010}
}