Abstract
In a recent Physical Review Letters article, Vicsek et al. propose a simple but compelling discrete-time model of n autonomous agents (i.e., points or particles) all moving in the plane with the same speed but with different headings. Each agent's heading is updated using a local rule based on the average of its own heading plus the headings of its "neighbors." In their paper, Vicsek et al. provide simulation results which demonstrate that the nearest neighbor rule they are studying can cause all agents to eventually move in the same direction despite the absence of centralized coordination and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves. This paper provides a theoretical explanation for this observed behavior. In addition, convergence results are derived for several other similarly inspired models. The Vicsek model proves to be a graphic example of a switched linear system which is stable, but for which there does not exist a common quadratic Lyapunov function.
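The heading-update rule described in the abstract can be sketched in a few lines of NumPy. This is a minimal, noise-free version of the Vicsek model: each agent averages its heading with those of all agents within a radius r (averaging the sine/cosine components, as in the original model), then moves one step of fixed length v. The values of r and v below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def vicsek_step(pos, theta, r=1.0, v=0.03):
    """One synchronous update of a noise-free Vicsek model.

    pos:   (n, 2) array of agent positions in the plane
    theta: (n,) array of headings in radians
    Each agent's neighbor set is all agents within distance r,
    including itself; its new heading is the angle of the summed
    unit heading vectors of that set.
    """
    # pairwise distances; the diagonal is 0, so each agent neighbors itself
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    nbr = (d <= r).astype(float)
    # average angles via their vector components to avoid wrap-around issues
    s = nbr @ np.sin(theta)
    c = nbr @ np.cos(theta)
    new_theta = np.arctan2(s, c)
    # all agents move at the same speed v along their new heading
    new_pos = pos + v * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return new_pos, new_theta

# Demo of the consensus phenomenon in the simplest case: when every agent
# is within radius r of every other, all agents compute the same average,
# so the headings agree after a single step.
pos = np.zeros((5, 2))
theta = np.array([0.1, 0.5, -0.3, 1.0, 0.2])
pos, theta = vicsek_step(pos, theta)
print(np.allclose(theta, theta[0]))  # True: complete graph gives consensus in one step
```

The one-step consensus above is the degenerate case; the paper's contribution is showing that alignment still occurs when the neighbor graph is time-varying, provided the sequence of graphs stays jointly connected.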
Publication Info
- Year: 2003
- Type: article
- Volume: 48
- Issue: 6
- Pages: 988-1001
- Citations: 8310
- Access: Closed
Identifiers
- DOI: 10.1109/tac.2003.812781