Programming Agent-Based Models by Demonstration

Date

2019-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Rights

Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

Agent-based modeling is a paradigm for modeling dynamic systems of interacting agents, each individually governed by specified behavioral rules. From a demonstration perspective, it is easier to specify the desired emergent behavior of such a system directly than to specify the agent-level behavior that produces it. While many approaches rely on manual behavior specification via code or on a predefined taxonomy of possible behaviors, the existing AMF framework generates mapping functions between agent-level parameters and swarm-level parameters which, once generated, are reusable. This work builds on that framework by exploring sources of variance in performance, the composition of framework output, and the integration of demonstration using images. The demonstrator specifies the spatial motion of the agents over time and retrieves the agent-level parameters required to execute that motion. At its core, the framework uses computationally cheap image processing algorithms, making it suitable for time-critical applications. The proposed framework (AMF+) seeks to provide a general solution to the problem of allowing abstract demonstrations to be replicated by agents in a swarm. By solving this problem, the framework has potential use in a variety of applications, such as games, education, surveillance, and search-and-rescue, where the swarm may be controlled remotely. The availability of this software for academic research is therefore also a contribution to the scientific community. The abstraction of demonstration also removes technical requirements for the user: the framework may be used with varied input methodologies, allowing for use by a wide audience spanning varied demonstration preferences and capabilities. The framework is analyzed in detail for its current and potential capabilities.
Our work is tested with a combination of primitive visual feature extraction methods (contour area and shape) and features generated by a pre-trained deep neural network, applied at different stages of image featurization. The framework is also evaluated for its potential when complex visual features are used at all featurization stages. Experimental results show significant coherence between the demonstrated behavior and the behavior predicted from the estimated agent-level parameters specific to the spatial arrangement of agents. The framework is additionally compared against existing agent-based models and similar systems.