Samson Abramsky (Oxford University Computing Laboratory) Samson.Abramsky@comlab.ox.ac.uk http://web.comlab.ox.ac.uk/oucl/people/samson.abramsky.html
We describe some perspectives on systems of interacting agents which have arisen in recent work on Game Semantics and Geometry of Interaction. We show how types and logical structure can be used to enforce compositional behaviour, so that several sub-systems of agents can be plugged together into a compound system with more complex behaviour in a disciplined fashion.
We also show how very simple `copy-cat' agents can be composed to yield a universal model of computation, of a strikingly geometric nature. This model also has connections with reversible and quantum computation.
Self-Healing Electricity Infrastructure
With the tragic events of 9/11 permanently etched in our minds, the recent massive power outages brought eerie reminders of the events that shook our world two years ago. While we were relieved that there was no apparent evidence of terrorism, the cascading blackouts are not merely a warning: they make suddenly and starkly visible the vulnerable condition of our electricity infrastructure.
Electricity infrastructure touches us all -- therefore the key question is whether we are prepared for future storms. More pertinent to this workshop, the control of a heterogeneous, widely dispersed, yet globally interconnected system is a serious technological problem in any case. It is even more complex and difficult to control such a system for optimal efficiency and maximum benefit to the ultimate consumers while still allowing all its business components to compete fairly and freely.
By way of background, the North American power network represents an enormous investment - this infrastructure includes over 15,000 generators in 10,000 power plants, along with hundreds of thousands of miles of transmission lines, and distribution networks; it is estimated to be worth over $800 billion. The transmission and distribution plant-in-service was valued at $358 billion in 2000. With its millions of relays, controls and other components, it is the most complex machine ever invented. The National Academy of Engineering has hailed the North American power delivery system as the supreme engineering achievement of the 20th century because of its ingenious engineering, catalytic role for other technologies and impact in improving quality of life down to the household level.
Through this network, every user, producer, distributor and broker of electricity buys and sells, competes and cooperates in an "Electric Enterprise." Every industry, every business, every store and every home is a participant, active or passive, in this continent-scale conglomerate. Over the next few years, the Electric Enterprise will undergo dramatic transformation as its key participants -- the traditional electric utilities -- respond to deregulation, competition, tightening environmental/land-use restrictions, and other global trends.
From a strategic R&D viewpoint, I will discuss the agility and robustness/survivability of large-scale dynamic networks that face new and unanticipated operating conditions. A major challenge is posed by the lack of a unified mathematical framework with robust tools for modeling, simulation, control and optimization of time-critical operations in complex multicomponent and multiscaled networks.
In this talk, I will present a model and simulation of the "Electric Enterprise" (taken in the broadest possible sense) that we have developed. The model uses autonomous, adaptive agents to represent both the physical industrial components and the corporate entities that own these components and are now engaged in free competition. The goal in building this tool is to help these corporations evolve new business strategies for internal reorganization, external partnerships and market penetration.
This presentation will also focus on a strategic vision extending to a decade, or longer, that would enable more secure and robust systems operation, security monitoring and efficient energy markets.
Alexis Arias (Icosystem Corporation) email@example.com
In many real-world applications of ABMs, enhancing the predictive power of the model is paramount. This requires extensive knowledge of the behavioral rules that govern the agents' actions. However, it is common to face situations where direct information, or agreement among domain experts, regarding these rules is lacking, and only output sample data (usually at an aggregate level) is available. Under these circumstances, it is important to understand whether the behavioral rules can be identified from the observable data. The identification of micro-rules from macro-behavior in ABMs when there are nonlinear interactions between agents is the subject of this presentation.
I will present results from an ongoing project designed to study the possibility of estimating individual behavioral rules in the Minority Game using sample data that exhibits different levels of aggregation. The analysis concentrates on the small sample properties of the Maximum Likelihood Estimator of individual rules. We consider two models that pose different challenges with regard to estimation and three data scenarios: panel data of individuals' actions, time series of the number of individuals in the minority, and time series of the action taken by the minority. For each scenario we study the evolution of the estimation error as the number of individuals and the size of the time series increase. In addition, we analyze the effect on the estimation error of introducing certain restrictions on the model and on the information available a priori to the modeler.
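As a concrete reference point, the Minority Game and the aggregate data scenarios can be sketched in a few lines. The fixed lookup-table rules below are an illustrative simplification of the strategy structures actually estimated, and all parameters are invented for the example:

```python
import random

def minority_game(n_agents, n_rounds, memory=2, seed=0):
    """Toy Minority Game: each agent follows one fixed lookup-table rule
    mapping the last `memory` winning sides to an action in {0, 1}; agents
    on the minority side win.  Returns the two aggregate series mentioned
    in the abstract (minority size, minority action); panel data would be
    the full per-round `actions` lists."""
    rng = random.Random(seed)
    # Each agent's behavioral rule: a random table over the 2**memory histories.
    rules = [[rng.randint(0, 1) for _ in range(2 ** memory)]
             for _ in range(n_agents)]
    history = 0  # last `memory` winning sides packed into an integer
    minority_sizes, minority_actions = [], []
    for _ in range(n_rounds):
        actions = [rule[history] for rule in rules]
        ones = sum(actions)
        winner = 1 if ones < n_agents - ones else 0  # minority side (n odd: no ties)
        minority_sizes.append(min(ones, n_agents - ones))
        minority_actions.append(winner)
        history = ((history << 1) | winner) % (2 ** memory)
    return minority_sizes, minority_actions

sizes, winners = minority_game(n_agents=101, n_rounds=50)
```

The estimation question is then whether `rules` can be recovered from `sizes` or `winners` alone, rather than from the full panel of individual actions.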
Christopher L. Barrett (Basic and Applied Simulation Science (CCS-5), Los Alamos National Laboratory) firstname.lastname@example.org
A Theoretical and Applied Program in Simulation Science and Interaction Based Computing
Computer simulation can be viewed as a computational approach for explicit calculation of local interactions among system piece parts, resulting in a dynamical representation of an overall system composed of those parts. We will describe a theoretical program and the associated applied program of research that has been developed by following and elaborating this truism.
Formally, we begin with a set of local maps, a dependency structure among them, and an order of evaluation of those maps that is consistent with the dependency structure. We call such systems Sequential Dynamical Systems (SDS). In discrete event simulations, these essential elements correspond to agents/objects, their interaction protocols and constraints, and an update schedule. Thus the algebraic, or structural, properties of SDS form an axiomatic foundation for computer simulation. Some of these properties will be described.
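The three ingredients can be made concrete in a few lines of Python; the graph, local map, and schedules below are illustrative choices, not examples from the talk:

```python
# A toy Sequential Dynamical System (SDS): a dependency graph, a local map
# per vertex, and an update schedule, applied one vertex at a time.

def sds_step(state, neighbors, local_map, schedule):
    """Apply each vertex's local map sequentially, in schedule order."""
    s = list(state)
    for v in schedule:
        # The map at v sees only v and its neighbors (the dependency structure).
        s[v] = local_map(s[v], [s[u] for u in neighbors[v]])
    return tuple(s)

# Dependency graph: a 4-cycle 0-1-2-3-0.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

def parity_map(own, nbrs):
    """Local map: sum of the closed neighborhood, mod 2."""
    return (own + sum(nbrs)) % 2

# Two schedules applied to the same initial state give different results:
# the order of evaluation is part of the system, the hallmark of SDS.
print(sds_step((1, 0, 0, 0), neighbors, parity_map, [0, 1, 2, 3]))  # (1, 1, 1, 0)
print(sds_step((1, 0, 0, 0), neighbors, parity_map, [3, 2, 1, 0]))  # (0, 0, 1, 1)
```

The dependence of the dynamics on the schedule is exactly what the algebraic theory of SDS studies.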
While the algebraic treatment of SDS for the most part ignores the details of actual evaluation of the local maps, when a procedural interpretation of the local maps is introduced, a natural algorithmic perspective on SDS, called Computational SDS (cSDS), arises. Examples of some complexity bounds on cSDS and their implicit algorithmic interpretations, normal forms, and other such issues will be discussed. One question we address in this regard is, "are simulations merely optional?" Another is, "are simulations computationally limited?"
The application of these conceptual tools to very large socio-technical systems forms the foundation of the Los Alamos approach to our scientific and technical role in the National Infrastructure Simulation and Analysis Center (NISAC), a new national capability established in the Patriot Act of October 2001. A program to produce highly advanced and interoperable transportation-, mobile population-, mobile computation-, commodity markets- and epidemiological simulation technology will be overviewed and related to the theoretical program.
David Francis Batten (Manufacturing and Infrastructure Technology, CSIRO Agent-Based Modelling Working Group, CSIRO Manufacturing and Infrastructure Technology, Australia) email@example.com
CSIRO's Agent-Based Modelling Working Group (poster session)
Australia's largest research organization, CSIRO (Commonwealth Scientific and Industrial Research Organization), established a Centre for Complex Systems Science (CSS) two years ago. Within this CSS Centre, there is a small suite of agent-based modelling projects embracing a broad range of contexts -- from electricity markets to fishing behaviour, rangelands and river catchments. The CSIRO Agent-Based Modelling Working Group nurtures each of these projects by facilitating interaction tasks such as regular workshops and working group meetings. International experts from Europe and North America also attend. The Coordinator of this Working Group, Dr. David Batten (David.Batten@csiro.au), is directing the ABM work on Australia's electricity market, but he would be pleased to discuss any of the Working Group's ABM projects with interested parties. Collaboration with research groups engaged in similar research in other countries would be especially welcome.
Filippo Castiglione (Institute for Computing Applications (IAC), Italian National Research Council (CNR), Rome) firstname.lastname@example.org
Large-scale Agent-Based Models: Perspectives and Requirements
Agent Based Models (ABM) are the natural extension of the Ising or Cellular Automata-like models which have been used in the past decades to simulate various physical phenomena.
One important characteristic of ABMs, which distinguishes them from the relatively simple Ising-like models, is the complexity of the internal dynamics of each agent together with the asynchrony of the interactions among agents (and between agents and their environment).
The richness of detail one can take into account in an ABM makes this paradigm very powerful, and hence appealing for the simulation of complex phenomena where the behavior of the interacting components is not safely reducible to some stylized or simple mechanism.
In my talk, I will first draw attention to the need for a standardized method to handle the description of the agents (e.g., finite state automata) and the state-determined interaction rules.
Nowadays, the growth of interest in ABMs is closely related to the availability of powerful computers. The large number of agents and the complexity of each agent's internal representation in a typical simulation dictate the use of high-performance computing techniques. In this respect, it is important to review some concepts related to code optimization. Hence, I will discuss some considerations based on the experience gained in developing and using two different ABMs, one for the simulation of immune system activity, the other to simulate the dynamics of commodity prices in a virtual stock market.
K. Mani Chandy (California Institute of Technology (CALTECH)) email@example.com
Sense and Respond Systems for Crisis Management
Managing crises --- such as terrorist attacks, hurricanes, and pandemics --- and other fast-moving situations requires sense and respond platforms that can be configured to sense complex conditions in the extended environment and respond appropriately. This talk discusses distributed technologies and design methodologies for configuring sense and respond systems as a crisis unfolds.
William Y.C. Chen (Center for Combinatorics, LPMC, Nankai University, Tianjin 300071, P.R. China) firstname.lastname@example.org. The talk will be presented by Henning S. Mortveit (Los Alamos National Laboratory)
We obtain an evaluation theorem for sequential dynamical systems (SDS) based on the dependency graph G and the NOR update function. The importance of NOR-SDS lies in both practical and theoretical considerations. From the graph-theoretical point of view, the fixed points and Gardens-of-Eden of NOR-SDS are closely related to the independent sets of the underlying graph, as discovered by C. Reidys. Our evaluation theorem serves as a simplified algorithm for the update scheme given by the original definition.
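The connection to independent sets can be illustrated by brute force on a tiny example (a 3-vertex path; the graph and schedule are illustrative, and this sketch is not the evaluation theorem itself): the periodic points of a NOR-SDS coincide with the characteristic vectors of the graph's independent sets.

```python
from itertools import product

def nor_sweep(state, neighbors, schedule):
    """One NOR-SDS pass: each vertex, in schedule order, becomes the NOR
    of its closed neighborhood (itself plus its graph neighbors)."""
    s = list(state)
    for v in schedule:
        s[v] = int(not (s[v] or any(s[u] for u in neighbors[v])))
    return tuple(s)

def periodic_points(neighbors, schedule, n):
    """States lying on a cycle of the phase space."""
    pts = set()
    for x in product((0, 1), repeat=n):
        seen, y = [], x
        while y not in seen:
            seen.append(y)
            y = nor_sweep(y, neighbors, schedule)
        pts.update(seen[seen.index(y):])  # keep only the cycle portion
    return pts

def independent_sets(neighbors, n):
    """Characteristic vectors whose 1-vertices span no edge."""
    return {x for x in product((0, 1), repeat=n)
            if all(not (x[v] and x[u]) for v in range(n) for u in neighbors[v])}

# Path graph 0 - 1 - 2: the two sets coincide (5 states each).
neighbors = {0: [1], 1: [0, 2], 2: [1]}
pp = periodic_points(neighbors, [0, 1, 2], 3)
print(sorted(pp) == sorted(independent_sets(neighbors, 3)))  # True
```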
Stephen Eubank (Computer and Computational Sciences Division, Los Alamos National Laboratory) email@example.com
EpiSims, part of the public health sector module of LANL's Urban Infrastructure Suite, is an agent-based simulation system that models the spread of disease and the effects of mitigation efforts at the level of individuals in a large urban area. This talk will briefly describe the design and implementation of EpiSims and give examples of its use. The UIS has motivated a research effort in dynamics, structure, and function of social networks. I will briefly indicate how these questions arise in EpiSims and some promising research directions.
Richard M. Fujimoto (College of Computing, Georgia Institute of Technology) firstname.lastname@example.org http://www.cc.gatech.edu/fac/Richard.Fujimoto/homepage.html
Large-Scale Network Simulation

Parallel and distributed simulation tools are emerging that offer the ability to perform detailed, packet-level simulations of large-scale networks. This capability offers enormous new opportunities for researchers to perform simulation experiments on networks of a scale that could not be examined previously. At the same time, it creates challenges for the research community to define scenarios and configurations that are realistic relative to current and future Internet configurations, and for tool builders to develop verified and validated simulators that are easy to use and execute efficiently on parallel and distributed computers over a wide range of network configurations and scenarios. This presentation will quantitatively characterize the state of the art in large-scale network simulation. An approach to realizing scalable network simulations that leverages existing sequential simulation models and software will be described, along with a recent performance study of large-scale network simulation on a variety of platforms ranging from workstations to cluster computers to supercomputers.
This research represents joint work of the speaker with Drs. Mostafa Ammar, Kalyan Perumalla, George Riley and several graduate students at Georgia Tech.
Maria Gini (Department of Computer Science and Engineering, University of Minnesota) email@example.com
Scheduling Tasks with Precedence Constraints to Solicit Desirable Bid Combinations (poster session)
Joint work with Wolfgang Ketter and John Collins.
We study the problem of optimizing the time windows in Requests for Quotes that an agent sends to other agents to obtain bids for combinations of tasks with complex time constraints and interdependencies. Our approach uses Expected Utility Theory to reduce the likelihood of receiving unattractive bids, while maximizing the number of bids that are likely to be included in the winning bundle. We describe the model, illustrate its operation and properties, and discuss what assumptions are required for its successful integration into multi-agent applications.
Why are vehicles sometimes stopped by so-called "phantom traffic jams," although they all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction of the traffic volume cause a lasting traffic jam? All of this is important to understand from the perspective of intelligent transportation systems. Surprisingly, speed limits can speed up traffic under certain conditions, and traffic lights at on-ramps can reduce the overall travel times. Driver assistance systems have a particularly high potential. And decision experiments are carried out in order to learn re-routing strategies which do not invalidate traffic forecasts. A lot has also been learned about pedestrian streams. In particular, we understand why pedestrians moving in opposite directions normally organize in lanes, while similar systems are "freezing by heating." In other cases, one observes fluctuation-induced ordering, oscillations of the flow direction at bottlenecks, rotary traffic, or herding effects. We also understand why panicking pedestrians produce dangerous deadlocks and how these can be avoided by a skillful design of buildings. These insights can be applied to optimize production processes.
I'll describe how multiagent systems provide useful control techniques for modular self-reconfigurable (metamorphic) robots. Such robots consist of many modules that can move relative to each other, thereby changing the overall shape of the robot to suit different tasks. Multiagent control is particularly well-suited for tasks involving uncertain and changing environments. We illustrate this approach through simulation experiments of Proteo, a metamorphic robot system developed at PARC.
For further info: see the paper at http://arxiv.org/abs/cs.RO/0006030 or H. Bojinov et al., "Multiagent Control of Modular Self-Reconfigurable Robots", Artificial Intelligence 142:99-120 (2002).
Joint work with Adriano Cavalcanti.
Developing nanoscale robots (nanorobots) presents difficult fabrication and control challenges. Of particular interest are medical applications in which the robots operate in fluid microenvironments in the body. While such robots cannot yet be fabricated, theoretical and simulation studies identify plausible designs and capabilities [1,2].
To aid investigation of system-level control algorithms for these robots, we present a physically-based simulator for nanorobots in a simplified fluid environment motivated by medically relevant microenvironments. The robots' motions, characterized by a low Reynolds number, are quite different from common experience with larger, faster flows.
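For scale, the low-Reynolds-number regime can be made concrete with the standard definition (textbook fluid mechanics, not a number from the simulator): a micron-sized robot moving at about a millimeter per second in water has

```latex
\mathrm{Re} \;=\; \frac{\rho v L}{\mu}
\;\approx\; \frac{(10^{3}\,\mathrm{kg/m^{3}})\,(10^{-3}\,\mathrm{m/s})\,(10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}}
\;=\; 10^{-3},
```

so viscous forces dominate inertia by roughly three orders of magnitude, and the robot stops essentially instantly when it stops actuating.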
[1] K. E. Drexler, Nanosystems, Wiley, 1992.
[2] R. A. Freitas Jr., Nanomedicine, vol. 1, Landes Bioscience, 1999, at www.nanomedicine.com.
[3] E. M. Purcell, "Life at Low Reynolds Number", American Journal of Physics, 45:3-11 (1977).
[4] A. A. G. Requicha, "Nanorobots, NEMS and Nanoassembly", to appear in Proc. of the IEEE, special issue on Nanoelectronics and Nanoprocessing.
A Multi-Agent Based Approach to Modeling and Analyzing Spot Electricity Markets
This talk is based on joint work with Dr. Poonsaeng Visudhiphan and my current graduate student Zhyiong Wu. In the first part of this talk, a model derived in the PhD thesis of Dr. Visudhiphan is briefly reviewed, and simulations are presented to illustrate electricity prices resulting from this model under several bidding strategies. The results are analyzed and compared with the results obtained using two well-known AI learning techniques. An important conjecture concerning equilibrium results as a function of system demand level is stated and illustrated. The second part of the talk highlights the importance of physical constraints on bidding strategies and market power. In particular, it is shown how the unit-commitment constraints, reflected in the start-up and shut-down costs and rates of various generation technologies, are likely to make the bidding logic much more difficult, perhaps prohibitively so. Finally, a generalized definition of market power in electricity markets as a measure of market inefficiencies with the physical constraints accounted for is suggested. Physical and financial risks are assessed using this generalized notion.
Adversarial Scheduling Models for Game Theoretic Dynamics
Game-theoretic equilibria are steady-state properties; that is, given that all the players' actions correspond to an equilibrium point, it would be irrational for any of them to deviate from this behavior, given that the others stick to their strategy. A major weakness of this type of concept is that it fails to predict how players arrive at this equilibrium in the first place, or how they "choose" one such equilibrium, if several such points exist. One way to justify the emergence of such equilibria is provided by the theory of learning in games, which regards them as the result of an evolutionary "learning" process. Such models assume one (or several) populations of agents that interact by playing a certain game and updating their behavior based on the outcome of this interaction.
In order for evolutionary results of this sort to offer convincing insights on equilibrium selection in real-life situations, they have to display "robustness" with respect to the various idealizations inherent in the mathematical model. One such idealization is random scheduling: agents that are given the chance to update are chosen according to a scheme that involves random choice. However, "real" social interaction is not random, and it is not clear whether the randomness assumption is essential for the validity of these results.
In this talk (based on results obtained in collaboration with M.V. Marathe and S.S. Ravi) we explicitly advocate a reexamination of the conclusions of the theory of learning in games under adversarial scheduling models, and present a couple of examples from the game-theoretic literature (e.g. Peyton Young stochastically stable equilibria, the "colearning" model due to Shoham and Tennenholtz, etc) that show that such an analysis is feasible (and interesting).
Neil F. Johnson (Physics Department, Oxford University, U.K.) firstname.lastname@example.org
I present a crowd-based theory describing the collective behavior within a generic multi-agent population with limited resources. These multi-agent systems -- whose binary versions we refer to as B-A-R (Binary Agent Resource) collectives -- have a dynamical evolution which is determined by the aggregate action of the heterogeneous, adaptive agent population. Accounting for the strong correlations between agents' strategies yields an accurate description of the system's dynamics in terms of the 'Crowd-Anticrowd' theory. This theory can incorporate the effects of an underlying network within the population, and is not limited to the El Farol Problem and the Minority Game. By considering a variety of examples, I will show that the Crowd-Anticrowd theory offers a powerful approach to understanding the dynamical behavior of a wide class of agent-based Complex Systems.
 For applications in the financial domain, see 'Financial Market Complexity' (Oxford University Press, June 2003).
Jeffrey O. Kephart (Agents and Emergent Phenomena, e-Business Solutions and Autonomic Computing, High Integrity Computing Laboratory, IBM Thomas J. Watson Research Center) email@example.com
Humans are on the verge of losing their status as the sole economic species on the planet. In private laboratories and in the Internet laboratory, researchers and developers are creating a variety of autonomous, economically-motivated software agents endowed with algorithms for maximizing profit or utility. Many economic software agents will function as miniature businesses, purchasing information inputs from other agents, combining and refining them into information goods and services, and selling them to humans or other agents. Their mutual interactions will form the information economy: a complex economic web of information goods and services that will adapt to the ever-changing needs of people and agents. The information economy will be the largest multi-agent system ever conceived, and an integral part of the world's economy.
One cannot predict how this new world economy will behave simply by extrapolating from hundreds of years of economies in which humans have been the only players. Economic software agents differ from their human counterparts in several ways. They operate more quickly on more up-to-date and accurate information, yet on the other hand they have much less world experience and common sense. In an effort to both understand and design the macroeconomic behavior of the future information economy, we have simulated several different markets and economies populated with economic software agents employing a variety of plausible pricing and bidding algorithms. I will present several interesting macroeconomic behaviors that we have observed and analyzed, including cyclical price wars and complex strategic interactions that are reminiscent of the prisoner's dilemma. I will then discuss how insights gained from our studies can be used to design not just market mechanisms, but the agents themselves -- an opportunity that traditionally has not been available to economists.
Wolfgang Ketter (Department of Computer Science, University of Minnesota) firstname.lastname@example.org
An Evolutionary Framework for Studying Behaviors of Economic Agents (poster session)
We present an evolutionary framework for studying the behaviors of economic agents that interact in electronic marketplaces. We describe how this approach can be used when agents' strategies are based on different methodologies, employing incompatible rules for collecting information and for reproduction. We present experimental results from a simulated market, where multiple service providers compete for customers using different deployment and pricing schemes. The results show that heterogeneous strategies evolve in the same market and provide useful research data.
Natalia Komarova (Department of Mathematics (Rutgers University) and Institute for Advanced Study (Princeton)) email@example.com
We consider the problem of linguistic agents that communicate with each other about a shared world. We develop a formal notion of a language as a set of probabilistic associations between form (lexical or syntactic) and meaning (semantic) that has general applicability. Using this notion, we define a natural measure of the mutual intelligibility, F(L,L'), between two agents, one using the language L and the other using L'. A natural question is this: Given a language L, what language L' maximizes mutual intelligibility with L? We find surprisingly that L' need not be the same as L and we present algorithms for approximating L' arbitrarily well. Next, we consider a population of linguistic agents that learn from each other and evolve over time. Will the community converge to a shared language and what is the nature of such a language? We characterize the evolutionarily stable states of a population of linguistic agents in a game-theoretic setting. Our analysis is relevant for a number of areas in natural and artificial communication where one studies the design, learning, and evolution of linguistic communication systems.
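One common way to make F(L, L') concrete is to represent a language as a pair of stochastic matrices and average correct-decoding probabilities over both directions. This is a sketch of that formalization under stated assumptions; the matrices are invented for illustration and are not from the talk:

```python
# A language: P[meaning][form] is the probability of encoding a meaning as a
# form; Q[form][meaning] is the probability of decoding a form as a meaning.
# F(L, L') averages, over meanings and both speaker/hearer directions, the
# probability that a meaning encoded by one agent is decoded correctly by
# the other.

def intelligibility(P1, Q1, P2, Q2):
    m = len(P1)  # number of meanings
    f12 = sum(P1[i][j] * Q2[j][i] for i in range(m) for j in range(len(Q2)))
    f21 = sum(P2[i][j] * Q1[j][i] for i in range(m) for j in range(len(Q1)))
    return 0.5 * (f12 + f21) / m

# Two meanings, two forms.  L is noisy on meaning 1; L' is deterministic.
P_L,  Q_L  = [[1.0, 0.0], [0.3, 0.7]], [[1.0, 0.0], [0.3, 0.7]]
P_Lp, Q_Lp = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]

print(intelligibility(P_L, Q_L, P_Lp, Q_Lp))  # 0.85 with the deterministic L'
print(intelligibility(P_L, Q_L, P_L,  Q_L))   # 0.745 with a copy of itself
```

Note that F(L, L') exceeds F(L, L) here: the language maximizing mutual intelligibility with L need not be L itself, which is the counterintuitive fact the abstract mentions.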
The Emergence of Temporal Correlations in a Study of Global Economic Interdependence
We develop a simple firm-based automaton model for global economic interdependence of countries using modern notions of self-organized criticality and dynamical renormalization group methods. We demonstrate how extremely strong statistical correlations can naturally develop between two countries even if the financial interconnections between those countries remain very weak. Potential policy implications of this result are also discussed.
Joint work with Eric J. Friedman (Cornell University) and Simon Johnson (MIT)
Substantial progress has been made in recent years toward a mathematical foundation for agent-based models. One advantage of such a foundation would be mathematical tools to relate the structure of the model to the resulting dynamics. This talk will focus on deterministic models with a finite state space, that is, finite dynamical systems. Under the assumption that the state set for the variables is a finite field, any such system can be described via a collection of polynomial functions with coefficients in the finite field. In particular, cellular automata and Boolean networks satisfy this assumption. Polynomial dynamical systems over finite fields are amenable to analysis with tools from computational algebra and algebraic geometry. They have been studied for some time in the context of control theory. A variety of (implemented) algorithms allows the algorithmic solution of problems such as reverse-engineering of systems with specified dynamics, as well as the computation of fixed points and limit cycles. Furthermore, computational algebra provides a rigorous framework in which to study the relationship between the structure of the rules/polynomials and the structure of the state space of the system. Several preliminary results will be discussed.
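The polynomial encoding is easy to see on a hypothetical 3-variable Boolean network (an invented example, not one from the talk): over GF(2), every Boolean function is a polynomial, and fixed points are solutions of a polynomial system.

```python
from itertools import product

# Over GF(2) the Boolean operations become polynomials:
#   NOT x = x + 1,   x AND y = x*y,   x OR y = x + y + x*y   (all mod 2).
# Toy 3-node Boolean network in polynomial form:
#   f1 = x2 * x3,   f2 = x1,   f3 = x1 + x2 + x1*x2  (i.e. x1 OR x2)
def step(x):
    x1, x2, x3 = x
    return ((x2 * x3) % 2, x1 % 2, (x1 + x2 + x1 * x2) % 2)

# Fixed points solve the polynomial system f_i(x) = x_i over GF(2); for large
# systems this is where Groebner-basis methods replace brute-force enumeration.
fixed = [x for x in product((0, 1), repeat=3) if step(x) == x]
print(fixed)  # [(0, 0, 0), (1, 1, 1)]
```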
One of the central problems in systems biology is to model gene regulatory networks from experimental data. Several modeling frameworks have been proposed that can be categorized broadly into static versus dynamic, continuous versus discrete, and deterministic versus stochastic. We present a method that infers a multi-state, discrete dynamic network from one or more time series of DNA microarray measurements. The method utilizes algorithms from computational algebra and algebraic geometry. We validate our reverse-engineering algorithm using simulated data generated by a Boolean network model of the regulatory network responsible for pattern formation in Drosophila melanogaster.
John Lavery (Mathematics Division, Army Research Office, Army Research Laboratory) firstname.lastname@example.org
Classical physics-based models for agent-based systems have been proposed. A preferable approach is to start by defining the internal dynamics of agent-based systems and to create the models based on these dynamics. Whatever the source of the models, they should eventually be justified on the basis of the internal dynamics of agent-based systems. Agent-based systems are needed for sensorwebs, information mining, multi-robot swarming and many other Army/DoD applications.
Kristian Lindgren (Department of Physical Resource Theory, Complex Systems Group, Chalmers/Göteborg University) email@example.com http://www.frt.fy.chalmers.se/cs/index.html http://www.frt.fy.chalmers.se/cs/people/lindgren.html
Cooperation in an Unpredictable Environment (poster session)
Joint work with Anders Eriksson.
One of the main limitations of the Prisoner's Dilemma game and many of the similar games studied is the static character of the interaction situation, i.e., that players always have the same payoff elements in all encounters. In practice it is much more common that circumstances change over time, so that the payoff elements are rarely the same in any two encounters.
We investigate the evolution of cooperation in a random environment, when players engage in repeated interactions. Strategies are represented as a small set of behavioral states, with transitions between them.
For very low mistake rates, the level of cooperation is high but unstable. A qualitative model is introduced to show that the fluctuations are due to the breakdown of reciprocation at high levels of cooperation. At higher mistake rates, genetic drift is decreased significantly.
We conclude that even with players of very limited faculties, it is possible to establish robust cooperation in the presence of uncertain future payoffs. The possibility of making mistakes makes it harder to establish robust cooperation, but on the other hand the system is not as susceptible to genetic drift.
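A strategy "represented as a small set of behavioral states, with transitions between them" can be sketched as follows, with Tit-for-Tat as a two-state machine playing a noisy repeated game. The payoffs are omitted and the mistake mechanism is simplified, so this illustrates the representation, not the authors' model:

```python
import random

# A strategy is a tiny finite-state machine: each state has an action
# ('C' or 'D'), and transitions are driven by the opponent's last move.
# With probability `mistake`, the intended action is flipped.
TFT = {"start": 0, "action": {0: "C", 1: "D"},
       "next": {(0, "C"): 0, (0, "D"): 1, (1, "C"): 0, (1, "D"): 1}}

def play(strat_a, strat_b, rounds, mistake, seed=0):
    """Run a noisy repeated game; return the fraction of cooperative moves."""
    rng = random.Random(seed)
    sa, sb = strat_a["start"], strat_b["start"]
    coop = 0
    for _ in range(rounds):
        a = strat_a["action"][sa]
        b = strat_b["action"][sb]
        if rng.random() < mistake:
            a = "D" if a == "C" else "C"
        if rng.random() < mistake:
            b = "D" if b == "C" else "C"
        coop += (a == "C") + (b == "C")
        sa = strat_a["next"][(sa, b)]  # react to opponent's realized move
        sb = strat_b["next"][(sb, a)]
    return coop / (2 * rounds)

low  = play(TFT, TFT, 2000, mistake=0.01)
high = play(TFT, TFT, 2000, mistake=0.20)
print(low, high)
```

Without mistakes two Tit-for-Tat players cooperate forever; a single mistake triggers chains of retaliation, which is the breakdown of reciprocation the qualitative model addresses.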
Kristian Lindgren (Department of Physical Resource Theory, Complex Systems Group, Chalmers/Göteborg University) firstname.lastname@example.org http://www.frt.fy.chalmers.se/cs/index.html http://www.frt.fy.chalmers.se/cs/people/lindgren.html
A Scale-Free Network Model of Urban Economics (poster session)
Joint work with Claes Andersson and Alexander Hellervik.
The geographic distribution of human land use exhibits a wide range of large-scale regularities, with perhaps the most widely known example being the rank-size rule, also known as Zipf's Law. This rule states that a city's size and rank (1 = largest, 2 = second largest, etc.) have a power-law relation. Many properties of the urban system, in addition to the rank-size rule, exhibit scaling over a considerable number of orders of magnitude, e.g., land value per land area, land value per city, and the relationship between urban area and perimeter. Although the fractal nature of the urban system has long been known, explanations of which processes are responsible for this behavior are at best incomplete. We have developed a complex network model where nodes are taken to be fixed-size non-overlapping lots of land and connections are trade relations between economic activities in these lots. Through a connection to theory on land markets we can compare node degrees with empirically observed land value data. We obtain excellent agreement on several levels: land value per unit area, land value per city and the relationship between city area and perimeter. In addition we also show empirically that there is a linear relation between land value and population, thus making our results directly applicable to Zipf's Law of city sizes.
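Operationally, the rank-size rule says that on log-log axes, size against rank is a straight line with slope near -1. A minimal check on synthetic, idealized data (real city-size data is the actual test, and empirical exponents vary):

```python
import math

# Idealized Zipf data: size(rank) = C / rank, here with C = 1000 and 100 cities.
sizes = sorted((1000.0 / r for r in range(1, 101)), reverse=True)

# Least-squares slope of log(size) against log(rank).
xs = [math.log(r) for r in range(1, len(sizes) + 1)]
ys = [math.log(s) for s in sizes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(slope)  # -1.0 for exact Zipf data
```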
Madhav V. Marathe (Basic and Applied Simulation Science (CCS-5), Los Alamos National Laboratory) email@example.com
Sequential Dynamical Systems (SDS) are a special type of communicating automata that can be used to model very large socio-technical systems.
SDS-based "formal simulations" potentially provide a rigorous and useful new setting for a theory of interaction-based computation. The setting is natural for comprehending distributed systems characterized by interdependent but separately functioning sub-parts. Examples include massively parallel and grid computing (with the associated algorithm design issues), advanced communication systems, biological networks, epidemiological processes, markets, and socio-technical systems.
I will focus on the computational aspects of SDS. The concepts and results shed light on the computational complexity of computing phase space properties of SDS. Applicability of these concepts will be described in the context of large scale socio-technical simulations being developed in our group at the Los Alamos National Laboratory.
Akira Namatame (Department of Computer Science, National Defense Academy, Yokosuka, Japan) firstname.lastname@example.org
A collective is any pairing of a complex system of autonomous agents with a performance criterion by which we rank the behavior of the overall system. In examining collectives, we shall draw heavily on individual behavior. It might be argued that understanding how individuals behave is sufficient to describe collectives. In this presentation, I take a different view. Although individual behavior is important to understand, it is not sufficient for analyzing the emergent behaviors of collectives. Situations in which an agent's decision depends on the decisions of others usually do not permit any simple summation or extrapolation to the aggregate. To make that connection we have to look at the micro-macro loop between agents and the collectivity. There is no presumption that a collection of interacting agents leads to collectively satisfactory results without any central authority. The performance of a system of interacting agents depends crucially on the type of interactions among agents as well as on how they adapt to one another. There are two closely related issues concerning collectives: (1) the forward problem of how the fine-grained structure of the system underlying a collective determines its complex emergent behavior, and therefore its performance; and (2) the inverse problem of how to design the structure of the system underlying a collective to induce optimal performance. I will discuss how agent-based simulation contributes to answering these questions.
Joint work with Robert G. Reynolds (Department. of Computer Science, 424 State Hall, Wayne State University, Detroit, Michigan).
Using Cultural Algorithms to Evolve Strategies for Recessionary Markets (poster session)
Cultural Algorithms are computational self-adaptive models that utilize a population and a belief space. In this framework, a white- and black-box testing strategy is embedded in order to test large-scale GP programs. The model consists of two populations, one supporting white-box testing of a genetic programming system and the other supporting black-box testing. The two populations communicate with each other by means of a shared belief space. This is applied to the calibration of a multi-agent system by allowing for the evolution of near-optimal parameters. The Cultural approach is employed to abstract coefficients of pricing strategies that are applied to a complex model of durable goods. This model simulates consumer behavior in the context of economic cycles.
Agent-based techniques are known to complement standard economic theory [Axtell]. Where mathematical tractability constrains analytic approaches, an evolutionary framework can still succeed in deriving a solution. We have employed a multi-agent system, MarketScape, to simulate a real-world consumer market. When heterogeneous factors are applied to this market, it can be demonstrated that traditional economic theory does not hold [Tassier]. An example is the postponement scenario, in which consumers delay the purchase of economic goods as a function of past prices and time [Tassier]; this has been noted as affecting purchase behavior. Specific OEM strategies, such as the placement of incentives, are demonstrated to actually bring about a decrease in profitability. Another example involves the application of memory-based techniques to market recessions.
Here, we are interested in the application of Software Engineering techniques to the calibration of agent-based designs. The complementary techniques of white- and black-box testing are demonstrated to assist in the efficient design of software [Ostrowski]. To accomplish this, the white-box approach considers the implementation of specific requirements in order to guide the initial and subsequent design process. The assumption in this approach is that the results of testing can be used to guide the search for a program.
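The population/belief-space structure described above can be illustrated by a minimal sketch. This is an assumed, generic cultural-algorithm skeleton (a single real-valued population minimizing a toy objective), not the authors' two-population testing system: an acceptance function selects elite individuals, their span updates the normative knowledge in the belief space, and an influence function biases where new candidates are sampled.

```python
import random

def cultural_algorithm(f, lo, hi, pop_size=30, generations=50, seed=0):
    """Minimal cultural-algorithm sketch: population + belief space.
    The belief space holds normative knowledge (the interval spanned
    by the best individuals), which biases new candidate generation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                       # minimize f
        elite = pop[:pop_size // 4]           # acceptance function
        belief = (min(elite), max(elite))     # update normative knowledge
        # Influence function: sample new candidates inside the belief
        # interval, with a little slack to keep exploring.
        width = max(belief[1] - belief[0], 1e-9)
        pop = elite + [rng.uniform(belief[0] - 0.1 * width,
                                   belief[1] + 0.1 * width)
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=f)

# Toy calibration problem: recover the parameter value 3.0.
best = cultural_algorithm(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(round(best, 2))
```

In the authors' framework the "individuals" would instead be coefficient vectors of pricing strategies, and fitness would come from running the MarketScape simulation.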
 R. Axtell, "Why agents? On the varied motivations for agent computing in the social sciences", 2001. Mimeo, Center on Social and Economic Dynamics, The Brookings Institution.
 R. Porter, P. Sattler, "Patterns of trade in the market for used durables: Theory and evidence", 1999. NBER Working Paper No. W7419.
 I. Sommerville, Software Engineering, Addison-Wesley, 1996.
 R.G. Reynolds, "An Introduction to Cultural Algorithms", in Proceedings of the 3rd Annual Conference on Evolutionary Programming, Sebald, A.V., Fogel, L.J. (eds.), River Edge, NJ: World Scientific Publishing, 1994, pp. 131-13.
 Chung, Chan-Jin, Reynolds, R.G., "A Testbed for Solving Optimization Problems Using Cultural Algorithms", in Proceedings of the Fifth Annual Conference on Evolutionary Programming, MIT Press, 1996.
 Chung, Chan-Jin, Reynolds, R.G., "Knowledge-Based Self-Adaptation Using Cultural Algorithms: An Evolutionary Programming Example", International Journal on Artificial Intelligence Tools, Vol. 7, No. 3, September 1998, pp. 239-293.
 Rychtyckyj, N., Reynolds, R.G., "Using Cultural Algorithms to Improve Performance in Semantic Networks", in Angeline, Peter (ed.), CEC99.
 E. Zannoni, "Cultural Algorithms with Genetic Programming: Learning to Control the Program Evolution Process", Ph.D. Thesis, Computer Science, Wayne State University, Detroit, 1996.
 D. Ostrowski, R.G. Reynolds
Bodo Pareigis (Mathematisches Institut, Universität München, Germany) email@example.com
Importance of Admissible Maps Between Sequential Dynamical Systems
Sequential dynamical systems have been used to simulate many different processes. It is desirable to study their mathematical properties. Much progress has come from studying their abstract mathematical structure.
I want to show the advantages to be gained by also studying admissible maps between them.
The best currently known definition of an admissible map will be given, and the properties mentioned above will be discussed.
Christian Reidys (Los Alamos National Laboratory) firstname.lastname@example.org
Certain Morphisms of Dynamical Systems
We study a class of discrete dynamical systems consisting of the following data: (a) a finite (labeled) graph Y with vertex set {1, ..., n}, where each vertex has a binary state; (b) a vertex-labeled multi-set of functions (F(i,Y): F_2^n -> F_2^n)_i; and (c) a permutation p. The function F(i,Y) updates the binary state of vertex i as a function of the states of vertex i and its immediate Y-neighbors, and leaves the states of all other vertices fixed. The permutation p represents a Y-vertex ordering according to which the functions F(i,Y) are applied. By composing the functions F(i,Y) in the order given by p we obtain the sequential dynamical system (SDS). Let G be the graph representing the phase space of the SDS. An SDS-morphism between two SDS [F_Y, p] and [F_Z, s] is a triple consisting of a graph morphism v: Y -> Z, a map e: S_z -> S_y (where z and y denote the cardinalities of Z and Y, respectively) such that e(s) = p, and finally a digraph morphism between G(Z) and G(Y). Our main result is that any locally bijective graph morphism (covering) between dependency graphs of SDS naturally induces an SDS-morphism, and we further give applications of this theorem that allow us to identify various phase space properties of SDS.
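The sequential update described above can be sketched in a few lines. The graph (a 4-cycle), the local rule (sum of own and neighbor states mod 2), and the initial state are all illustrative choices, not taken from the abstract; the point is that applying the same local functions in a different vertex order can produce a different successor state:

```python
def sds_step(states, neighbors, order):
    """One sweep of a sequential dynamical system: update each vertex
    in the given order, where each update already sees the new states
    produced earlier in the sweep."""
    s = list(states)
    for i in order:
        # Example local rule: sum of own state and neighbor states, mod 2.
        s[i] = (s[i] + sum(s[j] for j in neighbors[i])) % 2
    return s

# Y = the 4-cycle 0-1-2-3-0
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
a = sds_step([1, 0, 0, 0], neighbors, order=[0, 1, 2, 3])
b = sds_step([1, 0, 0, 0], neighbors, order=[3, 2, 1, 0])
print(a)  # [1, 1, 1, 0]
print(b)  # [0, 0, 1, 1] -- a different permutation, a different result
```

This order-dependence is exactly what distinguishes an SDS from a synchronously updated cellular automaton on the same graph.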
Timothy Schoenharl (Department of Computer Science and Engineering, University of Notre Dame) email@example.com
Agent Based Modeling Approach to Self-Organizing Neural Networks (poster session)
We explore a novel self-organizing neural network topology. We have drawn inspiration from recent research into complex networks and advances in neurobiology, and applied it towards the development of an artificial neural network. An agent based approach is used to model the neurons and their connections, providing a richness of expression not available in other neural network simulations. The agent based paradigm is well suited for our exploration, as local interaction among the neurons drives the evolution of the global network topology. We demonstrate our simulation and discuss results.
Peter Schuster (Institut fuer Theoretische Chemie und Molekulare Strukturbiologie, Universitaet Wien) firstname.lastname@example.org
Optimization through variation and selection and other evolutionary phenomena can be studied in cell-free systems by means of populations of nucleic acid molecules, in particular ribonucleic acid molecules (RNA). Computer modeling of RNA evolution in vitro provides even deeper insights than experiments, because it allows one to trace the processes in populations with full genealogical information on all molecules. Molecules, representing partially autonomous agents, multiply and produce variants through imperfect copying. Selection operates on the level of the population, chooses between variants of different reproductive success, and provides a basis for the origin of biological information as well as primitive learning. A theoretical frame for modeling molecular evolution will be presented in the lecture, and several illustrative examples of simulations of evolutionary optimization will be discussed.
Reference: James P. Crutchfield & Peter Schuster, Eds. Evolutionary Dynamics - Exploring the Interplay of Selection, Accident, Neutrality, and Function. Oxford University Press, New York 2003.
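The core loop of such simulations, replication with imperfect copying plus population-level selection, can be sketched in miniature. This toy (binary "genomes", a hypothetical target sequence as fitness, fitness-proportional selection) is far simpler than RNA secondary-structure models, but it shows the variation-selection dynamics the abstract describes:

```python
import random

def evolve(target, pop_size=100, mu=0.02, generations=200, seed=0):
    """Toy variation-selection loop: binary genomes replicate with
    per-bit copying error rate mu; fitness counts matches to a
    hypothetical target; parents are drawn fitness-proportionally.
    Returns the best match fraction in the final population."""
    rng = random.Random(seed)
    L = len(target)
    pop = [[rng.randrange(2) for _ in range(L)] for _ in range(pop_size)]

    def fitness(g):
        return 1 + sum(a == b for a, b in zip(g, target))

    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Imperfect copying: each bit flips independently with prob. mu.
        pop = [[(b ^ 1) if rng.random() < mu else b for b in g]
               for g in parents]
    return max(fitness(g) - 1 for g in pop) / L

target = [1, 0] * 10   # hypothetical 20-bit target sequence
frac = evolve(target)
print(frac)  # best match fraction in the final population
```

Raising the error rate mu past a threshold destroys the population's ability to hold the target, the error-catastrophe phenomenon studied in quasispecies theory.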
Structure of Fitness Landscapes
Fitness landscapes arise as a unifying concept in evolutionary biology, the statistics of disordered systems, and evolutionary computation. A variety of different approaches can be used to quantify global geometric features of landscapes, among them measures of correlation, the distribution of metastable states, and the collection of energy barriers.
Joint work with Deddy Koesrindartoto email@example.com.
Given the serious reliability problems that have recently arisen in the restructured electricity markets in California and the Northeast, policymakers are now calling for more comprehensive testing of market restructuring proposals prior to implementation. The objective of our collaborative project with the Los Alamos National Laboratory is to test the reliability of the Wholesale Power Market Platform (WPMP), a standard market design proposed in April 2003 by the Federal Energy Regulatory Commission for U.S. wholesale electricity markets. The WPMP design has been adopted or submitted for adoption by wholesale electricity market operators in New England, New York, the mid-Atlantic states, California, and the Midwest.
The WPMP design is extremely complex and recent in origin, rendering difficult the application of traditional analytical and statistical tools. We are therefore taking a different approach, the development of an agent-based computational economics (ACE) framework incorporating the key features of the WPMP design. This ACE electricity framework will be used to test the extent to which the WPMP design results in efficient, fair, and orderly market operations over time, despite attempts by market participants to gain advantage through strategic pricing, capacity withholding, and induced transmission grid congestion. This is a challenging issue of utmost importance for social welfare and national security. Our presentation will report on our progress to date.
Agent-Collective Optimization through Influence Network Design
The dynamics of human, and most biological, populations is characterized by competition for resources. By its very nature, this dynamics creates a group of ``elites'', formed by those agents whose strategies are the most successful in the given situation; the rest of the agents therefore tend to follow, imitate, or interact with them, creating a social structure of leadership in the agent society. These inter-agent communications generate a complex social network of small-world character, which itself forms the substrate for a second network, the action network. The latter is a dynamic, adaptive, directed network, defined by those inter-agent communication links on the substrate along which the passed information/prediction is acted upon by other agents. Using the minority game for the competition dynamics, here we show that when the substrate network is highly connected, the action network spontaneously develops hubs with a broad distribution of out-degrees, defining a robust leadership structure that is scale-free. Furthermore, in certain realistic parameter ranges, facilitated by information passing on the action network, agents can spontaneously generate a high degree of cooperation, making the collective almost maximally efficient.
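For readers unfamiliar with the minority game used here as the competition dynamics, a minimal standalone version (without the social substrate or action network, and with parameter values chosen only for illustration) looks like this: an odd number of agents repeatedly pick side 0 or 1, the minority side wins, and each agent plays the best-scoring of a few fixed random lookup-table strategies over the recent outcome history.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2,
                  rounds=500, seed=1):
    """Minimal minority game: returns the attendance time series
    (number of agents choosing side 1 each round)."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # strategies[a][s] maps a history index to a choice in {0, 1}.
    strategies = [[[rng.randrange(2) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)
    attendance = []
    for _ in range(rounds):
        # Each agent plays its currently best-scoring strategy.
        choices = [strategies[a][max(range(n_strategies),
                                     key=lambda s: scores[a][s])][history]
                   for a in range(n_agents)]
        ones = sum(choices)
        minority = 0 if ones > n_agents // 2 else 1
        # Reward every strategy that would have chosen the minority side.
        for a in range(n_agents):
            for s in range(n_strategies):
                if strategies[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_hist
        attendance.append(ones)
    return attendance

att = minority_game()
mean = sum(att) / len(att)
var = sum((x - mean) ** 2 for x in att) / len(att)
print(round(mean, 1), round(var, 1))
```

The variance of attendance around n/2 is the standard inefficiency measure; the cooperation result in the abstract corresponds to this variance becoming nearly minimal when agents can act on information passed over the network.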
David H. Wolpert (NASA Ames Research Center) firstname.lastname@example.org http://ic.arc.nasa.gov/ic/projects/bayes-group/people/dhw/
Game Theory to Distributed Control and Back Again
Product Distribution (PD) theory is a powerful new formalism with potential applications throughout science. This talk starts by presenting PD theory in the context of conventional, full-rationality game theory. PD theory is then used to extend conventional game theory, by deriving the information-theoretic formulation of bounded rational game theory. Next the close relationship of that formulation with the canonical ensemble of statistical physics is delineated. This in turn leads to the application of PD theory to distributed control of multi-agent systems. As the audience prefers and time allows, the talk will end with a discussion of PD theory applied to distributed optimization, evolutionary game theory, numerical integration, sampling of probability densities, or management theory.