S. Stotter, J. Cartlidge, & D. Cliff, (2014),
“Behavioural investigations of financial trading agents
using Exchange Portal (ExPo).”
In: N. T. Nguyen, R. Kowalczyk, A. Fred, & J. Filipe, Eds.
Transactions on Computational Collective Intelligence XVII.
Berlin Heidelberg: Springer, pp. 22-45.
[Camera-Ready Copy Available Online]
[Bibtex]
[Abstract]
[Journal Link]
doi:10.1007/978-3-662-44994-3_2
(24 pages, approx. 7,000 words).
Abstract—Some major financial markets are currently
reporting that 50% or more of all transactions are now executed by automated
trading systems (ATS). To understand the impact of ATS proliferation on the
global financial markets, academic studies often use standard reference strategies,
such as AA and ZIP, to model the behaviour of real trading systems. Disturbingly,
we show that the reference algorithms presented in the literature are ambiguous,
thus reducing the validity of strict comparative studies. As a remedy, we suggest
disambiguated standard implementations of AA and ZIP. Using Exchange Portal (ExPo),
an open-source financial exchange simulation platform designed for real-time
behavioural economic experiments involving human traders and/or trader-agents,
we study the effects of disambiguating AA and ZIP, before introducing a novel
method of assignment-adaptation (ASAD). Experiments show that introducing ASAD
agents into a market with shocks can produce counter-intuitive market dynamics.
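For readers unfamiliar with the reference strategies, ZIP (Cliff, 1997) traders quote a price derived from a private limit price and a profit margin that is nudged towards a target price by a Widrow-Hoff learning rule. The sketch below illustrates that core update for a seller; parameter names and ranges are chosen for illustration and are not the paper's disambiguated specification.

```python
import random

class ZIPSeller:
    """Minimal illustrative ZIP-style seller (after Cliff, 1997).

    Hypothetical parameter values; the disambiguated specification discussed
    in the paper may differ in detail (momentum, perturbation ranges, etc.).
    """

    def __init__(self, limit_price, beta=0.3, momentum=0.05):
        self.limit = limit_price                   # cost price: never sell below this
        self.margin = random.uniform(0.05, 0.35)   # initial profit margin
        self.beta = beta                           # Widrow-Hoff learning rate
        self.momentum = momentum
        self.prev_change = 0.0

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def update(self, target_price):
        """Nudge the quote towards a target price derived from market events."""
        delta = self.beta * (target_price - self.quote())
        change = self.momentum * self.prev_change + (1.0 - self.momentum) * delta
        # Re-derive the margin from the adjusted quote, never dropping below cost.
        self.margin = max(0.0, (self.quote() + change) / self.limit - 1.0)
        self.prev_change = change
```

In the full algorithm the target price is a stochastically perturbed function of the most recent quote or trade, and buyers use a symmetric rule with a negative margin; it is precisely such details that the paper argues are ambiguous in the literature.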
J. Cartlidge & P. Clamp, (2014),
“Correcting a financial brokerage model for cloud computing: closing the
window of opportunity for commercialisation,”
Journal of Cloud Computing: Advances, Systems and Applications,
vol. 3, no. 2, pp. 1-20, Apr 2014.
[Open Access]
[Bibtex]
[Journal Link]
doi:10.1186/2192-113X-3-2
(20 pages, approx. 8,000 words).
Abstract— In April 2012, Rogers and Cliff (R&C)
demonstrated a theoretical financial brokerage model for cloud computing that
is profitable for the broker, offers reduced costs for cloud users, and generates
more predictable demand flow for cloud providers. Relatively cheap,
long-term reserved instances (RIs) are bulk-purchased by the broker, and
then re-packaged and re-sold as monthly options contracts at a price lower
than a user can purchase “on-demand” from the provider. Thus, the
broker risks exposure on purchase for margin on sales. R&C's result has
generated significant interest in the cloud computing community and is currently
the fifth most accessed research paper of all time in the Journal of Cloud
Computing: Advances, Systems and Applications.
Here, we perform an independent replication of
R&C's brokerage model using CReST, a discrete event simulation platform
for cloud computing developed at the University of Bristol. We identify two
implementation problems in R&C's original work: firstly, the broker buys fewer
RIs than the model suggests; and secondly, the broker is undercharged for RIs used.
We correct R&C's results accordingly: while the broker's profits are not as high as
R&C suggest, the model still supports the theoretical possibility of a
profitable brokerage.
However, aggressive competition between cloud
providers has reduced the cost of cloud services to users and led to the
introduction of new secondary markets where users can buy and sell RIs between
themselves. This has squeezed the opportunity for an intermediary brokerage.
By recalibrating R&C's model to fit current market conditions, we conclude
that the commercial viability of R&C's brokerage model has been eradicated.
The window of opportunity has now closed.
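The brokerage arithmetic at the heart of the model can be illustrated with a few lines of back-of-the-envelope Python; all prices below are invented for illustration and are not taken from R&C or from the recalibrated model.

```python
# Illustrative broker margin calculation; every figure here is hypothetical.
HOURS_PER_MONTH = 730

on_demand_hourly = 0.10   # what a user would pay the provider directly
ri_upfront = 300.0        # broker's up-front cost for a one-year reserved instance
ri_hourly = 0.04          # discounted hourly rate on the reserved instance

# Broker's monthly cost per RI, amortising the up-front fee over 12 months.
broker_cost = ri_upfront / 12 + ri_hourly * HOURS_PER_MONTH

# The broker prices a one-month option below the user's on-demand cost.
user_on_demand = on_demand_hourly * HOURS_PER_MONTH
option_price = 0.9 * user_on_demand   # a 10% saving for the user

print(f"broker cost/month: {broker_cost:.2f}")                   # 54.20
print(f"user on-demand:    {user_on_demand:.2f}")                 # 73.00
print(f"option price:      {option_price:.2f}")                   # 65.70
print(f"broker margin:     {option_price - broker_cost:.2f}")     # 11.50 if sold
# The broker's risk: RIs that are bought but never resold leave the broker
# exposed to the amortised cost with no offsetting revenue.
```

The two corrections described above act on exactly these quantities: buying too few RIs limits the volume over which the margin is earned, while undercharging for RIs used understates broker_cost and so overstates the margin.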
J. Cartlidge, (2014),
“Trading experiments using financial agents in a simulated cloud
computing commodity market,” in Proc. 6th Int. Conf. Agents and
Artif. Intelligence, Vol. 2 - Agents (ICAART-2014).
B. Duval, J. van den Herik, S. Loiseau & J. Filipe, Eds.
Angers, France: SciTePress, Mar. 2014, pp. 311-317.
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
doi:10.5220/0004925303110317
(7 pages, approx. 5,000 words).
Abstract—In September 2012, Amazon, the leading
Infrastructure as a Service (IaaS) provider, launched a secondary marketplace
venue for users to buy and sell cloud resources between themselves—the Amazon
EC2 Reserved Instance Marketplace (ARIM). ARIM is designed to encourage users
to purchase more long-term reserved instances, thus generating more
stable demand for the provider and additional revenue through commission on sales.
In this paper, we model ARIM using a multi-agent simulation model populated with
zero-intelligence plus (ZIP) financial trading agents. We demonstrate that ARIM
offers a new opportunity for market makers (MMs) to profit from buying and selling
resources, but suggest that this opportunity may be fleeting. We also demonstrate
that altering the market mechanism from a retail market (where only sellers post
offers; similar to ARIM) to a continuous double auction (where both buyers and
sellers post offers) can result in higher sale prices and therefore higher
commissions. Since IaaS is a multi-billion dollar industry and currently the
fastest growing segment of the cloud computing market, we therefore suggest that
Amazon may profit from altering the mechanism of ARIM to enable buyers to post bids.
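The mechanism change being suggested can be seen in a toy matching rule: in a retail-style market only sellers post offers, whereas in a continuous double auction both sides quote and trades occur when the book crosses. This is a generic sketch of the two formats, not the paper's simulation code, and the midpoint execution rule is an arbitrary simplification.

```python
def retail_trade(asks, bid_limit):
    """Retail market (ARIM-like): only sellers post offers; a buyer simply
    lifts the cheapest ask at or below its limit, trading at the ask price."""
    best_ask = min(asks)
    return best_ask if best_ask <= bid_limit else None

def cda_trade(asks, bids):
    """Continuous double auction: both sides post quotes; a trade occurs when
    the best bid crosses the best ask (executed here at the midpoint)."""
    best_ask, best_bid = min(asks), max(bids)
    return (best_ask + best_bid) / 2 if best_bid >= best_ask else None

# Visible buyer competition can lift execution prices above the best ask,
# which is why seller revenue, and hence exchange commission, can be higher.
print(retail_trade([30, 32, 35], bid_limit=34))   # 30
print(cda_trade([30, 32, 35], [34, 31, 28]))      # 32.0
```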
P. Clamp & J. Cartlidge, (2013),
“Pricing the cloud: An adaptive brokerage for cloud computing,”
in Proc. 5th Int. Conf. Advances in System Simulation (SIMUL-2013).
M. Bauer & P. Lorenz, Eds.
Venice, Italy: IARIA XPS Press, Oct 2013, pp. 113-121.
[Open Access]
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
(9 pages, approx. 6,500 words).
Abstract—Using a multi-agent social simulation model to
predict the behavior of cloud computing markets, Rogers & Cliff
(R&C) demonstrated the existence of a profitable cloud brokerage
capable of benefitting cloud providers and cloud consumers
alike. Functionally similar to financial market brokers, the cloud
broker matches provider supply with consumer demand. This
is achieved through options, a type of derivatives contract that
enables consumers to purchase the option, but not the obligation,
of later purchasing the underlying asset—a cloud computing
virtual machine instance—for an agreed fixed price. This model
benefits all parties: experiencing more predictable demand, cloud
providers can better optimize their workflow to minimize costs;
cloud users access cheaper rates offered by brokers; and cloud
brokers generate profit from charging fees. Here, we replicate
and extend the simulation model of R&C using CReST—an open-source,
discrete event, cloud data center simulation modeling platform
developed at the University of Bristol. Sensitivity analysis
reveals fragility in R&C's model. We address this by introducing
a novel method of autonomous adaptive thresholding (AAT) that
enables brokers to adapt to market conditions without requiring a
priori domain knowledge. Simulation results demonstrate AAT's
robustness, outperforming the fixed brokerage model of R&C
under a variety of market conditions. We believe this could
have practical significance in the real-world market for cloud
computing.
G. Baxter & J. Cartlidge, (2013),
“Flying by the seat of their pants: What can High Frequency Trading
learn from aviation?,” in Proc. 3rd Int. Conf. Application and Theory
of Automation in Command and Control Systems (ATACCS-2013).
Naples, Italy: ACM, May 2013, pp. 64-73.
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
(10 pages, approx. 8,500 words).
Abstract—As we build increasingly large scale
systems (and systems of systems), the level of complexity is also rising. We still
expect people to intervene when things go wrong, however,
and to diagnose and fix the problems. Aviation has a history
of developing systems with a very good safety record.
Domains such as high frequency trading (HFT), however,
have a much more chequered history. We note that there are
several parallels that can be drawn between aviation and
HFT. We highlight the ironies of automation that apply to
HFT, before going on to identify several lessons that have
been used to improve safety in aviation and show how they
can be applied to increase the resilience of HFT in particular.
J. Cartlidge & D. Cliff, (2013),
“Comparison of Cloud Middleware Protocols and Subscription Network
Topologies using CReST, the Cloud Research Simulation Toolkit,”
in Proc. 3rd Int. Conf. Cloud Computing and
Services Science (CLOSER-2013).
F. Desprez, D. Ferguson, E. Hadar, F. Leymann, M. Jarke & M. Helfert, Eds.
Aachen, Germany: SciTePress, May 2013, pp. 58-68.
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
[Presentation Slides]
(11 pages, approx. 6,000 words).
Shortlisted for Best Paper Award.
Abstract—We introduce the Cloud Research Simulation
Toolkit (CReST), a new cloud computing simulation tool designed to enable cloud
providers to research and test their systems before release. We compare CReST with
other known cloud simulation tools and demonstrate the utility of CReST by
evaluating different distributed middleware protocols and associated subscription
network topologies for robustness and reliability. Our results extend previous
work and demonstrate that the published literature contains inaccuracies. CReST
has been released as open-source under a Creative Commons license on SourceForge,
with the intention that it can be used and extended by the cloud computing
research community.
J. Cartlidge & D. Cliff, (2013),
“Evidencing the ‘robot phase transition’ in experimental
human-algorithmic markets,” in Proc. 5th Int. Conf. Agents and
Artif. Intelligence, Vol. 1 - Agents (ICAART-2013). J. Filipe & A. Fred, Eds. Barcelona, Spain:
SciTePress, Feb. 2013, pp. 345-352.
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
[Poster]
(8 pages, approx. 5,500 words).
Abstract—Johnson, Zhao, Hunsader, Meng, Ravindar,
Carran, and Tivnan (2012) recently suggested the existence of a phase transition
in the dynamics of financial markets in which there is free interaction between
human traders and algorithmic trading systems ("robots"). Above a particular
time-threshold, humans and robots trade with one another; below the threshold
all transactions are robot-to-robot. We refer to this abrupt system transition as
the "robot phase transition". Here, we conduct controlled experiments where
human traders interact with 'robot' trading agents in minimal models of electronic
financial markets to see if correlates of the two regimes suggested by Johnson
et al. (2012) occur in such laboratory conditions. Our results indicate that when
trading robots act on a super-human timescale, the market starts to fragment, with
statistically lower human-robot interactions than we would expect from a fully
mixed market. We tentatively conclude that this is the first empirical evidence
for the robot phase transition occurring under controlled experimental conditions.
S. Stotter, J. Cartlidge, & D. Cliff, (2013),
“Exploring assignment-adaptive (ASAD) trading agents in financial
market experiments,” in Proc. 5th Int. Conf. Agents and
Artif. Intelligence, Vol. 1 - Agents (ICAART-2013). J. Filipe & A. Fred, Eds. Barcelona, Spain:
SciTePress, Feb. 2013, pp. 77-88.
[Camera-Ready Copy Available Online]
[Bibtex]
[Conference Link]
Winner of Best Student Paper Award.
Abstract—Automated trading systems in the global financial
markets are increasingly being deployed to do jobs previously done by skilled
human traders: very often a human trader in the markets simply cannot tell whether
the counter-party to a trade is another human, or a machine. Clearly, automated
trading systems can easily be considered as “intelligent” software
agents. In this paper we report on experiments with software trader agents
running the well-known “AA” and “ZIP” strategies,
often used as reference benchmarks
in previously published studies; here we suggest disambiguated standard
implementations of these algorithms. Then, using Exchange Portal (ExPo),
an open-source financial exchange simulation platform designed for real-time
behavioural economic experiments involving human
traders and/or trader-agents, we explore the impact of introducing a new method
for assignment adaptation in ZIP. Results show that markets containing only
assignment adaptive (ASAD) agents equilibrate more quickly after market shocks than
markets containing only “standard” ZIP agents. However,
perhaps counterintuitively, in mixed heterogeneous populations of ASAD
agents and ZIP agents, ZIP agents outperform ASAD agents. Evidence
suggests that the behaviour of ASAD agents acts as a new signal
in the market that ZIP agents then use to beneficially alter their own behaviour,
to the detriment of the ASAD agents themselves.
J. Cartlidge, C. Szostek, M. De Luca, & D. Cliff, (2012),
“Too fast too furious: faster financial-market trading agents
can give less efficient markets,” in
Proc. 4th Int. Conf. Agents and Artif. Intelligence, Vol. 2 - Agents (ICAART-2012),
J. Filipe & A. Fred, Eds. Vilamoura, Portugal:
SciTePress, Feb. 2012, pp. 126-135.
[Available Online]
[Bibtex]
[Conference Link]
[Presentation Slides]
doi:10.5220/0003720301260135
(10 pages, approx. 7,500 words).
Abstract—For many of the world's major financial
markets, the proportion of market activity that is due to the actions
of “automated trading” software agents is rising:
in Europe and the USA, major exchanges are reporting that 30%-75%
of all transactions currently involve automated traders. This is a major
application area for artificial intelligence and autonomous agents, yet
there have been very few controlled laboratory experiments studying the
interactions between human and software-agent traders. In this paper we
report on results from new human-agent experiments using the OpEx
experimental economics system first introduced at ICAART-2011. Experiments
explore the extent to which the performance of the traders, and of the
market overall, is dependent on the speed at which the agents operate.
Surprisingly, we found that slowing down the agents increased the market's
overall ability to settle to a competitive equilibrium, and that
slow-agent markets were more efficient.
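"Efficiency" here refers to the standard experimental-economics measures; a minimal sketch of allocative efficiency and of Smith's alpha (the usual price-convergence metric) is given below, with invented example data.

```python
import statistics

def allocative_efficiency(actual_profits, max_theoretical_profit):
    """Total profit extracted as a fraction of the competitive-equilibrium maximum."""
    return sum(actual_profits) / max_theoretical_profit

def smiths_alpha(transaction_prices, equilibrium_price):
    """Smith's alpha: root-mean-square deviation of trade prices from the
    theoretical equilibrium, as a percentage of the equilibrium price
    (lower values indicate better convergence)."""
    rms = statistics.mean((p - equilibrium_price) ** 2 for p in transaction_prices) ** 0.5
    return 100.0 * rms / equilibrium_price

# Invented illustrative numbers.
print(allocative_efficiency([12.0, 9.5, 14.0], max_theoretical_profit=40.0))  # 0.8875
print(smiths_alpha([98, 101, 103, 97], equilibrium_price=100))                # ~2.4
```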
J. Cartlidge & D. Cliff, (2012),
“Exploring the ‘robot phase transition’ in
experimental human-algorithmic markets.”
Foresight,
The Future of Computer Trading in Financial Markets,
Driver Review DR25, Crown Copyright, Apr. 2012.
[Available Online]
[Available Online 2]
[Bibtex]
[Abstract]
(50 pages, approx. 13,000 words). URN: 12/1058
Abstract—Johnson et al. (2012) recently argued that analysis of millisecond-by-millisecond stock-price
movements between 2006 and 2011 suggests the existence of a step-change or phase
transition in the dynamics and behaviour of financial markets, in which human traders and
automated algorithmic 'robot' trading systems freely interact. Above a particular time-threshold,
humans and algorithmic systems trade with one another; below the threshold, there is a
sudden change to a market in which humans cannot participate and where all transactions are
robot-to-robot. We refer to this abrupt system transition from a mixed human-robot phase to an
all-robot phase as the 'robot phase transition'. At sub-second timescales, below the robot
transition, Johnson et al. argue that the robot-only market exhibits 'fractures' - ultra-fast swings
in price akin to mini flash-crashes - that are undesirable, little understood, and intriguingly
appear to be linked to longer-term instability of the market as a whole.
Here, we report on using a complementary approach to the historical data analysis employed by
Johnson et al.: in March 2012 we conducted laboratory-style experiments where human traders
interacted with algorithmic trading agents (i.e., robots) in minimal experimental models of
electronic financial markets using De Luca's (2011) OpEx artificial financial exchange. Our aim
was to see if correlates of the two regimes suggested by Johnson et al. occur in such
laboratory conditions. Our results thus far do indeed indicate that when trading robots act on a
super-human timescale, the market starts to fragment, with statistically lower human-robot
interactions than we would expect from a fully mixed market. In contrast, when robotic
trader agents are slowed to a thinking-and-reaction speed similar to that of humans, less
fragmentation is observed. We tentatively conclude that this is evidence for the robot transition
occurring in controlled experimental financial market systems.
The work reported here also explores the effects of increasing the degree of realism in the
laboratory experiments: we find that some statistically significant effects may be consequences
of constraints introduced to make the analysis of the experiment results easier, and we report
on our discovery of a problem with earlier OpEx experiments that casts doubt on their results.
M. De Luca, C. Szostek, J. Cartlidge, & D. Cliff, (2011),
“Studies of interactions between human traders and algorithmic
trading systems.”
Foresight,
The Future of Computer Trading in Financial Markets,
Driver Review DR13, Crown Copyright, Sep. 2011.
[Available Online]
[Available Online 2]
[Bibtex]
[Abstract]
(60 pages, approx. 23,000 words). URN: 11/1232
Abstract—In recent years there has been
a very significant increase in the percentage of trades in the global
financial markets that are initiated and executed by automated “robot”
algorithmic trading software systems, autonomously performing trading
roles that a decade or more ago would have been performed by human
traders. The anonymity of many current electronic trading systems,
operated by major exchanges and multilateral trading facilities,
means that an individual trader, whether human or robot, never knows
if the counterparty to a particular trade is a human or not. There
are commonly-quoted estimates that the proportion of robot-executed
trades is approaching 30%-70% on major European and US equity exchanges.
In foreign-exchange markets, where there are no central exchanges,
the proportion of spot (immediate-execution) transactions that are
executed by robots is widely believed to be even higher. From this,
it is clear that the current global financial markets involve a
very significant degree of interaction between human and robot traders.
The interactions between human traders
in electronic markets have long been studied in the field known as
Experimental Economics, and more recently the interactions
between software-agent traders in electronic markets have been the
topic of various abstract research studies in so-called Agent-based
Computational Economics (ACE). These two research fields are
largely distinct: the first studies markets populated entirely by
human traders; the second studies markets populated entirely by
algorithmic software-agent traders. There is a surprising lack of
studies of the interactions between human traders and robot traders.
That is, there is very little scientific literature that explores
heterogeneous markets, populated by both humans and
robots.
In this document we review the very small
amount of published literature that does describe scientific studies
of interactions between human and robot traders under experimental
conditions. We contend that the relative lack of such studies is
a serious omission from the literature. We propose that De Luca’s
(2010) Open Exchange (OpEx) open-source design for studying
human-robot interactions in electronic marketplaces should be used
as a free de facto standard for future work in this area.
We illustrate the use of OpEx by summarising recently published
peer-reviewed accounts of early experiments with OpEx, and then
present a detailed description and analysis of results from some
new experiments, conducted specifically for this review document,
where we relax some of the artificial experimental constraints that
have been used in earlier work.
Experiments with the OpEx system indicate
that the previously reported outperformance of the algorithmic trading
systems over humans may well be related to the artificial nature
of the experiment design that was employed in the earlier research:
a design essentially unchanged since the first experimental economics
results were published in the early 1960s. When the flow of orders
in the market was trickled in gradually (rather than all orders
being released simultaneously, which was an artificial constraint
in the designs of the earlier experiments) the performance of the
software agents was no longer so clearly superior. Furthermore,
when the software agents were slowed to operate on the same sort
of timescales that human traders act on, the data we have thus far
indicates that the market dynamics alter, but further experiments
would be required to establish the significance of this with appropriate
levels of certainty.
J. Cartlidge & I. Sriram, (2011), “Modelling resilience in
cloud-scale data centres,” in Proc. 23rd European Modeling &
Simulation Symposium (EMSS-2011), A. G. Bruzzone et al., Eds. Rome, Italy:
University of Genoa Press, Sep. 2011, pp. 299-307.
[Available Online]
[Bibtex]
[Abstract]
[Conference Link]
[Presentation Slides]
(9 pages, approx. 6,000 words).
Abstract—The trend for cloud computing has
initiated a race towards data centres (DC) of an ever-increasing size.
The largest DCs now contain many hundreds of thousands of virtual
machine (VM) services. Given the finite lifespan of hardware, such
large DCs are subject to frequent hardware failure events that can
lead to disruption of service. To counter this, multiple redundant
copies of task threads may be distributed around a DC to ensure that
individual hardware failures do not cause entire jobs to fail. Here,
we present results demonstrating the resilience of different job
scheduling algorithms in a simulated DC with hardware failure. We
use a simple model of jobs distributed across a hardware network to
demonstrate the relationship between resilience and additional
communication costs of different scheduling methods.
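A back-of-the-envelope reliability calculation shows why replicating task threads across independent hosts improves resilience; this is a generic illustration under an independence assumption, not the paper's simulation model, which places replicas on an explicit hardware network.

```python
def job_failure_probability(p_fail_host, replicas, threads):
    """Probability a job fails when each of its `threads` tasks is replicated
    on `replicas` hosts, each failing independently with probability
    p_fail_host. A thread is lost only if all of its replicas fail; the job
    fails if any thread is lost."""
    p_thread_lost = p_fail_host ** replicas
    return 1.0 - (1.0 - p_thread_lost) ** threads

# With 1,000 threads and a 1% chance of losing any given host:
for r in (1, 2, 3):
    print(r, round(job_failure_probability(0.01, r, 1000), 4))
# Falls from ~1.0 (no redundancy) to ~0.0952 (two replicas) to ~0.001 (three).
```

The trade-off studied in the paper is that the extra replicas are not free: their placement determines the additional communication cost each scheduling method incurs.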
J. Cartlidge & D. Ait-Boudaoud, (2011), “Autonomous
virulence adaptation improves coevolutionary optimization,”
IEEE Trans. Evol. Comput., vol. 15, no. 2, pp. 215–229, Apr. 2011.
[Available Online]
[Bibtex]
[Abstract]
[Journal Link]
doi:10.1109/TEVC.2010.2073471
(15 pages, approx. 13,000 words).
Abstract—A novel approach for the autonomous
virulence adaptation (AVA) of competing populations in a coevolutionary
optimization framework is presented. Previous work has demonstrated
that setting an appropriate virulence, v, of populations
accelerates coevolutionary optimization by avoiding detrimental
periods of disengagement. However, since the likelihood of disengagement
varies both between systems and over time, choosing the ideal value
of v is problematic. The AVA technique presented here uses
a machine learning approach to continuously tune v as system
engagement varies. In a simple, abstract domain, AVA is shown to
successfully adapt to the most productive values of v. Further
experiments, in more complex domains of sorting networks and maze
navigation, demonstrate AVA’s efficiency over reduced virulence
and the layered Pareto coevolutionary archive.
J. Cartlidge & S. Phelps, (2011), “Estimating demand
for dynamic pricing in electronic markets,” GSTF
Journal on Computing (JoC), vol. 1, no. 2, pp. 128–133, Feb. 2011.
[Available Online]
[Bibtex]
[Abstract]
[Journal Link]
doi:10.5176/2010-2283_1.2.50
(6 pages, approx. 4,500 words).
Abstract—Economic theory suggests sellers
can increase revenue through dynamic pricing; selling identical
goods or services at different prices. However, such discrimination
requires knowledge of the maximum price that each consumer is willing
to pay; information that is often unavailable. Fortunately, electronic
markets offer a solution; generating vast quantities of transaction
data that, if used intelligently, enable consumer behaviour to be
modelled and predicted.
Using eBay as an exemplar market, we introduce
a model for dynamic pricing that uses a statistical method for deriving
the structure of demand from temporal bidding data. This work is
a tentative first step of a wider research program to discover a
practical methodology for automatically generating dynamic pricing
models for the provision of cloud computing services; a pertinent
problem with widespread commercial and theoretical interest.
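As a schematic of what "deriving the structure of demand from temporal bidding data" involves, the toy function below builds an empirical demand curve from observed maximum bids. The paper's statistical method goes further, in particular correcting for censored demand from bidders whose true reserve prices are never revealed; this sketch ignores that correction and uses made-up data.

```python
def empirical_demand(max_bids, candidate_prices):
    """For each candidate price, count how many observed bidders were willing
    to pay at least that much. No correction for censored (hidden) demand."""
    return [(p, sum(1 for b in max_bids if b >= p)) for p in candidate_prices]

# Hypothetical maximum bids collected from auction histories for one item class.
bids = [12.5, 9.0, 15.0, 11.0, 8.5, 14.0, 10.0]
for price, quantity in empirical_demand(bids, candidate_prices=[8, 10, 12, 14, 16]):
    print(f"price {price:>2}: demand {quantity}")
```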
J. Cartlidge & S. Phelps, (2010), “Estimating consumer
demand from high-frequency data,” in Proc. Ann. Int. Academic
Conf. Business Intelligence and Data Warehousing (BIDW-2010), K. Kumar,
Ed. Singapore: Global Science and Technology Forum (GSTF),
Jul. 2010, pp. 132–138.
[Available Online]
[Bibtex]
[Abstract]
[Conference Link]
doi:10.5176/978-981-08-6308-1_14
(7 pages, approx. 5,000 words).
Abstract—Price discrimination offers sellers
the possibility of increasing revenue by capturing a market’s consumer
surplus: arising from the low price elasticity segment of customers that
would have been prepared to pay more than the current market price.
First degree price discrimination requires a seller to know the maximum
(reserve) price that each consumer is willing to pay. Discouragingly,
this information is often unavailable; making the theoretical ideal
a practical impossibility.
Electronic commerce offers a solution; with
loyalty cards, transaction statements and online accounts all providing
channels of customer monitoring. The vast amount of data
generated—eBay alone produces terabytes daily—creates an
invaluable repository of information that, if used intelligently,
enables consumer behaviour to be modelled and predicted. Thus, once
behavioural models are calibrated, price discrimination can be tailored
to the level of the individual.
Here, we introduce a statistical method
designed to model the behaviour of bidders on eBay to estimate
demand functions for individual item classes. Using eBay’s temporal
bidding data—arrival times, price, user id—the model
generates estimates of individual reserve prices for market participants;
including hidden, or censored, demand not directly contained within
the underlying data. Market demand is then estimated, enabling eBay
power sellers—large professional bulk-sellers—to optimize
sales and increase revenue. Proprietary software automates this process:
analyzing data; modelling behaviour; estimating demand; and generating
sales strategy.
This work is a tentative first step of a
wider, ongoing, research program to discover a practical methodology
for automatically calibrating models of consumers from large-scale
high-frequency data. Multi-agent systems and artificial intelligence
offer principled approaches to the modelling of complex interactions
between multiple individuals. The goal is to dynamically model market
interactions using realistic models of individual consumers. Such models
offer greater flexibility and insight than static, historical,
data analysis alone.
J. Cartlidge, (2008), “Dynamically adapting parasite virulence
to combat coevolutionary disengagement (abstract),” in Proc.
11th Int. Conf. Simulation and Synthesis Living Systems (Alife-11),
S. Bullock et al., Eds. Winchester, UK: MIT Press, Aug. 2008, p. 757.
[Available Online]
[Bibtex]
[Abstract]
[Conference Link]
[Presentation Slides]
(1 page, approx. 500 words).
Abstract— Participating in an evolutionary
arms-race, natural coevolutionary predator-prey and host-parasite
systems often exhibit accelerated evolution. Competitive Coevolutionary
Genetic Algorithms (CCGAs) attempt to harness this evolutionary
acceleration by engaging (multiple) evolving populations in competitive
self-play.
By evaluating individuals competitively,
CCGAs afford the possibility of tackling problems that are ill-defined,
open-ended and lacking in formalism. This offers CCGAs a potential
advantage over more traditional Genetic Algorithms (GAs) when fitness
evaluation is difficult to operationally define. Analogously, one imagines
that it is much easier to formally define the rules of the game of tennis
than it is to define tennis playing ability. In practice, however,
defining an appropriate game is often non-trivial.
Competitive evaluation leaves CCGAs
susceptible to some adverse evolutionary dynamics. One such hindrance
is “disengagement”. This occurs when one coevolving population gets the
upper hand and begins to easily outperform the other. Since it becomes
impossible to discriminate between individuals according to ability, the
selection gradient disappears and the coevolving populations begin to
stagnate. The result is a stymied system that is left to flounder
aimlessly.
To prevent disengagement, Cartlidge has
previously introduced the “Reduced Virulence” technique
(Cartlidge & Bullock, 2004, Evol. Comp., 12, p.193).
This technique helps avoid disengagement by reining in a population
that inherits an advantageous bias. Rather than reward individuals
that maximally damage a competitor, Reduced Virulence favors individuals
that give opponents a chance. Perhaps counter-intuitively, Reduced
Virulence enables accelerated evolutionary progress by disadvantaging
a population’s most successful individuals.
In this work, Reduced Virulence undergoes a
rigorous sensitivity analysis in the Counting Ones domain
(introduced by Watson & Pollack, 2001, GECCO, p.702, Morgan Kaufmann);
an analytically tractable substrate designed to highlight the dynamics
of coevolution. Following intuition, it is shown that for optimal
performance, virulence should be increasingly reduced as the
asymmetrical bias (and thus likelihood of disengagement) between
coevolving populations increases. Interestingly, even when coevolution
is unbiased, “Maximum Virulence”—equivalent to the canonical fitness
evaluation of “reward all victories”—is shown not to be ideal. Thus,
results suggest that (in the Counting Ones domain) when population
sizes are small, it is never the case that the canonical coevolutionary
setup should be favored. The generality of this result, however,
is an open question.
Utilizing this information, a novel
“Dynamic Virulence” algorithm is introduced. This algorithm adapts
population virulence over time as populations evolve. It is shown that
Dynamic Virulence is able to cope with varying bias better than fixed
virulence and allows the discovery of optimal solutions under a much
wider range of conditions than any individual fixed virulence setting.
Finally, it is discussed how analyzing the
role of virulence in artificial systems may allow us to better
understand virulence in nature. For instance, perhaps there is
potential for a “Reduced Virulence” approach to tackling infectious
diseases. Rather than killing mosquitoes to eradicate malaria, one
could alternatively encourage malaria-resistant strains that are
better able to survive.
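The Reduced Virulence scheme referred to above replaces the canonical "reward all victories" scoring with a fitness function that peaks below maximum dominance. A minimal sketch follows, using the functional form usually quoted from Cartlidge & Bullock (2004), f(x, v) = 2x/v - x²/v², where x is the normalised score against the opposing population and v is the virulence setting; treat the exact form as a reading of that paper rather than a restatement of it.

```python
def reduced_virulence_fitness(x, v):
    """Fitness of an individual with normalised score x in [0, 1] under
    virulence v in [0.5, 1]. With v = 1.0 (maximum virulence) fitness rises
    monotonically with x; with v < 1.0 it peaks at x = v, so individuals
    that beat everything are penalised, which keeps the populations engaged."""
    return (2.0 * x) / v - (x * x) / (v * v)

for v in (1.0, 0.75, 0.5):
    curve = [round(reduced_virulence_fitness(x / 10.0, v), 2) for x in range(11)]
    print(f"v = {v}: {curve}")
```

A Dynamic Virulence scheme, as described above, would then adjust v during the run as the bias between the populations changes.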
J. Cartlidge & S. Bullock, (2004), “Combating coevolutionary
disengagement by reducing parasite virulence,” Evol. Comput.,
vol. 12, no. 2, pp. 193–222, Summer 2004.
[Available Online]
[Bibtex]
[Abstract]
[Journal Link]
doi:10.1162/106365604773955148
(30 pages, approx. 15,000 words).
Abstract—While standard evolutionary algorithms
employ a static, absolute fitness metric, coevolutionary algorithms
assess individuals by their performance relative to populations
of opponents that are themselves evolving. Although this arrangement
offers the possibility of avoiding long-standing difficulties such
as premature convergence, it suffers from its own unique problems:
cycling, over-focusing, and disengagement.
Here, we introduce a novel technique for
dealing with the third and least explored of these problems. Inspired
by studies of natural host-parasite systems, we show that disengagement
can be avoided by selecting for individuals that exhibit reduced
levels of “virulence”, rather than maximum ability to defeat coevolutionary
adversaries. Experiments in both simple and complex domains are
used to explain how this counterintuitive approach may be used to
improve the success of coevolutionary algorithms.
J. Cartlidge & S. Bullock, (2004), “Unpicking tartan
CIAO plots: Understanding irregular coevolutionary cycling,”
Adaptive Behavior, vol. 12, no. 2, pp. 69–92, Jun. 2004.
[Available Online]
[Bibtex]
[Abstract]
[Journal Link]
doi:10.1177/105971230401200201
(24 pages, approx. 14,000 words).
Abstract—We report results from a series
of studies coevolving players for simple Rock–Paper–Scissors games.
These results demonstrate that “Current Individual versus Ancestral
Opponent” (CIAO) plots, which have been proposed as a visualization
technique for detecting both coevolutionary progress and coevolutionary
cycling, suffer from ambiguity with respect to an important but
rarely discussed class of cyclic behavior. While regular cycling
manifests itself as a characteristic banded plot, irregular cycling
produces an irregular tartan pattern which is also consistent with
random drift through strategy space. Although this tartan pattern
is often reported in the literature on coevolutionary algorithms,
it has received little attention or analysis. Here we argue that
irregular cycling will tend to be more prevalent than regular cycling,
and that it corresponds to a class of coevolutionary scenario that
is important both theoretically and in practice. As such, it is
desirable that we improve our ability to distinguish its occurrence
from that of random drift, and other forms of coevolutionary dynamic.
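For readers unfamiliar with the technique, a CIAO plot is built by playing each generation's current best individual against the best ancestral opponents from every earlier generation and shading the resulting score matrix. A minimal construction is sketched below; play() is a placeholder for whatever game the populations are coevolving on.

```python
def ciao_matrix(current_champions, ancestral_opponents, play):
    """Build a CIAO (Current Individual vs Ancestral Opponent) score matrix.

    current_champions[g]   : best individual of generation g from population A
    ancestral_opponents[h] : best individual of generation h from population B
    play(a, b)             : user-supplied; returns A's score against B in [0, 1]

    Cell [g][h] holds the generation-g champion's score against the
    generation-h ancestor (h <= g). Regular cycling shows up as diagonal
    banding; irregular cycling yields the tartan pattern discussed in the
    paper, which is easily mistaken for random drift.
    """
    return [
        [play(current_champions[g], ancestral_opponents[h]) for h in range(g + 1)]
        for g in range(len(current_champions))
    ]
```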
J. Cartlidge, (2004), “Rules of Engagement:
Competitive Coevolutionary Dynamics in Computational Systems,”
PhD thesis, Sch. Comput., Univ. Leeds, UK.
[Available Online]
[Bibtex]
[Abstract]
[Department Link]
(210 pages, approx. 75,000 words).
Abstract—Given that evolutionary biologists
have considered coevolutionary interactions since the dawn of Darwinism,
it is perhaps surprising that coevolution was largely overlooked
during the formative years of evolutionary computing. It was not
until the early 1990s that Hillis’ seminal work thrust coevolution
into the spotlight.
Upon attempting to evolve fixed-length
sorting networks, a problem with a long and competitive history,
Hillis found that his standard evolutionary algorithm was producing
sub-standard networks. In response, he decided to reciprocally evolve
a population of test lists against the sorting network population;
thus producing a coevolutionary system. The result was impressive;
coevolution not only outperformed evolution, but the best network
it discovered was only one comparison longer than the best-known
solution. For the first time, a coevolutionary algorithm had been
successfully applied to problem-solving.
Pre-Hillis, the shortcomings of standard
evolutionary algorithms had been understood for some time: whilst
defining an adequate fitness function can be as challenging as the
problem one is hoping to solve, once achieved, the accumulation
of fitness-improving mutations can push a population towards local
optima that are difficult to escape. Coevolution offers a solution.
By allowing the fitness of each evolving individual to vary (through
competition) with other reciprocally evolving individuals, coevolution
removes the requirement of a fitness yardstick. In conjunction,
the reciprocal adaptations of each individual begin to erode local
optima as soon as they appear.
However, coevolution is no panacea. As
a problem-solving tool, coevolutionary algorithms suffer from some
debilitating dynamics, each a result of the relative fitness assessment
of individuals. In a single-, or multi-, population competitive
system, coevolution may stabilize at a suboptimal equilibrium, or
mediocre stable state; analogous to the traditional problem of local
optima. Populations may become highly specialized in an unanticipated
(and undesirable) manner; potentially resulting in brittle solutions
that are fragile to perturbation. The system may cycle; producing
dynamics similar to the children's game rock-paper-scissors. Disengagement
may occur, whereby one population out-performs another to the extent
that individuals cannot be discriminated on the basis of fitness
alone; thus removing selection pressure and allowing populations
to drift. Finally, coevolution's relative fitness assessment renders
traditional visualization techniques (such as the graph of fitness
over time) obsolete; thus exacerbating each of the above problems.
This thesis attempts to better understand
and address the problems of coevolution through the design and analysis
of simple coevolutionary models. “Reduced virulence”—
a novel technique specifically designed to tackle disengagement—is
developed. Empirical results demonstrate the ability of reduced
virulence to combat disengagement both in simple and complex domains,
whilst outperforming the only known competitors. Combining reduced
virulence with diversity maintenance techniques is also shown to
counteract mediocre stability and over-specialization.
A critique of the CIAO plot—a visualization
technique developed to detect coevolutionary cycling—highlights
previously undocumented ambiguities; experimental evidence demonstrates
the need for complementary visualizations. Extending the scope of
visualization, a first exploration into coevolutionary steering
is performed; a technique allowing the user to interact with a coevolutionary
system during run-time. Using a simple model incorporating reduced
virulence, the coevolutionary steering demonstration highlights
the future potential of such tools for both research and education.
The role of neutrality in coevolution
is discussed in detail. Whilst much emphasis is placed upon neutral
networks in the evolutionary computation literature, the nature
of coevolutionary neutrality is generally overlooked. Preliminary
ideas for modelling coevolutionary neutrality are presented.
Finally, whilst this thesis is primarily
aimed at a computing audience, strong reference to evolutionary
biology is made throughout. Exemplifying potential crossover, the
CIAO plot, a tool previously unused in biology, is applied to a
simulation of E. coli, with results confirming empirical observations
of real bacteria.
J. Cartlidge & S. Bullock, (2003), “Caring versus sharing:
How to maintain engagement and diversity in coevolving populations,”
in Proc. 7th Eur. Conf. Artif. Life (ECAL’03), W. Banzhaf et al., Eds.
Dortmund, Germany: Springer Verlag, Sep. 2003, pp. 299–308.
[Available Online]
[Bibtex]
[Abstract]
doi:10.1007/978-3-540-39432-7_32
(10 pages, approx. 5,000 words).
Abstract—Coevolutionary optimisation suffers
from a series of problems that interfere with the progressive escalating
arms races that are hoped might solve difficult classes of optimisation
problem. Here we explore the extent to which encouraging moderation
in one coevolving population (termed parasites) can alleviate the
problem of coevolutionary disengagement. Results suggest that, under
these conditions, disengagement is avoided through maintaining variation
in relative fitness scores. In order to explore whether standard
diversity maintenance techniques such as resource sharing could
achieve the same effects, we compare moderating virulence with resource
sharing in a simple matching game. We demonstrate that moderating
parasite virulence differs significantly from resource sharing, and
that its tendency to prevent disengagement can also reduce the
likelihood of coevolutionary optimisation halting at mediocre stable
states.
S. Bullock, J. Cartlidge, & M. Thompson, (2002), “Prospects
for computational steering of evolutionary computation,” in
Workshop Proc. 8th Int. Conf. Artif. Life, E. Bilotta et al., Eds.
Sydney, Australia: MIT Press, Dec. 2002, pp. 131–137.
[Available Online]
[Bibtex]
[Abstract]
[Conference Link]
(7 pages, approx. 4,500 words).
Abstract—Currently, evolutionary
computation (EC) typically takes place in batch mode: algorithms
are run autonomously, with the user providing little or no
intervention or guidance. Although it is rarely possible to
specify in advance, on the basis of EC theory, the optimal
evolutionary algorithm for a particular problem, it seems
likely that experienced EC practitioners possess considerable
tacit knowledge of how evolutionary algorithms
work. In situations such as this, computational steering
(ongoing, informed user intervention in the execution
of an otherwise autonomous computational process) has
been profitably exploited to improve performance and
generate insights into computational processes. In this
short paper, prospects for the computational steering of
evolutionary computation are assessed, and a prototype
example of computational steering applied to a coevolutionary
algorithm is presented.
J. Cartlidge & S. Bullock, (2002), “Learning
lessons from the common cold: How reducing parasite virulence
improves coevolutionary optimization,” in Proc. Congr.
Evol. Comput. (CEC’02), D. Fogel et al., Eds.
Honolulu, HI: IEEE Press, Jun. 2002, pp. 1420–1425.
[Available Online]
[Bibtex]
[Abstract]
[Conference Link]
doi:10.1109/CEC.2002.1004451
(7 pages, approx. 4,500 words).
Abstract—Inspired by the virulence of natural
parasites, a novel approach is developed to tackle disengagement,
a detrimental phenomenon coevolutionary systems sometimes experience.
After demonstrating beneficial results in a simple model,
minimum-comparison sorting networks are coevolved, with results
suggesting that moderating parasite virulence can help in practical
problem domains.
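In the sorting-network domain mentioned above, a candidate network is a fixed list of compare-exchange operations and the "parasites" are input sequences it must sort; by the zero-one principle, testing binary sequences suffices. The sketch below shows a generic form of such a competitive evaluation; it is not the paper's code, and the scoring convention is an assumption.

```python
def apply_network(network, sequence):
    """Apply a sorting network, given as a list of (i, j) compare-exchange
    pairs with i < j, to a copy of the input sequence."""
    seq = list(sequence)
    for i, j in network:
        if seq[i] > seq[j]:
            seq[i], seq[j] = seq[j], seq[i]
    return seq

def host_score(network, parasite_inputs):
    """Fraction of parasite test inputs the network sorts correctly; the
    parasites' score is the complement, giving the competitive evaluation
    that virulence moderation then reshapes."""
    correct = sum(apply_network(network, s) == sorted(s) for s in parasite_inputs)
    return correct / len(parasite_inputs)

# A five-comparator network that sorts any four inputs.
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
tests = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]
print(host_score(net4, tests))  # 1.0
```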
©John Cartlidge.