GBML methods are a niche approach
to machine learning and much less well-known than the main
non-evolutionary methods, but there are many good reasons to consider
them.
- Accuracy
- Importantly, the classification accuracy of the best evolutionary and
non-evolutionary methods is comparable [95] §12.1.1.
- Synergy of Learning and Evolution
- GBML methods exploit the synergy of learning and evolution, combining
global and local search and benefiting from the Baldwin effect's
smoothing of the fitness landscape §2.3.
- Epistasis
- There is some evidence that the accuracy of GBML methods
may not suffer from epistasis as much as that of typical non-evolutionary
greedy search [95] §12.1.1.
- Integrated Feature Selection and Learning
- GBML methods can combine feature selection and learning in one
process. For instance, feature selection is intrinsic to LCS
methods §3.5.
- Adapting Bias
- GBML methods are well-suited to adapting inductive bias. We can adapt
representational bias, e.g. by selecting rule condition
shapes §3.5.2, and
algorithmic bias, e.g. by evolving learning rules §3.4.
- Exploiting Diversity
- We can exploit the diversity of a population of solutions to combine
and improve predictions (the ensemble approach §3.3) and to generate Pareto sets for multiobjective problems.
- Dynamic Adaptation
- All the above can be done dynamically,
to improve accuracy, to deal with non-stationarity, and to minimise
population size. The last is of interest as a means of reducing
overfitting, improving run-time and improving human-readability.
- Universality
- Evolution can be used as a wrapper for any learner.
- Parallelisation
- Population-based search is easily
parallelised.
- Suitable Problem Characteristics
- From an optimisation perspective,
learning problems are typically large, non-differentiable, noisy,
epistatic, deceptive, and multimodal [207]. To this list
we could add high-dimensional and highly constrained. EAs are a good
choice for such problems.
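To illustrate the intrinsic feature selection of LCS methods noted above, here is a minimal sketch (the function and examples are illustrative, not from the surveyed systems) of the ternary rule conditions used in many classic LCS: a '#' (don't care) at a position makes the rule ignore that feature, so evolving conditions with more '#'s amounts to selecting features as a side effect of learning.

```python
def matches(condition, state):
    """Return True if a ternary LCS condition matches a binary state.

    Each condition position is '0', '1' or '#' (don't care); a '#'
    means the rule ignores that feature, so a rule with many '#'s has
    effectively deselected those features.
    """
    return all(c == '#' or c == s for c, s in zip(condition, state))

# This rule only tests the first and last features; the middle two
# are ignored (deselected).
print(matches("1##0", "1110"))  # True: positions 2 and 3 are don't-cares
print(matches("1##0", "0110"))  # False: first feature mismatches
```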
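The wrapper idea mentioned above can be sketched as a toy genetic algorithm that evolves a feature-subset bit mask for an arbitrary black-box learner. Everything here is a hypothetical sketch: `fitness` stands in for whatever evaluation one plugs in (e.g. cross-validated accuracy of the wrapped learner restricted to the masked features), and the operators and parameter values are illustrative defaults, not those of any surveyed system.

```python
import random

def evolve_feature_mask(n_features, fitness, pop_size=20, generations=30):
    """Toy generational GA evolving a bit mask over features.

    `fitness` is any function mask -> score; the wrapped learner is
    treated as a black box inside it, which is what makes evolution
    usable as a wrapper for any learner.
    """
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_features)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical fitness: reward masks close to a known-good subset.
target = [1, 0, 1, 1, 0, 0]
best = evolve_feature_mask(
    6, lambda m: -sum(x != t for x, t in zip(m, target)))
```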
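On parallelisation: each individual's fitness can be computed independently, so evaluation (usually the dominant cost) parallelises with no communication between individuals. A minimal sketch using Python's standard `concurrent.futures` pool, with a trivial stand-in fitness function:

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Stand-in for an expensive evaluation, e.g. training and
    # validating the learner encoded by `individual`.
    return sum(individual)

population = [[i % 2, (i + 1) % 2, 1] for i in range(8)]

# Individuals are evaluated independently, so a plain map over the
# population parallelises directly; for CPU-bound fitness functions
# a process pool would replace the thread pool.
with ThreadPoolExecutor(max_workers=4) as ex:
    scores = list(ex.map(fitness, population))

print(scores)  # prints [2, 2, 2, 2, 2, 2, 2, 2]
```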
See [62] and §3.4
for more arguments in favour of GBML. At the same time, there are
arguments against using GBML.
- Algorithmic Complexity
- GBML algorithms are typically more
complex than their non-evolutionary alternatives. This makes them
harder to implement, harder to analyse, and means there is less
theory to guide parameterisation and development of new algorithms.
- Increased Run-time
- GBML methods are generally much slower
than the non-evolutionary alternatives.
- Suitability for a Given Problem
- No single learning method
is a good choice for all problems. For one thing, the bias of a given
GBML method may be inappropriate for a given problem. GBML methods are
also particularly prone to prohibitive run-time (or set-up time), and
for many problems simpler and/or faster methods may
suffice. Furthermore, even where GBML methods perform better, the
improvements may be marginal.
See the SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis
of GBML in [224] for more.
T Kovacs
2011-03-12