The aim of the framework presented in this section is to structure the range of GBML systems into categories about which we can make more specific observations than we could about GBML systems as a whole. We present two categorisations. In the first (§2.1), GBML systems are classified by their role in learning; specifically, their application to i) sub-problems of machine learning, ii) learning itself, or iii) meta-learning. In the second categorisation (§2.2), GBML systems are classified by their high-level algorithmic approach as either Pittsburgh or Michigan systems. Following this, in §2.3 we briefly review ways in which learning and evolution interact, and in §2.4 we consider various models of GBML not covered earlier.
Before proceeding we note that evolution can output a huge range of phenotypes, from scalar values to complex learning agents, and that agents can be more or less plastic (i.e. able to adapt independently of evolution). For example, if evolution outputs a fixed hypothesis, that hypothesis has no plasticity. In contrast, evolution can output a neural net which, when trained with backpropagation, can learn a great deal. (In the latter approach evolution may specify the network structure while backpropagation adapts the network weights.)
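The latter division of labour can be sketched in code. The following is a minimal illustration, not taken from any particular GBML system: the "genotype" is just a hidden-layer width, evolution is reduced to evaluating a small population of candidate structures, and each candidate's weights are adapted by plain batch backpropagation on a toy regression task. All names and parameter choices here are our own assumptions.

```python
import numpy as np

# Hypothetical sketch: evolution searches over network *structure*
# (hidden-layer width), while backpropagation adapts the *weights*
# of each candidate network -- the plastic-phenotype case above.

rng = np.random.default_rng(0)

def make_net(hidden):
    """Random single-hidden-layer net: 1 input, `hidden` units, 1 output."""
    return {
        "W1": rng.normal(0, 0.5, (1, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.5, (hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h, h @ net["W2"] + net["b2"]

def train(net, X, y, lr=0.05, epochs=200):
    """Plain batch backpropagation; returns the final mean-squared error."""
    for _ in range(epochs):
        h, out = forward(net, X)
        err = out - y                        # gradient of MSE w.r.t. output
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ net["W2"].T) * (1 - h ** 2)   # tanh' = 1 - tanh^2
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        net["W2"] -= lr * gW2; net["b2"] -= lr * gb2
        net["W1"] -= lr * gW1; net["b1"] -= lr * gb1
    _, out = forward(net, X)
    return float(((out - y) ** 2).mean())

# Toy task: fit y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 32).reshape(-1, 1)
y = X ** 2

# "Evolution" over structure: evaluate each candidate width; fitness is
# the (negated) error *after* lifetime learning via backpropagation.
population = [1, 2, 4, 8]
fitness = {h: -train(make_net(h), X, y) for h in population}
best = max(fitness, key=fitness.get)
```

Note that fitness is measured after the phenotype has learned, so selection acts on the structure's capacity to learn rather than on any fixed hypothesis; a full GBML system would additionally apply variation operators (mutation, crossover) to the structures across generations.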