A Quick Presentation of Evolutionary Computation. Article · January 2010. DOI: 10.4018/978-1-60566-814-7.ch002. Author: Pierre Collet, University of Strasbourg.
A quick presentation of Evolutionary Computation

Pierre Collet
Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection, Université de Strasbourg – France
pierre.collet@unistra.fr

ABSTRACT

Evolutionary computation is an old field of computer science that started in the 1960s nearly simultaneously in different parts of the world. It is an optimization technique that mimics the principles of Darwinian evolution in order to find good solutions to intractable problems faster than a random search. Artificial Evolution is only one among many stochastic optimization methods, but recently developed hardware (General Purpose Graphic Processing Units, or GPGPUs) gives it a tremendous edge over all the other algorithms, because its inherently parallel nature can directly benefit from the difficult-to-use Single Instruction Multiple Data (SIMD) parallel architecture of these cheap, yet very powerful cards.

INTRODUCTION AND HISTORY

The development of evolutionary algorithms almost dates back to the dark ages of computers. To put everything in perspective, Computer Science really started when John von Neumann designed the EDVAC (Electronic Discrete Variable Automatic Computer) in 1945, but the first prototype was actually implemented in 1949 with Wilkes' EDSAC (Electronic Delay Storage Automatic Calculator). Then, for a while, the only commercially available machines used valves and were therefore not that reliable (the IBM 650 in 1953). A quantum leap was made when transistors became available around 1960 and, finally, Integrated Circuits in 1964.
By that time, evolutionary computation had seen about ten independent beginnings in Australia, the United States and Europe, starting in 1953, traced in David Fogel's excellent Fossil Record (Fogel, 1998): Alex Fraser had evolved binary strings using crossovers (Fraser, 1957), Friedberg had already thought of self-programming computers through mutations (Friedberg, 1958; Friedberg, Dunham, & North, 1958), and Friedman of how evolution could be digitally simulated (Friedman, 1959). However, the main evolutionary trends that survived are: Evolutionary Strategies (continuous optimization), by Rechenberg and Schwefel, best described in Rechenberg (1973) and Schwefel (1995); Genetic Algorithms, by Holland in the United States (Michigan), later popularised by Goldberg (Holland, 1975; Goldberg, 1989); Evolutionary Programming, by Lawrence Fogel (Fogel, Owens, & Walsh, 1966); and Genetic Programming, by Cramer (1985), later developed by John Koza (1992).
Evolutionary computation cannot, therefore, be seen as a recent development of computer science, or even be classified as artificial intelligence, which is a different concept that can also be traced back to the mid-1950s, with John McCarthy and many others. However, until the principles of evolutionary computation were clearly understood, these techniques required more computer power than was available before the beginning of the 1990s. Thus, although evolutionary computation really started in the late 1960s, it only came of age when computers had enough power to make it competitive with other (posterior) stochastic optimization paradigms such as simulated annealing (Kirkpatrick, 1983) or Tabu Search (Glover, 1977, 1989, 1990). Now that the field is mature, a second drastic change is taking place with the advent of General Purpose Graphic Processing Units (GPGPUs), massively parallel cards developed with the billions of dollars of the gaming industry. Announced for the first quarter of 2010, NVidia's GeForce GTX395 card, based on 40nm Fermi chips, should deliver 5 TeraFlops for less than $1000. This tremendous power is directly usable by evolutionary programs, which share the very same parallel workflow as the pixel and vertex shaders for which these cards have been designed.

SHORT PRESENTATION OF THE EVOLUTIONARY COMPUTATION PARADIGM

The general idea comes from the observation that animals and plants are very well adapted to their environment. Back in 1859, Charles Darwin came up with an explanation for this, called natural selection, which is now widely accepted (Darwin, 1859) and shown at work in a wonderful recent book by Neil Shubin (2008): Your Inner Fish. The rationale is that individuals that are not well adapted to their environment do not survive long enough to reproduce, or have a lower chance of reproducing than other individuals of the same species that have acquired beneficial traits through variations during the reproduction stage.
Adaptation to the environment is also called fitness. Artificial evolution grossly copies these natural mechanisms in order to optimise solutions to difficult problems. All optimisation techniques based on Darwinian principles are de facto members of the evolutionary computation paradigm, even though a distinction must be made between two different kinds of algorithms: ``standard'' evolutionary algorithms evolve a fixed string of bits or reals that is passed to an evaluation function that assesses the ``fitness'' of the individual, while genetic programming evolves individuals that are evaluated on a number of fitness cases. This distinction may seem tenuous, but one will see at the end of this chapter that it is important.

A UNIFIED EVOLUTIONARY ALGORITHM
Kenneth DeJong has been giving a GECCO tutorial on the unification of evolutionary algorithms for several years now, and has come up with a recent book on the subject (DeJong, 2005). Indeed, the previously quoted trends (Evolutionary Strategies, Genetic Algorithms, Evolutionary Programming, Genetic Programming) all share the same principles copied from natural selection. Rather than describing each algorithm, this chapter will describe a generic and complete version that can emulate virtually any paradigm, depending on the chosen parameters. We will mainly focus on ``standard'' evolutionary algorithms. Genetic programming will be briefly evoked where necessary.

Representation of individuals

Due to the similarities between artificial evolution and the natural evolution that was its source of inspiration, a good part of the vocabulary was borrowed from biology. In artificial evolution, a potential solution to a problem is called an individual. Choosing a correct representation to implement individuals is an essential step, trivial for some kinds of problems and much less so for others. The American trend (genetic algorithms) advocates using a representation that is as generic as possible, i.e. a bit string (even to code real values). The German trend (evolutionary strategies), which was designed to optimise continuous problems, advocates using real variables. Genetic Programming evolves programs and functions that are typically (but not exclusively) implemented as trees. Although using bitstrings makes sense for combinatorial problems or for theoretical studies, representing real values with bits, while feasible, has many drawbacks (Hinterding, Gielewski, & Peachey, 1995). It seems much more reasonable to use an appropriate representation, tailored to the problem at hand.
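The two ``standard'' representations above can be sketched in a few lines of code. This is only an illustrative sketch (the function names are ours, not from any particular library): a genetic-algorithm-style individual as a fixed-length bit string, and an evolutionary-strategies-style individual as a fixed-length vector of reals.

```python
import random

def random_bitstring(n):
    """GA-style genome: a fixed-length string of bits
    (generic enough to encode anything, even real values)."""
    return [random.randint(0, 1) for _ in range(n)]

def random_real_vector(n, lo=-5.0, hi=5.0):
    """ES-style genome: a fixed-length vector of reals,
    suited to continuous optimisation problems."""
    return [random.uniform(lo, hi) for _ in range(n)]

# Two randomly initialised individuals:
ga_individual = random_bitstring(16)   # e.g. [1, 0, 0, 1, ...]
es_individual = random_real_vector(4)  # e.g. [0.42, -3.1, ...]
```

A genetic-programming individual would instead be a tree of function and terminal nodes, which does not fit in a fixed-length array; this is one concrete reason why the representation must be tailored to the problem at hand.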
If one tries to optimise a recipe for French crêpes (pancakes) that are made using flour, milk, eggs and salt, a reasonable encoding for an individual can be four ``genes,'' namely:

(float cupsFlour, float pintMilk, int nbEggs, int pinchSalt)

This example will be used throughout this chapter because, even though it is very simple and easy to grasp, it is complete enough to explain most of the problems encountered in artificial evolution. For instance, in this case, the fitness function will consist of measuring the width of the smile of the person who tastes the crêpes. This makes it easy to understand the essential point that individuals are not evaluated on their genotype (i.e. the ingredients) but on their phenotype (the cooked crêpe). In many problems, the relationship between genotype and phenotype is not very clear, and there can be some intercorrelation between genes. In the crêpes example, one can understand that the salt ingredient is independent of the others. If the crêpe tastes too salty, the problem is simple to solve: put less salt in the next experiment. However, an essential characteristic of crêpes is that the texture of the batter should be such that it is both liquid enough to be poured into the pan, and thick enough so that, when cooked, a crêpe becomes solid enough to be eaten with your fingers (the most enjoyable way to eat them). The solution for this problem is not as easy as
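The crêpe genome above can be written down directly. In the sketch below, the gene bounds and the fitness formula are purely illustrative assumptions (the real fitness is a taster's smile, which no program can compute); the point of the sketch is the genotype/phenotype distinction: the fitness function first ``cooks'' the ingredients into a phenotypic trait (batter consistency) and judges that, not the raw genes.

```python
import random
from collections import namedtuple

# The four genes named in the text; value bounds are illustrative assumptions.
Crepe = namedtuple("Crepe", ["cupsFlour", "pintMilk", "nbEggs", "pinchSalt"])

def random_crepe():
    """Randomly initialise one individual (one candidate recipe)."""
    return Crepe(cupsFlour=random.uniform(0.5, 3.0),
                 pintMilk=random.uniform(0.25, 2.0),
                 nbEggs=random.randint(1, 6),
                 pinchSalt=random.randint(0, 5))

def fitness(ind):
    """Hypothetical stand-in for the taster's smile.
    The genotype (ingredients) is mapped to a phenotypic trait
    (batter consistency = flour/milk ratio); the intercorrelated
    flour and milk genes are judged together, while salt is
    penalised independently. Higher is better, 0.0 is a perfect crêpe."""
    consistency = ind.cupsFlour / max(ind.pintMilk, 1e-9)
    return -abs(consistency - 1.5) - 0.2 * abs(ind.pinchSalt - 2)
```

Note how the flour and milk genes only matter through their ratio, so neither can be tuned in isolation, whereas the salt term can be fixed on its own, exactly as described in the text.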