Just think of an OPTIMIZATION problem as a black box. A large black box. As large as, for example, a Coca-Cola vending machine. Now, we know nothing about the inner workings of this box, but we can see that there are some regulators to play with, and of course we know that we want to get a bottle of the real thing... Putting this everyday problem into a mathematical model, we proceed as follows: (1) we label all the regulators with x and a number starting from 1; the result is a vector x, i.e. (x_1,...,x_n), where n is the number of visible regulators. (2) We must find an objective function; in this case it's obvious: we want to get k bottles of the real thing, where k equals 1. [Some might want a "greater than or equal" here, but we restricted ourselves to the visible regulators (we all know that sometimes a "kick in the right place" gets us more than one, but I have no idea how to put this mathematically...)] (3) Thus, in the language some mathematicians prefer to speak in: f(x) = k = 1. So, what we have here is a maximization problem presented in a form we know from some boring calculus lessons, and we also know that there are at least a dozen utterly uninteresting techniques to solve problems presented this way...

What can we do in order to solve this problem? We can either try to gain more knowledge about the interior of the black box or exploit what we already know about it. If the objective function turns out to be smooth and differentiable, analytical methods will produce the exact solution. If this turns out to be impossible, we might resort to the brute-force method of enumerating the entire SEARCH SPACE. But with the number of possibilities growing exponentially in n, the number of dimensions (inputs), this method becomes infeasible even for low-dimensional spaces.

Consequently, mathematicians have developed theories for certain kinds of problems, leading to specialized OPTIMIZATION procedures. These algorithms perform well if the black box fulfils their respective prerequisites. For example, Dantzig's simplex algorithm (Dantzig 66) probably represents the best known multidimensional method capable of efficiently finding the global optimum of a linear, hence convex, objective function in a SEARCH SPACE limited by linear constraints. (A USENET FAQ on linear programming is maintained by John W. Gregory of Cray Research, Inc. Try to get your hands on "linear-programming-faq" (and "nonlinear-programming-faq"), posted monthly to sci.op-research; both make interesting reading.)

Gradient strategies are no longer tied to these linear worlds, but they smooth their world by exploiting the objective function's first partial derivatives, which one has to supply in advance. These algorithms therefore rely on a locally linear internal model of the black box. Newton strategies additionally require the second partial derivatives, thus building a quadratic internal model. Quasi-Newton, conjugate gradient, and variable metric strategies approximate this information during the search. The deterministic strategies mentioned so far cannot cope with deteriorations of the objective function value, so the search will stop if anticipated improvements no longer occur. In a multimodal ENVIRONMENT these algorithms move "uphill" from their respective starting points; hence, they can only converge to the nearest local optimum. Newton-Raphson methods might even diverge if there is a discrepancy between their internal assumptions and reality. But, of course, these methods turn out to be superior if a given task matches their requirements.
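To see why starting points matter, here is a minimal sketch in Python (a toy illustration of my own, not part of the original FAQ text) of the black-box setting: a made-up multimodal objective f and a simple "uphill" search that, like the deterministic strategies above, accepts only improvements and therefore stops at the nearest local optimum.

    import random
    from math import exp

    def f(x):
        # Made-up multimodal "black box": a local peak at x = 0
        # (height 1) and the global optimum at x = 4 (height 2).
        return exp(-x**2) + 2.0 * exp(-(x - 4.0)**2)

    def hill_climb(x, step=0.1, trials=1000):
        # Move "uphill" from the starting point: accept a random
        # neighbour only if it improves the objective.
        for _ in range(trials):
            candidate = x + random.uniform(-step, step)
            if f(candidate) > f(x):
                x = candidate
        return x

    print(hill_climb(-1.0))  # gets stuck near the local optimum at 0
    print(hill_climb(3.0))   # finds the global optimum near 4

Started on the wrong slope, the search never crosses the valley between the two peaks; replacing the random neighbour step with derivative information changes the convergence speed, not this basic limitation.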
The polyhedron strategy, pattern search, and rotating coordinate search should also be mentioned here: they do not rely on derivatives, and they represent robust non-linear OPTIMIZATION algorithms (Schwefel 81). When dealing with technical OPTIMIZATION problems, one will rarely be able to write down the objective function in closed form; we often need a SIMULATION model in order to grasp reality. In general, one cannot even expect these models to behave smoothly, so derivatives do not exist. That is why OPTIMIZATION algorithms that can successfully deal with black-box-type situations have been developed. This increased applicability is of course paid for by a loss of "convergence velocity" compared to algorithms specially designed for the given problem. Furthermore, the guarantee of finding the global optimum no longer exists!

But why turn to nature when looking for more powerful algorithms? In the attempt to create tools for various purposes, mankind has copied, more often instinctively than ingeniously, solutions invented by nature. Nowadays, one can prove in some cases that certain forms or structures are not only well adapted to their ENVIRONMENT but have even reached the optimum (Rosen 67). This is due to the fact that the laws of nature have remained stable during the last 3.5 billion years. For instance, at branching points the measured ratio of the diameters in a system of blood vessels comes close to the theoretical optimum provided by the laws of fluid dynamics, 2^(-1/3). This, of course, only represents a limited, engineering point of view on nature: in general, nature performs adaptation, not optimization.

The idea of imitating basic principles of natural processes for optimum-seeking procedures emerged more than three decades ago (cf Q10.3). Although these algorithms have proven to be robust and direct OPTIMIZATION tools, it is only in the last five years that they have caught the researchers' attention. This is due to the fact that many people still look at organic EVOLUTION as a giant-sized game of dice, ignoring that this model of evolution cannot have worked: a human germ cell comprises approximately 50,000 GENEs, each of which consists of about 300 triplets of nucleic bases. Although triplets of the four existing bases encode only 20 different amino acids, that still leaves 20^15,000,000 (there are 50,000 x 300 = 15,000,000 triplet positions), i.e. circa 10^19,500,000, different GENOTYPEs, which would have had to be tested in only circa 10^17 seconds, the age of our planet. So, simply rolling the dice could not have produced the diversity of today's complex living systems.

Accordingly, taking random samples from the high-dimensional parameter space of an objective function in order to hit the global optimum must fail (Monte-Carlo search). But by looking at organic EVOLUTION as a cumulative, highly parallel sieving process, the results of which are passed on, slightly modified, into the next sieve, the amazing diversity and efficiency of life on earth no longer appears miraculous. When building a model, the point is to isolate the main mechanisms that have led to today's world and that have themselves been subjected to evolution. Inevitably, nature has come up with a mechanism allowing INDIVIDUALs of one SPECIES to exchange parts of their genetic information (RECOMBINATION or CROSSOVER), and thus to meet changing environmental conditions better.
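To make the sieving metaphor concrete, here is a minimal sketch in Python (again my own toy illustration; the objective, the parameter values, and the keep-the-better-half selection scheme are arbitrary assumptions, not taken from the references): a POPULATION of candidate vectors is sieved by selection, and the survivors pass slightly modified (mutated) and recombined copies into the next sieve.

    import random

    def fitness(x):
        # Toy objective: maximize -sum(x_i^2); the optimum is the origin.
        return -sum(xi * xi for xi in x)

    def recombine(a, b):
        # RECOMBINATION/CROSSOVER: each component comes from either parent.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(x, sigma=0.1):
        # Slight modification before entering the next sieve.
        return [xi + random.gauss(0.0, sigma) for xi in x]

    def evolve(n=5, pop_size=30, generations=200):
        pop = [[random.uniform(-5.0, 5.0) for _ in range(n)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # The sieve: keep the better half of the POPULATION...
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            # ...and refill it with mutated recombinants of the survivors.
            children = [mutate(recombine(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(best)  # all components should end up close to 0

Unlike the dice-rolling (Monte-Carlo) model dismissed above, every generation builds on the best results of the previous one, which is what makes the search cumulative rather than purely random.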
References

Dantzig, G.B. (1966) "Lineare Programmierung und Erweiterungen", Berlin: Springer. (Linear programming and extensions)

Kursawe, F. (1994) "Evolution strategies: Simple models of natural processes?", Revue Internationale de Systemique, France (to appear).

Rosen, R. (1967) "Optimality Principles in Biology", London: Butterworth.

Schwefel, H.-P. (1981) "Numerical Optimization of Computer Models", Chichester: Wiley.

------------------------------

End of ai-faq/genetic/part3
***************************