Genetic Algorithms Digest Tuesday, October 18, 1994 Volume 8 : Issue 40
- Send submissions (articles) to GA-List@AIC.NRL.NAVY.MIL
- Send administrative (subscribe, unsubscribe, change of address, etc.)
requests to GA-List-Request@AIC.NRL.NAVY.MIL
- Do not send email to gadistr@aic.nrl.navy.mil
- For GA code, papers, conference announcements, GA-List back issues, etc.,
please use our anonymous ftp archive: FTP.AIC.NRL.NAVY.MIL [192.26.18.68]
Information is in /pub/galist/FTP
Today's Topics:
- Seeding populations (Re: v8n38,39)
- Evolvable Fitness Formula (Re: v8n39)
- Dynamic Hill Climbing - Is it really that good? (Re: v8n30-32,35)
- Announcement of paper
- Tenure Track Faculty Position
----------------------------------------------------------------------
****************************************************************************
CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)
GAs in Image Processing and Vision Colloquium, Savoy Place (v8n16) Oct 20, 94
AI'94 Workshop on Evol Comp, Armidale, NSW, Australia (v8n15) Nov 22, 94
ACM SAC '95 Symposium on Applied Computing, Tennessee (v8n20) Feb 26-28, 95
EP95 4th Ann Conf on Evolutionary Programming, San Diego,CA(v8n6) Mar 1-4, 95
ICTS95 3rd Intl Conf on Telecommunications, Tennessee (v8n21) Mar 16-19, 95
ICANNGA95 Intl Conf on Artificial NNs and GAs, France (v8n10) Apr 18-21, 95
IAS-95 Intelligent Adaptive Systems, Melbourne Beach FL (v8n32) Apr 26, 95
CSCS10 10th Intl Conf on CONTROL SYSTEMS & CS, Romania (v8n37) May 24-26, 95
ECAL95 3rd European Conf on Artificial Life, Granada, Spain(v8n5) Jun 4-6, 95
AAICG95 Symposium on Appl of AI in Geophysics, Boulder, CO (v8n39) Jul 12, 95
ICGA-95 Sixth Intl Conference on GAs, Pittsburgh PA (v8n32), July 15-20, 95
PASE95 Wrkshp Parallel Appl in Stat and Econ, Trier (v8n39), Aug 29-Sep 2, 95
ASI-AA-95 Practice and Future of Autonomous Agents (v8n19) Sep 23-Oct 1, 95
ICEC '95 IEEE Intl Conf on Evol. Computing, Perth (v8n36) Nov 29-Dec 1, 95
(Send announcements of other activities to GA-List@aic.nrl.navy.mil)
****************************************************************************
------------------------------
From: Peter Hancock (RF)
Date: Wed, 12 Oct 94 11:09:27 BST
Subject: Seeding populations (Re: v8n38, v8n39)
In v8n39 Inman Harvey replied to a query from Bill Langdon in v8n38
about seeding populations from one problem to a slightly different
one. I have also used the "gradually get harder" technique that Inman
describes, with success (e.g. "Pruning neural nets by genetic algorithm",
in Artificial Neural Networks 2, Aleksander and Taylor (Eds), 1992).
I gradually added random values to the seed weights for a net until
the GA was able to produce a net that trained from random values. I
would recommend it as a method for problems that are just too hard to
be done in one go. There is, of course, no guarantee that there will
be a route in parameter space from the easy problem to the hard one,
but from my experience it's worth a bash. However, I'm not sure it
is quite what Bill Langdon asked: his problem is merely different, not
necessarily harder.
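The staged seeding described above might be sketched roughly as follows. This is a hypothetical toy, not code from the 1992 paper: the (1+1)-style hill-climbing GA, the noise schedule, the population size, and the names `seeded_run`, `evolve`, and `mutate` are all illustrative assumptions.

```python
import random

def mutate(ind, sigma=0.1):
    """Gaussian perturbation of a weight vector."""
    return [w + random.gauss(0.0, sigma) for w in ind]

def evolve(population, fitness, generations=50):
    """Minimal (1+1)-style step per individual: keep whichever of
    parent and mutant is fitter."""
    for _ in range(generations):
        population = [max(ind, mutate(ind), key=fitness)
                      for ind in population]
    return population

def seeded_run(seed_weights, fitness,
               noise_levels=(0.1, 0.5, 1.0), pop_size=20):
    """'Gradually get harder': each stage re-seeds the population with
    progressively noisier copies of a known-good solution, until the
    final stage is close to a fully random start."""
    pop = None
    for noise in noise_levels:
        pop = [[w + random.gauss(0.0, noise) for w in seed_weights]
               for _ in range(pop_size)]
        pop = evolve(pop, fitness)
    return pop
```

The point of the staging is that each run only has to bridge a small gap in parameter space, rather than solving the hard problem in one go.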
Peter Hancock
University of Stirling
------------------------------
From: henrik@caio.irmkant.rm.cnr.it
Date: Thu, 13 Oct 1994 20:04:48 -0500
Subject: Evolvable Fitness Formula (Re: v8n39)
In response to the request in GA Digest v8n39 for pointers to work on
dynamic fitness functions:
EVOLVABLE FITNESS FORMULA.
I propose the concept of a fitness formula as a property of each
individual in the GA. The individuals in the GA generate offspring
as a function of the degree to which they satisfy a criterion of
fitness, i.e. a fitness formula.
From a biological point of view, the fitness formula can be argued
to be an evolvable trait of organisms, and given a particular environment
individuals can be shown to evolve behaviors appropriate to both the
environment and the fitness formula.
In almost all simulations using GAs the fitness formula is fixed and
decided by the researcher. But a moment of reflection suggests that in
biological reality the fitness formula is a trait of organisms.
More precisely, the fitness formula summarizes a number of properties
of a particular species of organisms related to their nutritional needs,
to the mechanisms and processes in their bodies which extract energy
from ingested materials, etc. Like all other traits of organisms, the
fitness formula can evolve. In fact, we can interpret the fitness
formula of a particular species of organisms as an inherited property
of that species of organisms. If the fitness formula varies (slightly)
from one individual to another, is inherited, and is subject to random
mutation, we can study its evolution in a population of organisms as we
may study any other trait. Furthermore, fitness formulae can change
during the life of an organism as any other developmental trait.
To further convince ourselves that what is called the fitness formula
is an evolvable trait of individuals we should consider the notion of
ecological niche. The notion of ecological niche designates the
environmental resources from which a given species of organisms
extracts the energy needed for survival and reproduction. An ecological
niche includes properties of the environment (e.g. what types of
elements it contains) but also properties of the individuals living
in that environment, such as sensory apparatus, behavior, and food
processing mechanism, that can be expressed by the fitness formula.
If any of these properties changes it should be expected that the other
properties will change as well. In other words, the various components
of an ecological niche co-evolve.
The argument for an evolvable fitness formula has been elaborated
further in Lund, H. H. & Parisi, D. "Simulations with an Evolvable
Fitness Formula" Technical Report PCIA-1-94, C.N.R., Rome, 1994, in
which we also introduced the concept of a limited evolvable fitness
formula, and through simulations showed that the various components of
the ecological niche (sensory apparatus, behavior, and fitness formula)
actually are co-adapted and co-evolve.
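The idea of a per-individual, heritable fitness formula can be illustrated with a toy simulation. The following sketch is hypothetical and is not the Lund & Parisi simulation: the two-food-type environment, the dictionary genome, and the names `make_individual`, `energy`, and `step` are all illustrative assumptions. Each individual carries a behavior gene (a food preference) and its own fitness formula (weights on the two food types); offspring numbers are proportional to the energy each individual extracts under its OWN formula, and both genes mutate.

```python
import random

def clip(v):
    return min(1.0, max(0.0, v))

def make_individual():
    """Genome = behavior gene + the individual's own fitness formula."""
    return {"pref": random.random(),        # prob. of choosing food A
            "formula": [random.random(),    # energy weight of food A
                        random.random()]}   # energy weight of food B

def energy(ind, n_meals=20):
    """Energy gathered, scored under the individual's OWN formula."""
    eaten_a = sum(random.random() < ind["pref"] for _ in range(n_meals))
    eaten_b = n_meals - eaten_a
    wa, wb = ind["formula"]
    return wa * eaten_a + wb * eaten_b

def step(pop, mut=0.05):
    """One generation: offspring in proportion to self-assessed energy;
    both the behavior gene and the fitness formula are inherited with
    Gaussian mutation, so the formula itself evolves."""
    scores = [energy(i) for i in pop]
    parents = random.choices(pop, weights=scores, k=len(pop))
    return [{"pref": clip(p["pref"] + random.gauss(0.0, mut)),
             "formula": [clip(w + random.gauss(0.0, mut))
                         for w in p["formula"]]}
            for p in parents]
```

In a setup like this, preference and formula are free to co-adapt: lineages whose formula values a food type tend to evolve a preference for it, and vice versa.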
A general implication of the simulations with an evolvable fitness formula
is that they allow us to see applications of GAs in a different light.
The classical view of GAs is that they are methods for developing systems
that maximize fitness. This is why the fitness formula is typically fixed
and chosen by the researcher as the starting point of a simulation.
But if one adopts an artificial life framework and applies GAs to
populations of ecological neural networks, i.e. networks which behave in
and interact with an environment, things begin to look different. The
goal of the GA may cease to be that of maximizing a given fitness formula
and may become that of developing systems that exhibit a certain type of
behavior in a particular environment. If this is the goal, then the
fitness formula ceases to be fixed a priori and becomes an
independent variable which can be manipulated in order to obtain the
desired behavior.
However, this is still looking at GAs from an engineering point of
view, as tools for obtaining some desired result. If GAs are viewed
as models of natural biological phenomena, then it is not clear that
nature has some desired result to obtain - a particular behavior or
the maximization of some function. In the biological world all is change
and all components of an evolving system tend to co-evolve. In this
perspective the fitness formula becomes a trait of organisms like all
other traits. More precisely, in the context of ALife simulations, the
fitness formula represents some property of the mechanisms and
processes in organisms that extract energy from ingested materials.
Fitness formulae are different in different species of organisms and
in different individuals of the same species; they are inherited (or
have a genetically inherited component) and are subject to mutations
and to recombination in sexually reproducing organisms. Therefore,
they evolve. In the paper mentioned above, we have made a first
attempt to analyze how fitness formulae co-evolve with behavior
and how the particular sensory apparatus constrains this co-evolution.
Two of the papers on the evolvable fitness formula can be obtained by
anonymous ftp:
Simulations with an Evolvable Fitness Formula
Lund, H. H. & Parisi, D.
Abstract.
The concept of a fitness formula as a property of an organism is proposed.
In artificial life simulations with organisms living in an environment, the
fitness formula can be interpreted as the ability of organisms to extract
energy from potential food sources distributed in the environment. In
simulations where the goal of the genetic algorithm is that of developing
systems which exhibit a certain type of behavior in a particular environment,
the fitness formula becomes an independent variable which can be manipulated
in order to obtain the desired behavior. The fitness formula can be viewed
as an evolvable trait of organisms, and therefore not fixed and decided
by the researcher. Simulations with fixed and evolvable fitness formulae
show that the fitness formula, the sensory apparatus, and the behavior of
organisms may co-evolve and be co-adapted.
ftp://kant.irmkant.rm.cnr.it/pub/econets/lund.fitform.ps.Z
Pre-adaptation to Dynamic Environments
Lund, H. H. & Parisi, D.
Abstract.
Populations of artificial neural networks with individual evolvable fitness
formulae are shown to be pre-adapted to environments different from those in
which they evolved. An analysis of the neural network units explains the
behavioral strategies, i.e. the specialization to specific ecological niches,
and how the organisms are able to make sudden changes in ecological niche
preference. Pre-adaptation to new habitats is also found in nature.
Parallels between the emergence of speciation modeled by sympatric speciation
via habitat specialization in natural organisms and the change in ecological
niche preference in populations of artificial neural networks are drawn.
ftp://kant.irmkant.rm.cnr.it/pub/econets/lund.preadapt.ps.Z
Henrik Hautop Lund e-mail: henrik@caio.irmkant.rm.cnr.it
National Research Council
15, Viale Marx
00137 Rome, Italy
------------------------------
From: andy.keane@eng.ox.ac.uk (Andy Keane)
Date: Fri, 14 Oct 94 09:13:51 BST
Subject: Dynamic Hill Climbing - Is it really that good?
There have been some interesting exchanges (GA-List v8n30-32,35)
on DHC [1,2,3] recently. Here I report my own brief study in this
area. One of my activities is maintaining a diverse library of optimization
methods for use on a mixture of real and test problems. I have thus built
a DHC routine using Yuret's local optimizer code and the pseudo-code in
his master's thesis.
It seemed clear to me from reading [1,2,3] that the hype surrounding DHC
was likely to be rather over the top, and this is indeed what I found!
That is not to say that DHC does not work; it is quite good on some
problems. It is just in no way a competitive alternative to a GA on
really hard multi-modal problems. It seems from my studies that it is
the degree of multi-modality that is crucial to such assessments.
DHC obviously tries to find peaks in a function. The more peaks the
problem has, the more it must find to have a good chance of locating the
global best, particularly if the global best is close to other,
poorer peaks (imagine a mountain surrounded by foothills in otherwise
flat terrain: DHC will find the foothills, will be directed AWAY from
them by its diversity strategy, and so might never penetrate to the mountain).
Clearly the number of local searches, and thus trials, needed will be directly
proportional to the number of peaks in the function.
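The mountain-and-foothills picture can be made concrete with a toy 1-d landscape. This is an illustrative sketch using a plain random-restart hill climber, not DHC and not the v8n16 test function; the function `foothills` and the thresholds are assumptions chosen only to exhibit the scaling argument.

```python
import math
import random

def foothills(x):
    """Toy 1-d landscape: foothills of height 0.4 everywhere, plus
    one narrow mountain of height 0.8 near x = 0."""
    return max(0.4 * abs(math.sin(3.0 * x)),
               0.8 * math.exp(-50.0 * x * x))

def hill_climb(f, x, step=0.01, iters=200):
    """Crude stochastic local search: accept any improving move."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

def restarts_to_mountain(f, threshold=0.7, max_restarts=1000):
    """Count random restarts until one climb tops the foothills.
    Most restarts land in a foothill basin and never exceed 0.4."""
    for n in range(1, max_restarts + 1):
        x = hill_climb(f, random.uniform(-5.0, 5.0))
        if f(x) > threshold:
            return n
    return max_restarts
```

Because the mountain's basin of attraction covers only a few percent of the search range, the expected restart count grows in step with the number of competing foothill basins, which is exactly the scaling problem described above.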
Moreover, any local search method, such as that in the references,
starts to consume lots of trials as the number of dimensions rises
(my experiments with THEIR code show around 40 trials for a 2-d problem,
rising to 1,000 on a 50-d problem, even with a threshold value of 0.5%
of the variable ranges). It is perhaps also worth noting that the local
search routine in references [1-3] is by no means as innovative as some
people seem to think. It is just a variant on the well-known Hooke &
Jeeves method that has been around for more than 30 years [4]. It is
surprising not to find this work referred to in [1-3] since it is so
popular among the classical search community [5].
So if the number of local maxima also rises with the number of dimensions,
as it mostly does in hard problems, DHC runs into real problems.
I recently posted such a problem to the GA Digest (see v8n16) which, in its
50-d version, is a tough challenge, since there are many millions of peaks in
this function, most of them having heights of around 0.4 and many, many
fewer having double that height. My GA [6] gets to good solutions (0.75)
after 20,000 trials and 0.8 by 140,000 trials. David Schaffer tried CHC [7]
on this and got about the same results (he reached slightly better results in
the end, but with slower ascent). I have also tried Grefenstette's GENESIS v5.0
on this, and it gets only to about 0.5 after 20,000 trials and 0.6 after
140,000.
Using DHC on this problem allows the low grade peaks of height 0.4 to be found
quite quickly. However, as I expected, it never gets beyond this, simply
finding more and more of these moderately good solutions. This is much
better than a simple hill climber will do without restarts, slightly better
than simulated annealing with a general purpose cooling schedule and also
faster to the foothills than a GA. However, it is massively outperformed
by modern GAs and easily overtaken by quite old and traditional ones when
scaling the heights. Hence my opening remarks in this note.
Nonetheless, I have added DHC to my library now and will compare it against
the 40 or so other methods I have on new problems as they come up. Time
will tell whether it changes the way people do optimization.
I would thus agree with Matthew Hobbs that the titles of [1] and [3] constitute
an element of misrepresentation. There IS a strong element
of strawman in comparing DHC with GAs on "relatively simple"
problems that GAs are known to perform poorly on.
At the end of [2], the authors seek challenges to find a continuous
optimization problem on which GAs do better! I hope this article
satisfies them.
[1] D Yuret & M de la Maza: "Dynamic Hill Climbing:
Overcoming the limitations of optimization techniques".
In Proc. of the 2nd Turkish Symposium on AI and Artificial
Neural Networks, 254-260, 1993.
[2] M de la Maza & D Yuret: "Dynamic hill climbing".
AI Expert 9(3):26-31, March 1994.
[3] D Yuret: "From Genetic Algorithms to Efficient
Optimization", submitted for M.Sc. thesis at MIT, May 1994.
[4] R Hooke & T A Jeeves "Direct search of numerical and
statistical problems" Westinghouse Research labs, Scientific Paper, 1960.
[5] J N Siddall "Optimal engineering design", Marcel Dekker, 1982.
[6] A J Keane "Experience with optimizers in structural design",
Conference on adaptive computing in engineering design and control, Plymouth,
UK, Sept 94.
[7] L J Eshelman "The CHC adaptive search algorithm: how to have safe
search when engaging in nontraditional genetic recombination", FOGA 90.
NB: [6] above can be supplied by the author, see v8n28.
Andy Keane
Department Of Engineering Science
Oxford University
Parks Road
Oxford OX1 3PJ
U.K.
------------------------------
From: Ari Juels
Date: Fri, 14 Oct 1994 18:51:04 -0700
Subject: Announcement of paper
Stochastic Hillclimbing as a Baseline Method for Evaluating
Genetic Algorithms
By Ari Juels and Martin Wattenberg
(UC Berkeley)
We investigate the effectiveness of stochastic hillclimbing as
a baseline for evaluating the performance of genetic algorithms (GAs)
as combinatorial function optimizers. In particular, we address four
problems to which GAs have been applied in the literature: the maximum
cut problem, Koza's 11-multiplexer problem, MDAP (the Multiprocessor
Document Allocation Problem), and the jobshop problem. We demonstrate
that simple stochastic hillclimbing methods are able to achieve
results comparable or superior to those obtained by the GAs designed
to address these four problems. We further illustrate, in the case of
the jobshop problem, how insights obtained in the formulation of a
stochastic hillclimbing algorithm can lead to improvements in the
encoding used by a GA.
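For readers unfamiliar with the baseline, a bit-flip stochastic hillclimber of the broad kind used in such comparisons can be sketched as below. This is a generic illustrative sketch, not the specific algorithm of the technical report; the name `stochastic_hillclimb` and the one-bit-per-step neighborhood are assumptions.

```python
import random

def stochastic_hillclimb(fitness, n_bits, iters=2000):
    """Generic bit-flip stochastic hillclimber on bitstrings:
    flip one random bit and keep the change whenever it does not
    decrease fitness."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best = fitness(x)
    for _ in range(iters):
        i = random.randrange(n_bits)
        x[i] ^= 1                    # flip one random bit
        f = fitness(x)
        if f >= best:
            best = f                 # keep non-worsening flips
        else:
            x[i] ^= 1                # revert worsening flips
    return x, best
```

On an easy separable problem such as OneMax (`fitness=sum`) this converges quickly; the interest of the paper above is that such simple climbers also hold their own on several problems from the GA literature.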
The paper is filed as UC Berkeley Computer Science Division
Technical Report csd-94-834, and available by anonymous ftp at
tr-ftp.cs.berkeley.edu in the directory
pub/tech-reports/csd/csd-94-834.
Thank you very much,
Ari Juels
------------------------------
From: ERKENBRACKD@central.edu
Date: Wed, 12 Oct 1994 8:16:10 -0500 (CDT)
Subject: TENURE TRACK FACULTY POSITION
TENURE TRACK FACULTY POSITION
Central College (Pella, IA) offers a full-time nine-month appointment
beginning 1 Sept 95 in plant genetics; salary and rank are open, depending on
the successful applicant's qualifications and experience.
Qualifications: PhD preferred; ABD considered. This position is for a plant
scientist with a laboratory/greenhouse orientation. Graduate work in
population genetics, plant physiology and botany is expected; experience with
lower plants helpful.
Responsibilities include: teaching introductory biology, introductory &
population genetics and Evolution; designing and teaching a new course in Plant
Physiology; assisting seniors with research projects; and participating in
departmental seminars.
Review of applications will commence 7 Nov 94. Central College is an AA/EOE
employer. We actively seek and encourage applications from women and
minorities.
To apply, send C.V., transcripts and three letters of recommendation BY
CONVENTIONAL MAIL to Dr. Virginia Coombs, Dean of the College, Central
College, Pella IA 50219 USA. No electronic applications, please, but internet
requests for the full two-page position description will be honored;
the description itself will be sent by conventional mail.
Other requests for information may be sent to Dr. D. E. ErkenBrack, chair,
Biology Dept
------------------------------
End of Genetic Algorithms Digest
******************************