Genetic Algorithms Digest    Thursday, February 8, 1996    Volume 10 : Issue 7

- Do NOT send email or reply to gadistr@aic.nrl.navy.mil (GA List Moderator)
- Send submissions (articles) to GA-List@AIC.NRL.NAVY.MIL
- Send administrative (subscribe, unsubscribe, change of address, etc.)
  requests to GA-List-Request@AIC.NRL.NAVY.MIL
******************************************************************************
- You can access back issues, GA code, conference announcements, etc., either
  through the WWW at URL http://www.aic.nrl.navy.mil/galist/ or through
  anonymous ftp at ftp.aic.nrl.navy.mil [192.26.18.68] in /pub/galist.
******************************************************************************

Today's Topics:
     - speedup definition for parallel GAs
     - conference announcement
     - MSc Machine Learning and Adaptive Computing
     - PGAPack V1.0 release
     - 2 postdocs required at ATR's Brain Builder Group

----------------------------------------------------------------------

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

SAC96    Symposium on Applied Computing GA & Opt Track, PA (v9n33) Feb 17-19, 96
Appl of Fuzzy Technologies in Business and Industry, Germany (v9n54) Feb 29, 96
EP96     5th Conf on Evol Programming, San Diego, CA (v9n18)   Feb 29-Mar 3, 96
CIFEr96  Comp Intelligence for Financial Eng, NY City (v9n53)    Mar 24-26, 96
AAAI-96  Spring Symposium Series, Stanford University, CA (v9n44) Mar 25-27, 96
ACEDC96  Adaptive Computing in Eng. Design & Control, UK (v9n28)  Mar 26-28, 96
SOCO96   Intl Symposia w/ Workshops/Tutorials, Reading, UK (v9n35) Mar 26-28, 96
AISB96   Workshop on Evolutionary Computing, Sussex, UK (v9n54)     Apr 1-2, 96
SECTAM96 AI Techniques in Eng and Mechanics, Tuscaloosa (v9n46)   Apr 14-16, 96
ALifeV   Artificial Life Conference, Nara, Japan (v9n45)          May 16-18, 96
ICEC96   IEEE Intl Conf on Evol Comp, Nagoya, Japan (v9n18)       May 20-22, 96
ISRAM96  Session on Evol Algs in Robotics, Montpellier (v9n46)    May 27-30, 96
ROSYCS   Romanian Symposium on Computer Science, Romania (v10n4) May 30-Jun 1, 96
EvCA96   Evol Comp and Its Applications, Moscow, Russia (v9n59)   Jun 24-27, 96
MENDEL96 2nd Intl Mendel Conference on GAs, Brno, Czech (v9n61)   Jun 26-28, 96
IPMU96   Granada, Spain (v9n31)                                     Jul 1-5, 96
ICML96   Intl Conf on Machine Learning, Bari, Italy (v9n54)         Jul 3-6, 96
GP96     Genetic Programming Conference, Stanford, CA (v9n9)      Jul 28-31, 96
FOGA4    Foundations of Genetic Algorithms, San Diego, CA (v9n47)   Aug 3-5, 96
AAAI96   Intl Wrkshp Intelligent Adaptive Agents, Portland (v10n4)  Aug 4-8, 96
2NWGA    2nd Nordic Wrkshp on Genetic Algorithms, Finland (v9n56) Aug 19-25, 96
EUFIT96  Intelligent Techniques and Soft Computing, Germany (v9n60)  Sep 2-5, 96
SAB96    From Animals to Animats, Cape Cod, Massachusetts (v9n31)  Sep 9-13, 96
WCNN96   Session on Evolutionary Algorithms, San Diego (v9n62)    Sep 15-20, 96
PPSN96   Parallel Problem Solving from Nature, Berlin (v9n17)     Sep 22-27, 96
ICGI96   Colloquium on Grammatical Inference, Montpellier (v9n45) Sep 25-27, 96

(Send announcements of other activities to GA-List@aic.nrl.navy.mil)

------------------------------

From: ivan@irsip.na.cnr.it (Ivan de Falco)
Date: Tue, 6 Feb 1996 12:04:55 +0100
Subject: speedup definition for parallel GAs

Dear GA users,

this is my second attempt at defining a common basis for evaluating the
performance of parallel genetic algorithms. This is very important, since I
see from the papers that everybody follows his own way.
Suppose we are trying to solve a given problem, whichever you may think of,
and that we perform many experiments, both sequential and parallel, as a
function of the population size. Suppose we have obtained the results
reported in the following table, where the numbers are of course averaged
over runs with different seeds. (So a first question, Q1, arises:

Q1: What is the minimum number of runs to perform so that each of these
numbers can be considered meaningful? 3, 5, 10, 20, 50, 100 or other? How
many runs do you perform?)

Suppose that the problem under examination is a minimisation, that the
global best value is 100, and that we use a coarse-grained model for the
parallel GA based on migration. If the best is reached, the convergence time
is reported; otherwise, the maximum execution time needed to perform all the
evaluations allowed (let us say 100,000).

POP_SIZE    FINAL_VALUE    TIME IN SEC.    NUMBER_OF_EVALS

SEQUENTIAL RUNS
   20          120            1000            100,000
   50          100             850             85,000
  100          100             500             40,000
  200          100             700             70,000
  500          110            1000            100,000

PARALLEL RUNS
  2x50         100             375             75,000
  4x5          130             250            100,000
  4x12         115             250            100,000
  4x20         110             250            100,000
  4x25         100             225             90,000
  4x50         100             100             40,000
  4x100        100             163             65,000
  4x125        100             175             70,000
  4x200        100             200             80,000
  4x500        105             250            100,000
  5x20         105             200            100,000
  10x10        110             100            100,000

Which of these runs should I relate in order to obtain the speedup S of my
parallel program?
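Q1 amounts to asking when the averaged figures in a table like this are statistically stable. One common yardstick (my suggestion, not from the message; the run times below are invented for illustration) is the standard error of the mean, which shrinks with the square root of the number of runs:

```python
import statistics

# Hypothetical convergence times (seconds) from 10 runs with different seeds.
times = [510, 480, 530, 495, 505, 470, 520, 490, 500, 515]

mean = statistics.mean(times)
# Standard error of the mean: sample stdev / sqrt(number of runs).
sem = statistics.stdev(times) / len(times) ** 0.5

# Rough 95% confidence half-width (normal approximation, ~1.96 * SEM).
half_width = 1.96 * sem

print(f"mean = {mean:.1f} s, 95% CI ~ +/- {half_width:.1f} s")
```

Under this view there is no single magic run count: one keeps adding runs until the confidence half-width is small relative to the differences between table entries one wants to compare.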
It is obvious that I can follow several different ways:

1. Since the best sequential run is for Npop=100, I investigate a global
   parallel population of 100 elements, splitting it into as many parts as
   there are processors available (2x50, 4x25, 5x20, 10x10, ..., or even,
   with some approximation or slight modification, 3x33, 6x16, ...), so
   that I start from the best serial population; in this case, for 4
   processors, I have S_4 = 500/225 ~ 2.2.

2. I can start by determining the best parallel population size for a
   given number of processors, say 4; I find that in this case it is 4x50,
   so I compare it against the sequential run with population 4x50 = 200;
   thus S_4 = 700/100 = 7.0.

3. The serial and parallel populations need not be related in any way, the
   only important thing being the convergence times to the best; in this
   case I compare Tpar of 4x50 against Tseq of 100 elements, giving
   S_4 = 500/100 = 5.0. This holds for the speedup definition, which must
   take into account only the time factors, while everything related to
   the number of evaluations should be captured by another parameter I
   call the genetic efficiency: the ratio of the number of evaluations of
   the sequential version, Nev_seq, to the number of evaluations of the
   parallel version, Nev_par (the two versions being the same ones used to
   compute the speedup).

It is evident from the above that, for the solution of the same problem
with the same GA, I can obtain at least three different speedup figures.
Does this seem sensible to you?

Finally (sorry for the lengthy email!) the question Q2:

Q2: Which is, in your opinion, the most correct way of proceeding?

Please note that, as regards choice 1 as it is reported in the literature,
people following this way rarely report why they size their sequential
population as they do (e.g.
Tanese, in ICGA89, starts solving Walsh polynomials with 256 elements, but
performs no analysis of the best sequential population; what would happen
if she used 20 elements, or 100, or 500?). Choice 2, as used by Muehlenbein
in "The parallel genetic algorithm as function optimizer" (1991), has on
the other hand the consequence that he cannot obtain any speedup measure
for his parallel run with 4 populations, since for the best parallel
population the corresponding sequential population does not reach the
global best; he can only define "crossed speedups" going from 4 to 8
processors.

The former two definitions are, in my opinion, somewhat misleading, since
they try to relate two objects (the two population sizes) which are not
related in the definition of speedup as usually given in computer science.
That definition is: the speedup of a parallel algorithm is the ratio of the
running time of the fastest sequential algorithm for solving the problem to
the running time of the parallel algorithm solving the same problem on P
processors (provided that the P processors and the one used sequentially
are identical). On this, see for example:

- F. Thomson Leighton, Introduction to Parallel Algorithms and
  Architectures, Morgan Kaufmann, 1992, p. 7;
- V. Kumar, A. Grama, A. Gupta and G. Karypis, Introduction to Parallel
  Computing: Design and Analysis of Algorithms, Benjamin/Cummings, 1993;
- B. Codenotti and M. Leoncini, Fondamenti di Calcolo Parallelo,
  Addison-Wesley, 1990, p. 82 (in Italian).

In my opinion, definition 3 is instead the best suited: speedup is just a
ratio of execution times, and we cannot expect it to carry more information
than it can contain, i.e. genetic efficiency. That must be given as a
separate parameter.
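For concreteness, the three candidate definitions can be computed directly from the timings in the table above. A minimal Python sketch (the dictionaries and variable names are mine; the numbers are the table's):

```python
# Times (seconds) copied from the table in the message above.
t_seq = {20: 1000, 50: 850, 100: 500, 200: 700, 500: 1000}  # sequential runs
t_par_4 = {25: 225, 50: 100, 100: 163, 125: 175, 200: 200}  # 4-processor runs,
                                                            # keyed by per-island pop

# Definition 1: split the best sequential population (100) over 4 processors,
# i.e. compare sequential pop 100 against the 4x25 parallel run.
s1 = t_seq[100] / t_par_4[100 // 4]

# Definition 2: best parallel configuration (4x50) vs the sequential run
# with the same total population (200).
s2 = t_seq[4 * 50] / t_par_4[50]

# Definition 3: best parallel time vs best sequential time; the population
# sizes are left unrelated.
s3 = t_seq[100] / t_par_4[50]

# Genetic efficiency for the definition-3 pair: sequential evaluations
# divided by parallel evaluations (both are 40,000 in the table).
gen_eff = 40_000 / 40_000

print(f"S_4: def1={s1:.2f}, def2={s2:.2f}, def3={s3:.2f}, gen_eff={gen_eff:.2f}")
```

The three ratios differ only in which sequential and parallel runs are paired, which is exactly the source of the ambiguity the message complains about.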
I would really appreciate answers from you on this subject, so that a
common way of performing experiments could be established, and everybody
could understand what another researcher means when he says "using 4
processors I have achieved a speedup of 3.75".

Ivanoe De Falco
Institute for Research on Parallel Information Systems (IRSIP)
National Research Council of Italy (CNR)
email: ivan@irsip.na.cnr.it

------------------------------

From: higuchi@etl.go.jp (Tetsuya Higuchi)
Date: Thu, 1 Feb 96 11:45:18 JST
Subject: conference announcement

Conference Announcement and Call for Papers

First International Conference on Evolvable Systems:
from Biology to Hardware (ICES96)

Tsukuba, Japan
October 7-8, 1996

General Chair:    Tetsuya Higuchi, Electrotechnical Laboratory, MITI
Program Cochairs: Daniel Mange, Swiss Federal Institute of Technology
                  Hiroaki Kitano, CSL, SONY
                  Hitoshi Iba, Electrotechnical Laboratory, MITI

Recently, the ideas of hardware evolution and self-reproducing hardware,
using concepts derived from natural systems, have received an increasing
amount of interest. This led to the first international workshop towards
evolvable hardware, held in Lausanne last year. Following the success of
that workshop, ICES96 will be held in Japan. Topics of particular interest
include, but are not limited to, evolvable hardware systems,
self-repairing/self-reproducing hardware, and evolutionary hardware design
methodology. The objective of the conference is to bring together
researchers who use biologically inspired concepts to implement real
systems in artificial intelligence, artificial life, robotics, VLSI design
and related fields.
Publisher: Springer-Verlag (Lecture Notes in Computer Science)

Important Dates:
  June 14th:      Submission of full papers
  August 1st:     Notification of acceptance
  September 6th:  Submission of revised accepted papers

Submissions should be sent to:
  Hitoshi Iba
  Electrotechnical Laboratory
  1-1-4, Umezono, Tsukuba
  Ibaraki 305, Japan

General queries to: higano@etl.go.jp

For details and the latest information, please refer to the WWW page:
http://www.etl.go.jp:8080/etl/kikou/ICES96

------------------------------

From: tcf@btc.uwe.ac.uk
Date: Fri, 2 Feb 96 14:27:36 GMT
Subject: MSc MACHINE LEARNING AND ADAPTIVE COMPUTING

University of the West of England, Bristol
Intelligent Computer Systems Centre

MSc MACHINE LEARNING AND ADAPTIVE COMPUTING

Applications are invited for entry in October 1996 to the Master of
Science (MSc) degree in Machine Learning and Adaptive Computing. The degree
can be taken in one year full-time or two years part-time. Students follow
compulsory taught courses in core techniques, options in application areas,
and a project leading to a Masters thesis.

INTRODUCTION

Over the next few years the new technologies of machine learning and
adaptive computing will enhance established computing technologies to
provide a new class of intelligent computer systems. This one-year
full-time MSc is aimed at good graduates in computing, mathematics and
engineering who wish to be involved in the research, development and
exploitation of machine learning and adaptive computing. UWE has
acknowledged expertise in the areas of evolutionary computing, machine
learning, neural networks, case-based reasoning and knowledge acquisition,
and their application to problems in business, science and engineering.

AIMS

To develop an understanding of machine learning and adaptive computing.
To demonstrate the applicability of these fields to current computing and
engineering problems. To provide the stimulus and opportunity for students
to make a significant contribution to research and/or development in these
fields.

OBJECTIVES

To produce students who know the range of applicability of machine
learning and adaptive computing, and which technique and which tool to use
for a particular problem. To provide students with experience in the
development and use of one technique on a range of problems, or the
solution of one problem using a mix of techniques.

ORGANISATION

1st TERM

Two weeks of induction and introduction, including a 2-3 day off-site
residential course. Five compulsory core modules:
  Machine Learning
  Case-Based Reasoning
  Evolutionary Computing
  Neural Networks
  Knowledge Acquisition

2nd TERM

A Case Study module: students work in groups of three or four to specify,
design, implement, test and document a small system or research study. A
compulsory module on Organisational and Business Aspects of IT Innovation.
Six applications modules (subject to demand) chosen from:
  Expert Databases
  Image Processing
  Distributed and Parallel Computing
  Smart Buildings
  Communication Systems
  Intelligent Control
  Multi-robot Cooperation
  Manufacturing Systems
  Computer Aided Design
Examinations.

3rd TERM

A substantial project, undertaken as part of an industrial placement or in
one of the University's research groups, leading to the submission of a
Masters thesis.

STAFF INVOLVED

Brian Carse, Dr. Nouhman Chalabi, Dr. Stephen Drewer, Dr. Terence Fogarty,
Terry Hill, Owen Holland, Dr. Ye Huang, Ian Johnson, Dr. Wei Zhong Liu,
Dr. Richard McClatchey, Dr. Roger Miles, Tony Pipe, Dr. Peter Sharpe,
Rob Stevens, Prof. Sam Waters

COMPANIES INVOLVED

Quadstone, British Aerospace, Hewlett Packard, British Telecom, PACT
Bristol, Integral Solutions Limited

APPLICATION PROCEDURE

Application forms are available from the course secretary:

Mrs.
Fay Coleman
Intelligent Computer Systems Centre
Faculty of Computer Studies and Mathematics
University of the West of England
Bristol BS16 1QY, UK
Tel: +44 (0)1179 656261 x3183
Email: fay@btc.uwe.ac.uk

Further information is available from the course director, Dr. Terry
Fogarty, at the same address.
Tel: +44 (0)1179 656261 x3179
Email: tcf@btc.uwe.ac.uk

------------------------------

From: David Levine
Date: Mon, 5 Feb 1996 14:39:24 -0600
Subject: PGAPack V1.0 Release

PGAPack is a general-purpose, data-structure-neutral, parallel genetic
algorithm library. It is intended to provide most capabilities desired in a
genetic algorithm library, in an integrated, seamless and portable manner.
Significant new features since the beta release include support for
parallel execution of the global model using MPI, a restart operator,
character strings as a native data type, and new mutation features.

Key features include:
* Callable from Fortran or C.
* Runs on uniprocessors, parallel computers, and workstation networks.
* Binary-, integer-, real-, and character-valued native data types.
* Full extensibility to support custom operators and new data types.
* Easy-to-use interface for novice and application users.
* Multiple levels of access for expert users.
* Parameterized population replacement.
* Multiple crossover, mutation, and selection operators.
* Easy integration of hill-climbing heuristics.
* Extensive debugging facilities.
* Large set of example problems.
* Detailed users guide.

PGAPack is available:
-- via anonymous ftp from ftp.mcs.anl.gov in file pub/pgapack/pgapack.tar.Z
-- via the web at http://www.mcs.anl.gov/pgapack.html

David Levine
levine@mcs.anl.gov
http://www.mcs.anl.gov/home/levine
MCS 221 C-216, Argonne National Laboratory, Argonne, Illinois 60439
(708)-252-6735  Fax: (708)-252-5986

------------------------------

From: Hugo de Garis
Date: Tue, 6 Feb 96 14:34:05 JST
Subject: 2 POSTDOCS REQUIRED AT ATR's BRAIN BUILDER GROUP

2 POSTDOCS REQUIRED AT ATR's BRAIN BUILDER GROUP,
EVOLUTIONARY SYSTEMS DEPT, KYOTO, JAPAN

ATR's Brain Builder Group, Kyoto, Japan, needs 2 US postdocs in the fields
of:

A) Mini-Robotics/Mechatronics (to build a robot kitten for ATR's
   artificial brain)
B) Evolvable Hardware (to apply genetic algorithms to FPGAs (Field
   Programmable Gate Arrays), e.g. Xilinx's XC6200)

ATR's Evolutionary Systems Dept (ESD) is (arguably) the strongest ALife
group in the world, with people such as Tom Ray (of Tierra fame) and Chris
Langton (father of ALife, and a regular ESD visitor and collaborator). One
of the highlights of the ESD is the CAM-Brain Project, which
builds/grows/evolves a billion-neuron artificial brain using
cellular-automata-based neural modules that will grow inside our cellular
automata machine (a hundred billion cell updates a second). This artificial
brain requires a body to house it, hence our group needs a body builder. If
you have extensive experience in building mini-robots with off-the-shelf
components, you might like to join our brain builder group. Ideally, we
want to grow/evolve our neural circuits directly in hardware, at hardware
speeds; we are therefore looking for a second postdoc in the new field of
evolvable hardware. If you have extensive experience in FPGA use, and are
familiar with genetic algorithms and neural networks, then please join us.
Applicants should have a PhD and be US citizens (or have a green card).
The working period is from 3 months to 2 years, preferably 2 years, granted
by the US NSF (National Science Foundation). The actual money comes from
the Japanese "Japan Foundation" and its Center for Global Partnership. The
grants cover salary, airfare and rent, but not research costs. Selection is
a two-phase process: first you must be recommended by us; then your
application has to be sent to the NSF in Washington DC by April 1, 1996.
(Applications are received twice yearly, April 1 and November 1.) The NSF
people say that if the candidate and the project are good, the odds of
selection are 50%. The probable starting date in Japan would be about
September 1996. If you do a good job, there is a possibility that you could
stay on at ATR on a long-term basis. Sabbatical leave grants are also
possible for more senior candidates.

The type of candidates we are looking for are big-egoed dreamers with
strong vision and creativity. The senior members of ESD are all pioneers.
If you are a CD type (i.e. competent dullard, meaning high in analytical
skills but lacking in vision and creativity), then this spot is not for
you.

If you are interested, please send your resume by email to:

Dr. Hugo de Garis
Brain Builder Group, Evolutionary Systems Dept.
ATR Human Information Processing Research Labs
2-2 Hikari-dai, Seika-cho, Soraku-gun, Kansai Science City
Kyoto-fu, 619-02, Japan
tel. +81 774 95 1079, fax. +81 774 59 1008
email. degaris@hip.atr.co.jp

For more information from the NSF, contact:
email. info@nsf.gov
tel. 703 306 1234 or 703 306 0090
web. http://www.nsf.gov/

If you have friends who might be interested, please forward this to them.

------------------------------

End of Genetic Algorithms Digest
******************************