Melanie Mitchell, Stephanie Forrest,
Genetic Algorithms as a form of Artificial Life.
Abstract: This document reviews the history and current
scope of research on genetic algorithms in artificial life, using
illustrative examples in which the genetic algorithm is used to
study how learning and evolution interact, and to model
ecosystems, immune systems, cognitive systems, and social
systems.
Stephanie Forrest, Brenda Javornik, Robert E. Smith, Alan S. Perelson,
Using Genetic Algorithms to Explore Patterns in the Human Immune
System.
Abstract: This document describes an immune system based
on a universe of binary strings. The model is directed at
understanding the pattern recognition processes and learning that
take place at both the individual and species levels in the
immune system.
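A minimal sketch of the kind of binary-string matching such a model can rest on, assuming a simple complementary-bit score between an antibody string and an antigen string (the string length and scoring rule here are illustrative choices, not taken from the paper):

import random

STRING_LEN = 32  # illustrative string length, not from the paper

def random_bitstring(n=STRING_LEN):
    """Generate a random binary string representing an antibody or antigen."""
    return [random.randint(0, 1) for _ in range(n)]

def match_score(antibody, antigen):
    """Score a match as the number of complementary bit positions.

    A fully complementary pair is a perfect recognition event; partial
    matches get intermediate scores. This is one simple choice of rule.
    """
    return sum(1 for a, b in zip(antibody, antigen) if a != b)

# Example: score a population of antibodies against one antigen and keep
# the best recognizer, as a genetic algorithm's fitness evaluation might.
antigen = random_bitstring()
antibodies = [random_bitstring() for _ in range(50)]
best = max(antibodies, key=lambda ab: match_score(ab, antigen))
print(match_score(best, antigen), "of", STRING_LEN, "bits complementary")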
Edinburgh Parallel
Computing Center,
List of Genetic Algorithm Publications: FTP Site
Availability
Abstract: This paper contains a list showing the
availability of genetic-algorithm-related papers in PostScript
form at FTP sites.
Sushil J. Louis, Gregory J. E. Rawlins,
Predicting Convergence Time for Genetic Algorithms
Abstract: It is difficult to predict a genetic algorithm's
behavior on an arbitrary problem. Combining genetic algorithm
theory with practice, we use the average Hamming distance as a
syntactic metric to derive bounds on the time to convergence of
genetic algorithms. Analysis of a flat function provides worst-case
time complexity for static functions. Further, employing
linearly computable runtime information, we provide bounds on the
time beyond which progress is unlikely on arbitrary static
functions. As a byproduct, this analysis also provides
qualitative bounds by predicting average fitness.
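A minimal sketch of the syntactic metric referred to above: the average pairwise Hamming distance of a bit-string population, which shrinks toward zero as the population converges (population size and string length are illustrative):

from itertools import combinations
import random

def hamming(x, y):
    """Number of bit positions at which two strings differ."""
    return sum(a != b for a, b in zip(x, y))

def average_hamming_distance(population):
    """Average pairwise Hamming distance of a population of bit strings."""
    pairs = list(combinations(population, 2))
    return sum(hamming(x, y) for x, y in pairs) / len(pairs)

# A random population starts near L/2 average distance and approaches 0
# as the genetic algorithm converges.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
print(average_hamming_distance(population))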
Timothy Perkis,
Stack-Based Genetic Programming
Abstract: Some recent work in the field of Genetic
Programming has been concerned with finding optimum
representations for evolvable and efficient computer programs. In
this paper, the author describes a new GP system in which target
programs run on a stack-based virtual machine. The system is
shown to have certain advantages in terms of efficiency and
simplicity of implementation, and for certain classes of
problems, its effectiveness is shown to be comparable or superior
to current methods.
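As a rough illustration of what running a target program on a stack-based virtual machine can look like (the instruction set below is invented for the example, not taken from the paper), a linear program is a list of opcodes that push inputs or constants and combine values on a stack:

def run_stack_program(program, inputs):
    """Evaluate a linear GP program on a simple stack machine.

    Opcodes are either an input name, a numeric constant, or a binary
    operator that pops two values and pushes the result. Stack underflow
    is handled by ignoring the operator, a common GP convention.
    """
    stack = []
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
    }
    for op in program:
        if op in ops:
            if len(stack) >= 2:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[op](a, b))
        elif op in inputs:
            stack.append(inputs[op])
        else:
            stack.append(float(op))
    return stack[-1] if stack else 0.0

# Example: a program computing x*x + y for x=3, y=4 (prints 13.0).
print(run_stack_program(["x", "x", "mul", "y", "add"], {"x": 3.0, "y": 4.0}))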
David J. Montana,
Strongly Typed Genetic Programming
Abstract: Genetic programming is a powerful method for
automatically generating computer programs via the process of
natural selection [Koza 92]. However, it has the limitation known
as "closure", i.e. that all the variables, constants, arguments
for functions, and values returned from functions must be of the
same data type. To correct this deficiency, we introduce a
variation of genetic programming called "strongly typed" genetic
programming (STGP). In STGP, variables, constants, arguments, and
returned values can be of any data type with the provision that
the data type for each such value be specified beforehand. This
allows the initialization process and the genetic operators to
only generate parse trees such that the arguments of each
function in each tree have the required types. An extension to
STGP which makes it easier to use is the concept of generic
functions, which are not true strongly typed functions but rather
templates for classes of such functions. To illustrate STGP, we
present three examples involving vector and matrix manipulation:
(1) a basis representation problem (which can be constructed to
be deceptive by any reasonable definition of "deception"), (2)
the n-dimensional least-squares regression problem, and (3)
preliminary work on the Kalman filter.
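A minimal sketch of the core constraint STGP imposes: when a parse tree is grown, each function argument slot may only be filled by a subtree whose return type matches the declared argument type (the tiny type system and function set below are illustrative, not Montana's):

import random

# Illustrative typed primitive set: (name, argument types, return type).
FUNCTIONS = [
    ("vec_add", ("vector", "vector"), "vector"),
    ("scale",   ("scalar", "vector"), "vector"),
    ("dot",     ("vector", "vector"), "scalar"),
    ("mul",     ("scalar", "scalar"), "scalar"),
]
TERMINALS = {"vector": ["v1", "v2"], "scalar": ["x", "1.0"]}

def grow_typed_tree(return_type, depth):
    """Grow a random parse tree whose root returns `return_type`.

    Only functions whose return type matches are considered, and each
    argument is grown recursively with the required argument type, so
    every tree in the population is type-correct by construction.
    """
    candidates = [f for f in FUNCTIONS if f[2] == return_type]
    if depth == 0 or not candidates:
        return random.choice(TERMINALS[return_type])
    name, arg_types, _ = random.choice(candidates)
    return (name, [grow_typed_tree(t, depth - 1) for t in arg_types])

print(grow_typed_tree("scalar", depth=3))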
Ian Douglas,
Unarmed and Dangerous
Abstract: This article was published in a local electronic
magazine at the beginning of the year, and also posted to the
FidoNet virus echoes. I am posting it here as it has some
relevance to the debate about good and bad viruses.
D. Cliff, P. Husbands, I. Harvey,
Evolving Visually Guided Robots
Abstract: We have developed a methodology grounded in two
beliefs: that autonomous agents need visual processing
capabilities, and that the approach of hand-designing control
architectures for autonomous agents is likely to be superseded by
methods involving the artificial evolution of comparable
architectures. In this paper we present results which demonstrate
that neural-network control architectures can be evolved for an
accurate simulation model of a visually guided robot. The
simulation system involves detailed models of the physics of a
real robot built at Sussex; and the simulated vision involves
ray-tracing computer graphics, using models of optical systems
which could readily be constructed from discrete components. The
control-network architecture is entirely under genetic control,
as are parameters governing the optical system. Significantly, we
demonstrate that robust visually-guided control systems evolve
from evaluation functions which do not explicitly involve
monitoring visual input. The latter part of the paper discusses
work now under development, which allows us to engage in
long-term fundamental experiments aimed at thoroughly exploring
the possibilities of concurrently evolving control networks and
visual sensors for navigational tasks. This involves the
construction of specialized visual-robotic equipment which
eliminates the need for simulated sensing.
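A minimal sketch of the outer loop such work relies on: a genome encodes network and optics parameters, each genome is decoded and scored by running the simulated robot, and a genetic algorithm searches the genome space (the fitness function below is only a placeholder; in the paper the evaluation is behavioural and does not monitor visual input):

import random

GENOME_LEN = 20  # number of evolved network/optics parameters (illustrative)

def evaluate(genome):
    """Placeholder behavioural score; in the paper this would come from
    running the simulated visually guided robot with the decoded network."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1, scale=0.2):
    """Gaussian mutation applied to each parameter with a small probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=30, generations=50):
    """Simple generational GA over real-valued genomes with truncation selection."""
    pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=evaluate)

best = evolve()
print(round(evaluate(best), 3))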
Francis Wong, Pan Yong Tan,
Neural Networks And Genetic Algorithm For Economic
Forecasting
Abstract: This paper describes the applications of an
enhanced neural network and genetic algorithm to economic
forecasting. The proposed approach has several significant
advantages over conventional forecasting methods such as
regression and the Box-Jenkins methods. Apart from being simple
and fast in learning, a major advantage is that no assumptions
need to be made about the underlying function or model, since the
neural network is able to extract hidden information from the
historical data. In addition, the enhanced neural network offers
selective activation and training of neurons based on the
instantaneous causal relationship between the current set of
input training data and the output target. This causal
relationship is represented by the Accumulated Input Error (AIE)
indices, which are computed based on the accumulated errors
back-propagated to the input layers during training. The AIE
indices are used in the selection of neurons for activation and
training. Training time can be reduced significantly, especially
for large networks designed to capture temporal information.
Although neural networks represent a promising alternative for
forecasting, the problem of network design remains a bottleneck
that could impair widespread applications in practice. The
genetic algorithm is used to evolve optimal neural network
architectures automatically, thus eliminating the many pitfalls
associated with human engineering approaches. The proposed
concepts and design paradigm were tested on several real
applications, including the forecasting of GDP, air passenger
arrivals, and currency exchange rates.
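A minimal sketch of the bookkeeping the Accumulated Input Error idea suggests, assuming the error back-propagated to each input unit is simply accumulated over training examples and the inputs with the largest indices are kept active (the accumulation and selection rules here are guesses for illustration, not the authors' exact definitions):

import numpy as np

def accumulated_input_error(input_deltas):
    """Accumulate the magnitude of back-propagated error at each input unit.

    `input_deltas` is an (examples x inputs) array of error signals that
    reached the input layer during training; the AIE index of an input is
    taken here as the sum of absolute errors attributed to it.
    """
    return np.abs(input_deltas).sum(axis=0)

def select_active_inputs(aie_indices, k):
    """Pick the k inputs with the largest accumulated error for training."""
    return np.argsort(aie_indices)[::-1][:k]

# Example with random error signals for 8 inputs over 100 examples.
deltas = np.random.randn(100, 8)
aie = accumulated_input_error(deltas)
print(select_active_inputs(aie, k=3))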
E-G. Talbi, P. Bessiere,
A Parallel Genetic Algorithm For The Graph Partitioning
Problem
Abstract: Genetic algorithms are stochastic search and
optimization techniques which can be used for a wide range of
applications. This paper addresses the application of genetic
algorithms to the graph partitioning problem. Standard genetic
algorithms with large populations suffer from lack of efficiency
(quite high execution times). A massively parallel genetic
algorithm is proposed; an implementation on a SuperNode of
Transputers and results on various benchmarks are given.
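A minimal sketch of the objective such a genetic algorithm optimizes: an individual assigns each vertex to a part, and its cost combines the number of cut edges with a penalty for unbalanced parts (the encoding and penalty weight are illustrative choices, not the paper's):

def partition_cost(assignment, edges, balance_weight=1.0):
    """Cost of a two-way graph partition encoded as a 0/1 list per vertex.

    Counts edges whose endpoints fall in different parts and adds a
    penalty proportional to the size imbalance between the two parts.
    """
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    imbalance = abs(sum(assignment) - (len(assignment) - sum(assignment)))
    return cut + balance_weight * imbalance

# Example: a square graph split into {0,1} and {2,3} cuts two edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(partition_cost([0, 0, 1, 1], edges))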
I. Harvey, P. Husbands,
Issues in Evolutionary Robotics
Abstract: In this paper the authors propose and justify a
methodology for the development of the control systems, or
`cognitive architectures', of autonomous mobile robots. They
argue that the design by hand of such control systems becomes
prohibitively difficult as complexity increases. They discuss
an alternative approach, involving artificial evolution, where
the basic building blocks for cognitive architectures are
adaptive noise-tolerant dynamical neural networks, rather than
programs. These networks may be recurrent, and should operate in
real time. Evolution should be incremental, using an extended and
modified version of genetic algorithms. They finally propose
that, sooner rather than later, visual processing will be
required in order for robots to engage in non-trivial navigation
behaviours. Time constraints suggest that initial architecture
evaluations should be largely done in simulation. The pitfalls of
simulations compared with reality are discussed, together with
the importance of incorporating noise. To support these claims and
proposals, the authors present results from some preliminary experiments
in which robots that roam office-like environments are
evolved.
Michael Soegtrop, Henrik Klagges,
A Massively Parallel Neurocomputer
Abstract: This paper describes a SIMD massively parallel
digital neural network simulator - called GeNet for Generic
Network - which can evaluate large networks with a variety of
learning algorithms at high speed. A medium-size installation
with 256 physical nodes and 1 Gbyte of memory can sustain, for
example, 1.7 giga 16-bit connection crossings per second at network
sizes of 2 layers with 64K neurons each, a fan-in of 1K, and a randomly wired
topology. The neural network core operations are supported by
optimized and balanced computation and communication hardware
that sustains heavily pipelined processing. In addition to an
array of processing units with one global scalar (16 bit) bus,
the system is equipped with a ring-shifter (32 bit) and a
parallel (256 × 16 bit) vector bus that feeds a tree-shaped
global vector accumulator. This eases backward communication and
the calculation of scalar products of distributed vectors. The
VLIW-architecture is highly scalable. A prototype has been
cost-effectively implemented without custom VLSI chips.
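A minimal sketch of the operation such a tree-shaped accumulator supports: each node holds a slice of two distributed vectors, computes its local partial dot product, and the partials are summed pairwise up a tree (plain Python standing in for the hardware, purely for illustration):

def local_partial(slice_a, slice_b):
    """Partial dot product of the vector slices held by one node."""
    return sum(a * b for a, b in zip(slice_a, slice_b))

def tree_reduce(values):
    """Sum partial results pairwise, as a tree-shaped accumulator would."""
    while len(values) > 1:
        values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
    return values[0]

# Four nodes each holding a 2-element slice of two length-8 vectors.
a_slices = [[1, 2], [3, 4], [5, 6], [7, 8]]
b_slices = [[1, 1], [1, 1], [1, 1], [1, 1]]
partials = [local_partial(a, b) for a, b in zip(a_slices, b_slices)]
print(tree_reduce(partials))  # 36, the full dot product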
Thomas S. Ray,
Evolution, Ecology and Optimization of Digital
Organisms
Abstract: The work presented here aims to parallel the
second major event in the history of life, the origin of
diversity. Rather than attempting to create prebiotic conditions
from which life may emerge, this approach involves engineering
over the early history of life to design complex evolvable
organisms, and then attempting to create the conditions that will
set off a spontaneous evolutionary process of increasing
diversity and complexity of organisms.
Pierre Bessiere, Ali Chams, Traian Muntean,
A Virtual Machine Model For Artificial Neural Network
Programming
Abstract: This paper introduces the model of a virtual
machine for A.N.N. (Artificial Neural Networks). The context of
this work is a collaborative project to study new V.L.S.I.
implementations and new architectures for neuronal machines. The
work consists of the specification and a prototype implementation
of a description language for A.N.N., the associated virtual
machine, the compiler between them, and the compilers
mapping the virtual machine onto different highly parallel
computers. In this short paper we present the virtual machine
model which combines the features of various parallel programming
paradigms. Our model allows, in particular, the same A.N.N.
program to run on both synchronous and asynchronous types
of machines. In this framework a parallel architecture
(S.M.A.R.T.) and a dynamically reconfigurable parallel machine of
Transputers (SuperNode) are considered as target machines.
Egbert J. W. Boers, Herman Kuiper,
Biological metaphors and the Design of Modular Artificial Neural
Networks
Abstract: In this thesis, a method is proposed with which
good modular artificial neural network structures can be found
automatically using a computer program. A number of biological
metaphors are incorporated in the method. It will be argued that
modular artificial neural networks have a better performance than
their non-modular counterparts. The human brain can also be seen
as a modular neural network, and the proposed search method is
based on the natural process that resulted in the brain: Genetic
algorithms are used to imitate evolution, and L-systems are used
to model the kind of recipes nature uses in biological
growth.
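A minimal sketch of the L-system rewriting the thesis builds on: a set of production rules is applied in parallel to every symbol of a string, and repeated rewriting yields the kind of growth recipe that can later be interpreted as a network structure (the rules below are the classic algae example, not ones from the thesis):

def lsystem(axiom, rules, iterations):
    """Rewrite `axiom` by applying `rules` to every symbol in parallel."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, iterations=4))  # ABAABABA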
Thomas S. Ray,
Thoughts on the Synthesis of Life
Abstract: This paper discusses an approach to AL that
parallels the major event in the history of life, the origin of
diversity. Rather than attempting to create pre-biotic conditions
from which life may emerge, this approach involves engineering
over the first 3 billion years of life's history to design
complex evolvable artificial organisms, and attempting to create
the biological conditions that will set off a spontaneous
evolutionary process of increasing diversity and complexity of
organisms.
Jeffrey O. Kephart,
A Biologically Inspired Immune System for Computers
Abstract: Computer viruses are the first and only form of
artificial life to have had a measurable impact on society.
Currently, they are a relatively manageable nuisance. However,
two alarming trends are likely to make computer viruses a much
greater threat. First, the rate at which new viruses are being
written is high, and accelerating. Second, the trend towards
increasing interconnectivity and interoperability among computers
will enable computer viruses and worms to spread much more rapidly
than they do today. To address these problems, we have designed an
immune system for computers and computer networks that takes much
of its inspiration from nature. Like the vertebrate immune
system, our system develops antibodies to previously
unencountered computer viruses or worms and remembers them so as
to recognize and respond to them more quickly in the future. We
are careful to minimize the risk of an autoimmune response, in
which the immune system mistakenly identifies legitimate software
as being undesirable. We also employ nature's technique of
fighting self-replication with self-replication, which our
theoretical studies have shown to be highly effective. Many
components of the proposed immune system are already being used to
automate computer virus analysis in our laboratory, and we
anticipate that this technology will gradually be incorporated
into IBM's commercial anti-virus product during the next year or
two.
Stephanie Forrest, Alan Perelson,
Self-Nonself Discrimination in a Computer
Keywords: immune system, discrimination, intrusion
detection, virus
Abstract: The problem of protecting computer systems can
be viewed generally as the problem of learning to distinguish
self from other. We describe a method for change detection which
is based on the generation of T cells in the immune system.
Mathematical analysis reveals computational costs of the system,
and preliminary experiments illustrate how the method might be
applied to the problem of computer viruses.
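A minimal sketch of a change-detection scheme in the spirit described above, assuming detectors are random bit strings, candidates that match any "self" string are discarded, and the surviving detectors later flag data that matches them (the r-contiguous-bits matching rule and all parameters are illustrative assumptions):

import random

def matches(detector, string, r=4):
    """True if detector and string agree in at least r contiguous positions."""
    run = 0
    for a, b in zip(detector, string):
        run = run + 1 if a == b else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r=4):
    """Keep random candidate detectors that match no string in the self set."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [random.randint(0, 1) for _ in range(length)]
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def detect_change(detectors, string, r=4):
    """Flag a string as non-self if any detector matches it."""
    return any(matches(d, string, r) for d in detectors)

# Example: protect a small set of self strings, then test a novel string.
self_set = [[random.randint(0, 1) for _ in range(16)] for _ in range(5)]
detectors = generate_detectors(self_set, n_detectors=10, length=16, r=12)
print(detect_change(detectors, [random.randint(0, 1) for _ in range(16)], r=12))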
Eugene H. Spafford,
Computer Viruses as Artificial Life
Abstract: This paper begins with a description of how
computer viruses operate and their history, and of the various
ways computer viruses are structured. It examines how viruses meet
properties associated with life as defined by some researchers in
the area of artificial life and self-organizing systems. The
paper concludes with some comments directed towards the
definition of artificially "alive" systems and
experimentation.
Built by Mark Crosbie and Ivan Krsul.