Background

Understanding the workings of nature is a fundamental pursuit of most sciences. Thanks to its vast diversity, nature has offered us many examples and baffled us with its ingenuity.

It is striking that mechanisms which are in essence quite simple can give birth to intelligent beings like ourselves. While the basic behavior and function of a single neuron, or the process of evolution, can be understood with relative ease, understanding the intelligence they can produce has proved far more difficult.

The fact that we have the ability to consciously think about the world around us suggests that we have enormous (symbolic) processing capacities. Human understanding goes far beyond the learning behavior that most (if not all) living organisms exhibit. The ability to learn has served as a stepping stone towards comprehension. This ability itself is one that has developed over many generations, species and millennia. The general tendency is that higher species (which exhibit more complex behavior) are more recent products of evolution [MSS95].

The pursuit to understand nature has produced quantum physics, which in turn made possible innovations such as microprocessors. These microprocessors have proved to have an enormous, ever-growing processing capacity. While modern microprocessors are also increasingly complex, they too are based on a relatively simple concept: the transistor. In essence, a transistor is quite similar to a neuron. The way in which these two building blocks are organized, however, is quite different.

The dominant system architecture for microprocessors is `von Neumann', the common substrate silicon and the routing architecture VLSI1.1. The computational core of a microprocessor goes through a serial process of fetching and executing instructions at speeds of multiple GHz. A decade ago, clock frequencies were more than an order of magnitude lower, measured in MHz rather than GHz. Moreover, the number of transistors on a chip has increased exponentially, in accordance with Moore's famous law [Moo65]. The quantitative capacity of the physical structure (or physiology) of microchips is immense. Ray Kurzweil, AI researcher, inventor and analyst of technology trends, predicts that ``[s]upercomputers will achieve one human brain capacity by 2010, and personal computers will do so by about 2020'' [Kur99,Kur90].
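Moore's law is often stated quantitatively. In a common modern formulation (the original 1965 paper projected annual doubling, a period Moore later revised to roughly two years), the transistor count $N$ in year $t$ grows as

$N(t) \approx N(t_0) \cdot 2^{(t - t_0)/T}$,

where $t_0$ is a reference year and $T \approx 2$ years is the doubling period.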

Most models of brain organization consider nerve cells and their connections to be the brain's fundamental units of information processing. However, profoundly complex and intelligent activities occur within nerve cells [Ham03, pp. 9-10].

The first neural models, developed by the psychologist Rosenblatt, are highly similar to those developed in electrical engineering by Widrow and Hoff [Fau94, p. 23]. Today, artificial neural networks (ANNs) and evolution are approached from the field of computing, in addition to biology, psychology, complexity theory and information theory. This highlights the interdisciplinary nature of these biologically inspired fields.
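To make this concrete, the following is a minimal Python sketch of a Rosenblatt-style perceptron: a weighted sum followed by a hard threshold, trained with the perceptron learning rule. The learning rate, epoch count and the AND-function training data are illustrative assumptions, not taken from the sources cited above.

    def train_perceptron(samples, epochs=10, lr=0.1):
        """Train a single threshold unit with the perceptron learning rule."""
        n_inputs = len(samples[0][0])
        w = [0.0] * n_inputs   # one weight per input line
        b = 0.0                # bias (a movable threshold)
        for _ in range(epochs):
            for x, target in samples:
                # Weighted sum followed by a hard threshold, as in Rosenblatt's unit.
                output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
                error = target - output   # -1, 0 or +1
                # Weights change only when the unit misclassifies.
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b

    # Illustrative use: learn the (linearly separable) logical AND function.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train_perceptron(data)
    print(w, b)   # converged weights and bias implementing AND

The error-driven weight update is essentially the same rule Widrow and Hoff applied in their adaptive circuits, where the error was taken on the linear output rather than the thresholded one.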

The architecture of the human nervous system consists of an enormous number of neurons working in parallel. The frequency of `calculations' per neuron is very modest, at most 200 Hz [KW01, p. 129]1.2, but the brain makes up for this through parallelism: up to $10^{11}$ neurons are actively processing information at any moment [PWB87, p. 4]. Action potential propagation speeds range from 0.55 m/s up to 120 m/s1.3.
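Combining these figures yields a rough, back-of-the-envelope upper bound on the brain's aggregate rate of neural events (assuming, for the sake of the estimate, that every neuron fires at the maximal rate):

$10^{11}\ \mathrm{neurons} \times 200\ \mathrm{Hz} \approx 2 \times 10^{13}\ \mathrm{events/s}$.

Despite the slowness of each individual unit, massive parallelism thus yields an enormous aggregate throughput.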

A treatment of the application of biologically inspired principles to computing is necessarily multidisciplinary in nature. The questions addressed in this thesis could not be answered without drawing from a vast body of research spanning evolutionary biology, complexity theory, neurology, neurophysiology, evolutionary psychology1.4, theoretical and applied computer science, Artificial Intelligence (AI), robotics and electronic engineering. The creation of AI requires the insights of many of these fields.
