
You're reading from Hands-On Neuroevolution with Python.

Published: Dec 2019 · Publisher: Packt · ISBN-13: 9781838824914 · Edition: 1st · Reading level: Expert
Author: Iaroslav Omelianenko

Iaroslav Omelianenko served as a CTO and research director for more than a decade. He is an active member of the research community and has published several research papers on arXiv, ResearchGate, Preprints, and elsewhere. He started working in applied machine learning more than a decade ago, developing autonomous agents for mobile games. For the last five years, he has actively participated in research on applying deep machine learning methods to authentication, personal trait recognition, cooperative robotics, synthetic intelligence, and more. He is an active software developer and creates open source neuroevolution algorithm implementations in the Go language.

Overview of Neuroevolution Methods

The concept of artificial neural networks (ANNs) was inspired by the structure of the human brain. There was a strong belief that if we could imitate this intricate structure closely enough, we would be able to create artificial intelligence. We are still on the road to achieving this: although we can implement Narrow AI agents, we are still far from creating a General AI agent.

This chapter introduces you to the concept of ANNs and the two methods we can use to train them (gradient descent with error backpropagation, and neuroevolution) so that they learn to approximate an objective function. However, we will mainly focus on the neuroevolution-based family of algorithms. You will learn about the implementation of an evolutionary process inspired by natural evolution and become familiar with...

Evolutionary algorithms and neuroevolution-based methods

The term artificial neural network refers to a graph of nodes connected by links, where each link has a particular weight. A neural node acts as a kind of threshold operator, letting a signal pass only after a specific activation function has been applied; this loosely resembles the way neurons are organized in the brain. Typically, the ANN training process consists of selecting appropriate weight values for all the links within the network. Under the conditions established by the Universal Approximation Theorem, a feed-forward ANN with at least one hidden layer can approximate any continuous function on a compact domain to arbitrary precision, which is why ANNs are regarded as universal approximators.
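To make the graph-of-weighted-links picture concrete, here is a minimal sketch of such a network in plain Python: one hidden layer, sigmoid activations, and hand-picked weights that happen to compute XOR. The function names and weight values are illustrative choices, not code from this book.

```python
import math

def sigmoid(x):
    # Threshold-like activation: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """Forward pass through a one-hidden-layer network.

    w_hidden: one weight vector per hidden node; the last element
              of each vector is the bias weight.
    w_output: weight vector for the single output node.
    """
    extended = list(inputs) + [1.0]              # append the bias input
    hidden = [sigmoid(sum(w * x for w, x in zip(wv, extended)))
              for wv in w_hidden]
    hidden.append(1.0)                           # bias for the output layer
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Hand-picked weights that make the network compute XOR
w_hidden = [[ 20.0,  20.0, -10.0],   # OR-like hidden node
            [-20.0, -20.0,  30.0]]   # NAND-like hidden node
w_output = [20.0, 20.0, -30.0]       # AND of the two hidden nodes

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b], w_hidden, w_output)))
```

Training, whether by backpropagation or by neuroevolution, amounts to discovering weight values like these automatically instead of setting them by hand.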

For more information on the proof of the Universal Approximation Theorem, take a look at the following papers:

  • Cybenko, G. (1989) Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control...

NEAT algorithm overview

The NEAT method for evolving complex ANNs was designed to reduce the dimensionality of the parameter search space through gradual elaboration of the ANN's structure during evolution. The evolutionary process starts with a population of small, simple genomes (seeds) and gradually increases their complexity over generations.

The seed genomes have a very simple topology: only input, output, and bias neurons are expressed. No hidden nodes are included in the seed, which guarantees that the search for a solution starts in the lowest-dimensional parameter space (connection weights) possible. With each new generation, additional genes are introduced, expanding the solution search space by adding a dimension that previously did not exist. Thus, evolution begins its search in a small space that can be easily optimized and...
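A toy sketch of this idea follows: a seed genome wiring inputs and bias directly to outputs, and an add-node mutation that splits an existing connection, adding new genes (new search dimensions). This is a deliberately simplified illustration, not the NEAT implementation used later in the book; in particular, the innovation-number bookkeeping that real NEAT uses for crossover is reduced here to a bare global counter.

```python
import random

# Simplified NEAT-style genome: innovation numbers are a plain
# global counter, and only the add-node mutation is shown.
innovation = 0

def next_innovation():
    global innovation
    innovation += 1
    return innovation

def seed_genome(n_inputs, n_outputs):
    """Seed genome: inputs + bias wired directly to outputs, no hidden nodes."""
    nodes = ([('input', i) for i in range(n_inputs)]
             + [('bias', n_inputs)]
             + [('output', n_inputs + 1 + i) for i in range(n_outputs)])
    conns = []
    for _, src in nodes[:n_inputs + 1]:          # inputs and bias
        for _, dst in nodes[n_inputs + 1:]:      # outputs
            conns.append({'in': src, 'out': dst,
                          'weight': random.uniform(-1, 1),
                          'enabled': True, 'innov': next_innovation()})
    return {'nodes': nodes, 'conns': conns}

def add_node_mutation(genome):
    """Split a connection: disable it, insert a hidden node, and add two
    new connections. Each new gene is a new dimension of the search space."""
    conn = random.choice([c for c in genome['conns'] if c['enabled']])
    conn['enabled'] = False
    new_id = len(genome['nodes'])
    genome['nodes'].append(('hidden', new_id))
    genome['conns'].append({'in': conn['in'], 'out': new_id, 'weight': 1.0,
                            'enabled': True, 'innov': next_innovation()})
    genome['conns'].append({'in': new_id, 'out': conn['out'],
                            'weight': conn['weight'],
                            'enabled': True, 'innov': next_innovation()})

g = seed_genome(2, 1)
print(len(g['conns']))        # 3 connections: 2 inputs + bias -> output
add_node_mutation(g)
print(len(g['conns']))        # 5: the split one disabled, two new added
```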

Hypercube-based NEAT

Intelligence is a product of the brain, and the human brain as a structure is itself a product of natural evolution. This intricate structure evolved over millions of years, under pressure from harsh environments and in competition with other living beings for survival. The result is an extremely complex organ, with many layers, modules, and trillions of connections between neurons. The structure of the human brain is our guiding star, aiding our efforts to create artificial intelligence systems. But how can we address all the complexity of the human brain with our imperfect instruments?

By studying the human brain, neuroscientists have found that its spatial structure plays an essential role in all perceptual and cognitive tasks, from vision to abstract thinking. Many intricate geometric structures have been found...

Evolvable-Substrate HyperNEAT

The HyperNEAT method exploits the fact that geometric regularities of the natural world can be adequately represented by artificial neural networks whose nodes are placed at specific spatial locations. This gives neuroevolution significant benefits, allowing large-scale ANNs to be trained for high-dimensional problems, which was impossible with the ordinary NEAT algorithm. At the same time, although the HyperNEAT approach is inspired by the structure of the natural brain, it still lacks the plasticity of the natural evolutionary process. While it lets evolution elaborate a variety of connectivity patterns between network nodes, HyperNEAT imposes a hard constraint on where those nodes are placed: the experimenter must define the layout of the network nodes from the very beginning, and any incorrect assumption...
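The core HyperNEAT mechanism can be sketched briefly: a CPPN is queried with the coordinates of two substrate nodes and returns the weight of the connection between them, with weak weights pruned. In the sketch below the `cppn` function is a fixed toy stand-in (in real HyperNEAT it is itself an evolved network), and the substrate coordinates are arbitrary illustrative choices; note how the node layout must be fixed by the experimenter up front, which is exactly the limitation ES-HyperNEAT relaxes.

```python
import math
import itertools

def cppn(x1, y1, x2, y2):
    """Toy stand-in for an evolved CPPN: maps the coordinates of two
    substrate nodes to a connection weight between them."""
    return math.sin(x1 * x2) + math.cos(y1 - y2)

# Experimenter-defined substrate: node positions are fixed up front.
inputs  = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
outputs = [(0.5, 1.0)]

weights = {}
for (x1, y1), (x2, y2) in itertools.product(inputs, outputs):
    w = cppn(x1, y1, x2, y2)
    if abs(w) > 0.2:                 # express only sufficiently strong links
        weights[((x1, y1), (x2, y2))] = w
print(len(weights), "connections expressed")
```

Because connectivity is a function of geometry, the same small CPPN can paint weights onto a substrate of thousands of nodes, which is what lets HyperNEAT scale to large networks.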

Novelty Search optimization method

Most machine learning methods, including evolutionary algorithms, base their training on optimizing an objective function. The assumption underlying objective-function optimization is that the best way to improve a solver's performance is to reward it for getting closer to the goal. In most evolutionary algorithms, closeness to the goal is measured by the solver's fitness. An organism's performance is defined by the fitness function, a metaphor for the evolutionary pressure on the organism to adapt to its environment. According to this paradigm, the fittest organism is the one best adapted to its environment and best suited to finding a solution.

While direct fitness function optimization works well in many simple cases, for more complex tasks it often falls victim...
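In contrast, Novelty Search rewards behaving differently from what has been seen before rather than being close to the goal. A common way to score this, sketched below under the assumption that each individual's behavior is summarized as a point in a behavior space (for example, an agent's final position in a maze), is the mean distance to the k nearest behaviors in an archive; the function names here are illustrative, not the book's API.

```python
import math

def novelty(behavior, archive, k=3):
    """Novelty score: mean Euclidean distance to the k nearest
    behaviors seen so far, rather than distance to a goal."""
    if not archive:
        return float('inf')          # first individual is maximally novel
    dists = sorted(math.dist(behavior, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

# Behaviors: e.g., final (x, y) positions of a maze-navigating agent
archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
print(round(novelty((0.05, 0.05), archive), 3))   # near the cluster -> low
print(round(novelty((2.0, 2.0), archive), 3))     # far from anything -> high
```

Selecting for high novelty pushes the population to spread across behavior space, which is what lets the method escape the deceptive local optima that trap objective-based search.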

Summary

In this chapter, we began by discussing the different methods that are used to train artificial neural networks. We considered how traditional gradient descent-based methods differ from neuroevolution-based ones. Then, we presented one of the most popular neuroevolution algorithms (NEAT) and the two ways we can extend it (HyperNEAT and ES-HyperNEAT). Finally, we described the search optimization method (Novelty Search), which can find solutions to a variety of deceptive problems that cannot be solved by conventional objective-based search methods. Now, you are ready to put this knowledge into practice after setting up the necessary environment, which we will discuss in the next chapter.

In the next chapter, we will cover the libraries that are available so that we can experiment with neuroevolution in Python. We will also demonstrate how to set up a working environment...

Further reading
