Master’s thesis: Intra-Life Learning using parallel Neuroevolution

This post briefly explains what my master’s thesis is about. I wrote my thesis at the Chair of Database and Information Systems and the Chair of Practical Computer Science at the University of Rostock. In the following I first describe the overall goal of the thesis and then summarize the content of the individual chapters. If you would like to read the whole thesis, you can download it here (link will be added soon).

Work goals

The goal of my master’s thesis was to investigate to what extent neuroevolution (NE) algorithms can be used for intra-life learning. The field of neuroevolution deals with alternative training algorithms for neural networks, which can be used either in addition to or as a substitute for conventional learning algorithms such as backpropagation. Because these algorithms are inspired by natural evolution, one of their characteristics is that they are highly parallelizable. In this context I also examined how such algorithms can be executed on a computer cluster in order to reduce the wall-clock time of the individual computations.
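The basic idea behind using evolution instead of gradients – treating a network’s weights as a genome and improving it by mutation and selection – can be sketched in a few lines. The toy task, the tiny network and the (1+λ) strategy below are illustrative choices of mine, not taken from the thesis:

```python
import random

random.seed(0)

# Toy task: approximate XOR with a tiny 2-2-1 network whose nine weights
# are evolved by mutation and selection instead of gradient descent.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Two ReLU hidden units followed by a linear output unit.
    h1 = max(0.0, w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = max(0.0, w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def fitness(w):
    # Negative squared error over the dataset: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(generations=300, offspring=20, sigma=0.3):
    # (1 + lambda) evolution strategy: keep the parent unless an
    # offspring is at least as fit (elitist selection).
    parent = [random.gauss(0, 1) for _ in range(9)]
    best_fit = fitness(parent)
    for _ in range(generations):
        children = [[g + random.gauss(0, sigma) for g in parent]
                    for _ in range(offspring)]
        challenger = max(children, key=fitness)
        if fitness(challenger) >= best_fit:
            parent, best_fit = challenger, fitness(challenger)
    return parent, best_fit

best, fit = evolve()
print(round(-fit, 3))  # remaining squared error
```

Note that the loop never calls a gradient – only the fitness value is needed, which is what makes such algorithms easy to distribute.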

Neural Networks and their learning algorithms

In chapter 2 I introduced the topic of neural networks, i.e. what neural networks are, how they work and which architectures exist for them. In chapter 3 I then covered how neural networks learn: I first introduced selected topics from gradient-based optimization and analysis, and on that basis explained the backpropagation algorithm and its variants in further detail.
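To make the gradient-based learning idea concrete, here is a minimal, illustrative example of my own (not from the thesis): a single linear neuron trained by gradient descent, where the gradients come from the chain rule exactly as backpropagation derives them for deeper networks:

```python
# A single linear neuron y = w*x + b trained by gradient descent on
# pairs generated from the target function y = 2x + 1.
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(200):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y      # forward pass and error
        grad_w += 2 * err * x      # dL/dw of the squared error (chain rule)
        grad_b += 2 * err          # dL/db of the squared error
    w -= lr * grad_w / len(data)   # gradient descent step
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))    # converges towards w=2, b=1
```

For a network with hidden layers, backpropagation applies the same chain rule layer by layer, reusing the error terms of the layer above.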

Neuroevolution

In chapter 4 I presented the state of the art in the area of neuroevolution (NE). As already mentioned, NE algorithms can be used as a substitute for or in addition to conventional learning algorithms. I first explained the well-known NEAT algorithm in this context, before discussing how such algorithms can be parallelized. In addition I introduced topics such as diversity, novelty and indirect encodings, together with their corresponding algorithms. The approaches related to indirect encoding in particular become even more important in the further course of the work.
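The most common parallelization scheme rests on the observation that the fitness of each individual can be evaluated independently of all others. A hypothetical sketch of this pattern (the genome format and fitness function are placeholders of mine, not part of any NE library):

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(genome):
    # Stand-in fitness: a real NE system would decode the genome into a
    # network and score it on the task.  Best value 0.0 at genome [0.5, 0.5].
    return -sum((g - 0.5) ** 2 for g in genome)

def evaluate_population(population, workers=4):
    # Evaluations are independent, so they can run concurrently without any
    # coordination.  Threads keep this sketch self-contained; a production
    # system would use worker processes or cluster nodes instead.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))  # order is preserved

population = [[i / 10, 1 - i / 10] for i in range(11)]
scores = evaluate_population(population)
```

Selection and mutation then proceed on the collected scores as in the sequential algorithm, which is why this scheme scales to clusters so naturally.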

Intra-Life learning

Chapter 5 then dives deeper into the main topic of the thesis – intra-life learning. Because the term intra-life learning is only loosely defined in the literature, I came up with my own definition. In this context I first formalized what is meant by meta learning to give the reader a better understanding of the topic. Afterwards I subdivided intra-life learning into different scenarios. For each of these scenarios I identified a set of problems which have to be considered when doing intra-life learning. Based on these problems I constructed a comparison framework to evaluate individual algorithms in terms of their applicability to intra-life learning. The evaluation covered algorithms from the areas of meta learning, synaptic plasticity and neuromodulation. As a result I found that no algorithm meets all the requirements that are important in the context of intra-life learning.

NeRO – Neuromodulated Reuse of evolved Optimizers

Because no existing algorithm could meet all requirements of intra-life learning, I came up with my own solution called NeRO, which is presented in chapter 6. It is based on two main principles: the Baldwin effect and Complementary Learning Systems (CLS) theory. Accordingly, NeRO uses NE techniques as its optimizer (Baldwin effect), and its architecture is divided into a neocortex and a hippocampus component (CLS theory). These two components have different tasks (e.g. fast consolidation of new knowledge versus long-term storage) and each corresponds to a meta learning problem. The chapter also describes how the algorithm can be parallelized on a computer cluster with the help of Apache Spark.

Final Notes

First I want to mention that the NeRO algorithm is not implemented yet. It is only a concept of what an algorithm for intra-life learning could look like, based entirely on the insights of chapter 5. On the one hand, this is because the topic of the thesis was formulated very broadly, so that many areas had to be covered. On the other hand, the term intra-life learning is still not firmly defined in the literature, so that different learning scenarios can be understood under it. I therefore had to develop an independent definition that makes sense with regard to the different goals of the work. Moreover, none of the topics mentioned above had been covered in detail by the courses I had attended up to that point, so I had to investigate all of them completely on my own within the 20 weeks I had to write my thesis.
In the following blog posts I will describe the topics covered in the individual chapters in more detail. So if you are interested in neural networks and their learning algorithms, or in machine learning in general, feel free to read my other posts as well.
