A primer on Memetic Automatons

In the previous article, we discussed how we can leverage data to select memes. However, as we noted, data-driven meme selection still starts the search for a solution from scratch. What an algorithm learns during its run applies only to that particular problem and cannot be used to improve solutions to other, similar problems. This is not how humans work: we apply the experience gained from one problem to solve other similar problems. Accordingly, to realize Artificial General Intelligence (AGI), we need a way to transfer knowledge from one problem to another. This is what we try to accomplish with memetic automatons.

Memetic automatons mimic humans: they improve their intelligence over time by learning from previous experience and from their interactions with other automatons. Consequently, instead of starting from scratch, an Evolutionary Algorithm (EA) can use a meme produced either while solving a different but similar problem in the past or by other automatons working on similar problems at the same time.

Similarity between problems is essential for knowledge transfer to be effective. For instance, we may use an EA to find the fastest path between two nodes in a computer network. We may also use an EA to find the most energy-efficient path between the same two nodes. These two problems are similar, so knowledge transfer between them is likely to be profitable. On the other hand, knowledge from a solution to an antenna-design problem is unlikely to be useful to either of them.

Unified encoding in memetic automatons

A fundamental challenge in transferring knowledge between two different problems is how to represent that knowledge. In EAs, the solution to a problem is the population of individuals that the algorithm converged on, and these individuals may be represented by different genetic encoding schemes. So, to facilitate knowledge transfer, we should convert the genetic encoding used for each problem into a unified encoding that all problems can understand.

It is not possible to come up with a single unified encoding mechanism that suits all problems. However, random-key representation has shown significant promise in practice. In random-key representation, we use continuous values between 0 and 1 to represent an individual. For each problem, we then need an encoding scheme to convert its solutions into this unified code and a decoding scheme to convert the unified code back into problem-specific solutions.
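As a concrete example, consider a routing problem whose solutions are permutations of nodes. Below is a minimal sketch of random-key encoding and decoding in Python; the function names and the NumPy-based approach are my own illustration, not a prescribed implementation.

import numpy as np

rng = np.random.default_rng(seed=42)

def encode(tour, rng):
    # Draw one sorted random key per node, then place the keys so that
    # sorting the encoded vector reproduces the original visiting order.
    keys = np.sort(rng.random(len(tour)))
    encoded = np.empty(len(tour))
    encoded[tour] = keys
    return encoded

def decode(keys):
    # The argsort of the keys recovers the permutation (the tour).
    return np.argsort(keys)

tour = np.array([2, 0, 3, 1])  # visit node 2 first, then 0, 3, and 1
keys = encode(tour, rng)
assert np.array_equal(decode(keys), tour)

Any problem that can round-trip its solutions through such an encode/decode pair can publish them in the shared random-key space.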

Using probability distributions as memes

Once each individual has been encoded, the encoded individuals themselves can form a meme. However, carrying several individuals around as a single meme is not always convenient. The book Memetic Computation by Abhishek Gupta and Yew-Soon Ong proposes instead using a probability distribution constructed from the individuals as the meme.
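Here is a minimal sketch of that idea, assuming we summarize the encoded population with a multivariate Gaussian (one common density model; the exact choice of model is a design decision, not something this sketch takes from the book):

import numpy as np

def build_meme(encoded_population):
    # encoded_population: array of shape (num_individuals, num_dims),
    # with every value in [0, 1] thanks to the random-key encoding.
    mean = encoded_population.mean(axis=0)
    cov = np.cov(encoded_population, rowvar=False)
    cov += 1e-6 * np.eye(encoded_population.shape[1])  # keep cov well-conditioned
    return mean, cov  # this (mean, cov) pair is the meme we store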

We can store these probability distributions in a knowledge base and have the automatons pick the most suitable one. But how can an automaton know which meme is the most suitable? We can assign a weight to each meme and learn the weights over time, so that the most useful memes end up with the highest weights.

Along with the memes, the automaton also computes the probability distribution of the individuals in the current generation evolved by the EA. The weighted sum of all these distributions, the memes plus the current generation's distribution, forms a mixture distribution. The goal is to learn weight values such that individuals sampled from this mixture lie closer to the optimal individuals in the search space. The automaton learns these weights with the Expectation-Maximization (EM) algorithm; roughly speaking, EM adjusts the weights to maximize the likelihood of the current generation's individuals under the mixture. We omit the mathematical details to keep this article simple.
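Although we skip the mathematics, the mechanics are easy to sketch. Assuming every meme, as well as the current generation, is summarized as a Gaussian with fixed parameters, EM only has to learn the mixture weights: the E-step computes how responsible each component is for each individual, and the M-step sets each weight to that component's average responsibility. The code below is an illustrative sketch under those assumptions:

import numpy as np
from scipy.stats import multivariate_normal

def learn_mixture_weights(individuals, components, iterations=50):
    # individuals: (n, d) array of encoded individuals from the current generation.
    # components: list of (mean, cov) pairs with fixed parameters -- the memes
    # plus the distribution of the current generation itself.
    densities = np.column_stack(
        [multivariate_normal(mean, cov).pdf(individuals) for mean, cov in components]
    )  # shape (n, k): density of each individual under each component
    weights = np.full(len(components), 1.0 / len(components))  # start uniform
    for _ in range(iterations):
        # E-step: responsibility of each component for each individual.
        resp = densities * weights
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: each weight becomes the component's average responsibility.
        weights = resp.mean(axis=0)
    return weights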

Once the automaton has learned the weight values, it builds the mixture distribution as the weighted sum of the probability distributions from other problems (the memes) and the distribution of the current generation of individuals. The automaton then samples offspring from this mixture to produce the next generation.
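Sampling from the learned mixture is a two-step affair: pick a component in proportion to its weight, then draw from that component. A short sketch follows; the clipping back into [0, 1] is my assumption, and any repair scheme that keeps draws valid random keys would do.

import numpy as np

def sample_offspring(components, weights, num_offspring, rng):
    # Pick a mixture component per offspring, proportional to the learned weights.
    picks = rng.choice(len(components), size=num_offspring, p=weights)
    draws = np.array([rng.multivariate_normal(*components[c]) for c in picks])
    return np.clip(draws, 0.0, 1.0)  # keep the draws valid random-key vectors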

Adaptive Memetic Transfer Optimizer

Now, how do we integrate this automaton with an EA? Enter the Adaptive Memetic Transfer Optimizer (AMTO). AMTO takes an EA and runs the automaton alongside it. To that end, we set a transfer interval, which determines how often the automaton runs.

We follow the usual steps of an EA: initialize the population, calculate the fitness scores, select parents, perform crossover and mutation, and repeat until the termination condition is met. However, every time the transfer interval elapses, instead of proceeding with the EA steps, we run the automaton. The following pseudocode summarizes the algorithm.

Initialize the population pg (the subscript g denotes the generation; g is 0 at the beginning)

Repeat
    If the transfer interval has elapsed
        Encode the individuals in pg
        Build a mixture distribution from the memes and the probability distribution of pg by learning the weight values
        Sample new individuals from this mixture distribution
        Decode the sampled individuals to produce the offspring population o
    Else
        For each individual i in pg
            Calculate its fitness fi
        Select parents q from pg based on fi
        Perform crossover and mutation on q to produce the offspring population o
    Select the next generation pg+1 from o, q (not applicable when the transfer interval has elapsed), and (pg − q)
Until the termination condition is met
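For readers who prefer running code, here is a self-contained Python sketch of this loop on a toy continuous problem. Everything problem-specific is a stand-in: the objective, the memes, and the operator choices are illustrative, and survivor selection is simplified to plain elitism over parents and offspring combined.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
DIM, POP_SIZE, GENERATIONS, TRANSFER_INTERVAL = 5, 40, 60, 10

def fitness(x):
    # Toy objective: individuals closest to 0.5 in every dimension score highest.
    return -np.sum((x - 0.5) ** 2, axis=-1)

# Hypothetical memes: Gaussians assumed to have been learned on earlier problems.
memes = [(np.full(DIM, 0.5), 0.05 * np.eye(DIM)),
         (np.full(DIM, 0.3), 0.10 * np.eye(DIM))]

def learn_mixture_weights(individuals, components, iterations=50):
    # EM for the mixture weights only (see the sketch earlier in the article).
    densities = np.column_stack(
        [multivariate_normal(m, c).pdf(individuals) for m, c in components])
    weights = np.full(len(components), 1.0 / len(components))
    for _ in range(iterations):
        resp = densities * weights
        resp /= resp.sum(axis=1, keepdims=True)
        weights = resp.mean(axis=0)
    return weights

pop = rng.random((POP_SIZE, DIM))  # individuals are already random-key vectors
for gen in range(1, GENERATIONS + 1):
    if gen % TRANSFER_INTERVAL == 0:
        # Transfer step: mix the memes with the current population's Gaussian.
        components = memes + [(pop.mean(axis=0),
                               np.cov(pop, rowvar=False) + 1e-6 * np.eye(DIM))]
        weights = learn_mixture_weights(pop, components)
        picks = rng.choice(len(components), size=POP_SIZE, p=weights)
        offspring = np.clip(
            [rng.multivariate_normal(*components[c]) for c in picks], 0.0, 1.0)
    else:
        # Ordinary EA step: binary tournaments, uniform crossover, mutation.
        f = fitness(pop)
        a, b = rng.integers(POP_SIZE, size=(2, POP_SIZE))
        parents = pop[np.where(f[a] > f[b], a, b)]
        mask = rng.random((POP_SIZE, DIM)) < 0.5
        offspring = np.where(mask, parents, parents[::-1])
        offspring = np.clip(offspring + rng.normal(0.0, 0.02, offspring.shape),
                            0.0, 1.0)
    # Survivor selection, simplified here to elitism over parents plus offspring.
    merged = np.vstack([pop, offspring])
    pop = merged[np.argsort(fitness(merged))[-POP_SIZE:]]

print("best fitness:", fitness(pop).max())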

The transfer interval lets us tune how much influence the memes exert on the search. By alternating between producing individuals via genetic evolution and sampling them from the probability distributions of solutions to previous problems, we expedite the optimization process.

However, we are transferring knowledge only from solutions to previously completed problems. We call this sequential knowledge transfer. Can we instead solve multiple different yet similar problems simultaneously and transfer knowledge among them as they run, thereby hastening optimization? This is what we will discuss in the next article.

