In my Artificial Intelligence class, we were given a project in which we had to use Genetic Algorithms in order to solve the following problem:
Develop a Genetic Algorithm (GA) to solve the following problem: Given a list of n distinct positive integers, partition the list into two sublists, each of size n/2, such that the difference between the sums of the integers in the two sublists is minimized. To obtain your initial list, use a linear congruential random number generator to generate 100 random integers between 1 and 10,000.
After a closer observation, it turns out that this is actually a variant of the Subset Sum problem known as the Partition Problem: given a set of numbers, partition them into two sets S1 and S2 such that the sums of the two sets are equal. In particular, we explore the optimization variant of the Partition Problem, in which we want to minimize the difference in sums of these two sets (which is known to be NP-Hard), using a Genetic Algorithm.
Genetic Algorithms consist of various operators known as Selection, Crossover and Mutation. These operators take binary strings as input, so we’ll need to come up with a representation for our problem in terms of binary strings. Since we have two subsets whose difference in sums needs to be minimized, we’ll generate an initial population of 20 binary strings, each consisting of 100 bits (50 zeroes and 50 ones), where a zero means “this number will belong to S1” while a one means “this number will belong to S2”.
Problem Invariant: Because the problem specifies that each subset must contain n/2 integers, all of our genetic operators must result in strings containing an equal number of zeroes and ones.
Using the binary values within each binary string, we’ll partition the numbers in our master set S into their respective subsets S1 and S2. First, though, we need our list of integers, so we’ll go over how to implement the Linear Congruential Generator to obtain our initial list of 100 unique, random integers.
Implementing a Linear Congruential Generator
A Linear Congruential Generator is a very simple algorithm for generating pseudo-randomized numbers. Due to the nature of pseudo-randomized numbers, their sequence is determined by their initial values. In order to generate a list of 100 unique random integers between 1 and 10,000, we will need to choose the parameters of our Linear Congruential Generator wisely.
By selecting the parameters m, a, c and the seed of the recurrence relation carefully, we can guarantee that the period will be long enough that our numbers will be unique. In order to generate numbers up to 10,000, we specify our modulus m to be equal to 10,000 - which will also assist in generating unique integers, provided we choose a value of c that is relatively prime to m. While there are two other conditions for achieving a maximal period, this alone will suffice for generating our initial sequence of integers. The implementation of the Linear Congruential Generator is shown below:
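A minimal sketch of such a generator follows. The multiplier a = 21, increment c = 7, and seed of 1 are illustrative assumptions (the exact values used in the project aren’t shown here); together with m = 10,000 they satisfy the full-period conditions, so any 100 consecutive outputs are distinct.

```python
def lcg(m=10_000, a=21, c=7, seed=1):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.

    With c coprime to m, a - 1 divisible by every prime factor of m,
    and a - 1 divisible by 4 (since m is), the period is the full m,
    so any 100 consecutive outputs are distinct.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x + 1  # shift the range [0, 9999] to [1, 10000]
```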
And we generate our initial list with the following function call:
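The call might look like the sketch below; the recurrence is inlined so the snippet stands on its own, and the parameters a = 21, c = 7 and the seed are illustrative assumptions rather than the project’s actual values.

```python
def generate_initial_list(count=100, m=10_000, a=21, c=7, seed=1):
    """Return `count` distinct integers in [1, m] from the LCG recurrence."""
    values, x = [], seed
    for _ in range(count):
        x = (a * x + c) % m
        values.append(x + 1)  # shift from [0, m - 1] into [1, m]
    return values

initial_list = generate_initial_list(100)
```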
Due to the nature of pseudo-random number generators, the above configuration will always return the same list.
Despite this, having a consistent initial list will allow us to accurately measure the performance of our Genetic Algorithm at the end. If there is a need for different numbers in the future, we can always adjust a and c appropriately. With our initial list generated, we move onto determining how to assign values to the binary strings in our population in order to measure how well they solve our problem.
In Genetic Algorithms, we need some method of assigning values to binary strings which correspond to how well they solve the problem. In our case, we want to assign higher fitness to binary strings that produce a smaller difference in sums between our sets - meaning that these strings more closely solve our minimization problem.
In order to compute a fitness value for each of our binary strings, we’ll have to finally partition our list of values corresponding to each binary string in our population. Below is a method for doing so:
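One way to sketch that method, assuming the population is a list of strings of ‘0’/‘1’ characters and `numbers` is the LCG output (the names here are illustrative):

```python
def partition_population(population, numbers):
    """For every binary string, split `numbers` into the pair (S1, S2):
    the number at index i goes to S1 if bit i is '0', otherwise to S2."""
    partitions = []
    for chromosome in population:
        s1 = [n for bit, n in zip(chromosome, numbers) if bit == "0"]
        s2 = [n for bit, n in zip(chromosome, numbers) if bit == "1"]
        partitions.append((s1, s2))
    return partitions
```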
In this method, we loop over each binary string in our population, placing each number into S1 if its binary value is zero or into S2 if it is one. At the end of this, we’ll have numerical representations corresponding to partitions of the list of numbers generated by our Linear Congruential Generator. Next, we implement our method to assign fitness values to each of these numerical representations, which is shown below:
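A sketch of this rank-based assignment (function and variable names assumed):

```python
def assign_fitness(partitions):
    """Assign rank-based fitness: the partition with the smallest
    |sum(S1) - sum(S2)| gets the highest fitness (population size - 1),
    the one with the largest difference gets fitness 0."""
    differences = [abs(sum(s1) - sum(s2)) for s1, s2 in partitions]
    # Indices sorted by difference, smallest first.
    order = sorted(range(len(differences)), key=lambda i: differences[i])
    fitness = [0] * len(differences)
    for rank, index in enumerate(order):
        fitness[index] = len(differences) - 1 - rank
    return fitness
```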
In this method, we compute the difference in sums between our two subsets and then sort the values. As mentioned above, we assign a higher fitness value to the strings corresponding to smaller differences, and therefore the differences at the beginning of our sorted list will receive the highest fitness values. Since our initial population consists of 20 binary strings, the first member in our sorted list will have a fitness of 19, meaning its difference in sums is smaller than that of the other 19 strings in the population. With our fitness values in place, we move onto implementing our genetic operators.
At each iteration of our genetic algorithm, we will select a certain number of strings from our current population, which will be used to breed the new generation of strings. In order to do so, we need some way of selecting these strings based on their fitness values. While it may seem beneficial to only keep the strings with the largest fitness values, this may actually lead to our algorithm getting stuck at local minima, meaning it will never truly converge. In order to fairly select strings based on their fitness values, we will implement the roulette wheel selection method.
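A sketch of roulette wheel selection, assuming the rank-based fitness values have already been computed:

```python
import random

def roulette_select(population, fitness):
    """Pick one string with probability proportional to its share
    of the population's total fitness."""
    total = sum(fitness)
    spin = random.uniform(0, 100)  # a random float between 0 and 100
    cumulative = 0.0
    for chromosome, f in zip(population, fitness):
        cumulative += 100 * f / total  # this string's slice of the wheel, in percent
        if spin < cumulative:
            return chromosome
    return population[-1]  # guard against floating-point round-off
```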
Here, we compute the total fitness of our current population. Using this, we can compute a weighted fitness for each string, proportional to the total fitness of the population. With this, we can assign probabilities to each of our strings - where the probability of each string being selected is proportional to its fitness value relative to the entire population. We then generate a random float between 0 and 100 and use it to select one string from our current population, which will be used to generate the new generation.
In Crossover, we select two strings from our current population and use them to generate two children to form the new generation. Each of these children will have a certain amount of “genetic material” from each parent, which should, over successive generations, produce strings whose fitness values come closer and closer to solving our problem. While there are several crossover methods, we’ll use the simple One-point crossover method, which is implemented below:
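A sketch of the one-point splice for a single pair of parents; the invariant check mirrors the problem constraint, while the retry cap is an added safeguard not part of the original description:

```python
import random

def balanced(chromosome):
    """Problem invariant: equal counts of zeroes and ones."""
    return chromosome.count("0") == chromosome.count("1")

def one_point_crossover(parent1, parent2, max_tries=100):
    """Splice two parent strings at a random point, retrying until both
    children satisfy the invariant (capped so the loop always terminates)."""
    for _ in range(max_tries):
        point = random.randint(1, len(parent1) - 1)
        child1 = parent1[:point] + parent2[point:]
        child2 = parent2[:point] + parent1[point:]
        if balanced(child1) and balanced(child2):
            return child1, child2
    return parent1, parent2  # fall back to the parents if no valid splice is found
```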
As mentioned in our problem invariant, we have to make sure that after crossover both generated children have an equal number of zeroes and ones in their genetic material. In our crossover method, we select two parents by making two calls to our selection method. We then choose a random crossover point, which corresponds to the index at which we splice the parent strings to generate the children. Upon splicing both parents and generating two child strings, we check that we’ve successfully generated new strings that uphold our problem invariant. Once the inner while loop produces such a pair, we run our mutation method on these children and store them. Finally, we repeat the outer while loop until we’ve generated a new population of our desired size.
Our last genetic operator, mutation, randomly toggles bit values in a string. The implementation of the mutation algorithm is shown below:
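A sketch of that operator (the default rate here is an illustrative assumption):

```python
import random

def mutate(child, mutation_rate=0.01):
    """Toggle each bit independently with probability `mutation_rate`.
    The mutated string is kept only if it still has an equal number of
    zeroes and ones; otherwise the original child is returned unchanged."""
    mutated = "".join(
        ("1" if bit == "0" else "0") if random.random() < mutation_rate else bit
        for bit in child
    )
    if mutated.count("0") == mutated.count("1"):
        return mutated
    return child
```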
As with our crossover algorithm, we need to uphold the problem invariant. In order to do so, we draw a random value for each bit in our binary string (a child generated from our crossover method) and toggle the bits whose value falls below the mutation rate. If the number of zeroes and ones in the new mutated string are equal, we allow the mutation and return the mutated string. Otherwise, we simply return the original child string.
With these genetic operators at play, our genetic algorithm follows the following procedure until convergence:
- Generate new population
- Partition into subsets
- Assign fitness values for each string in our population
- Check for convergence (minimized difference in sums)
- Select new individuals to breed new generation
- Generate new children through crossover and mutation
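The steps above can be sketched end to end as follows. Simplified versions of the operators are inlined (with illustrative parameters and a generation cap added as a safeguard) so that the snippet runs on its own:

```python
import random

def run_ga(numbers, pop_size=20, mutation_rate=0.01, max_generations=5000):
    """Run the GA until a zero-difference partition appears or the
    generation cap (an added safeguard) is reached. Returns the number
    of generations formed and the best string found."""
    n = len(numbers)
    base = "0" * (n // 2) + "1" * (n // 2)

    def balanced(c):
        return c.count("0") == c.count("1")

    def difference(c):
        s1 = sum(x for bit, x in zip(c, numbers) if bit == "0")
        return abs(sum(numbers) - 2 * s1)  # |sum(S1) - sum(S2)|

    # Generate the initial population of balanced strings.
    population = ["".join(random.sample(base, n)) for _ in range(pop_size)]

    for generation in range(max_generations):
        diffs = [difference(c) for c in population]
        if min(diffs) == 0:  # convergence: the difference is fully minimized
            return generation, min(population, key=difference)

        # Rank-based fitness: the smallest difference gets the highest value.
        order = sorted(range(pop_size), key=lambda i: diffs[i])
        fitness = [0] * pop_size
        for rank, i in enumerate(order):
            fitness[i] = pop_size - 1 - rank
        total = sum(fitness)

        def select():  # roulette wheel over the current fitness values
            spin, cumulative = random.uniform(0, total), 0.0
            for c, f in zip(population, fitness):
                cumulative += f
                if spin < cumulative:
                    return c
            return population[-1]

        # Breed the next generation via crossover and mutation.
        new_population = []
        while len(new_population) < pop_size:
            p1, p2 = select(), select()
            for _ in range(100):  # retry splice points until the invariant holds
                point = random.randint(1, n - 1)
                c1, c2 = p1[:point] + p2[point:], p2[:point] + p1[point:]
                if balanced(c1) and balanced(c2):
                    break
            else:
                c1, c2 = p1, p2
            for child in (c1, c2):
                mutated = "".join(
                    ("1" if b == "0" else "0") if random.random() < mutation_rate else b
                    for b in child
                )
                new_population.append(mutated if balanced(mutated) else child)
        population = new_population[:pop_size]

    return max_generations, min(population, key=difference)
```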
Out of all of these steps, we haven’t discussed what it means for our algorithm to “converge”.
A genetic algorithm can terminate based on several conditions, depending on the problem at hand. These conditions include, but are not limited to:
- A minimum value has been found based on the problem
- Termination after a fixed number of generations have been generated
- Newer generations fail to improve fitness over a given time
By sheer coincidence, it turns out that the values chosen for our Linear Congruential Generator output integers whose partitions can actually be minimized to a difference of zero. Therefore, we will say that our algorithm has converged once the difference between our partitioned sets has reached zero. If we were to change the seed values and parameters of our random number generator, we would instead keep a frequency table in order to inspect whether our genetic algorithm has converged (whether at a minimal value or a local plateau).
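For that more general case, one possible reading of the frequency-table idea is a plateau check over the history of best differences; the window size here is an arbitrary assumption:

```python
from collections import Counter

def has_plateaued(best_history, window=200):
    """Report convergence when the best difference seen in each generation
    has taken a single value for the last `window` generations."""
    if len(best_history) < window:
        return False
    # Frequency table of the recent best values; a single entry means
    # newer generations have stopped improving.
    counts = Counter(best_history[-window:])
    return len(counts) == 1
```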
Last, we analyze the results of running our genetic algorithm.
In order to analyze the results of our genetic algorithm, we’ll observe how changing the mutation rate and population size affects the number of generations that are formed until convergence. Each of our simulations consists of running our genetic algorithm 10 times with a set population size and mutation rate, which is increased for each simulation. The entire results of these simulations can be found here.
Let’s take a look at some results.
When our population size is set to 20 and our mutation rate is at a measly 1.0%, we can see that the average number of generations formed until convergence is 1920. What happens when we increase the population size to 50, and then 100, while maintaining the same mutation rate?
As we can see, the average number of generations needed to reach convergence decreases greatly as population size increases. What happens when the mutation rate is increased with populations of 20, 50 and 100?
With a mutation rate of 2% and a population size of 20, we can see that our genetic algorithm converged more quickly on average than when the mutation rate was 1%. Likewise, the average number of generations needed for convergence as the population increases is also significantly lower than for their 1% mutation rate counterparts.
Finally, let’s observe what happens when the mutation rate is increased to 10%.
While the number of generations needed for convergence when the population size is 20 or 50 is lower than in previous simulations, the average when the population size is 100 actually increased from previous simulations. In fact, increasing the mutation rate further while the population size is increased has very little effect on convergence - the gains are minimal or sometimes negative.
Overall, we can conclude that increasing the population size decreases the generation count needed for convergence, while increasing the mutation rate only has positive effects up to a threshold. This isn’t surprising, as too much mutation introduces too much diversity, generating strings which take us further away from our solution.