

Introduction
The Python SciPy library's optimize module provides a global optimization function called differential_evolution(). In this article we look at the inner workings of this function, which is based on Differential Evolution, a population-based evolutionary method from the same family as Genetic Optimization.
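Before opening up the internals, it helps to see the function in use. The sketch below minimizes the two-dimensional Rosenbrock function (scipy.optimize.rosen); the search box and seed are illustrative choices, not anything prescribed by SciPy:

```python
from scipy.optimize import differential_evolution, rosen

# Search box for each of the two parameters (an illustrative choice).
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
result = differential_evolution(rosen, bounds, seed=42)

print(result.x)    # best parameter vector found (near [1, 1] for Rosenbrock)
print(result.fun)  # objective value at that point (near 0)
```

Note that, unlike gradient-based minimizers, the only problem-specific inputs are the objective function and the bounds of the search space.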
Optimization is a fundamental aspect of many scientific, engineering, and industrial problems. Traditional optimization techniques, while effective for well-defined convex problems, often struggle with non-linear, multi-modal (non-convex), or high-dimensional spaces. Genetic Optimization is a subset of evolutionary algorithms that offers a stochastic approach to navigating such complex optimization landscapes.
Genetic optimization, based on Darwin’s theory of natural evolution, has emerged as a robust metaheuristic for solving complex optimization problems. Genetic algorithms (GAs), first proposed by John Holland in the 1970s, simulate natural evolution processes to explore and exploit solution spaces. They utilize mechanisms such as selection, crossover, and mutation to evolve solutions across generations. We explore the foundational concepts of genetic optimization, examine its diverse applications across domains, and discuss its challenges.
1. Genetic Optimization
Genetic Optimization is a heuristic optimization technique based on the process of natural evolution. It is used to find optimal or near-optimal solutions to difficult problems, especially when the search space is huge or complex.
It mimics biological evolution using:
- Selection – the best candidates are chosen to reproduce
- Crossover – reproduction: parts of two parents are combined to form new offspring
- Mutation – random variation is introduced
- Fitness Function – survival of the fittest: measures how good a solution is
2. Fundamental Concepts
2.1 Population
In GAs, a finite set of Individuals (candidate solutions) form a Population (subset of the solution space).
2.2 Chromosome Representation
Each individual (a candidate solution) is typically represented as a chromosome consisting of a set of genes, encoded as a string (binary, real-valued, or symbolic) or, equivalently, as an array of numbers or symbols, where each gene corresponds to a parameter in the solution space.
2.3 Fitness Function
The fitness function evaluates the quality of an individual, guiding the evolutionary process. Fitness is problem-specific and plays a crucial role in determining which individuals (solutions) survive and reproduce.
2.4 Genetic Operators
- Selection: Determines which individuals are chosen for reproduction, often using methods like roulette wheel selection, tournament selection, or rank selection.
- Crossover (Recombination): Combines parts of two parents to generate offspring. One-point, two-point, and uniform crossover are common techniques.
- Mutation: Introduces small, random changes to offspring to maintain genetic diversity and prevent premature convergence.
2.5 Evolutionary Cycle
The evolutionary process follows a cycle: initialize a population, evaluate fitness, select parents, apply crossover and mutation, and repeat until a stopping criterion is met (e.g., a maximum number of generations or convergence threshold).
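The cycle just described can be sketched as a generic loop. The operator functions (fitness, random_individual, crossover, mutate) are placeholders supplied by the caller, and tournament selection of size 2 (one of the selection methods mentioned above) is used for brevity; the default population size and generation count are our own choices:

```python
import random

def genetic_algorithm(fitness, random_individual, crossover, mutate,
                      pop_size=20, generations=50):
    # Initialize a random population of candidate solutions.
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection of size 2: the fitter of two random picks.
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        # Produce the next generation via selection, crossover, and mutation.
        population = [mutate(crossover(select(), select()))
                      for _ in range(pop_size)]
    # Return the fittest individual in the final population.
    return max(population, key=fitness)
```

A real implementation would also track the best individual across generations and stop early on convergence; this skeleton only shows the cycle itself.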
Example
Let’s say we want to maximize the function:
f(x) = x²
where x ∈ [0, 31]
We will use a Genetic Algorithm to find the value of x that gives the maximum value of f(x).
Step-by-step Implementation
Step 1: Chromosome Representation
We represent each solution x as a binary string of 5 bits (because 2⁵ = 32).
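This encoding can be handled with Python's built-in binary formatting; the helper names encode and decode are our own:

```python
# Encode x in [0, 31] as a 5-bit binary string, and decode it back.
def encode(x, n_bits=5):
    return format(x, f"0{n_bits}b")  # e.g. 24 -> '11000'

def decode(bits):
    return int(bits, 2)              # e.g. '10101' -> 21

print(encode(24))     # '11000'
print(decode("10101"))  # 21
```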
Step 2: Fitness Function
fitness(x) = x²
For example:
- x = 10, x² = 100
- x = 20, x² = 400
- x = 31, x² = 961 (best)
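In code, the fitness function simply decodes the chromosome and squares it (a minimal sketch; the function name is our own):

```python
def fitness(bits):
    """Decode the 5-bit chromosome and square it: fitness(x) = x^2."""
    x = int(bits, 2)
    return x * x

print(fitness("01010"))  # x = 10 -> 100
print(fitness("11111"))  # x = 31 -> 961
```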
Step 3: Initial Population
Let’s start with 4 random individuals (chromosomes), for example:
- 01101 (x = 13, fitness = 169)
- 11000 (x = 24, fitness = 576)
- 01000 (x = 8, fitness = 64)
- 10101 (x = 21, fitness = 441)
Step 4: Selection
We pick parents based on fitness. Using roulette wheel selection, individuals with higher fitness have a higher chance of being selected.
Let’s say we pick:
- Parent 1: 11000 (x = 24)
- Parent 2: 10101 (x = 21)
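Roulette wheel selection can be sketched as follows: each individual occupies a slice of a wheel proportional to its fitness, and a random "spin" picks one. The helper name roulette_select and the small example population are our own:

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness(ind) for ind in population)
    spin = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= spin:
            return ind
    return population[-1]  # guard against floating-point round-off

population = ["11000", "10101", "00010", "00001"]
fitness = lambda bits: int(bits, 2) ** 2
parent = roulette_select(population, fitness)  # fitter strings win more often
```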
Step 5: Crossover
We randomly pick a point and swap genes.
Crossover point: after 2nd bit
- Parent1: 11000
- Parent2: 10101
New offspring:
- Child1: 11101 (x = 29)
- Child2: 10000 (x = 16)
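One-point crossover on bit strings is a pair of slice-and-concatenate operations. The helper name one_point_crossover is our own; the inputs reproduce the parents and crossover point from the step above:

```python
def one_point_crossover(parent1, parent2, point):
    """Swap everything after the crossover point between two bit strings."""
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# Crossover point after the 2nd bit, as in the example above.
child1, child2 = one_point_crossover("11000", "10101", 2)
print(child1, child2)  # 11101 10000
```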
Step 6: Mutation
Each bit has a small chance (e.g., 10%) of flipping.
Child1: 11101 may mutate and become 11111 (x = 31)
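Bit-flip mutation with a per-bit rate can be sketched as follows (the helper name mutate and the default 10% rate follow the example above):

```python
import random

def mutate(bits, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    out = []
    for b in bits:
        if random.random() < rate:
            out.append("1" if b == "0" else "0")  # flip this bit
        else:
            out.append(b)                          # keep this bit
    return "".join(out)

print(mutate("11101"))  # occasionally becomes 11111, 01101, ...
```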
Step 7: Replace & Repeat
Replace the old population with the new individuals and repeat steps 4–6 for several generations, re-evaluating fitness each time, until we find the best value of x.
After several generations, the algorithm will converge toward the optimal solution:
x=31
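Steps 1–7 can be assembled into one short, self-contained script. This is a minimal sketch: the population size, generation count, and the elitist "remember the best seen" twist are our own choices, not part of the worked example above:

```python
import random

BITS = 5            # chromosomes are 5-bit strings, so x is in [0, 31]
POP_SIZE = 6        # our own choice of population size
GENERATIONS = 50    # our own choice of generation count
MUTATION_RATE = 0.1

def fitness(bits):
    x = int(bits, 2)      # Step 2: decode and square
    return x * x

def roulette_select(population):
    # Step 4: fitness-proportional (roulette wheel) selection.
    total = sum(fitness(ind) for ind in population)
    spin = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= spin:
            return ind
    return population[-1]

def crossover(p1, p2):
    # Step 5: one-point crossover at a random point.
    point = random.randint(1, BITS - 1)
    return p1[:point] + p2[point:]

def mutate(bits):
    # Step 6: flip each bit with probability MUTATION_RATE.
    return "".join(("1" if b == "0" else "0")
                   if random.random() < MUTATION_RATE else b
                   for b in bits)

def run_ga():
    # Step 3: random initial population.
    population = ["".join(random.choice("01") for _ in range(BITS))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    # Step 7: replace the population and repeat.
    for _ in range(GENERATIONS):
        population = [mutate(crossover(roulette_select(population),
                                       roulette_select(population)))
                      for _ in range(POP_SIZE)]
        best = max(population + [best], key=fitness)  # remember the best seen
    return best

random.seed(0)
best = run_ga()
print(best, "-> x =", int(best, 2), ", fitness =", fitness(best))
```

On this tiny search space the script converges to (or very near) x = 31 within a few dozen generations.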
3. Advantages and Limitations
3.1 Advantages
- Capable of global optimization.
- Easily parallelizable.
- No requirement for gradient information.
- Adaptable to various types of problem domains.
3.2 Limitations
- Computationally intensive.
- Sensitive to parameter tuning (e.g., population size, mutation rate).
- May suffer from premature convergence.
4. Applications
4.1 Engineering Optimization
GAs have been applied to structural optimization, control system design, and robotics. For example, GAs are used in designing aircraft wing structures to balance weight and strength.
4.2 Machine Learning and Neural Networks
In machine learning, GAs are used to optimize hyperparameters and architectures of neural networks, offering alternatives to gradient-based methods.
4.3 Bioinformatics
In genomics and proteomics, GAs assist in sequence alignment, gene selection, and protein structure prediction.
4.4 Scheduling and Resource Allocation
GAs are effective in solving NP-hard scheduling problems like job-shop scheduling, exam timetabling, and vehicle routing.
4.5 Portfolio Optimization
In finance, GAs are used to optimize portfolio asset allocation.