import numpy as np
import plotly.express as px
import plotly.graph_objs as go
def objective(x):
    # Toy objective: sum of squared design variables
    return np.sum(x**2)

def constraint(x):
    # Feasible when non-negative: 25 - max(x) >= 0
    return 25 - max(x)
2024-05-02
OBP: See Homem-de-Mello, Kong, and Godoy-Barba (2022)
Tutorial: Using the Multipoint Approximation Method (MAM) for Mixed Integer-Continuous Optimization Problems
From: Liu and Toropov (2016)
The Multipoint Approximation Method (MAM) is a robust technique for solving mixed integer-continuous optimization problems, in which some design variables are integers and others are continuous. This tutorial explains MAM step by step, with Python code examples to help you implement the method.
Step 1: Problem Formulation
Start by clearly defining the optimization problem, including the objective function to minimize or maximize and any constraints.
Example: Minimize the weight of a structure subject to stress constraints.
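As a concrete stand-in for that kind of problem, this tutorial uses the objective and constraint functions defined in the imports cell above. A quick sanity check (the candidate point here is just an illustration):
x = np.array([5, 2.5])      # candidate design: x1 integer, x2 continuous
print(objective(x))         # 31.25
print(constraint(x) >= 0)   # True: 25 - max(x) >= 0, so the design is feasible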
Step 2: Initial Setup
Define the bounds of your design variables and initialize them. For integer variables, ensure they can only take integer values.
Example:
# Design variables: x1 (integer), x2 (continuous)
bounds = [(1, 10), (0.0, 5.0)]
initial_guess = [5, 2.5]
no_samples = 20
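The code in the later steps leaves integrality to the sampling stage; as a minimal sketch of one way to enforce it explicitly (the helper snap_to_integer is hypothetical, not part of the original code), you can round and clip the integer coordinate whenever a continuous solver proposes a fractional value:
def snap_to_integer(x, idx=0):
    # Hypothetical helper: round coordinate idx to the nearest integer
    # and clip it back into its bounds
    x = np.array(x, dtype=float)
    x[idx] = np.clip(round(x[idx]), bounds[idx][0], bounds[idx][1])
    return x

snap_to_integer([4.6, 2.5])  # -> array([5. , 2.5])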
Step 3: Sampling and Surrogate Model Construction
Generate sample design points, respecting the discrete nature of the integer variables, then construct surrogate models (metamodels) that approximate the objective and constraint functions.
Example:
from sklearn.gaussian_process import GaussianProcessRegressor
# Generate sample points
np.random.seed(0)
sample_points = np.random.randint(low=bounds[0][0], high=bounds[0][1]+1, size=(no_samples, 1))  # Integer variable
sample_points = np.hstack((sample_points, np.random.uniform(low=bounds[1][0], high=bounds[1][1], size=(no_samples, 1))))  # Add continuous variable
# Surrogate model
objective_values = np.array([objective(x) for x in sample_points])
model = GaussianProcessRegressor().fit(sample_points, objective_values)
model.score(sample_points, objective_values)
1.0
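The tutorial only carries the objective surrogate forward, but the same construction applies to the constraint; a minimal sketch (the name constraint_model is an assumption, and it is not used again below):
# Fit a second surrogate to the constraint function (hypothetical name)
constraint_values = np.array([constraint(x) for x in sample_points])
constraint_model = GaussianProcessRegressor().fit(sample_points, constraint_values)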
# Plot sample points and objective values
fig = px.scatter(x=sample_points[:, 0], y=sample_points[:, 1], color=objective_values,
                 labels={'x': 'Integer Variable', 'y': 'Continuous Variable', 'color': 'Objective values'},
                 title='Distribution of Sample Points in Design Space', color_continuous_scale='haline')
fig.update_traces(marker=dict(size=12))
# Show the plot
fig.show()
Step 4: Optimization Using Surrogate Model
Optimize the surrogate model within a trust region. Adjust the size and position of the trust region based on the model’s accuracy and the optimization results.
Example:
from scipy.optimize import minimize
# Trust region bounds
trust_bounds = [(max(bounds[0][0], initial_guess[0]-2), min(bounds[0][1], initial_guess[0]+2)),
                (max(bounds[1][0], initial_guess[1]-1.0), min(bounds[1][1], initial_guess[1]+1.0))]
# Minimize the surrogate model (predict returns a length-1 array, so take its scalar)
result = minimize(lambda x: model.predict(x.reshape(1, -1))[0], x0=initial_guess, bounds=trust_bounds)
print("Optimized parameters:", result.x)
Optimized parameters: [3. 1.5]
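The trust-region adjustment mentioned above needs a measure of how well the surrogate predicted the true objective at the new point. A minimal sketch of such a check (the ratio-based test is an illustration, not the specific rule from Liu and Toropov (2016)):
# Compare the surrogate's prediction with the true objective at the optimum
predicted = model.predict(result.x.reshape(1, -1))[0]
actual = objective(result.x)
print("Surrogate/true ratio:", predicted / actual)  # near 1 => surrogate is locally trustworthy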
# Plot trust bounds
fig.add_shape(
    # Rectangle referenced to the axes
    type="rect",
    x0=trust_bounds[0][0], y0=trust_bounds[1][0], x1=trust_bounds[0][1], y1=trust_bounds[1][1],
    line=dict(
        color="Tomato",
        width=2,
    ),
    # fillcolor="Red",
    # opacity=0.2
)
fig.add_trace(go.Scatter(x=[result.x[0]], y=[result.x[1]], mode='markers', marker=dict(color='Tomato', size=14), name='Optimized Point', showlegend=False))
fig.show()
Step 5: Update and Iterate
Update the trust region and surrogate model based on the new information obtained, and repeat the optimization until a convergence criterion is met (a loop-based sketch follows the worked iterations below).
Example:
# Example of updating the trust region and re-optimizing
trust_bounds = [(max(bounds[0][0], result.x[0]-1), min(bounds[0][1], result.x[0]+1)),
                (max(bounds[1][0], result.x[1]-0.5), min(bounds[1][1], result.x[1]+0.5))]
result = minimize(lambda x: model.predict(x.reshape(1, -1))[0], x0=result.x, bounds=trust_bounds)
print("Updated optimized parameters:", result.x)
Updated optimized parameters: [2. 1.]
# Plot trust bounds
fig.add_shape(
    # Rectangle referenced to the axes
    type="rect",
    x0=trust_bounds[0][0], y0=trust_bounds[1][0], x1=trust_bounds[0][1], y1=trust_bounds[1][1],
    line=dict(
        color="DodgerBlue",
        width=2,
    ),
    # fillcolor="Blue",
    # opacity=0.2
)
fig.add_trace(go.Scatter(x=[result.x[0]], y=[result.x[1]], mode='markers', marker=dict(color='DodgerBlue', size=14), name='Optimized Point', showlegend=False))
fig.show()
Further iterations:
# Example of updating the trust region and re-optimizing
trust_bounds = [(max(bounds[0][0], result.x[0]-1), min(bounds[0][1], result.x[0]+1)),
                (max(bounds[1][0], result.x[1]-0.5), min(bounds[1][1], result.x[1]+0.5))]
result = minimize(lambda x: model.predict(x.reshape(1, -1))[0], x0=result.x, bounds=trust_bounds)
print("Updated optimized parameters:", result.x)
Updated optimized parameters: [1. 0.5]
# Plot trust bounds
fig.add_shape(
    # Rectangle referenced to the axes
    type="rect",
    x0=trust_bounds[0][0], y0=trust_bounds[1][0], x1=trust_bounds[0][1], y1=trust_bounds[1][1],
    line=dict(
        color="MediumSeaGreen",
        width=2,
    ),
    # fillcolor="Blue",
    # opacity=0.2
)
fig.add_trace(go.Scatter(x=[result.x[0]], y=[result.x[1]], mode='markers', marker=dict(color='MediumSeaGreen', size=14), name='Optimized Point', showlegend=False))
fig.show()
# Example of updating the trust region and re-optimizing
trust_bounds = [(max(bounds[0][0], result.x[0]-1), min(bounds[0][1], result.x[0]+1)),
                (max(bounds[1][0], result.x[1]-0.5), min(bounds[1][1], result.x[1]+0.5))]
result = minimize(lambda x: model.predict(x.reshape(1, -1))[0], x0=result.x, bounds=trust_bounds)
print("Updated optimized parameters:", result.x)
Updated optimized parameters: [1. 0.]
# Plot trust bounds
fig.add_shape(
    # Rectangle referenced to the axes
    type="rect",
    x0=trust_bounds[0][0], y0=trust_bounds[1][0], x1=trust_bounds[0][1], y1=trust_bounds[1][1],
    line=dict(
        color="Orange",
        width=2,
    ),
    # fillcolor="Blue",
    # opacity=0.2
)
fig.add_trace(go.Scatter(x=[result.x[0]], y=[result.x[1]], mode='markers', marker=dict(color='Orange', size=14), name='Optimized Point', showlegend=False))
fig.show()
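The three manual updates above can be folded into a single loop with an explicit stopping rule, which is what "repeat until convergence" amounts to. A minimal sketch, assuming a step-size tolerance as the convergence criterion (the tolerance, iteration cap, and alpha jitter are illustrative assumptions, not from the original code):
x_current = np.array(initial_guess, dtype=float)
for iteration in range(10):  # illustrative iteration cap
    trust_bounds = [(max(bounds[0][0], x_current[0]-1), min(bounds[0][1], x_current[0]+1)),
                    (max(bounds[1][0], x_current[1]-0.5), min(bounds[1][1], x_current[1]+0.5))]
    result = minimize(lambda x: model.predict(x.reshape(1, -1))[0], x0=x_current, bounds=trust_bounds)
    # Update the surrogate with the newly evaluated point ("new information");
    # the small alpha jitter guards against near-duplicate sample points
    sample_points = np.vstack((sample_points, result.x))
    objective_values = np.append(objective_values, objective(result.x))
    model = GaussianProcessRegressor(alpha=1e-6).fit(sample_points, objective_values)
    step = np.linalg.norm(result.x - x_current)
    x_current = result.x
    if step < 1e-6:  # illustrative convergence tolerance
        break
print("Converged parameters:", x_current)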
Conclusion
The Multipoint Approximation Method (MAM) is a powerful technique for handling optimization problems involving both discrete and continuous variables. It uses surrogate models to approximate the objective and constraints, reducing the number of expensive true-function evaluations, while iterative trust-region adjustments steer the search toward convergence. The method is particularly useful in engineering design, where evaluating the objective function and constraints can be computationally expensive.