Algorithm project¶
Project context ¶
ADEME has launched a call to promote new mobility solutions. CesiCDP, with its team of 4 people, is responding to this call by focusing on the management of delivery rounds. The aim is to minimize the total duration of the round, taking into account traffic and other constraints. Attractive financing is at stake to develop CesiCDP's business. Adding further constraints to the problem will make the study more realistic and more difficult to solve, but it will demonstrate the team's expertise.
Our aim is to develop an algorithm that visits all the delivery points in an optimized total time.
Selected constraint¶
The constraint we chose was the following:
- To have several trucks available simultaneously to make deliveries.
Formulation of the problem ¶
Consider a graph $G=(V,E)$, where $V$ is the set of cities (or delivery points) and $E$ is the set of roads between cities. There are $k$ trucks available to make deliveries.
The problem is to find a route for each truck so that all deliveries are made in the shortest possible time, with each route starting from and returning to the depot.
With this constraint, the problem we face is the VRP (Vehicle Routing Problem).
Problem constraints ¶
List of problem constraints:
- All customers must be served
- A customer can only be served by one vehicle.
- When leaving a customer, a vehicle can only go to one other customer.
We will therefore assign each customer to a route served by a single vehicle.
Demonstrating the complexity of the vehicle routing problem (VRP) ¶
Introduction¶
The Vehicle Routing Problem (VRP) is an extension of the Traveling Salesman Problem (TSP) and is known to be NP-hard. We will demonstrate the complexity of the VRP starting from the Hamiltonian path problem, which is known to be NP-complete.
Preliminary definitions¶
**Hamiltonian path problem**: determine whether a given undirected graph contains a Hamiltonian path, i.e. a path that visits each vertex exactly once.
**Traveling Salesman Problem (TSP)**: find the shortest possible circuit that visits each city in a given set of cities and returns to the starting city.
**Vehicle Routing Problem (VRP)**: serve a set of customers with several vehicles while minimizing the total cost, such as delivery time or distance traveled.
Proof of the complexity of the TSP¶
The TSP generalizes the Hamiltonian path problem. Given an undirected graph $G$ with $n$ vertices, build a complete weighted graph in which every edge of $G$ costs 1 and every missing edge costs 2: $G$ contains a Hamiltonian cycle if and only if this TSP instance admits a tour of total cost $n$. Since the Hamiltonian path/cycle problem is NP-complete, deciding whether a TSP tour of a given cost exists is at least as hard, so the decision version of the TSP is NP-complete and the optimization version is NP-hard.
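To make the combinatorial explosion behind this argument concrete, here is a minimal brute-force sketch (the function name and the toy distance matrix are ours, chosen for illustration). Enumerating every tour requires $(n-1)!$ evaluations, which is exactly why exact solving does not scale:

```python
from itertools import permutations

def brute_force_tsp(d):
    """Cost of the shortest tour that starts and ends at city 0.

    Enumerates all (n-1)! orderings of the remaining cities, which is
    the factorial growth that makes the TSP intractable for large n."""
    n = len(d)
    best = float('inf')
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(d[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best

# Toy symmetric distance matrix (4 cities)
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(brute_force_tsp(d))  # 80, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```

Setting every edge weight to the same value turns the same search into a Hamiltonian cycle test, which is the special case used in the reduction above.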
Proof of the complexity of the VRP¶
The VRP is an extension of the TSP. Indeed, a special case of the VRP is the TSP, obtained when only one vehicle is available to make the deliveries: solving that VRP instance is exactly solving the TSP instance.
Since the TSP is NP-hard, the VRP must be at least as difficult, so the VRP is NP-hard. Furthermore, the VRP introduces additional constraints, such as multiple vehicles and potentially vehicle capacities and time windows, making it even more complex in practice.
Conclusion¶
In conclusion, we have shown that the VRP is NP-hard by reduction from the TSP, whose hardness follows in turn from the NP-complete Hamiltonian path problem. This demonstration highlights the difficulty of solving the VRP, particularly for large instances. In practice, (meta)heuristic and approximate methods are often used to solve the VRP, as we shall see later.
Mathematical modeling ¶
Set and parameters¶
$V=\{0,1,2,...,n_v\}$ : the set of cities, where $0$ is the base (or depot) and $1,2,...,n_v$ are the delivery cities; $n_v+1$ denotes the copy of the depot used for the return.
$K=\{1,2,...,k\}$ : the set of trucks.
$E$ : the set of arcs between two cities $i,j \in V$.
$G=(V,E)$ : the graph with $V$ as vertices (cities) and $E$ as edges.
$d_{ij}$ : the distance (or travel time) from city $i$ to city $j$ (travel cost).
$M$ : a large constant (used in the big-M constraints below).
Decision variables¶
$x_{ijk}$ : binary variable worth 1 if truck $k$ moves from city $i$ to city $j$, and 0 otherwise.
Objective function¶
$$\min \sum_{k \in K} \sum_{i \in V} \sum_{j \in V} d_{ij} x_{ijk} $$
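For intuition, this objective can be evaluated directly once a candidate solution is expressed as one ordered route per truck instead of the binary variables $x_{ijk}$. A minimal sketch on a made-up 4-city instance (the helper name `total_cost` is ours):

```python
def total_cost(routes, d):
    """Objective value: sum of d[i][j] over every arc (i, j) driven by any truck.

    routes[k] is the ordered list of cities visited by truck k,
    starting and ending at the depot (city 0)."""
    return sum(d[r[i]][r[i + 1]]
               for r in routes
               for i in range(len(r) - 1))

# Toy symmetric distance matrix
d = [[0, 4, 7, 3],
     [4, 0, 2, 5],
     [7, 2, 0, 6],
     [3, 5, 6, 0]]
routes = [[0, 1, 2, 0],   # truck 1
          [0, 3, 0]]      # truck 2
print(total_cost(routes, d))  # (4 + 2 + 7) + (3 + 3) = 19
```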
VRP constraints¶
Each city is visited once and only once: $$\sum_{k \in K} \sum_{j \in V} x_{ijk} = 1, \forall i \in V, i \ne 0$$
If a truck visits a city, it must leave it (flow conservation): $$\sum_{i \in V} x_{ihk} = \sum_{j \in V} x_{hjk}, \forall k \in K, \forall h \in V $$
Sub-tour elimination constraint (to ensure that each truck completes a full tour): $$\sum_{i \in S, j \notin S} x_{ijk} \geq 1, \forall k \in K, \forall S \subset V, 0 \in S, S \ne V $$
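These constraints are easy to check mechanically on a candidate solution. A hedged sketch (the helper name is ours) using depot-anchored routes, a representation under which the sub-tour elimination constraint holds by construction:

```python
def satisfies_vrp_constraints(routes, n_cities):
    """Check the VRP constraints on a candidate solution.

    routes[k] lists the cities visited by truck k, with the depot (city 0)
    first and last; cities are numbered 1 .. n_cities - 1."""
    visits = [c for r in routes for c in r[1:-1]]
    # Every customer served exactly once, by exactly one vehicle,
    # and every route anchored at the depot (no detached sub-tour).
    return (sorted(visits) == list(range(1, n_cities))
            and all(r[0] == 0 and r[-1] == 0 for r in routes))

print(satisfies_vrp_constraints([[0, 1, 2, 0], [0, 3, 0]], 4))  # True
print(satisfies_vrp_constraints([[0, 1, 0], [0, 1, 3, 0]], 4))  # False: city 1 twice, city 2 never
```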
VRPTW constraints¶
Each city is visited once and only once: $$\sum_{k \in K} \sum_{j \in V} x_{ijk} = 1, \forall i \in V, i \ne 0$$
If a truck visits a city, it must leave it (flow conservation): $$\sum_{i \in V} x_{ihk} = \sum_{j \in V} x_{hjk}, \forall k \in K, \forall h \in V$$
Sub-tour elimination constraint (to ensure that each truck completes a full tour): $$\sum_{i \in S, j \notin S} x_{ijk} \geq 1, \forall k \in K, \forall S \subset V, 0 \in S, S \ne V$$
Each truck arrives within the time window of each city: $$a_{ik} + t_{ij} \leq a_{jk} + M(1 - x_{ijk}), \forall k \in K, \forall (i,j) \in E$$ $$e_i \leq a_{ik} \leq l_i, \forall k \in K, \forall i \in V$$
Each truck leaves a city only after its service time: $$d_{ik} = a_{ik} + s_i, \forall k \in K, \forall i \in V$$
Note: here $a_{ik}$ is the arrival time of vehicle $k$ at node $i$, $t_{ij}$ the travel time from node $i$ to node $j$, $M$ a large positive constant, $x_{ijk}$ the binary variable equal to 1 if vehicle $k$ travels from node $i$ to node $j$ and 0 otherwise, $e_i$ and $l_i$ the earliest and latest times at which service can start at node $i$, $s_i$ the service time at node $i$, and $d_{ik}$ the departure time of vehicle $k$ from node $i$.
These constraints take into account not just the need to visit each city once, but also the requirement to arrive and depart within specified time windows, which makes the problem more complex than the standard VRP.
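To see the timing rules in action, here is a simplified single-truck feasibility check (the helper name is ours; it treats windows as hard constraints with waiting allowed, whereas the PSO cost function later in this document penalizes violations instead of rejecting them):

```python
def route_time_cost(route, t, windows, service=None):
    """Simulate one truck along `route`, enforcing VRPTW timing rules.

    t[i][j] is the travel time between cities, windows[i] = (e_i, l_i).
    Arriving early means waiting until e_i; arriving after l_i makes the
    route infeasible (returns None). Service times s_i default to zero."""
    service = service or [0] * len(windows)
    clock = windows[route[0]][0]           # leave the depot when its window opens
    for i in range(len(route) - 1):
        u, v = route[i], route[i + 1]
        clock += service[u] + t[u][v]      # serve u, then drive to v
        e, l = windows[v]
        if clock > l:
            return None                    # window missed: infeasible
        clock = max(clock, e)              # wait if we arrive early
    return clock

t = [[0, 2, 4], [2, 0, 3], [4, 3, 0]]
windows = [(0, 24), (3, 6), (5, 10)]
print(route_time_cost([0, 1, 2, 0], t, windows))  # 10 (waits at city 1 until 3)
print(route_time_cost([0, 2, 1, 0], t, windows))  # None (reaches city 1 at 8 > 6)
```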
Resolution algorithm (with coordinates)¶
Importing the necessary libraries¶
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import numpy as np
import random, time, math
from tests.libs.clustering import split_tour_across_clusters
Resolution algorithm (with matrices)¶
Importing the necessary libraries¶
import random
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.cm as cm
from IPython.display import display, Markdown
from sklearn.cluster import KMeans
Functions to generate the cities (distances, time windows and graph)¶
# Generate a random distance matrix between the cities
def generer_matrice_distances(num_villes, probabilite_incompletude):
    distances = np.zeros((num_villes, num_villes))  # num_villes x num_villes matrix of zeros
    # Fill the matrix with random distances, or infinity if two cities are not connected
    for i in range(num_villes):
        for j in range(i + 1, num_villes):
            if random.random() < probabilite_incompletude:
                distances[i, j] = float('inf')
                distances[j, i] = float('inf')
            else:
                distances[i, j] = random.randint(1, 10)
                distances[j, i] = distances[i, j]
    return distances

# Generate a time window for each city
def generer_fenetres_temps(num_villes, heure_debut, heure_fin):
    fenetres_temps = []
    for _ in range(num_villes):
        debut = random.randint(heure_debut, heure_fin - 1)
        fin = random.randint(debut + 1, heure_fin)
        fenetres_temps.append((debut, fin))
    return fenetres_temps

# Build the city graph with NetworkX
def generer_graphe_villes(distances, fenetres_temps):
    num_villes = distances.shape[0]
    G = nx.Graph()  # empty graph
    # Add one node per city
    for i in range(num_villes):
        G.add_node(i + 1, fenetre_temps=fenetres_temps[i])
    # Add the edges (distances between connected cities)
    for u in range(num_villes):
        for v in range(u + 1, num_villes):
            if distances[u, v] != float('inf'):
                G.add_edge(u + 1, v + 1, distance=distances[u, v])
    return G
Class representing a particle in the PSO algorithm¶
class Particle:
    def __init__(self, num_villes, num_camions, distances, fenetres_temps, infinite_distance_value=1e6):
        self.position = np.random.permutation(range(1, num_villes + 1))
        self.velocity = np.zeros_like(self.position)
        self.best_position = self.position.copy()
        self.best_cost = float('inf')
        self.num_camions = num_camions
        self.distances = distances
        self.fenetres_temps = fenetres_temps
        self.infinite_distance_value = infinite_distance_value
        # New: group the cities by clustering
        self.villes_cluster = self.cluster_villes()

    # Group the cities with k-means clustering
    def cluster_villes(self):
        # Replace infinite distances with a large value for clustering
        distances_clustering = np.where(self.distances == float('inf'),
                                        self.infinite_distance_value, self.distances)
        kmeans = KMeans(n_clusters=self.num_camions, n_init=10)
        clusters = kmeans.fit_predict(distances_clustering)
        return clusters

    def evaluate_cost(self):
        # Modification: assign the cities to the trucks according to their cluster
        camions = [[] for _ in range(self.num_camions)]
        for i in range(len(self.position)):
            ville_actuelle = int(self.position[i])
            camion = self.villes_cluster[ville_actuelle - 1]
            camions[camion].append(ville_actuelle)
        total_cost = 0
        for camion in camions:
            if len(camion) > 0:
                heure_depart = self.fenetres_temps[camion[0] - 1][0]
                for i in range(len(camion) - 1):
                    u, v = int(camion[i]), int(camion[i + 1])
                    distance = self.distances[u - 1, v - 1] if self.distances[u - 1, v - 1] != float('inf') else self.infinite_distance_value
                    total_cost += distance
                    heure_arrivee = heure_depart + distance
                    fenetre_temps = self.fenetres_temps[v - 1]
                    if heure_arrivee < fenetre_temps[0]:
                        # Arrived early: penalize the waiting time until the window opens
                        total_cost += fenetre_temps[0] - heure_arrivee
                        heure_depart = fenetre_temps[0]
                    elif heure_arrivee > fenetre_temps[1]:
                        # Arrived late: penalize the delay
                        total_cost += heure_arrivee - fenetre_temps[1]
                        heure_depart = heure_arrivee
                    else:
                        heure_depart = heure_arrivee
        return total_cost

    # Update the particle's velocity
    def update_velocity(self, global_best_position, inertia_weight, cognitive_weight, social_weight):
        r1 = np.random.random()
        r2 = np.random.random()
        cognitive_component = cognitive_weight * r1 * (self.best_position - self.position)
        social_component = social_weight * r2 * (global_best_position - self.position)
        self.velocity = inertia_weight * self.velocity + cognitive_component + social_component

    # Update the particle's position (random swap of two cities)
    def update_position(self):
        indices = np.random.choice(range(len(self.position)), 2, replace=False)
        self.position[indices[0]], self.position[indices[1]] = self.position[indices[1]], self.position[indices[0]]
        current_cost = self.evaluate_cost()
        if current_cost < self.best_cost:
            self.best_cost = current_cost
            self.best_position = self.position.copy()
Defining the parameters and generating the cities¶
num_villes = 50
num_camions = 5
num_particles = 50
max_iterations = 300
inertia_weight = 2
cognitive_weight = 1
social_weight = 1.5
probabilite_incompletude = 0.8
heure_debut = 8
heure_fin = 18
infinite_distance_value = 1e6

distances = generer_matrice_distances(num_villes, probabilite_incompletude)
fenetres_temps = generer_fenetres_temps(num_villes, heure_debut, heure_fin)
G = generer_graphe_villes(distances, fenetres_temps)
City graph¶
pos = nx.spring_layout(G)
labels = {node: str(node) for node in G.nodes()}
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, labels)
plt.title("City graph with time windows")
plt.axis("off")
plt.show()
PSO algorithm¶
particles = []
global_best_position = np.random.permutation(range(1, num_villes + 1))
global_best_cost = float('inf')

# Initialize the swarm and record the initial global best
for _ in range(num_particles):
    particle = Particle(num_villes, num_camions, distances, fenetres_temps, infinite_distance_value)
    particles.append(particle)
    particle_cost = particle.evaluate_cost()
    if particle_cost < global_best_cost:
        global_best_cost = particle_cost
        global_best_position = particle.position.copy()

iteration = 0
best_costs = []
while iteration < max_iterations:
    for particle in particles:
        particle.update_velocity(global_best_position, inertia_weight, cognitive_weight, social_weight)
        particle.update_position()
        particle_cost = particle.evaluate_cost()
        if particle_cost < global_best_cost:
            global_best_cost = particle_cost
            global_best_position = particle.position.copy()
    best_costs.append(global_best_cost)
    iteration += 1

best_position_str = ", ".join(str(node) for node in global_best_position)
best_cost_str = str(global_best_cost)
markdown_text = f"### Best solution found:\n\n- **City order**: {best_position_str}\n- **Total cost**: {best_cost_str}"
display(Markdown(markdown_text))
Best solution found:¶
- City order: 22, 28, 5, 12, 50, 34, 32, 30, 46, 21, 17, 27, 49, 14, 6, 9, 24, 1, 19, 43, 15, 4, 11, 35, 38, 18, 31, 33, 13, 41, 2, 7, 42, 48, 26, 37, 29, 45, 20, 3, 25, 36, 16, 39, 8, 40, 44, 10, 23, 47
- Total cost: 87000972.0
Generating the map showing the route of each truck¶
chemins_camions = []
# Build the colour map
cmap = matplotlib.colormaps['gist_ncar']
colors = [cmap(i / num_camions) for i in range(num_camions)]

for camion in range(num_camions):
    chemin = []
    for i in range(len(global_best_position)):
        if i % num_camions == camion:
            ville = global_best_position[i]
            chemin.append(int(ville))
    chemin.append(chemin[0])  # return to the starting city
    chemins_camions.append(chemin)

plt.figure(figsize=(10, 6))
plt.title("Truck routes")
plt.axis("off")
for i in range(num_camions):
    chemin = chemins_camions[i]
    chemin_str = " -> ".join(str(ville) for ville in chemin)
    text = f"Truck {i + 1}: {chemin_str}"
    display(Markdown(text))
    edges = [(chemin[j], chemin[j + 1]) for j in range(len(chemin) - 1)]
    nx.draw_networkx_edges(G, pos, edgelist=edges, edge_color=colors[i], label=f"Truck {i + 1}")
nx.draw_networkx_labels(G, pos, labels)
plt.legend()
plt.show()
Truck 1: 22 -> 34 -> 17 -> 9 -> 15 -> 18 -> 2 -> 37 -> 25 -> 40 -> 22
Truck 2: 28 -> 32 -> 27 -> 24 -> 4 -> 31 -> 7 -> 29 -> 36 -> 44 -> 28
Truck 3: 5 -> 30 -> 49 -> 1 -> 11 -> 33 -> 42 -> 45 -> 16 -> 10 -> 5
Truck 4: 12 -> 46 -> 14 -> 19 -> 35 -> 13 -> 48 -> 20 -> 39 -> 23 -> 12
Truck 5: 50 -> 21 -> 6 -> 43 -> 38 -> 41 -> 26 -> 3 -> 8 -> 47 -> 50
Generating the histogram of the total cost of each truck¶
# Compute the total cost of each route
couts_parcours = []
for chemin in chemins_camions:
    cout_parcours = 0
    for i in range(len(chemin) - 1):
        u, v = chemin[i] - 1, chemin[i + 1] - 1  # adjust to 0-based indices
        # If the distance is infinite, use the replacement value
        distance = distances[u, v] if distances[u, v] != float('inf') else infinite_distance_value
        cout_parcours += distance
    couts_parcours.append(cout_parcours)

# Bar chart of the route costs
plt.figure(figsize=(10, 6))
plt.bar(range(1, num_camions + 1), couts_parcours, color=cm.rainbow(np.linspace(0, 1, num_camions)))
plt.xlabel("Truck")
plt.ylabel("Route cost")
plt.title("Route cost of each truck")
plt.show()
Plotting the best cost as a function of the iterations¶
# Plot the evolution of the best cost
plt.figure(figsize=(10, 6))
plt.plot(range(1, max_iterations + 1), best_costs)
plt.title("Evolution of the best cost over the iterations")
plt.xlabel("Iteration")
plt.ylabel("Best cost")
plt.grid(True)
plt.show()
Testing how the optimal cost varies with the "inertia weight" parameter¶
# Inertia values to test
inertia_values = np.linspace(0.1, 5, 10)
# Optimal cost obtained for each inertia value
optimal_costs = []

# Loop over each inertia value
for inertia_weight in inertia_values:
    # Reset the particles and the global best
    particles = []
    global_best_position = np.random.permutation(range(1, num_villes + 1))
    global_best_cost = float('inf')
    for _ in range(num_particles):
        particle = Particle(num_villes, num_camions, distances, fenetres_temps, infinite_distance_value)
        particles.append(particle)
        particle_cost = particle.evaluate_cost()
        if particle_cost < global_best_cost:
            global_best_cost = particle_cost
            global_best_position = particle.position.copy()
    iteration = 0
    best_costs = []
    while iteration < max_iterations:
        for particle in particles:
            particle.update_velocity(global_best_position, inertia_weight, cognitive_weight, social_weight)
            particle.update_position()
            particle_cost = particle.evaluate_cost()
            if particle_cost < global_best_cost:
                global_best_cost = particle_cost
                global_best_position = particle.position.copy()
        best_costs.append(global_best_cost)
        iteration += 1
    # Store the optimal cost for this inertia value
    optimal_costs.append(global_best_cost)

# Plot the optimal cost as a function of the inertia weight
plt.figure(figsize=(10, 6))
plt.plot(inertia_values, optimal_costs)
plt.title("Optimal cost as a function of the inertia weight")
plt.xlabel("Inertia weight")
plt.ylabel("Optimal cost")
plt.show()
Testing how the optimal cost varies with the "cognitive weight" parameter¶
# Cognitive-weight values to test
cognitive_values = np.linspace(0.1, 5, 10)
# Optimal cost obtained for each cognitive-weight value
optimal_costs = []

# Loop over each cognitive-weight value
for cognitive_weight in cognitive_values:
    # Reset the particles and the global best
    particles = []
    global_best_position = np.random.permutation(range(1, num_villes + 1))
    global_best_cost = float('inf')
    for _ in range(num_particles):
        particle = Particle(num_villes, num_camions, distances, fenetres_temps, infinite_distance_value)
        particles.append(particle)
        particle_cost = particle.evaluate_cost()
        if particle_cost < global_best_cost:
            global_best_cost = particle_cost
            global_best_position = particle.position.copy()
    iteration = 0
    best_costs = []
    while iteration < max_iterations:
        for particle in particles:
            particle.update_velocity(global_best_position, inertia_weight, cognitive_weight, social_weight)
            particle.update_position()
            particle_cost = particle.evaluate_cost()
            if particle_cost < global_best_cost:
                global_best_cost = particle_cost
                global_best_position = particle.position.copy()
        best_costs.append(global_best_cost)
        iteration += 1
    # Store the optimal cost for this cognitive-weight value
    optimal_costs.append(global_best_cost)

# Plot the optimal cost as a function of the cognitive weight
plt.figure(figsize=(10, 6))
plt.plot(cognitive_values, optimal_costs)
plt.title("Optimal cost as a function of the cognitive weight")
plt.xlabel("Cognitive weight")
plt.ylabel("Optimal cost")
plt.show()