This article is different from all those I’ve written so far. You won’t find formulas, algorithms, or other mind-blowing wizardry. Instead, I want to talk about a topic I care about: metaheuristics. That art of searching, in non-intuitive ways, for good-enough solutions when the optimal one is just a mirage. And I want to do it with an approach that isn’t technical but human and reflective. It’s a praise of the (almost) optimal: of exploring without the urge to dominate, of improving step by step, accepting a bit of uncertainty along the way.

Because, let’s be honest: without uncertainty and without mystery... what kind of life would that be? Would it really be worth living an existence where we know, down to every detail, what will happen, never having to test ourselves to reach our goals? This, after all, leads us to another question: is it really worth living only to achieve our goals? Perhaps life has value when there is mystery. When uncertainty forces us to look around, to diverge, to change course, and then to find ourselves, almost by chance, exactly where we wanted to be without ever fully expecting it.

The Science of Compromise

We live in a world that constantly pushes us toward the optimal. At school, at work, in projects, and even in relationships: everything seems to be measured by how close we get to an ideal of perfection. And yet, if we think about it, most of the choices we make every day are compromises. Not optimal, not perfect, but they work and allow us to move forward.

Now I know what you’re thinking: “easy for you to settle!” But think about it. When you open your trusty Google Maps and choose the fastest route instead of the scenic one. When you grab something quick to eat because you have to get back to work, even though you’d gladly devour a fancy dish. But hey, who has time to cook it?

Call them what you want, but I define them as compromises or, in other words, settling. Which, mind you, is not wrong. Let’s not demonize the word settling, because it’s precisely compromise that makes us humanly rich.

You chose the fastest route because you wanted to spend ten more minutes with your boyfriend or girlfriend. What’s so wrong with that? You grabbed something quick to eat because you didn’t have time to cook the meal you wanted, but damn... when the day is over, you have a warm and cozy nest precisely thanks to that small sacrifice.

So, where’s the line in all this? When did we start hating the word settling? Well, maybe when compromise goes far beyond our will and freedom to be. When compromise starts to strip us of who we are. But hear me out: that’s not compromise. That’s submission. And I truly hope you’ll never have to go through that, even though I find myself opening the window in November to let out a wish in vain.

From Human Compromise To The Computational One

Metaheuristics are born precisely from this: from recognizing that perfection, in complex systems, is often an illusion. What do you do when you seek the optimum of something and don’t know where to start? You explore, try, fail, and improve. That’s how a metaheuristic works: it accepts uncertainty but still chooses to move. It grants itself the luxury of error because it knows that every attempt, even a failed one, can reveal useful information. The secret of its success? Exploration. That’s what makes metaheuristics so human. Or rather, so natural. Yes, because most metaheuristic algorithms draw inspiration directly from nature: from how animals search for food, how they communicate with each other, or even how they court individuals of their own species.

Because come on, let’s be honest: we give our best when it comes to eating and when it comes to courting. For everything else, we go back to being our usual bumpkin selves.

Want examples? Fine.

Ant Colony Optimization was born from observing ants: each one leaves a chemical trail along its path, and the more a path is followed, the more “attractive” it becomes to the others. In the end, among hundreds of routes explored without maps or supervision, the colony finds the best one to reach the food.
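If you’re curious to see the idea in miniature, here’s a tiny sketch in Python (a toy of my own, nowhere near the real Ant Colony Optimization algorithm): two routes lead to the food, each ant picks one in proportion to its pheromone, every trail evaporates a little at each step, and the shorter route ends up reinforced.

```python
import random

# Toy illustration of the ant-colony idea: route names, lengths and constants
# are all made up for this example.
routes = {"short": 1.0, "long": 1.0}    # pheromone on each route
lengths = {"short": 1.0, "long": 2.0}   # the shorter a route, the stronger its deposit

for _ in range(200):                    # 200 ants, one after the other
    # pick a route in proportion to its pheromone
    total = routes["short"] + routes["long"]
    choice = "short" if random.random() * total < routes["short"] else "long"

    # evaporation: every trail fades a little...
    for k in routes:
        routes[k] *= 0.95
    # ...and the chosen route is reinforced, more strongly if it is shorter
    routes[choice] += 1.0 / lengths[choice]

print(routes)   # almost always, the "short" trail ends up dominating
```

No map, no supervisor: just trails that reinforce themselves.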

The Firefly Algorithm, on the other hand, is inspired by fireflies. The brightest ones attract the others, and the whole mechanism is, in the end, a gigantic courtship ritual. Here too, there’s no perfection, only desire. An attraction that moves, that guides, that makes them draw closer and change their path.
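Same game here, if you want to picture it: a one-dimensional toy version of my own (not the full Firefly Algorithm), where a firefly’s brightness is simply how good its position is, and dimmer fireflies drift toward brighter ones with a bit of random flicker.

```python
import random

def brightness(x):
    return -(x - 3.0) ** 2          # toy objective: the "brightest" spot is x = 3

fireflies = [random.uniform(-10, 10) for _ in range(20)]

for _ in range(100):
    for i in range(len(fireflies)):
        for xj in list(fireflies):
            if brightness(xj) > brightness(fireflies[i]):
                # drift toward the brighter firefly, plus a small random flicker
                fireflies[i] += 0.5 * (xj - fireflies[i]) + random.gauss(0, 0.05)

print(max(fireflies, key=brightness))   # usually close to 3, rarely exactly 3
```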

But it doesn’t end there. There are metaheuristic algorithms inspired by almost anything: genetic algorithms draw from Darwin’s theory of evolution, the Artificial Immune System from how the immune system works. There’s even one inspired by the physics of black holes. In short, there’s something for everyone. There are many of them and (almost) all, as already mentioned, share one thing in common: they are inspired by nature.

Because the nature that surrounds us, in a completely spontaneous way, tends to approach the best possible outcome asymptotically.

If you’re curious to know how many there are (and I’m not even sure they’re all listed), take a look here.

But let’s get back to us. Why do we talk about the “almost optimal”?

Because none of these methods is designed to reach the optimal solution to a problem with certainty. Often they succeed. Sometimes they don’t. We are protected by our immune system: it works, and it works well. But does that mean we never get sick? It goes without saying that’s not the case. And yet we are content, just as we are when we use metaheuristic algorithms. If we find the optimal solution, we’re all happy. But do we really need it? Maybe, to solve a problem, it isn’t necessary to have the optimum: something close to it is enough. And don’t think this is an alien idea; just read the next paragraph to understand why.

The Reason For The Almost Optimal

If you’re reading this article, you’re using a PC or a smartphone. Inside your device, the operating system schedules processes, meaning it decides which one to run first. You could run an “optimal” algorithm like Shortest Job First (SJF), but no, your scheduler doesn’t care and instead uses (for example) a Round Robin (RR). And there’s a reason behind that: SJF requires knowing in advance how long a process will last, something it obviously cannot know. And there you have it: proof that the optimal solution is not only not always necessary, but sometimes not even achievable. And yet, if you’re still reading, even a non-optimal algorithm like RR is working perfectly fine.
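Just to give a feel for it, here’s a toy round-robin scheduler (process names and durations are invented for the example): each process gets a fixed time slice, then goes back in line, and nobody ever needs to know in advance how long anything will take.

```python
from collections import deque

quantum = 2                                      # fixed time slice
ready = deque([("A", 5), ("B", 3), ("C", 1)])    # (name, remaining time), made-up values

t = 0
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)                # run for at most one time slice
    t += run
    print(f"t={t}: ran {name} for {run}")
    if remaining > run:
        ready.append((name, remaining - run))    # not finished: back to the end of the line
```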

Another example. Let’s go back to Google Maps. Here we are talking about finding a path, but with a fundamental difference compared to metaheuristics: Maps does not want a “near-optimal” solution. Maps wants the best possible route. The problem is that the number of possible paths is huge: millions of combinations, many of which make no sense.

So what does it do? Before even searching for the optimum, it does something profoundly human: it simplifies the world. It discards everything that is absurd (like going from Milan to Turin by passing through Rome) and narrows the search to only the “plausible” routes. These are the well-known heuristics: small shortcuts that allow the computer to say:

This route makes no sense, I won't even consider it. This one instead seems reasonable, I keep exploring.
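If you want to picture that shortcut, here’s a deliberately silly sketch (made-up numbers, and certainly not how Maps works internally): throw away any candidate route that is wildly longer than a cheap lower bound, the straight-line distance, and only then look for the best among what’s left.

```python
straight_line = 125    # km, very roughly Milan-Turin as the crow flies (ballpark figure)

candidates = {         # candidate routes and their lengths, invented for the example
    "A4 highway": 145,
    "state roads": 175,
    "via Rome": 1100,  # the absurd detour
}

# the heuristic: anything more than twice the straight-line distance isn't worth exploring
plausible = {name: km for name, km in candidates.items() if km <= 2 * straight_line}

best = min(plausible, key=plausible.get)
print(plausible, "->", best)   # "via Rome" never even enters the search
```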

Only after reducing the universe of possibilities can it finally attempt to compute the true optimum. And it is precisely in this contrast that metaheuristics shine. Maps relies on heuristics to make the optimum computable. Metaheuristics, instead, arise when the optimum is not a realistic goal: when the problem is so complex, uncertain, or vast that the only sensible thing to do is to move, be guided by exploration, and accept a solution that is "good enough".

And here lies a subtle but enormous lesson: it does not always make sense to look at everything that is possible to do. If we worry about every path, every outcome, every theoretical deviation, we risk never starting at all. Sometimes the best way to get closer to the optimum, in an algorithm as in life, is precisely to stop trying to foresee every possibility and take that single step that sets us in motion.

Maps simplifies the world in order to reach the optimum. Metaheuristics embrace uncertainty to move closer to the "near-optimal". They are two sides of the same coin: how we face the infinite when we cannot see it all.

Yes, I know I talked about metaheuristics (or rather the idea behind them), while earlier I mentioned heuristics. Is there a difference? Absolutely yes. A heuristic is a single practical method to find a good solution, while a metaheuristic is the broader approach that guides or combines heuristics to explore possibilities more effectively. In short, the metaheuristic uses heuristics to navigate the labyrinth of possible solutions.
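To make the distinction a bit more tangible, here is one last toy sketch of my own (simplified to the bone): the heuristic is a single move rule, keep taking small steps that improve things; the metaheuristic wraps it, restarting from many random points so that one unlucky start doesn’t trap us in a local optimum.

```python
import random

def f(x):
    return (x - 2) ** 2 + 3 * abs(round(x) - x)   # bumpy toy function to minimize

def hill_climb(x, steps=200):
    """The heuristic: accept a small step only when it improves the solution."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if f(candidate) < f(x):
            x = candidate
    return x

def random_restarts(tries=20):
    """The metaheuristic: guide the heuristic by launching it from many starting points."""
    return min((hill_climb(random.uniform(-10, 10)) for _ in range(tries)), key=f)

print(random_restarts())   # usually close to 2: good enough, almost never exact
```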

Given these examples, when is it necessary to use metaheuristics?

When the problem is so complex that an exhaustive search or a deterministic algorithm becomes impractical. When we need to explore huge solution spaces while avoiding getting trapped in local optima (that is, solutions that beat all their neighbors but are not the best overall), and we want to find near-optimal solutions in a reasonable time, even without knowing the structure of the problem in advance.

To Wrap Up

In the end, metaheuristics are not just algorithms. They are a way of looking at the world. A reminder that the search for the optimal does not always coincide with the search for meaning. In code as in life, sometimes you don’t need to reach the finish line perfectly. You need to move, explore, learn from your attempts, and, when it happens, know how to settle wisely. Metaheuristics teach us that perfection is a direction, not a destination. That there is no failure in making mistakes, only more information to choose better next time. That you must take that small step into the dark that leads you to the (almost) optimal.

I want to emphasize that this is not, and does not aim to be, a technical article. If you have read this far and still don’t know what a metaheuristic is, that’s because I haven’t actually explained it. There will be time and space to get technical later; for now I just hope you’ve grasped the philosophy and the dualism behind it. With that said, I’ll conclude by saying that I hope I’ve sparked your curiosity about metaheuristics and made you reflect on the value of every uncertain step and every compromise you make each day of your life.

Until next time.

