From patterns to causes in the next generation of AI: The scientist pushing AI to stop guessing and start understanding
There is a familiar way to talk about artificial intelligence: feed the machine oceans of data, let it find patterns, and hope the patterns are useful. That is the world of today’s machine learning: prediction machines and pattern engines.
Dr. Hector Zenil’s work starts from a different question. He does not ask, “What pattern is in the data?” but rather, “What kind of hidden rule could have produced the data in the first place?”
That difference may sound subtle, but it is enormous. It is the difference between recognising a song on the radio and understanding how music is composed. It is the difference between noticing that traffic is jammed and understanding the chain reaction that caused the pileup.
Zenil and modern AI
Zenil’s work, as the founder and CEO of Algocyte and an associate professor at King’s College London, sits at the intersection of complexity theory, artificial intelligence, and causal discovery, but the heart of it is almost poetic: the universe may look messy, but science has repeatedly found that much of that mess is generated by simple hidden programs in the form of short equations and small scientific models. His research is about finding those programs automatically, or at least getting closer to them, rather than stopping at pattern-matching.

Modern AI is very good at finding correlations. It can learn that certain words tend to follow other words, that certain pixels tend to form a cat, and that certain medical measurements tend to suggest a disease. But correlation is not cause. A rooster may crow before sunrise, but the rooster does not cause the sun to rise. An AI system trained only on correlations can become very good at prediction while still having a weak grasp of why and how things happen.
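The rooster example can be made concrete with a toy simulation. In the hypothetical sketch below (the variable names and probabilities are illustrative, not drawn from Zenil’s work), a hidden variable — it is nearly dawn — drives both the crowing and the sunrise. The two correlate strongly in observational data, yet forcing the rooster to crow or stay silent changes nothing about the sunrise:

```python
import random

random.seed(0)

def observe(n=10_000):
    """Confounder z (dawn) drives both x (rooster crows) and y (sun rises)."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5                   # hidden cause: it is nearly dawn
        x = z if random.random() < 0.9 else not z   # rooster usually crows at dawn
        y = z                                       # sun rises exactly when it is dawn
        data.append((x, y))
    return data

def p_y_given_x(data, x_val):
    """Observed frequency of sunrise given whether the rooster crowed."""
    rows = [y for x, y in data if x == x_val]
    return sum(rows) / len(rows)

def intervene(n=10_000, force_x=True):
    """Force the rooster's behaviour; y still depends only on z, not on x."""
    ys = []
    for _ in range(n):
        z = random.random() < 0.5
        ys.append(z)                                # forced x is simply ignored
    return sum(ys) / len(ys)

data = observe()
# Strong correlation: in the data, sunrise "follows" the crow...
print(p_y_given_x(data, True), p_y_given_x(data, False))
# ...but intervening on the rooster leaves the sunrise rate unchanged.
print(intervene(force_x=True), intervene(force_x=False))
```

A correlation-only learner would score well predicting sunrise from crowing; only the intervention reveals that the arrow of causation runs elsewhere.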
Zenil’s approach points toward a different kind of AI: one less obsessed with surface patterns and more interested in the machinery underneath. Instead of only asking, “What usually comes next?” it asks, “What process could have generated this, and what would happen if we changed part of it?”
The bridge between complexity and causality
Suppose you find a pattern in the world. There are many possible explanations for it. Some explanations are wildly complicated: a giant pile of exceptions, special cases, coincidences, and patches. Others are elegant: one small rule that generates the whole thing.
Zenil’s method gives more weight to the elegant explanation. Not because nature is always simple, but because simple rules can produce many of the structured things we see. A snowflake looks intricate, but it is not designed branch by branch. Zenil’s work tries to make that intuition mathematical and useful for science.
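A classic illustration of that intuition — not specific to Zenil’s own methods — is an elementary cellular automaton such as Wolfram’s Rule 30: a one-line update rule that grows an intricate, irregular pattern from a single seed cell. A minimal sketch:

```python
def rule30_step(cells):
    """One step of elementary cellular automaton Rule 30 on a ring."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        # Rule 30 in one line: new cell = left XOR (centre OR right)
        out.append(left ^ (centre | right))
    return out

def run(width=63, steps=20):
    """Grow the pattern from a single live cell in the middle."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

for row in run():
    print("".join("█" if c else " " for c in row))
```

The rule fits in a single expression, yet the triangle it prints looks irregular enough that parts of it have been used as a pseudo-random source — structure without design, exactly the snowflake point.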
That has major consequences for causal discovery. If you remove one part of a system and suddenly the whole thing becomes much harder to explain, that part may have been carrying important structure. If you change another part and almost nothing happens, it may have been less central. If a tiny intervention makes the system collapse into disorder, that tiny piece may have been a hidden control point.
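That perturbation idea can be sketched in a few lines. The toy below uses compressed length as a crude stand-in for algorithmic complexity (Zenil’s actual estimators, such as the Block Decomposition Method, are far more principled); it flips each element of a sequence and records how far the whole sequence moves toward or away from randomness:

```python
import random
import zlib

def complexity(s: bytes) -> int:
    """Crude complexity proxy: length of the zlib-compressed string."""
    return len(zlib.compress(s, 9))

def perturbation_profile(s: bytes):
    """Flip each symbol in turn; record the complexity change it causes."""
    c0 = complexity(s)
    deltas = []
    for i in range(len(s)):
        flipped = s[:i] + (b"1" if s[i:i + 1] == b"0" else b"0") + s[i + 1:]
        deltas.append(complexity(flipped) - c0)
    return deltas

random.seed(1)
regular = b"01" * 64                                     # generated by a tiny rule
noise = bytes(random.choice(b"01") for _ in range(128))  # no rule to break

avg = lambda xs: sum(xs) / len(xs)
# Perturbing the rule-generated sequence breaks its structure, so its
# complexity jumps; perturbing noise changes almost nothing.
print(avg(perturbation_profile(regular)))
print(avg(perturbation_profile(noise)))
```

Elements whose perturbation moves the system sharply are carrying structure — the "hidden control points" of the paragraph above — while elements that can be changed for free are peripheral.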
This gives a fresh way to think about causation. Causes are not just things that come before effects. They are parts of the machinery that help reprogram a system’s behaviour. A cause leaves a fingerprint on the structure of the whole.
Where modern AI falls short
This is also why his work speaks to one of the biggest limitations of mainstream AI. Today’s dominant AI systems are powerful, but much of their power comes from scale: more data, more parameters, more computing power. Zenil’s research belongs to another tradition, one that asks whether intelligence also requires the ability to infer compact explanations. In other words, can a machine learn not just from examples, but from principles?

That does not mean Zenil’s work is anti-machine learning. It means it presses on a blind spot. Pattern recognition is not enough. Prediction is not enough. A system that predicts well may still misunderstand the world. To discover causes, AI must be able to ask what kind of process made the data, and what would happen under intervention.
Zenil’s applications of these ideas to cell and molecular biology fit naturally into this bigger picture, though they are not the whole story. Biology is full of systems that are hard to understand from data alone: gene networks, cell behaviour, developmental pathways, disease progression, and evolution. His work is really about the science of hidden mechanisms. There is something almost rock-and-roll about that. While much of modern AI culture celebrates bigger data, Zenil’s work keeps returning to a leaner, sharper idea: maybe understanding means finding the small rule inside the big mess.
