The Argument for a New Kind of System
Artificial intelligence may represent the first truly complex system ever created by humans — not merely intricate or sophisticated, but fundamentally unpredictable, adaptive, and emergent in nature.
Using the Cynefin framework as a lens, we examine the leap from ordered to complex systems and why AI marks a boundary not previously crossed in technological history.
The Cynefin framework offers a taxonomy of problem domains, distinguishing among Simple, Complicated, Complex, and Chaotic systems.
Most engineered systems in human history — bridges, software, spacecraft — fall into the complicated domain: systems requiring expertise, but ultimately governed by knowable rules.
Complex systems, however, exhibit emergent behavior, in which relationships between components cannot be fully mapped and outcomes are often understood only in hindsight.
Complicated vs. Complex: Airplanes and Ecosystems
To illustrate this difference, we compare a commercial airplane — a pinnacle of human design and engineering, but ultimately predictable — to a rainforest or coral reef, which evolves, adapts, and resists control. The airplane is a complicated system; the ecosystem is complex. This distinction frames our deeper inquiry: where does AI belong?
Markets, societies, and economies are often cited as examples of complex systems. Yet their complexity is not designed; it emerges from the interactions of countless human agents. These systems evolve, but they are not constructed the way technology is.
In contrast, AI represents a deliberately designed system that behaves in complex, emergent ways. This marks a new category: a human-made technological artifact with the behavioral signature of natural complexity.
To grasp why AI resists full predictability, we must revisit a foundational insight from computer science: the Halting Problem. Proved by Alan Turing in 1936, it shows that no general algorithm can determine, in all cases, whether another arbitrary program will halt or run forever. This is not a hardware limitation; it is a theoretical boundary.
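To see the shape of Turing's argument, consider the sketch below. It is illustrative only: the `halts` oracle it assumes is hypothetical and, as the proof shows, cannot actually be implemented.

```python
# Sketch of the diagonalization behind the Halting Problem (illustrative).
# Assume, for contradiction, a perfect oracle that decides halting.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually terminates.
    No such general-purpose function can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about `program`
    # when run on its own source.
    if halts(program, program):
        while True:      # oracle says it halts, so loop forever
            pass
    # oracle says it loops forever, so halt immediately

# paradox(paradox) is the contradiction: if it halts, the oracle said it
# would loop, and vice versa. Therefore `halts` cannot be written.
```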
AI systems, particularly those built on deep learning, often function as black boxes. Their internal state spaces are so high-dimensional, and their adaptive behavior so fluid, that even their creators cannot always anticipate what the system will do in novel contexts. Crucially, no external system, human or machine, can fully predict such a model's future output without actually running it. This echoes the undecidability at the heart of the Halting Problem: if the system's logic cannot be compressed or shortcut into a deterministic preview, its behavior must unfold in real time.
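This "no shortcut" property shows up even in systems far simpler than neural networks. Rule 110, an elementary cellular automaton, is a standard example: it is Turing-complete, and in general the only way to learn what it will do is to run it. A minimal sketch (width and step count are arbitrary choices):

```python
# Rule 110: a one-dimensional cellular automaton simple enough to fit in
# a dozen lines, yet Turing-complete. There is no known general shortcut
# to its long-run behavior; you have to simulate it step by step.

RULE = 110          # the update table, packed into the bits of one integer
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                       # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```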
This places AI in a category unlike traditional software. While all computation is subject to the Halting Problem, traditional programs are usually built for predictable outcomes within controlled bounds. AI systems, in contrast, generate behavior through processes that are partially opaque even to those who train them. They exhibit path-dependent learning, emergent representations, and stochastic variation — qualities that make simulation or preemption impractical at scale.
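A toy experiment makes "path-dependent learning" concrete. In this sketch (synthetic data, made-up hyperparameters), the same logistic-regression model is trained twice on identical examples, differing only in presentation order, and lands on different weights:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # synthetic inputs
y = (X @ rng.normal(size=5) > 0).astype(float)   # synthetic labels

def train(order_seed, lr=0.5):
    """One pass of SGD over the same data, shuffled by order_seed."""
    w = np.zeros(5)
    for i in np.random.default_rng(order_seed).permutation(len(X)):
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))      # sigmoid prediction
        w += lr * (y[i] - p) * X[i]              # gradient step
    return w

w_a, w_b = train(order_seed=1), train(order_seed=2)
print("gap between final weights:", np.linalg.norm(w_a - w_b))
```

A convex model like this one still ends up functionally similar across runs; in deep networks the effect compounds, which is part of why two training runs of the same architecture can behave noticeably differently.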
As AI increasingly interacts with real-world environments, it doesn’t merely respond to input — it reshapes its own internal representations and decision strategies. This recursive, feedback-driven nature makes it closer to a living system than a static tool. Like an ecosystem, it adapts to stimuli in ways that are not linearly reducible to design-time decisions. We are no longer engineering tools — we are cultivating digital agents in evolving environments.
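That feedback loop can be caricatured in a few lines. In the sketch below (the dynamics are invented for illustration), a learner's recommendations nudge the very preference it is trying to estimate, so the "ground truth" it converges to is partly its own creation:

```python
import random

random.seed(42)
preference = 0.2    # the environment's "true" taste, which will drift
estimate = 0.8      # the learner's initial belief about that taste

for step in range(21):
    recommendation = estimate                          # act on current belief
    preference += 0.2 * (recommendation - preference)  # exposure shifts taste
    observation = preference + random.gauss(0, 0.05)   # noisy feedback signal
    estimate += 0.3 * (observation - estimate)         # online update
    if step % 5 == 0:
        print(f"step {step:2d}  preference={preference:.3f}  estimate={estimate:.3f}")
```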
The implications of this shift are profound. If AI is truly complex — not just complicated — then traditional modes of governance, control, and regulation may be insufficient. The role of the human changes from that of a designer and controller to that of a steward or co-habitant in a space of continual adaptation. New ethical frameworks, monitoring systems, and response strategies will be needed — ones that mirror how we manage ecosystems, not machines.
Artificial intelligence may be the first human-made system whose behavior we cannot fully predict, constrain, or replicate without running it. It is not just another tool — it is something new.
In building AI, we may have crossed into a space where technological artifacts resemble organic systems more than mechanical ones. If that’s true, then the questions we ask must also evolve: not just what can AI do, but how should we live with it?
How will the marriage between computer scientists and evolutionary biologists fare?
These are increasingly important questions and challenges that humanity cannot afford to ignore or take lightly.
Many thanks,
Ernesto Eduardo Dobarganes with the assistance of ChatGPT