The Thread That Got Dropped
In 1956, a group of researchers convened at Dartmouth College and named their field "artificial intelligence." The name stuck. But two of the most consequential people in that room — Herbert Simon and Allen Newell — were never entirely comfortable with it. They preferred a different phrase: information processing systems. Simon would later say, without apology, that what they were doing before Dartmouth they had simply called operations research.
The distinction wasn't modesty. It was precision.
An information processing system takes inputs, transforms them according to rules, and produces outputs. That is a clean, formal, measurable description of what computers actually do. "Artificial intelligence" smuggles in something else — a suggestion of mind, of understanding, of something more than transformation. Simon and Newell resisted the smuggling. In The Sciences of the Artificial (1969), Simon drew the line explicitly: designed systems require a different science than natural systems. Natural science describes what is. The science of the artificial studies what could be — systems specified to meet goals, authored to behave in particular ways.
Simon's science of the artificial was organized around a single question: not how are things? — that is natural science's domain — but how ought things to be designed in order to function and attain goals? It is the designer's question. It requires specifying what a system is, how it is bounded, what it must do. No amount of observation produces that specification. It has to be authored.
Simon and Newell also gave AI its foundational claim: the Physical Symbol System Hypothesis — that intelligence is physical symbol manipulation. The field took that claim and built increasingly powerful symbol-manipulating machines. But Simon's science of the artificial required physical symbols for a different purpose — not to exhibit intelligence, but to formally specify designed systems. That second use of symbols is the one that got dropped.
Simon framed every artificial system as an interface between an inner environment and an outer one — the internal composition and processes on one side, the external world the system is embedded in on the other. That framing, articulated in 1969, is almost verbatim what BERT's boundary concept formalizes: inner environment as subsystems and internal network, outer environment as sources, sinks, and milieu, and the boundary itself — with its interfaces, porosity, and perceptive fuzziness — as Simon's interface in formal computational clothing. Mobus formalized the concept in 2022. BERT implements it in code. That is a 57-year arc from insight to computational artifact.
This convergence is not accidental. Simon won the Nobel Prize in Economics (1978) for bounded rationality and the Turing Award (1975) with Newell for the PSSH. He is simultaneously the intellectual father of behavioral economics, cognitive science, artificial intelligence, and design theory. The work described here sits at the intersection of all four — which is what systems science is.
Fifty years passed.
What the Field Built
The last decade has produced three dominant paradigms, each more capable than anything Simon's generation could have imagined.
Vision Models
Language Models
World Models
Three paradigms. Each more sophisticated than the last. Each sharing a single epistemic posture: structure is extracted from data. The architecture changes. The epistemology doesn't.
The First Crack
Judea Pearl saw the limitation clearly in Causality (2000). His ladder of causation has three rungs: association, intervention, counterfactual. Statistical models — no matter how large, no matter how well-trained — are permanently on the first rung. They can tell you what correlates with what. They cannot tell you what would happen if you intervened. They cannot reason about causes.
Causal models require asserting mechanism, not inferring it. You have to say: this variable causes that one, through this pathway. No training run produces that claim. It has to be authored.
Pearl's insight is the first crack in the paradigm. It points toward a different kind of knowledge — structural, mechanistic, explicit — that cannot be learned from data regardless of how much data you have.
The Gap
Here is the question none of the three paradigms asks. Not vision models. Not language models. Not world models. Not even, in full, Pearl's causal models:
What kind of system is this,
and why does it behave the way it does?
This is not a prediction question. It is not a pattern recognition question. It is an ontological and mechanistic question. Answering it requires:
- Composition: What entities constitute the system
- Environment: What the system is embedded within and coupled to
- Structure: The relations among components
- Mechanism: The processes that generate emergent behavior
This is Bunge's CESM ontology. This is Mobus's systems science formalism. An LLM knows what has been said about Bitcoin. A systems model specifies what Bitcoin is — its boundary, its subsystems, the flows of energy and information between them, the feedback mechanisms that generate decentralization as an emergent property.
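For illustration only (the entries here are invented, not BERT's actual Bitcoin model), a CESM record makes those four commitments explicit and machine-checkable:

```python
from dataclasses import dataclass

@dataclass
class CESM:
    """Bunge's quadruple: a system is what these four slots jointly assert."""
    composition: list[str]            # C: entities that constitute the system
    environment: list[str]            # E: what it is embedded in and coupled to
    structure: list[tuple[str, str]]  # S: relations among components
    mechanism: list[str]              # M: processes generating emergent behavior

bitcoin = CESM(
    composition=["miners", "full nodes", "wallets", "mempool"],
    environment=["energy markets", "fiat exchanges", "internet infrastructure"],
    structure=[("miners", "full nodes"), ("wallets", "mempool"),
               ("mempool", "miners")],
    mechanism=["proof-of-work difficulty adjustment",
               "longest-chain consensus",
               "fee-market feedback"],
)

# The record asserts what Bitcoin IS; no corpus statistic about Bitcoin
# contains these commitments.
print(len(bitcoin.composition))  # 4
```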
These are different kinds of knowledge. One is compressed from observation. The other is formally authored from theory. This is what no training run produces.
This is not a fourth option on the same menu. Vision models, language models, and world models all answer variations of the same question — what pattern is in this data? Systems models answer a different question entirely. They don't compete; they occupy a different epistemic dimension.
The Resolution — BERT
BERT — the Bounded Entity Reasoning Toolkit — is the authoring environment for that question.
It implements a typed System Language grounded in Mobus's 8-tuple formalism: every system has components, an internal network, an environment, external flows, a boundary, a transformation function, a history, and a characteristic timescale. These are not metadata fields. They are ontological commitments.
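As an illustration (the field names paraphrase the list above; this is not BERT's actual source), the 8-tuple can be rendered as a typed record in which every slot must be authored:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class System:
    """The eight slots of Mobus's formalism as plain fields.
    None of them can be left implicit; each is an ontological commitment."""
    components: tuple[str, ...]                    # subsystems
    internal_network: tuple[tuple[str, str], ...]  # who connects to whom
    environment: tuple[str, ...]                   # sources, sinks, milieu
    external_flows: tuple[tuple[str, str], ...]    # flows crossing the boundary
    boundary: tuple[str, ...]                      # interfaces, porosity
    transform: Callable[[dict], dict]              # input -> output behavior
    history: tuple[str, ...]                       # how the system got here
    timescale_s: float                             # characteristic timescale

def identity_transform(inputs: dict) -> dict:
    return inputs

cell = System(
    components=("membrane", "nucleus"),
    internal_network=(("nucleus", "membrane"),),
    environment=("glucose source", "waste sink"),
    external_flows=(("glucose source", "membrane"),),
    boundary=("membrane interface",),
    transform=identity_transform,
    history=("divided from parent cell",),
    timescale_s=3600.0,
)
```

Notice that the constructor fails if any slot is missing: the type system enforces the claim that these are commitments, not optional metadata.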
BERT's primitives — Subsystem, Source, Sink, Interface, Flow — are physical symbols in exactly Simon and Newell's sense: discrete tokens that designate real-world entities, participate in formal operations, and can be created, composed, and destroyed. But they are deployed not for intelligent action, as the Physical Symbol System Hypothesis intended, but for ontological specification. The System Language is a physical symbol system whose purpose is to formally assert what kind of system exists — not to exhibit intelligence about it.
BERT models are machine-readable: an OWL/RDF ontology with 40 implemented concepts, a JSON schema, and a simulation bridge to Mesa that is currently 60% complete (the BERT JSON parser and archetype-to-behavior mappings are built; the final wiring of BERT subsystems to Mesa agent step logic is in progress). They are not diagrams. They are formal specifications that drive simulation.
These assertions are not informal. BERT's grammar constraints are machine-verified in Lean 4, with a bridge theorem that formally characterizes what is preserved and what is lost when a Mobus model projects down to Bunge's CES ontology. Six categories of information have no Bunge counterpart — milieu, flow capacity, boundary properties, transformation functions, history, and timescale. SL models contain strictly more information than Bunge-style descriptions. The theorem proves it.
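The shape of that theorem can be sketched in Lean 4 (a toy illustration with invented field names, not BERT's actual proof): two SL models that differ only in an SL-only field project to the same CES description, so the projection is not injective and SL is strictly richer.

```lean
structure CES where
  composition : List String
  environment : List String
  relations   : List (String × String)
deriving DecidableEq

-- Stand-ins for the SL-only categories; one extra field (timescale)
-- already suffices to demonstrate the information loss.
structure SL extends CES where
  milieu    : String
  timescale : Nat
deriving DecidableEq

def project (m : SL) : CES := m.toCES

-- Two SL models that differ only in timescale project to the same
-- CES description: projection discards information.
theorem project_not_injective :
    ∃ a b : SL, a ≠ b ∧ project a = project b :=
  ⟨{ toCES := ⟨[], [], []⟩, milieu := "", timescale := 0 },
   { toCES := ⟨[], [], []⟩, milieu := "", timescale := 1 },
   by decide, rfl⟩
```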
The AgentModel's Reactive/Anticipatory/Intentional hierarchy is Simon's bounded rationality made formal — agents modeled not as perfect optimizers but as systems with cognitive limits, whose interactions generate the emergent behavior that formal specification exists to capture.
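The bounded-rationality posture underneath that hierarchy can be sketched in a few lines (payoffs and thresholds invented for illustration; this is not the AgentModel's code): a satisficing agent accepts the first option that clears its aspiration level, where an idealized optimizer pays for a full scan.

```python
from typing import Iterable, Optional

def satisfice(options: Iterable[float], aspiration: float) -> Optional[float]:
    """Simon's bounded agent: take the first option that clears the
    aspiration level, paying no search cost beyond that point."""
    for value in options:
        if value >= aspiration:
            return value
    return None  # nothing acceptable found

def optimize(options: Iterable[float]) -> float:
    """The unbounded idealization: exhaustive search for the maximum."""
    return max(options)

payoffs = [0.2, 0.7, 0.4, 0.9, 0.6]
print(satisfice(payoffs, aspiration=0.5))  # 0.7 — good enough, found early
print(optimize(payoffs))                   # 0.9 — best, but costs a full scan
```

Populations of agents like the first, not the second, are what generate the emergent behavior a formal specification exists to capture.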
Why Now
The world in March 2026 is drowning in complexity it cannot describe. Climate systems, financial contagion, AI governance, cryptoeconomic infrastructure, geopolitical realignment — every consequential problem of this moment is a systems problem. The dominant tools for reasoning about them are either too reductive, too informal, or too opaque. None of them can answer Simon's question.
The AI moment makes this more urgent, not less. As AI agents begin operating inside economic and governance systems at scale — and they are — the need to formally specify those systems becomes critical. You cannot govern what you cannot describe. This is not philosophical abstraction — it is the operational bottleneck for every team implementing the EU AI Act, the US executive orders on AI, and China's algorithmic regulations. Every governance framework struggling to define system boundaries, specify accountability, and mandate transparency is facing the absence of formal systems models. You cannot simulate what you have not formally modeled.
The current AI paradigm is extraordinarily good at compressing what has already happened. It is structurally incapable of formally specifying what should be built. That gap — between pattern recognition and system design — is precisely where formal systems models live.
The Intellectual Stack
| Figure | Work | Contribution |
|---|---|---|
| Simon & Newell | Human Problem Solving (1972) | PSSH — intelligence is symbol manipulation; but symbols also specify |
| Simon | The Sciences of the Artificial (1969) | Designed systems require a different science |
| Pearl | Causality (2000) | Statistical models cannot reach mechanism |
| LeCun | A Path Towards Autonomous Machine Intelligence (2022) | Even world models are learned, not asserted |
| Mobus | Systems Science (2022) | Formal ontology for what systems models assert |
| Bunge | CESM ontology | Composition, Environment, Structure, Mechanism |
The Argument, for Different Rooms
The same argument lands differently depending on where you start. Choose your audience:
- Vision models, language models, world models — all learn structure from data. Systems models assert structure from theory. BERT is the authoring environment for that fourth paradigm.
- Simon called it information processing — transformation functions all the way down. Systems models describe the systems those functions run inside. BERT is the authoring environment for that distinction.
- Every AI model learns patterns from data. A systems model formally describes what a system is — its parts, its boundaries, what flows through it, and why it behaves the way it does. That's not a learning problem. It's an authoring problem.