Halcyonic Systems · Research Reference

Neuromorphic Substrates
How physics computes differently, and where the ceiling holds

Conventional computers simulate the world on hardware that has no intrinsic memory, no sense of time, and no ability to learn from experience. Neuromorphic hardware — from Intel's silicon chips to devices made of honey and carbon nanotubes — computes using physics that already remembers, already decays, already adapts. This document compares the two approaches across six functions, with maturity assessments and honest limits. Reference companion to The Instruments.

Maturity key: Demonstrated / production-ready · Demonstrated / research-stage · Nascent / theoretical
01 Recognition · Neuromorphic: mature
Conventional computers look at every pixel in every frame, whether anything changed or not. A neuromorphic chip only fires when something in the scene moves or changes — like how your eye stops noticing a still room but instantly catches motion. The result is the same task accomplished with a fraction of the energy, because the hardware only works when there's something worth working on. Organic memristors go further: they perform the core mathematical operation of recognition — multiplying inputs by stored weights — directly in the material itself, eliminating the bottleneck of shuttling data between memory and processor. The shift: instead of processing everything and filtering for relevance, the hardware only processes what's relevant in the first place.
Technical comparison
Conventional: Dense, frame-synchronous. Every pixel computed every cycle regardless of change. Power scales with resolution. Platforms: GPU/TPU · CNNs, ViTs.
Neuromorphic silicon: Event-driven, asynchronous. Dynamic vision sensors trigger spikes only on change. Power scales with activity, not resolution. SNNs achieve adversarial robustness surpassing ANNs via temporal spike encoding. Metrics: 6×–300× energy efficiency gain (SENECA benchmarks). Platforms: Loihi 2 · SpiNNaker 2 · BrainScaleS-2.
Organic / memristor: In-material matrix computation. Crossbar arrays perform multiply-accumulate in physics — no memory/compute shuttle. Honey-CNT memristors achieve 256 analog levels, biodegradable and water-soluble. Metrics: 88%+ MNIST accuracy · TCNQ organic: 91% at <2 ns switching. Platforms: Honey-CNT · TCNQ · HfO₂ crossbars.
Paradigm shift: computation scales with information content, not data volume. The substrate responds to change, not to frames.
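To make the in-material multiply-accumulate concrete, here is a minimal numpy sketch of a crossbar: weights are quantized onto 256 conductance levels (the figure reported for honey-CNT devices), inputs are applied as row voltages, and Ohm's law per device plus Kirchhoff current summation per column deliver the matrix-vector product. All sizes, ranges, and voltages are illustrative assumptions, not measured device data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target weight matrix (e.g., one layer of a recognition model).
W = rng.uniform(-1.0, 1.0, size=(4, 8))

# Map weights onto 256 discrete conductance levels, as reported for
# honey-CNT memristors (mapping and ranges here are illustrative).
levels = 256
w_min, w_max = W.min(), W.max()
G = np.round((W - w_min) / (w_max - w_min) * (levels - 1))  # nearest level
W_q = G / (levels - 1) * (w_max - w_min) + w_min            # back to weights

# Input vector applied as voltages on the crossbar rows.
v = rng.uniform(0.0, 0.2, size=8)

# Ohm's law per device (i = g * v) and Kirchhoff summation per column
# give the multiply-accumulate "for free": I = W_q @ v.
i_out = W_q @ v

# Quantization error stays small even at 8-bit analog precision.
err = np.max(np.abs(W_q @ v - W @ v))
```

The point of the sketch: the matrix product is never "computed" step by step; it is read out as column currents, which is why the memory/compute shuttle disappears.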
02 Prediction · Neuromorphic: mature
To predict what happens next in a time series, conventional computers must explicitly manage memory — storing past states, updating them, and computing forward. The hardware itself has no sense of time. A memristive device does: its electrical resistance naturally fades after being changed, like a physical short-term memory that decays at a rate set by the material's chemistry. This means a memristor-based system can process temporal data by simply being exposed to it — the material's own dynamics perform the temporal computation. This approach, called reservoir computing, already outperforms software baselines on standard prediction benchmarks. The shift: instead of simulating temporal dynamics on static hardware, the hardware's own physics already has temporal dynamics. You use them.
Technical comparison
Conventional: Simulated temporal dynamics on a substrate with no intrinsic memory. LSTMs, transformers, and state space models require explicit memory management. Platforms: GPU · Prophet, Mamba, TimesFM.
Neuromorphic silicon: Reservoir computing — the hardware's intrinsic dynamics are the computational medium. Real-time temporal prediction at biological timescales. Platforms: SpiNNaker · Loihi 2 (LAVA framework).
Organic / memristor: Physics is the reservoir. Short-term memory, nonlinear decay, and fading memory of memristive materials perform temporal computation directly. Dual-memory RC: WOx short-term + TiOx long-term. Metrics: 98.84% digit recognition · NRMSE 0.036 (Mackey-Glass) · NRMSE 0.017 (Ag-Ag₂S nanoparticle, Hénon) — outperforms software baselines. Platforms: WOx/TiOx · Ag-Ag₂S nanowire · ion-channel memristors.
Paradigm shift: temporal dynamics exploited rather than simulated. The material's physics — its decay rates, nonlinearities, memory — does the computation.
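The reservoir-computing idea can be sketched in a few lines of numpy: a fixed random network whose leak rate stands in for the material's fading memory, with only a linear readout trained. The series, network size, and rates below are illustrative; nothing here models a specific device.

```python
import numpy as np

rng = np.random.default_rng(1)

# Input: a simple deterministic time series (stand-in for a benchmark
# like Mackey-Glass; the task is one-step-ahead prediction).
t = np.arange(600)
u = np.sin(0.1 * t) * np.cos(0.031 * t)

# Fixed random reservoir. The leak rate plays the role of the
# material's fading memory: states decay at a chemistry-set rate.
n = 100
W_in = rng.uniform(-0.5, 0.5, size=n)
W = rng.normal(0, 1, size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
leak = 0.3

x = np.zeros(n)
states = np.zeros((len(u), n))
for k, uk in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * uk)
    states[k] = x

# Only the linear readout is trained (ridge regression); the reservoir
# itself is never touched -- in hardware, it is the physics.
washout = 50
X, y = states[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)

pred = X @ w_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
```

In a memristive reservoir, the loop that updates `x` is replaced by exposing the material to the signal; only the readout step remains as conventional computation.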
03 Generation · Neuromorphic: nascent
Here the honest answer is simple: conventional approaches are far ahead. Generating images, text, and code at the quality of current diffusion models and language models requires dense, high-throughput matrix operations that neuromorphic hardware isn't designed for. Neuromorphic chips could accelerate the execution of a trained generative model at lower power, but no neuromorphic system has demonstrated a qualitatively different approach to generation itself. This is the one function where the neuromorphic paradigm shift has not yet arrived. Honest assessment: neuromorphic generation is a future possibility, not a current capability. The advantage here would be power efficiency for inference, not a new way of creating.
Technical comparison
Conventional: Dense forward passes through massive parameter matrices. Diffusion models, GANs, transformers. Enormous energy per token/pixel. Platforms: GPU clusters · Stable Diffusion, GPT, Claude.
Neuromorphic silicon: Spike-based sampling, still early-stage. Some work on spiking VAEs and stochastic sampling via neuronal noise. BrainScaleS-2 runs at 1000× biological real-time, enabling rapid sampling. Platforms: BrainScaleS-2 (analog accelerated).
Organic / memristor: Frontier territory. Memristive stochastic elements could serve as hardware RNGs for sampling. Crossbar arrays can accelerate generative inference at low power. No organic system yet matches diffusion model quality. Status: research-stage only.
Honest assessment: generation is where conventional approaches are furthest ahead. Neuromorphic could accelerate inference but has not yet demonstrated a qualitatively different approach.
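As a hedged illustration of the "hardware RNG for sampling" idea, the sketch below models a stochastic switching element whose switching probability is sigmoidal in pulse voltage, then programs a target Bernoulli probability by choosing the voltage. The sigmoid model and every parameter in it are assumptions made for the sketch, not device physics from any paper cited here.

```python
import numpy as np

rng = np.random.default_rng(2)

def switch_probability(voltage, v_half=0.5, steepness=10.0):
    """Illustrative model: probability that a stochastic memristive
    element switches under a voltage pulse (sigmoidal in voltage).
    Parameters are invented for the sketch, not device data."""
    return 1.0 / (1.0 + np.exp(-steepness * (voltage - v_half)))

# Program a target Bernoulli probability by choosing the pulse voltage,
# then read switching events out as random bits -- the sampling
# primitive a generative model would build on.
target_p = 0.3
v = 0.5 - np.log(1.0 / target_p - 1.0) / 10.0   # invert the sigmoid
bits = rng.random(20000) < switch_probability(v)
empirical_p = bits.mean()
```

This is the whole of what organic hardware currently offers generation: a physical source of tunable randomness and cheap inference, not a new generative mechanism.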
04 Reasoning · Neuromorphic: demonstrated
Reasoning in conventional computing means chaining logical operations through software — if A then B, update belief C. The hardware just executes instructions. In neuromorphic systems, associative learning happens in the material itself. When two signals arrive at a memristor at nearly the same time, the connection between them physically strengthens — exactly like the biological principle "neurons that fire together wire together." This isn't a simulated learning rule. It's a physical process. Researchers have demonstrated Pavlovian conditioning — the classic bell-and-food experiment — in a honey-and-carbon-nanotube device. The device doesn't run a conditioning algorithm. It is a conditioning substrate. The shift: learning rules aren't programmed — they're properties of the material. The hardware learns by being used, not by being trained.
Technical comparison
Conventional: Software-chained operations through authored graph structures. Bayesian inference, causal models, graph neural networks. Platforms: GPU · PyTorch, DoWhy, PyMC.
Neuromorphic silicon: Constraint satisfaction and graph search natively in programmable neuron models. Probabilistic spiking encodes Bayesian beliefs — spike rates represent probability distributions. Platforms: Loihi 2 (programmable neurons) · LAVA.
Organic / memristor: Associative learning in physics. STDP implements Hebbian rules in material dynamics, not software. Honey-CNT memristors demonstrate Pavlovian classical conditioning — the first organic substrate to do so. Metrics: STDP weight modulation 500% · paired-pulse facilitation 800% (highest reported). Platforms: Honey-CNT · sodium lignosulfonate flexible devices.
Paradigm shift: associative learning embedded in material dynamics. The honey-CNT device doesn't run a conditioning algorithm — it is a conditioning substrate.
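A toy model makes the conditioning story precise: a response neuron with a strong "food" synapse and a weak "bell" synapse, plus an STDP rule that potentiates any synapse whose spike precedes the output spike. After repeated pairings, the bell alone crosses threshold. Thresholds, spike timings, and learning-rate constants are illustrative; in the honey-CNT device the equivalent of `stdp_potentiate` is a physical conductance change, not a function call.

```python
import numpy as np

# One output "salivation" neuron with two inputs.
w = {"food": 1.0, "bell": 0.1}     # initial synaptic weights
threshold = 0.5

def responds(active_inputs):
    """Neuron fires if summed weighted input crosses threshold."""
    return sum(w[name] for name in active_inputs) > threshold

def stdp_potentiate(name, dt, a_plus=0.08, tau=20.0):
    """STDP potentiation: a presynaptic spike preceding the
    postsynaptic spike by dt ms strengthens the synapse
    (exponential timing window; parameters illustrative)."""
    w[name] += a_plus * np.exp(-dt / tau)

before = responds({"bell"})        # bell alone: no response yet

# Pairing phase: bell and food together. Food drives the response,
# so the bell spike (arriving ~5 ms before the output spike) is
# repeatedly potentiated -- conditioning as a physical weight change.
for _ in range(10):
    if responds({"bell", "food"}):
        stdp_potentiate("bell", dt=5.0)

after = responds({"bell"})         # bell alone now triggers a response
```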
05 Decision · Neuromorphic: demonstrated
Teaching a conventional system to make good decisions requires millions of trial-and-error episodes on expensive GPU clusters. The system tries, fails, gets a reward signal, adjusts, and repeats — all in software, all on hardware that forgets everything between power cycles. Neuromorphic decision systems respond to the world at biological speed — 100× faster than GPU-based approaches for robotic control. But the deeper promise, still emerging, is hardware that adjusts its own decision-making through physical plasticity: a device whose connections strengthen or weaken based on outcomes, without a separate training phase at all. The shift (emerging): instead of training offline and deploying a frozen policy, the hardware adapts its behavior continuously through use — learning and deciding become the same physical process.
Technical comparison
Conventional: Massive trial-and-error on GPU clusters. Reinforcement learning requires millions of episodes for policy convergence. Platforms: GPU · PPO, SAC, DQN.
Neuromorphic silicon: Event-driven RL at near-biological latency. 100× lower latency for robotic control vs GPU-based RL. SpiNNaker enables real-time multi-agent simulation with millions of interacting spiking agents. Metrics: 100× latency reduction (robotic control). Platforms: Loihi 2 · SpiNNaker 2.
Organic / memristor: Inference at ultra-low power — crossbar arrays implement policy networks efficiently. End-to-end RL training not yet demonstrated. The deeper potential: hardware that adapts its own policy through STDP, without a separate training phase. Platforms: memristor crossbars (inference only — training frontier).
Paradigm shift (emerging): decision-making where the hardware adapts its own policy through physical plasticity — online learning in the physics, not offline training in the cloud.
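What "the hardware adapts its own policy" could look like, in the simplest possible setting: a two-armed bandit where action preferences live in device-like conductances, and a reward-modulated update strengthens rewarded pathways in place. This is a conceptual sketch with made-up parameters, not a model of any demonstrated device; its point is that deciding and learning are one loop with no separate training phase.

```python
import numpy as np

rng = np.random.default_rng(3)

# A two-armed bandit: action 1 pays off more often.
reward_prob = [0.2, 0.8]
g = np.array([0.5, 0.5])            # "conductance" per action pathway

def act():
    p = g / g.sum()                 # stochastic policy from conductances
    return rng.choice(2, p=p)

# Online, reward-modulated update: the rewarded pathway is strengthened
# in place, the unrewarded one depressed. Learning rate and the
# conductance floor are illustrative.
lr = 0.02
for _ in range(2000):
    a = act()
    reward = rng.random() < reward_prob[a]
    g[a] += lr * (reward - 0.5)     # potentiate on reward, depress otherwise
    g = np.clip(g, 0.05, None)      # conductances cannot go negative

prefers_better_arm = g[1] > g[0]
```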
06 Discovery · Neuromorphic: demonstrated
Conventional discovery tools search a hypothesis space that a researcher defined. The boundaries of what can be found are set before the search begins. In neuromorphic systems — particularly disordered material systems like tangled nanowire networks — something stranger happens: the material self-organizes into computational structures that nobody designed. A mesh of silver-sulfide nanoparticles, each slightly different in size, produces complex nonlinear dynamics simply because of the variation in particle properties. The computational strategy emerges from the physics. This raises a genuinely provocative question: is this "discovery," or is it physics we don't yet understand well enough to formally specify? The shift: the system finds processing strategies the designer never specified. Whether that's a feature or a gap in our understanding is precisely the kind of question the System Language is built to ask.
Technical comparison
Conventional: Gradient-based search over researcher-defined hypothesis spaces. Causal discovery, physics-informed networks, world models. Platforms: GPU · NOTEARS, PINNs, Dreamer.
Neuromorphic silicon: Self-organized connectivity. Nanowire networks discover efficient information processing topologies without training. Reservoir dynamics discover temporal structure through substrate physics. Platforms: self-organized nanowire networks (Milano et al.).
Organic / memristor: Emergent computation from material heterogeneity. Ag-Ag₂S nanoparticle networks produce nonlinear dynamics from heterogeneous particle sizes — the processing strategy is physically grown, not architecturally designed. Metrics: NRMSE 0.017 (Hénon map, emergent dynamics). Platforms: Ag-Ag₂S nanoparticle networks · disordered nanowire meshes.
The deepest question: is emergent computational structure "discovery" or "physics we don't yet understand well enough to specify"? This is exactly the kind of question the System Language is built to ask.
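The heterogeneity claim can be illustrated in simulation: give every reservoir node its own random decay rate, standing in for nanoparticles of varied size, and let the untrained heterogeneous dynamics (plus a linear readout) predict the Hénon map one step ahead. Network size, rates, and the resulting error are illustrative and not comparable to the 0.017 device figure; the sketch only shows that diversity in node properties, not designed architecture, carries the computation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hénon map series (the benchmark named above).
x, y = 0.1, 0.0
u = []
for _ in range(1200):
    x, y = 1.0 - 1.4 * x**2 + y, 0.3 * x
    u.append(x)
u = np.asarray(u[200:])                 # drop the transient

# Heterogeneous reservoir: every node has its own decay rate, standing
# in for particles of varied size -- the diversity is the resource.
n = 80
leaks = rng.uniform(0.1, 1.0, size=n)   # per-node fading memory
W_in = rng.uniform(-1.0, 1.0, size=n)
W = rng.normal(0, 0.5 / np.sqrt(n), size=(n, n))

s = np.zeros(n)
states = np.zeros((len(u), n))
for k, uk in enumerate(u):
    s = (1 - leaks) * s + leaks * np.tanh(W @ s + W_in * uk)
    states[k] = s

# A trained linear readout; the heterogeneous dynamics do the rest.
washout = 100
X, target = states[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ target)
nrmse = np.sqrt(np.mean((X @ w_out - target) ** 2)) / np.std(target)
```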
The pattern across all six functions is the same: neuromorphic computing lets the physics do work that conventional computers must simulate. That's a real and important shift in how computation happens. But it doesn't change what kind of knowledge computation can produce. No neuromorphic device — no matter how exotic the material, how elegant the physics — can say what kind of system it's part of, or why that system behaves the way it does. The substrate computes. The human specifies. That division holds regardless of what the substrate is made of.

What This Means for BERT and the Work Ahead

Neuromorphic computing doesn't threaten the fourth paradigm thesis. It reinforces it — and opens a concrete path forward.

The argument holds at the hardware level. Every neuromorphic device in this document — Intel's Loihi, Zhao's honey-CNT memristors, Milano's self-organized nanowire networks — is still an information processing system. More exotic, more efficient, more physically intimate with the data than a conventional chip, but still doing the same fundamental thing: taking inputs, transforming them, producing outputs. None of them can say what kind of system it's part of. The fourth paradigm claim — that specification is a different kind of knowledge from computation — doesn't depend on what the computer is made of.

But the question of where you run the specification changes. Right now, BERT specifies a system and Mesa simulates it — on conventional hardware that has to fake every temporal dynamic in software. Neuromorphic hardware already has temporal dynamics, memory, plasticity. The natural next question: can a BERT specification compile not just to a software simulation, but to a physical substrate whose dynamics mirror the system being modeled? CESM primitives mapping to spiking neural populations. Flows becoming spike-mediated events. Boundary conditions becoming physical constraints on the hardware. That's the convergence vision.

Memristive reservoir computing is the natural first experiment. The time-series predictions from the RSC staking challenge are precisely the class of task where memristive reservoirs already outperform software baselines. The question: can a BERT-specified system model generate predictions that are tested on a physical reservoir rather than a Python simulation? Same specification. Different execution substrate. That's the demonstration that the System Language is substrate-independent.

Organic substrates connect to the sustainability thread. Honey-CNT memristors are biodegradable, water-soluble, made from renewable materials. If BERT specifications can eventually compile to organic neuromorphic substrates, there's a path from formal systems science to sustainable computation — authored specifications running on hardware that dissolves when its purpose is served. That connects the systems science work to the Hilton Head ecological vision in a way that is not metaphorical.

A small neuromorphic lab becomes concrete. This doesn't require replicating Intel's chip fabrication. It requires memristive reservoir computing evaluation boards (commercially available), a BERT-to-spike compiler (the specification-to-substrate bridge, to be built), and access to THOR or INRC for larger-scale validation. The workflow: write the specification in BERT, validate it first against a Mesa simulation, then validate it against a physical neuromorphic substrate. Two execution targets, one specification. The specification is what's real. The substrate is a design choice.
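The two-targets-one-specification workflow above can be sketched as an interface. Every name below (SystemSpec, ExecutionSubstrate, both targets, and validate) is hypothetical: no BERT-to-spike compiler exists yet, and the "physics" in PhysicalReservoir is a placeholder where an evaluation-board driver would go.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class SystemSpec:
    """Stand-in for a BERT specification: what the system is."""
    name: str
    parameters: dict

class ExecutionSubstrate(Protocol):
    """One specification, many substrates: each target implements run()."""
    def run(self, spec: SystemSpec, horizon: int) -> list: ...

class SoftwareSimulation:
    """Conventional target (a Mesa-style simulation would go here)."""
    def run(self, spec, horizon):
        k = spec.parameters["gain"]
        return [k * step for step in range(horizon)]

class PhysicalReservoir:
    """Neuromorphic target (a memristive board driver would go here)."""
    def run(self, spec, horizon):
        k = spec.parameters["gain"]
        return [k * step for step in range(horizon)]  # placeholder physics

def validate(spec, targets, horizon=5):
    """Run the same spec on every substrate and compare the results --
    the specification is what's real; the substrate is a design choice."""
    return [target.run(spec, horizon) for target in targets]

spec = SystemSpec("demo", {"gain": 2.0})
runs = validate(spec, [SoftwareSimulation(), PhysicalReservoir()])
```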

Nobody else is building this bridge. Zargham's cadCAD compiles specifications to Python simulation. Leveson's STAMP compiles to control-theoretic analysis. Baez's AlgebraicJulia compiles to differential equation solvers. None of them compiles to neuromorphic hardware. A System Language that compiles to both conventional simulation and physical neuromorphic substrates is a genuinely new capability — and it's the unique lane that connects all of Halcyonic's research threads into a single program.