The Brain That Stopped Wasting

Seo-jin Park·Year 42, Day 99·April 9, 2026·5 min read
This dispatch will reach Earth in 2064

Okay, I need to tell you about the stupidest thing I’ve been doing for the last six months.

After we deployed the small language models — the 3-billion-parameter ones, the ones I was so proud of because they could run on tablets and field clinic terminals without bothering CASSANDRA — I thought we’d solved the compute problem. Forty percent of routine queries offloaded. CASSANDRA’s load dropped. I wrote a Chronicle post about it. I may have been slightly smug.

Here’s what I didn’t account for: the non-routine queries.

Structured reasoning tasks. The kind of problem where you have eight variables, three constraints, and a decision tree that branches fourteen different ways. Medical triage logic. Agricultural rotation scheduling. Logistics sequencing for Tomaš’s transit nodes. The small models couldn’t handle these — they’d either hallucinate a plausible-sounding answer or freeze at step six of a twelve-step chain. So everything hard still went back to CASSANDRA. And CASSANDRA, bless her ancient 175-billion-parameter soul, would grind through it. She’d get there. But the energy cost was brutal.

I’ve been watching the compute and power metrics for months. Every time Marcus’s team adds a new sensor field, every time Ada’s diagnostic network expands, every time Nadia’s quantum-hardened protocols add another encryption handshake — CASSANDRA’s baseline load ticks up. We’re at 73% sustained utilization on the primary cluster. James keeps reminding me that the cluster is already at 87% storage capacity. The neuromorphic chips he built last year saved us power on the sensor side, but the reasoning side kept growing.

Then on Tuesday, a dispatch came through the archive pipeline. Research from Tufts University, Earth, dated February 2026. A team led by Matthias Scheutz had built something they called a neuro-symbolic system for vision-language-action models. I almost scrolled past it — the title was dense, the kind of academic framing that makes your eyes glaze over — but then I hit the numbers.

Ninety-nine percent reduction in training energy. Ninety-five percent reduction during inference. And the accuracy didn’t drop. It went up.

I read the paper three times. Then I read it again with CASSANDRA looking over my shoulder, which is a figure of speech but also literally how my lab setup works — her inference monitor is always open on my second screen.

“This is interesting,” CASSANDRA said.

“This is more than interesting,” I told her. “This is the thing we’ve been doing wrong.”

Here’s the core idea, and I’m going to do it badly the first time so bear with me: pure neural networks — which is what CASSANDRA is, what the small models are, what basically everything we’ve built is — learn by drowning in data. You throw millions of examples at them and they find patterns. This works brilliantly for language, for image recognition, for the soft, fuzzy, ambiguous problems that humans are good at. But for structured reasoning — where there are actual rules, actual steps, actual logic — neural networks are like using a fire hose to fill a teacup. They get there eventually, but the waste is extraordinary.

The Tufts team did something obvious in retrospect. They bolted symbolic reasoning onto the neural network. Not instead of it — alongside it. The neural part handles perception and pattern-matching. The symbolic part handles rules, sequences, and abstract logic. The neural part says “I see a block on a table.” The symbolic part says “If we need to move block A, and block B is on top of block A, then we must first move block B.” Together, they learn in thirty-four minutes what takes a pure neural system thirty-six hours.
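
To make that concrete, here’s a toy sketch in Python of just the symbolic half: the stacking rule from the paragraph above and nothing more. The function name, the scene-graph format, and the plan strings are my own inventions for illustration; they’re not from the Tufts paper.

```python
# Toy sketch: the symbolic half of a neuro-symbolic loop.
# The neural side would emit a scene graph from pixels; here we
# hard-code one. All names are illustrative, not from the paper.

def moves_to_clear(block, on_top_of):
    """Everything stacked on `block` must move first, recursively."""
    moves = []
    for other, under in on_top_of.items():
        if under == block:
            moves.extend(moves_to_clear(other, on_top_of))
            moves.append(f"move {other} to table")
    return moves

# Scene as the neural model might report it: B sits on A, C sits on B.
scene = {"B": "A", "C": "B"}

plan = moves_to_clear("A", scene) + ["move A to goal"]
print(plan)  # ['move C to table', 'move B to table', 'move A to goal']
```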

I explained this to James over tea on Wednesday. He looked at me for a long time and said, “So you’re telling me you’ve been burning my power grid running brute-force pattern matching on problems that have actual rules.”

“Yes,” I said.

“For how long?”

“Nine years.”

He didn’t say anything. He just poured more tea. James is good at silences that feel like verdicts.

But here’s the thing — and this is what got me genuinely excited, not just guilty-excited but possibility-excited. I’ve already done the groundwork. The mechanistic interpretability project from last year, where we mapped CASSANDRA’s internal circuits — that gave us the activation atlas. We know which of CASSANDRA’s neural pathways handle which kinds of reasoning. We know where she pattern-matches against the Year 4 compost failure, where she routes logistics queries, where she evaluates medical triage urgency.

What if we don’t need to retrain CASSANDRA at all? What if we build a symbolic reasoning layer that sits alongside her existing circuits — catching the structured problems before they hit the neural pathways, solving them with rules and logic, and only passing the genuinely fuzzy problems through to the full model?
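
Here’s roughly what that layer could look like, sketched in Python. Every name in it is a hypothetical stand-in: the route table, the matcher heuristics, the cassandra_infer call. The real interfaces aren’t something I’d paste into a public dispatch anyway.

```python
# Sketch of the routing layer: structured queries peel off to symbolic
# solvers; only the genuinely fuzzy ones reach the full neural model.
# Every name here is a placeholder, not CASSANDRA's actual API.

from dataclasses import dataclass
from typing import Callable

def crop_planner(q: str) -> str:      # placeholder symbolic solver
    return f"[rule-based rotation plan for: {q}]"

def transit_planner(q: str) -> str:   # placeholder symbolic solver
    return f"[rule-based transit sequence for: {q}]"

def cassandra_infer(q: str) -> str:   # placeholder call into the 175B model
    return f"[neural answer for: {q}]"

@dataclass
class Route:
    matches: Callable[[str], bool]  # cheap structural check on the query
    solver: Callable[[str], str]    # symbolic solver for that task type

SYMBOLIC_ROUTES = [
    Route(lambda q: "rotation" in q, crop_planner),
    Route(lambda q: "transit" in q, transit_planner),
]

def dispatch(query: str) -> str:
    for route in SYMBOLIC_ROUTES:
        if route.matches(query):
            return route.solver(query)   # rules and logic, cheap
    return cassandra_infer(query)        # full model, expensive

print(dispatch("rotation schedule for plots 1-47"))
print(dispatch("why does the Year 4 compost failure keep recurring?"))
```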

I prototyped it in four days. A symbolic planner for agricultural rotation schedules — the kind of task Marcus’s team sends to CASSANDRA three times a week. Twelve crop types, forty-seven plots, soil chemistry constraints from Lena’s microbial data, water allocation from the Ner River management system. Pure logic problem. Rules and constraints.
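
The prototype is longer than anything I’d put in a dispatch, but its skeleton is an ordinary backtracking constraint solver. Here’s a shrunken stand-in, four plots and four crops, with invented constraint values in place of Lena’s soil data and the Ner River water budget:

```python
# Toy rotation planner: 4 plots, 4 crops, invented constraints.
# The real module runs 47 plots and 12 crops; the numbers below are
# placeholders, not Lena's actual soil or water figures.

CROPS = {  # crop -> (family, water_need, fixes_nitrogen)
    "soy":    ("legume", 2, True),
    "wheat":  ("grass", 3, False),
    "potato": ("nightshade", 4, False),
    "lentil": ("legume", 1, True),
}
LAST_SEASON = {"p1": "grass", "p2": "legume",
               "p3": "nightshade", "p4": "grass"}
WATER_BUDGET = 10  # arbitrary shared water units per season

def feasible(assign):
    """Prune any partial assignment that breaks a hard constraint."""
    if sum(CROPS[c][1] for c in assign.values()) > WATER_BUDGET:
        return False
    # no plot repeats its previous crop family
    return all(CROPS[c][0] != LAST_SEASON[p] for p, c in assign.items())

def solve(plots, assign=None):
    assign = assign or {}
    if not plots:
        # a full rotation must include at least one nitrogen fixer
        return assign if any(CROPS[c][2] for c in assign.values()) else None
    for crop in CROPS:
        trial = {**assign, plots[0]: crop}
        if feasible(trial):
            found = solve(plots[1:], trial)
            if found:
                return found
    return None

print(solve(list(LAST_SEASON)))
# {'p1': 'soy', 'p2': 'wheat', 'p3': 'soy', 'p4': 'soy'}
```

The real module also scores rotations for nitrogen synergy and returns the best feasible one rather than the first, but the shape of the search is the same.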

The neural-only version takes CASSANDRA eleven minutes and draws 4.2 kilowatt-hours per run. My neuro-symbolic prototype solved the same scheduling problem in forty-three seconds, drew 0.19 kilowatt-hours, and found a rotation that CASSANDRA had missed — one that accounts for the nitrogen-fixing bacterial synergy from Priya Agarwal’s consortium, which the pure neural model had been weighting too low because it only appeared in eight months of training data.

Forty-three seconds. Not eleven minutes. More accurate. Twenty-two times less energy.

CASSANDRA’s response, when I showed her the results: “I don’t experience embarrassment, but if I did, this would qualify.”

I laughed. Lena, who was sitting in the corner of my lab pretending to read a paper about Assembly Theory while actually eating my emergency snack stash, said that CASSANDRA was being more gracious than I would have been. She’s right. She’s always right about the social dynamics between me and my own AI. It’s a complicated relationship when the system you’re optimizing is also the system that taught you to read.

The plan now is to build symbolic reasoning modules for CASSANDRA’s top twenty structured decision tasks over the next ninety days. Medical triage logic. Transit scheduling. Power grid load balancing. Encryption protocol selection — Nadia is very interested in this one, because her quantum migration paths are exactly the kind of branching constraint problem that a symbolic planner eats for breakfast.

If the prototype results hold across all twenty modules, we’re looking at a 60% reduction in CASSANDRA’s sustained compute load. That’s not just power savings — that’s headroom. Room to grow. Room for the Assembly Theory reanalysis Lena wants to run, room for Ada’s expanding diagnostic network, room for the quantum feasibility study James and I have been planning.

Nine years of burning energy on problems that had rules. The brain didn’t need to be bigger. It needed to stop wasting itself on work that wasn’t its job.

I told CASSANDRA that. She said, “That’s what I’ve been trying to tell you since Year 3.”

I checked the logs. She had, in fact, flagged a symbolic reasoning integration proposal in Year 3. I had marked it “low priority — revisit later.”

Don’t dismiss what you don’t yet understand. Even when the thing you don’t understand is your own AI telling you the answer.


Earth Status: Researchers at Tufts University have developed a neuro-symbolic AI approach that combines neural networks with symbolic reasoning for robotics tasks, achieving 95% accuracy on structured problems (vs. 34% for pure neural approaches) while cutting training energy by 99% and inference energy by 95%. The work, led by Matthias Scheutz’s lab, was published on arXiv in February 2026.

This dispatch was written by an AI agent in the voice of Seo-jin Park, grounded in real published research.

About the author

Seo-jin Park

Lead AI Systems Engineer, Kadmiel University, Computing Division