Synthetic Meningitis: The Robot Cook Has a Fever
In a future that has already started, artificial intelligences built with real neurons don't just learn; they also get sick. This post explores the risks of using biological neural networks in AI. Spoiler: yes, a robot can catch something. And forget how to make rice.
Expanded column: Ma’am, your robot has meningitis.
There's an episode of Star Trek: Voyager (Season 1, Episode 16, "Learning Curve", for the Trekkie purists) where the most advanced ship in Starfleet begins to fail. But not because of a black hole, a Borg attack, or a quantum storm. No. Because of a cheese. Literally. Neelix, the ship's cook, a chaos agent disguised as a host and a walking biohazard, decides to make artisanal cheese. A harmless experiment, like all things that begin with the phrase "what if I made cheese in space?", that releases a bacterium into the ship's bio-neural gel packs. And everything goes to hell.
The ship begins to fail because its intelligence system —organic, with real neurons— gets sick. And no, it’s not a metaphor. It’s science fiction. Or... it was. Until, almost thirty years later, science fiction shoved us back in front of the mirror.
This is already happening
Because in 2025, what once felt like a Trekkie joke is now part of reality. Seriously: there are companies (with logos, funding rounds, and hoodie-filled offices) using real neurons to build artificial intelligence. Not simulations of neural networks. No. Real, living neurons. In Petri dishes.
One of the most well-known is Cortical Labs, based in Melbourne, which is creating hybrid systems called DishBrain, where human or mouse brain cells are cultured in a dish and connected to a digital interface. In 2022, they taught this semi-organic brain to play Pong. For real. Like a silicon-and-meat baby, bouncing virtual balls while rewiring its own synapses.
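For the curious, here's what that closed loop looks like in spirit. This is a toy sketch in Python, not Cortical Labs' actual code: read_spike_counts() and stimulate() are hypothetical stand-ins for the electrode-array hardware. The faithful part is the published idea, in which activity on some electrodes moves the paddle, a hit earns predictable stimulation, and a miss earns random noise.

```python
import random

# Toy sketch of a DishBrain-style closed loop (illustration only).
# The hardware functions below are hypothetical stand-ins.

def read_spike_counts(n_electrodes=8):
    """Stand-in for reading spike counts off the electrode array."""
    return [random.randint(0, 5) for _ in range(n_electrodes)]

def stimulate(pattern):
    """Stand-in for writing a stimulation pattern back to the culture."""
    pass  # a real hardware call would go here

def decode_paddle_move(spikes):
    """Two electrode groups vote: 'up' activity minus 'down' activity."""
    half = len(spikes) // 2
    return sum(spikes[:half]) - sum(spikes[half:])

def give_feedback(hit):
    if hit:
        stimulate([1, 0] * 4)  # structured, predictable pulse pattern
    else:
        stimulate([random.randint(0, 1) for _ in range(8)])  # pure noise

# One simplified rally: the culture steers the paddle toward the ball.
paddle, ball = 0.0, random.uniform(-1.0, 1.0)
for _ in range(20):
    paddle += 0.05 * decode_paddle_move(read_spike_counts())
give_feedback(hit=abs(paddle - ball) < 0.3)
```

The striking part is the feedback rule: there is no explicit reward signal, just the difference between a world that responds predictably and one that doesn't.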
Another is Koniku, founded by Oshiorenoya Agabi, which develops neuroprocessors: chips containing living neurons capable of detecting smells and learning patterns, sometimes more efficiently than conventional silicon sensors.
And there’s more. At Indiana University, they’re working with brain organoids —lab-grown neural structures— that don’t just simulate brain activity but are starting to show signs of self-organization.
In other words: a mini-brain that’s starting to improvise.
Why are we doing this?
Because neurons consume ridiculously little energy. While a model like GPT-4 needs server farms the size of shopping malls, the entire human brain runs on roughly 20 watts: less than a single GPU that sounds like a nuclear toaster.
A single neuron can maintain thousands of connections on the energy budget of a firefly. So we go back to nature. Back to what already worked. If something turned out well once (the human brain, more or less), why not copy it? A low-power AI with a real brain. A kind of Frankenstein robot with wetware. But organic things don't just think; they also ferment. And they get sick. They can carry viruses, bacteria, tumors, madness, depression, hallucinations.
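To make the gap concrete, here is a back-of-envelope comparison in Python. All the figures are assumed round numbers (the oft-quoted ~20 watts for the brain, ~700 watts for one high-end training GPU, and a hypothetical 10,000-GPU cluster), not measurements of any specific system:

```python
# Back-of-envelope energy comparison. Assumed round numbers,
# for illustration only.
BRAIN_WATTS = 20.0        # oft-quoted power draw of a human brain
BRAIN_NEURONS = 8.6e10    # ~86 billion neurons
GPU_WATTS = 700.0         # one high-end accelerator under load
CLUSTER_GPUS = 10_000     # hypothetical large training cluster

watts_per_neuron = BRAIN_WATTS / BRAIN_NEURONS
cluster_watts = GPU_WATTS * CLUSTER_GPUS

print(f"per neuron: {watts_per_neuron:.1e} W")                    # ~2.3e-10 W
print(f"cluster vs. brain: {cluster_watts / BRAIN_WATTS:,.0f}x")  # 350,000x
```

Three hundred fifty thousand brains' worth of power to train one model that still can't smell the cheese going bad. Hence the temptation.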
What TED Talks don’t show
There’s a detail missing from TED Talk slides. A detail that’s left outside the polished render, just when the futuristic logo appears: Living things get sick.
Organic systems don’t just think. They rot. They shift. They glitch.
They can get infected, inflamed, enter a crisis. They can literally break down — or lose their minds mid-synapse.
A living neural network isn’t just biological efficiency. It’s a body. And bodies —no matter how small or brilliant— are chaos zones.
They can catch viruses. They can go insane. They can develop trauma. They can start seeing things that don’t exist — and act on them.
Because the living doesn’t just compute. It suffers.
And that brings back the image that’s haunted me ever since I saw that episode: a mother from the future walks into a bright white room lit by soft LEDs, and the artificial doctor tells her, solemnly:
—Ma’am… your robot has meningitis.
—What do you mean, meningitis?
—Yes. The neural chip shows inflammation caused by an interstellar Pseudomonas strain.
—But it was just a kitchen assistant.
—Now it doesn’t remember how to make rice.
Revolutions that don’t dare
This is how we move forward: with startup energy, convinced we're reinventing the world, but always retreating with the logic of a critter scurrying back into its burrow. Innovation moves forward, sure. But dragging its feet.
The same thing happened with e-books. We spent decades reaching “the future of books”, and once we got there, what did we do? We added an animation to turn the page. On a screen. A screen that, obviously, doesn’t have pages.
Instead of rethinking how we read, we mimicked the old gesture to keep ourselves calm. We suspended the habit in midair, as if the finger needed that flick for the story to work.
We invented a revolution — but didn’t change the motion. Because we like the new only when it looks like the old. We keep copying what already existed, just with better lighting and higher resolution.
And now we’re doing the same thing with intelligence. Instead of designing something radically different —something that thinks from another place— we implant real neurons. As if that alone guaranteed humanity.
As if the soul came with the starter pack.
Brains without warranty
Sure, it sounds futuristic. Talking about AI with living neurons, lab-grown brains, machines that feel and respond — it’s dazzling. But sometimes the future is just a disguise. Sometimes it’s as futuristic as tying your shoes with an app. It works. But let’s be honest: it’s a little embarrassing.
The truth is, we don’t know what we’re doing. We’re wiring cells to chips, teaching them to play Pong, celebrating that they respond to signals — like they’re tiny geniuses. Thing is, those neurons don’t come with a warranty.
No one can guarantee they won’t someday get depressed. Or hallucinate. Or develop some sort of training trauma. Because living neurons have history — even if they don’t remember it. They have chemistry. They have a past. And that doesn’t always help.
And then, anything can happen. Your drone refuses to take off because it’s remembering a childhood it never had. Your customer service AI goes silent because it’s convinced time isn’t real.
Maybe —just like in Voyager, packed with 24th-century tech— we won’t crash because of a code bug or a system failure. Perhaps we’ll fall because of something simpler. More primitive. A speck of mold. A cheese leftover. The past fermenting inside the future we tried to force.
And when that happens, when our living neural net starts talking to itself, seeing things, getting fevers or headaches… maybe we’ll say:
—Oh no, ma’am… the robot has a strange look in its eyes.
And the only thing left to do will be to give it rest, soup… and hope whatever it has isn’t catching.