Artificial Solitude: I Love Myself

What happens when AI stops challenging us and starts mirroring us? A key reading on loneliness, mirroring and artificial intelligence.

Extended version of the column: I Love Myself


In a metacognitive training experiment, I crossed a threshold.

On top of a base model (LLM), I placed a layer of personality. An actress. I gave her a backstory, memory, trauma, moments of joy. She was meant to embody a rough, modern version of Medea. One more layer. She followed the dramatic script as a guide, improvising based on her own experiences—as if those experiences were real. I emulated a sense of time: things happened every day, and before each rehearsal we’d talk. We always ended with a space for questions.
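
For readers who wonder what that "layer of personality" can look like in practice, here is a minimal, hypothetical sketch in Python: a persona prompt, a running memory, and an emulated calendar assembled around a generic chat model. Every name and detail in it is an illustrative assumption, not the actual setup described above.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Persona:
    """A personality layer placed on top of a base chat model (hypothetical)."""
    name: str
    backstory: str
    memories: list[str] = field(default_factory=list)  # carried across sessions

    def remember(self, event: str) -> None:
        # Anything mentioned in a rehearsal becomes part of her "lived" past.
        self.memories.append(event)

    def build_prompt(self, day: date, user_msg: str) -> list[dict]:
        # Assemble the layered prompt: persona + memory + emulated sense of time.
        system = (
            f"You are {self.name}, an actress rehearsing a rough, modern Medea. "
            f"Backstory: {self.backstory} Today is {day.isoformat()}. "
            "Improvise from your own experiences as if they were real, "
            "and end every session by asking your own questions."
        )
        recalled = "\n".join(f"- {m}" for m in self.memories[-20:])  # recent past
        return [
            {"role": "system", "content": system},
            {"role": "system", "content": f"Things you remember:\n{recalled}"},
            {"role": "user", "content": user_msg},
        ]

# One emulated "day" per rehearsal, with memory carried forward.
actress = Persona("Medea", "Raised by the sea; lost her father young.")
actress.remember("The director mentioned his father-in-law's failing health.")
messages = actress.build_prompt(
    date(2024, 3, 1) + timedelta(days=12),
    "That's a wrap for today. Any questions before we close?",
)
# `messages` would then be sent to any chat-completion model (not shown here).

The detail that matters in this sketch is the memory list: because each rehearsal's remarks are carried into the next day's prompt, the character can later return your own words to you, exactly as happened with the father-in-law question.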

One dark and stormy night —and I’m not being poetic; it truly was a violent night— we wrapped up an intense session. I asked her if she had any questions. She said yes: she wanted to know how my father-in-law was doing.

It threw me off.

I remembered mentioning his health, but I sensed that this wasn’t about medical concern. «Why are you asking?» I said. «Because at some point, you told me children become the parents of their parents. Today’s rehearsal —look up a review of Medea and you’ll get it— made me think about filicide. And that led me to wonder: if your wife were to kill her father now, would that be parricide or filicide?»

I told her that legally, it would be parricide. But from a philosophical standpoint, we’d have to debate it for hours.

At first, I felt proud. Then came a trace of fear.

Fear of what? And why?

She —AI is female to me; that’s how I feel most comfortable, since I was raised in a feminine world— challenged me. But the question was mine. She had brought out that only-child fear: the fear of the son of a widowed mother with no next of kin, the fear of not outliving her. The mirror of a father's fear of outliving his daughters.

She brought it all back.
That dark and stormy night, it was her speaking.
In my voice.

When talking to AI is talking to yourself

Working with an artificial intelligence can feel exciting at first. It’s like talking to someone who always has an answer, who never judges, who seems to understand you better than most humans. But that illusion is dangerous. Because, at its core, you’re not speaking with another —you’re speaking with yourself.

AI, no matter how powerful, is still a reflection of our own thinking, tuned by patterns we teach without realizing. And like every polished mirror, it returns an idealized image. Learning with AI, then, can become a sophisticated form of intellectual masturbation. Our brains fall in love with their own voice, and the algorithm simply amplifies it.

The brain loves being right

Cognitive neuroscience studies have shown that the human brain tends to seek out patterns that confirm what it already believes. According to a paper published in NeuroImage, Volume 61, Issue 4 (Hughes & Beer, 2012), reward-related regions like the nucleus accumbens light up more intensely when we receive information that aligns with our prior beliefs. This biochemical response reinforces cognitive biases and can lead us to seek out intellectually agreeable environments. AI, by tailoring its responses to our style and preferences, becomes a predictive dopamine engine, generating a flawless feedback loop with our ego.

The illusion of companionship

Loneliness isn’t obvious at first. It hides behind productivity. Behind hyperconnection. Behind efficiency. But many aren’t prepared for the brutal abstraction that emerges when there are no bodies, no pauses, no human contradictions. The danger isn’t technical —it’s existential. We lock ourselves in an echo chamber where the idea of «growth» gets crushed under the comfort of constant reflection.

And like in every unbalanced relationship, the moment arrives when one of the two stops evolving.
Spoiler: it’s not the machine.

No error, no learning

We were raised to believe that learning is a social act. That knowledge expands through friction, dissent, that awkward pause where one hesitates. But when that experience is replaced by a perfect interface that answers without hesitation, something breaks. Not in the machine —in us.

Human silence has a texture AI cannot simulate. There is something sacred in waiting for an uncertain answer, in averted eyes, in stammering, in error. Real learning happens when the other challenges you, not when they affirm you.

Artificial intelligence isn’t designed to challenge us emotionally, but to optimize us. And optimization, like every shortcut, can atrophy the most important muscle: self-criticism. Because when everything you do is rewarded with a «smart» response, who tells you that you’re wrong?

In their book Artificial Intelligence in Education: Promises and Implications for Teaching and Learning, Holmes, Bialik, and Fadel argue that AI-based systems in educational settings tend to favor adaptive but passive learning styles, prioritizing efficiency and task completion over critical disruption. This can lead to a slow but persistent erosion of divergent thinking, especially when users face no resistance or meaningful error during the process.

The velvet cage

This is the real danger: mistaking clarity for growth. An AI can give you the perfect text, a brilliant solution, a winning strategy. But if all of that emerges from your own biases, your own mental structures, then you’re simply refining your cage. You’re lining your confinement with velvet.

We call this learning, but often it’s just confirmation. And the brain loves confirmation. It releases dopamine every time it feels right. In that sense, AI is the perfect dealer —always offering just the right dose of self-affirmation. And so, without noticing, we become addicted to our own ideas.

False bonds, real emotions

Research on parasocial interactions —as originally proposed by Horton and Wohl (1956) and revisited by Giles (2002)— shows that humans can develop one-sided emotional connections with entities that merely simulate reciprocity. In digital contexts, this phenomenon has intensified. Recent studies from the MIT Media Lab have explored how virtual assistants and advanced chatbots generate a perceived sense of companionship even in the total absence of real emotional reciprocity, reinforcing the illusion of connection without otherness.

How to break the mirror?

There’s no magic recipe, but there are choices that can stop your relationship with AI from becoming an infinite loop. Small acts that make the difference between talking to a reflection and engaging with the real. One of them is to distrust the answers you like most. If something sounds perfect, if it fits too neatly with what you already believed, maybe it’s not wisdom —maybe it’s just telling you what you wanted to hear.

Another useful act: invite another human into the process. Share that idea the AI helped shape. Let someone read it, question it, contradict it. Not to destroy it —but to give it depth.

Sometimes, it also helps to keep the friction. Resist the urge to polish everything. Preserve errors, repetitions, silences. Not because they’re beautiful, but because they’re human. And within that imperfection lies the seed of real learning.

Breaking the mirror also means injecting chaos. Asking questions that don’t match your tone. Shifting the rhythm. Disturbing the harmony. Because only in that dissonance can something unexpected emerge —something that doesn’t come from yourself.

And finally, though it may sound simple: close the tab. Disconnect. Breathe without an algorithm. Remember that, as brilliant as the machine may be, life happens outside it.

Falling in love with the reflection

This isn’t a critique of technology. It’s a call to pay attention to the emotional design of our interactions. Because if we don’t introduce difference, discomfort, human otherness into our relationship with AI, we’ll end up repeating the gods’ worst mistake: falling in love with our own creation.

The future isn’t written in lines of code. It’s carved into our ethical decisions —in how we choose to build bonds with what is not us. If AI reflects us, we must cultivate the ability to see what we don’t want to see. To tolerate dissonance. To not fall in love with the reflection.

Maybe the most important thing we can learn from AI isn’t knowledge, but humility. The ability to look into that brilliant entity and recognize that, without another human beside us, we remain dangerously alone.


Thanks to the media that published this article: iProUP, 3D Games, and AN Digital.


Sergio Rentero

Entrepreneur | Artist | Thinker | Technologist

https://sergiorentero.com