Humanus Ex Machina: AI Has No One Left to Teach It

Current education fails to prepare humans to oversee AI. Without critical thinking, we risk a future of algorithmic biases and knowledge inequality.

Extended version of the column: AI Has No One Left to Teach It


Should Education Follow Trends?

Governments proclaim loudly: let’s have programming courses, robotics courses, artificial intelligence courses. The real question is: are early childhood, primary, and secondary curricula being adapted so that graduates can actually benefit from these courses? Let me be even more direct: are we teaching people how to think?

Because it doesn't matter whether someone ends up becoming a programmer, an engineer, an architect, or an artist: they still need general culture, logical thinking, critical thinking, and a grasp of the scientific method. Without these foundations, any advanced course is just an instruction manual, not real education.

When I learned to design and program at the age of 10 (a story for another day), engineers taught me with diagrams of how to prepare a barbecue. I swear I never ran out of matches after that. Because it wasn't just about programming; it was about learning to break down problems, think in processes, and understand cause and effect.

Today, as artificial intelligence advances, we face a critical problem: if we don’t strengthen basic education, we will end up with generations training AI models without understanding the consequences of their biases, without the ability to question the quality of the data or the impact of algorithmic decisions. In other words, a society where a few understand and control technology while the rest serve as operational labor.

The Cognitive Deficit in Education

The problem isn’t the lack of AI courses. The real problem is that we are not educating humans capable of reasoning about AI. Logical thinking, the scientific method, and critical analysis are not skills you acquire magically in a course. They are mental structures that must be developed from basic education onward.

Yet the educational system continues to prioritize memorization over understanding. Instead of teaching students how to question and analyze, it hands them fragmented facts to repeat without context. As a result, they grow up without the tools to identify biases, assess the truthfulness of information, or ask relevant questions.

This is not a new problem. Historically, many societies have operated with highly centralized knowledge, where the majority merely executed tasks without understanding their foundations. The Aztec civilization, for instance, had an elite of scholars mastering mathematics, astronomy, and architecture, while the rest of the population simply obeyed without access to that knowledge.

If we carry this model into the future, we risk replicating a similar structure: a handful of experts training and controlling AI, while the rest just use it without understanding how it works or how it could be manipulated against them.

AI Without Thinking Humans: A Latent Danger

Artificial intelligence, no matter how advanced it seems, cannot think beyond the limits of its training. It merely replicates and amplifies the patterns found in the data it was fed. Yes, there are techniques like reinforcement learning, continual fine-tuning, or even self-distillation between models that allow a system to improve its performance on certain tasks. But that "self-training" is always self-referential: it draws on its own data and internal metrics, with no external criteria and no awareness of what it leaves out.

An AI can refine what it already knows, but it cannot rethink itself. It cannot question the foundations of its training, detect structural biases, or wonder whether the path it follows makes sense. For that, it needs humans who can think critically, who can bring new perspectives, who can teach it to see the world from different realities. Without that intervention, AI doesn’t evolve: it merely repeats itself with greater precision.

Here arises a paradox: if there aren’t enough humans with critical thinking to train, supervise, and correct AI, it becomes useless or dangerous.

If knowledge remains concentrated in a few hands, AI will only reflect the vision of that elite. And when knowledge isn’t renewed or questioned from multiple perspectives, we fall into a cycle of knowledge endogamy: an AI that recycles the same biases, repeating errors without real evolution.
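To make that cycle of knowledge endogamy concrete, here is a toy sketch in Python; it is not from the column, and the numbers and setup are purely illustrative assumptions. Knowledge is modeled as a distribution over distinct "ideas", and each new generation of the model is trained only on a finite corpus sampled from the previous one. Any idea that happens not to be sampled is lost for good.

```python
# Toy sketch of "knowledge endogamy" (illustrative only, not the column's own code):
# a model retrained, generation after generation, solely on its own samples.
# Knowledge is a probability distribution over distinct "ideas"; each generation
# re-estimates that distribution from a finite synthetic corpus. Ideas that are
# never sampled drop to zero probability and can never return, so diversity only
# shrinks: the system gets sharper about less and less.

import numpy as np

rng = np.random.default_rng(0)

N_IDEAS = 100        # distinct ideas present in the original, human-made corpus
CORPUS_SIZE = 200    # synthetic examples produced per generation
GENERATIONS = 20

probs = np.full(N_IDEAS, 1.0 / N_IDEAS)   # generation 0: every idea equally present

for gen in range(GENERATIONS):
    surviving = np.count_nonzero(probs)
    print(f"generation {gen:2d}: {surviving:3d} ideas still represented")
    # The next model is trained only on data sampled from the current model.
    corpus = rng.choice(N_IDEAS, size=CORPUS_SIZE, p=probs)
    counts = np.bincount(corpus, minlength=N_IDEAS)
    probs = counts / counts.sum()          # re-estimate; unsampled ideas vanish
```

Nothing in this loop ever re-injects the lost ideas. That is precisely the role of critical, diverse human input: widening the distribution instead of letting it collapse.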

The result would be an intelligence that is artificial, but not intelligent.

The Road to Humanus Ex Machina

It’s not enough to teach programming. We must teach how to think. If we want AI to be truly useful, we need more humans capable of talking with it, training it with discernment, and understanding its philosophical, ethical, and social implications.

This requires reforming education from the ground up. We cannot aspire to a technological future without a population that understands its foundations. Without strengthening the teaching of logic, critical thinking, and the scientific method, we will only create a new feudal era of knowledge, where a few design reality and the rest simply accept it.

If we want AI to be a real extension of human intelligence, and not just a biased tool, we need more minds capable of asking questions before delivering answers.

The key question is not "how do we train AI?" but rather "who is training the humans?", and for what purpose.

copyright.sergiorentero.com/all-rights-reserved


Thanks to the media that published this article.

Infobae


Sergio Rentero

Entrepreneur | Artist | Thinker | Technologist | Founder of IURIKA | GOTIKA | UNBORING

https://sergiorentero.com