You can read or share the original version of this article in Spanish.
This article was translated with the help of ChatGPT.
There are two main “types” of AI: Narrow AI and General AI. All the Large Language Models (LLMs) we are familiar with today fall under the category of Narrow AI, designed to focus on a single objective. General AI, on the other hand, is an abstract concept referring to intelligent systems that we envision will one day match human adaptability, capable of performing everything a human can do. Whether we are ten years away from achieving this, or whether it will ever happen at all, remains uncertain.
"I believe that in five years, we will most likely have a general AI that will be as good as, if not better than, humans."
— Geoffrey Hinton (2023)
With this article, I hope to provide readers with an understanding of just how challenging that goal truly is.
The Core of AI is Mathematical
“Neural Networks” are not hardware; they are mathematical programs that function like neurons. In the early days of artificial Neural Networks (the 1950s), a single “neuron” was a simple equation that produced a value between 0 and 1. If the value was 1, it would “fire,” passing on its result; if it was 0, it would not. Over time, multiple layers of these artificial neurons were stacked, multiplying the interconnections between them. Initially designed as prediction programs, these networks were highly inefficient at adjusting their outputs: each of the adjustable values, known as “weights,” had to be tuned by human hands.
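To make the idea concrete, here is a minimal sketch in Python of such an early-style neuron. It is an illustration, not a historical reconstruction: the function name, weights, and threshold are chosen only for this example, and the weights are set by hand, which is exactly the limitation described above.

```python
# A minimal sketch of an early-style artificial "neuron": it sums its
# weighted inputs and fires (outputs 1) only if the sum crosses a threshold.
# The weights are fixed by hand, as they were before automatic learning existed.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a neuron hand-tuned to behave like a logical AND gate.
print(neuron([1, 1], weights=[0.6, 0.6], threshold=1.0))  # fires: 1
print(neuron([1, 0], weights=[0.6, 0.6], threshold=1.0))  # does not fire: 0
```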
In the 1980s, Geoffrey Hinton, often called the ‘godfather of AI’, helped introduce “backpropagation”, a mathematical mechanism that allowed Neural Networks to adjust their own equations automatically and later led to the development of Deep Neural Networks. This innovation greatly improved the efficiency of the interconnections between neurons and made a genuine learning process possible. However, progress was still limited by two bottlenecks: the amount of data available to train the system and the computational power required to perform billions of calculations per second.
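The following toy sketch shows the spirit of that mechanism on a single neuron rather than a full deep network: the prediction error flows backwards as a gradient (via the chain rule) and the weight and bias adjust themselves a little on every pass. The learning rate, random seed, and tiny dataset are arbitrary choices made only for illustration.

```python
import math, random

# Toy illustration of the idea behind backpropagation: instead of a human
# nudging the weights, the error is propagated backwards as a gradient and
# each parameter adjusts itself slightly on every pass.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a single sigmoid neuron to output 1 when its input is 1, and 0 when it is 0.
random.seed(0)
w, b, lr = random.random(), 0.0, 1.0
data = [(0.0, 0.0), (1.0, 1.0)]

for epoch in range(2000):
    for x, target in data:
        y = sigmoid(w * x + b)        # forward pass: the prediction
        error = y - target            # how wrong the prediction is
        grad = error * y * (1 - y)    # chain rule: gradient at the neuron
        w -= lr * grad * x            # the weight adjusts itself
        b -= lr * grad                # so does the bias

# After training, the outputs are close to 0 and close to 1 respectively.
print(sigmoid(w * 0 + b), sigmoid(w * 1 + b))
```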
AI is Only Imitating One of Our Brain’s Abilities
Narrow AI is designed to master a single task and becomes extraordinarily proficient at it. However, it still requires human “assistance” to achieve its goals—without us, it would be nothing more than an extremely expensive calculator. A striking example is AI’s ability to detect lung cancer.
Expose an AI to tens of thousands of lung X-rays, each with a confirmed diagnosis, and train it with a single objective: to predict whether a patient has cancer. After analyzing that labeled data, the AI can calculate the probability of cancer in a newly presented X-ray with an accuracy of 99.9999%. The remaining fraction accounts for an inherent possibility of error, perhaps one case in a million.
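Schematically, and only as a hedged sketch, the setup looks something like this in Python: the “X-rays” here are synthetic numbers, the two features are invented stand-ins (say, nodule size and tissue density), and a simple logistic-regression model from scikit-learn plays the role of the diagnostic network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the labeled X-ray data: each "scan" is reduced to
# two made-up numeric features, paired with a confirmed diagnosis
# (1 = cancer, 0 = healthy). The numbers are invented for illustration.
rng = np.random.default_rng(42)
n = 10_000
healthy = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(n, 2))
cancer = rng.normal(loc=[2.5, 2.0], scale=0.5, size=(n, 2))

X = np.vstack([healthy, cancer])
y = np.array([0] * n + [1] * n)            # the confirmed diagnoses (labels)

model = LogisticRegression().fit(X, y)     # "training" on labeled cases

new_scan = np.array([[2.6, 2.1]])          # a newly presented scan
print(model.predict_proba(new_scan)[0, 1])  # estimated probability of cancer
```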
The human brain’s ability to recognize patterns is what enabled our ancestors to survive and avoid predators on the African savanna four million years ago. Our capacity to detect when something seems “off” within a pattern is the very skill AI replicates—through statistical probability.
The field of image recognition, whether analyzing X-rays or identifying objects in photographs, took a decisive step in the 1990s with the MNIST dataset, developed by Yann LeCun and his colleagues. He trained AI on 60,000 images of handwritten digits. The network deconstructed the visual elements that formed each number, assigning segments to different artificial neurons. Remarkably, with minimal human intervention, the system running on Neural Networks quickly improved its accuracy, predicting and correcting its own outputs.
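A rough, minimal stand-in for that experiment can be run today in a few lines of Python. Note the assumptions: scikit-learn’s small built-in digits dataset replaces LeCun’s full 60,000-image MNIST, and a tiny multilayer network replaces his original architecture. The point is only to show the shape of the process: pixels in, predicted digit out, accuracy measured on images the network has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small built-in dataset of 8x8 handwritten digits (a stand-in for MNIST).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# A tiny neural network that adjusts its own weights during training.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen digits:", net.score(X_test, y_test))
```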
Applying This Process to the Alphabet
The next logical step was to apply this process to letters. After learning to predict numbers, the system’s ability to recognize letters grew exponentially—just as it did with words, then phrases and sentences, then paragraphs and ideas, and eventually, complex concepts and abstractions.
Large Language Models (LLMs) like ChatGPT (OpenAI), Copilot (Microsoft), and Gemini (formerly Bard, Google) have benefited from an extraordinary evolution across multiple areas of scientific and mathematical research from 1950 to 2025: 75 years in which we built the foundational tools that allow LLMs to attempt feats that once seemed outlandish.
This is, in fact, how human consciousness operates: when we have a goal, we scan everything around us, filtering out what stands out from an infinite sea of possibilities. Through an inner storytelling (to make sense of it all), we decide how to reach the objective we’ve set. This process took millions of years of evolutionary refinement (through DNA) before we could effectively run it ourselves.
The Incredible Prediction Machine: The Human Being
We, as humans, exist in two realities at once:
A biological brain—an intricate neural network distributed throughout the entire body.
And, let’s call it, an “operating system”: Consciousness, whose true nature remains a mystery despite all our technological advancements. Even after 140 years of psychology and psychiatry, we’re still fumbling in the dark, trying to understand how it works.
Ironically, to grasp the similarities between artificial and human intelligence, we often separate these concepts, yet in us they are one and the same.
Mathematics is the language of creation. It is not our invention; we discovered it, like uncovering a hidden treasure. It is the language that expresses the engineering structure of creation: there is design in every organism and object, and mathematics is one of the best ways to describe it. That much is true.
However, there’s a deeper issue: human consciousness craves meaning—not just logical structures, but significance, as in the meaning of life.
"Science takes things apart to see how they work. Religion puts things together to see what they mean."
—Rabbi Jonathan Sacks
The Birth of AI: A New Creation
We have created AI—a technology filled with immense promise. It’s as if we can hear the trumpets calling us to ascend the mountain of the gods. But this theme is not new. We’ve seen it before in Frankenstein; or, The Modern Prometheus (1818) by Mary Shelley, and even in the ancient story of The Tower of Babel:
"They said to each other, ‘Come, let us make bricks and bake them thoroughly.’ They used brick instead of stone, and tar for mortar. Then they said, ‘Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves.’"
—Genesis 11:3-4
Both endeavors end in failure and tragedy because they overlook something fundamental: who we are and what we truly seek.
A Reflection on the Complexity of General AI
Have you ever stopped to think about how, every moment, you perform the same process as Narrow AI? Even something as simple as making yourself a cup of coffee in the morning involves hundreds of highly complex processes. They don’t seem complicated because they happen automatically. Your final goal—making a coffee—guides you through all the microtasks involved, keeping you focused. Meanwhile, thoughts, ideas, and worries pass through your mind, yet you keep moving forward.
A true General AI would require the ability to generate a meta-goal with laser precision—one that cuts through the web of microtasks. But how would we even program such a goal into AI when we struggle to define it for ourselves?
We already know what happens when humans pursue objectives for the wrong reasons: totalitarianism, famine, corruption, environmental destruction, and human suffering. Now, imagine an intelligence with unlimited processing power but zero wisdom to guide it.
Reality is made up of infinite possibilities, each competing for our attention. Every human being is a unit of focused awareness, shifting attention thousands of times per hour in pursuit of multiple goals. It’s as if we are all part of a vast, intricate structure—eight billion individuals shaping history in real time.
Yet, we haven’t even figured out what we want as individuals, and now we’re creating a Primordial Fire that expands and deepens our abilities to an almost infinite degree.
So, the real question is: What will we do with that power if we still don’t understand who we are or what we seek?