If you want to share or read this article in Spanish.
For those unfamiliar with the term blind spot: it's what you fail to see precisely because you're focused on something else. Discovering it reveals not only what was hidden, but also what may be shaping, distorting, or even derailing your aim.
We humans see so we can go places. We aim with our sight to discern and direct our ethos and act upon the world. In the world before civilizations, our blind spot could be hiding a predator aiming to make us its meal. Don't be so naïve as to think that today, whatever is hiding in your blind spot cannot make you its slave or pawn.
AI has a blind spot: in many cases we are using it as an executor of our aim; we aim and shoot with AI. Whatever sits in our blind spot is projected, exponentially, onto the results we pursue. We are AI's blind spot.
If you haven’t read our previous articles, this is the conclusion of a series that aims to explore our ethical proposal for AI and intertwines it with an insightful investigation that Cathy O’Neil conducted in her book Weapons of Math Destruction.
Since we are following up on all the ideas of previous articles, it’s to your advantage to refer to them. Series: Weapons of Math Destruction.
The Frame Problem in Our World
The ethical problem with AI is so vast, and it touches so many pieces, that it has everyone (people, businesses, and governments) dancing around the chairs, waiting for the music to stop and to see who will be left without a chair to sit on.
In this article we will briefly review each area of worldwide concern and the key debates taking place within each theme, and offer a reflection on the global stage amid this unpredictable scenario.
1.- Bias, Discrimination & Fairness
Our first article, Weapons of Math Destruction, and our third, False Goods, Broken Models, touched precisely on this theme. When we avoid the difficult work of discriminating and evaluating the data, and disengage from accountability for our decisions, our biases and discrimination translate into social harm and unfairness. AI models are efficient, and so they amplify our lack of accountability and project our prejudices.
Key debates: Should AI be used in decisions that affect human lives? Who is accountable when bias is discovered?
2.- Transparency and Explainability
AI models (especially the deep learning models used by businesses and governments) run their processes in "black boxes." This was not a deliberate design choice but a byproduct of how the technology developed; it just happened. And it makes their decision-making nearly impossible to audit and understand.
Key debates: Should high-stakes AI applications (like in justice or finance) be required to offer explainable results? My question is: “Can they?”
3.- Accountability and Responsibility
Who is responsible: the AI, the developer, the deployer? Determining accountability is becoming increasingly complex as the lines between programming, implementation, and use continue to blur. The very fabric of Western civilization is built on the ability to assign responsibility; therefore, not knowing whom to hold accountable undermines the foundations of the rule of law.
Key debates: Can an organization delegate critical decisions to AI and still claim innocence when harm occurs?
4.- Labor Displacement and Economic Inequality
Corporations and organizations are attempting to reduce friction, increase efficiency, and generate productivity; in the process, they are not doing the due diligence needed to identify the weak links in their decision-making. The consequences range from disengagement at the mild end to unreasonable workloads and unlawful terminations at the severe end.
Key debates: Should AI developers or employers contribute to social safety nets? How can we retrain workers effectively as AI is integrated into the workplace?
5.- Autonomy and Human Agency
Our personal, organizational, and governmental blind spots are so gigantic that AI is filling them, and we are willing to be replaced for the supposedly intended consequences: productivity and efficiency, power and economic gain, reputation, social recognition. In the process, our autonomy and agency will diminish. We will be enslaved by our blind spots... as has always happened, but now the consequences are exponential.
Key debates: Should AI systems be required to preserve meaningful human agency in high-impact contexts?
6.- Surveillance and Privacy
China has implemented advanced AI surveillance technologies capable of identifying individuals by their gait (the unique way they walk) even when their faces are not visible. AI-driven surveillance tools (facial recognition, behavioral analytics, emotion detection) are expanding rapidly, and even Western countries are adopting them.
Key debates: Where is the line between national security and individual privacy? Should some AI capabilities be banned? Again, “Can they?”
7.- Weaponization and Military Use
The ethical decisions at stake here, and in the area of Surveillance and Privacy, are staggering. They are the most consequential for the foundations of freedom and society, and both raise existential and legal concerns. In the present case, delegating lethal decisions to machines may violate international humanitarian law and moral norms.
Key debates: Should there be an international ban on fully autonomous weapons? Can it be enforced?
There is a common denominator in all the "key debates" being raised to address the ethical areas of AI in our world: they all try to patch leaks and bail water from a sinking boat, without resolving the breach that is flooding the vessel we are on.
Global Key Trends and The Legislative Focus
I could ramp up the rhetoric on the global trends and future consequences, but it all boils down to two basic themes: first, the confrontation between Western freedom and the worldwide totalitarian alliance; and second, whether either side can afford to limit the expansion of AI, given the high stakes it represents.
The West vs the Totalitarian Alliance
If you thought that the dividing line between Communism and the democratic countries of the West ended with the Berlin Wall and the Cold War, you had better revisit your history. The West has not been immune to the seductions of control. Over the past decade, cultural and ideological battles have eroded trust in institutions and blurred the lines between governance and influence. During the Biden administration, technology firms faced increasing regulatory pressure, often framed as necessary but, in practice, stifling innovation and deepening the gap between the public interest and technological progress. With the return of Trump to the presidency, a new posture is emerging: one that openly confronts the coalition of totalitarian states and aims to restore the foundations of Western freedom, especially in the realms of speech, enterprise, and technological leadership. Whether this realignment succeeds or not, one thing is clear: the future of AI will be shaped as much by values as by code, and the moral blindness of our societies may prove more dangerous than any supercomputer or atomic bomb.
The Coalition of Totalitarian States
At the head of this monster are China, Russia, and Iran, together with satellite players like North Korea, Venezuela, Cuba, Nicaragua, and other minor countries around the world. For the theme of AI development, the totalitarians' use of AI is no minor issue. China has developed social control technology so effective and thorough that it makes Russia's attempts to build a bloc of communist countries in the post-WWII era look like child's play. Both China and several Muslim countries carry in their ethos a vision of compulsive domination, and the strategic moves both are making toward Latin American countries imply an interest in expanding their grasp beyond their natural areas of influence.
The Tower of Babel Syndrome
We wrote an article investigating this specific theme, The AI Illusion: Power Without Sacrifice! We have created our own Tower of Babel; in the process of building the Tower of AI, we don't know who is responsible for what. We can't agree on how to solve the problem while keeping social cohesion, our relations have become discordant, and we could keep enumerating the consequences of our endeavor with AI. We don't understand each other, just as happened to the builders of Babel when God confounded them for attempting to reach heaven on their own.
More than a resurgence of technological marvel, the AI era, in the midst of a geopolitical confrontation like the one we have now, may bring about an Armageddon-like apocalypse; and that may very well be what we are happily chanting our way toward. It could conclude in the collapse of global infrastructure, an escalation into full-scale AI warfare, the cognitive and psychological disintegration of our societies, biotechnological weaponization, the death of global governance, the rise of worldwide technocratic totalitarianism, and our species' moral and spiritual breakdown.
I have to say… there is a huge HOWEVER. True, nothing can prepare us for what is coming. We must open our eyes and brace for impact… but in the course of history, civilizations have collapsed, and life went on. Why? Because we are human and extremely curious; there is an existential need to know what is true, our spirit grows stronger when things go wrong, and our determination turns resilient. As a species, we are at the top of the food chain, but what we are confronting with AI challenges each one of us individually to assume responsibility and to see, act, and be who we need to be, so that our weaknesses are not the ones wielding this powerful tool, and our blind spot does not become our doom.
In humans, blind spots are a reflection of what we actively don't want to see, a reflection of our subconscious, and thus they usually pose a clear and present danger… that is where our monsters hide.
ARE YOU UP TO THE CHALLENGE?
With this article, we close this series. Our intent is to continue exposing the ethical dimension of AI, for our vision is a culture empowered by personal transformation. And since AI has become such a defining phenomenon in our time – posing a serious challenge to our stability and purpose – we aim to transform ourselves, so we may wield our lives with intention and determination.