You can read and share this article in Spanish.
We’ve been circling around this for weeks.
Across this series, I’ve dug into what makes artificial intelligence not just powerful but dangerous: not because of what it sees, but because of what it misses. Its blind spots aren’t bugs in the system; they’re reflections of our own: our biases, shortcuts, and ethical failures scaled into code.
This isn’t a technical issue. It’s a human one. A foundational one.
Guided by Cathy O’Neil’s Weapons of Math Destruction, this journey has explored how invisible assumptions become automated harm, and how, without discernment, we risk handing judgment to systems that cannot see what matters. Because AI ethics isn’t a footnote.
It’s the heart of the story, and it’s still ours to write.
Weapons of Math Destruction
Contrary to common opinion, the most important dimension of AI is not technical achievement but ethics. Cathy O’Neil’s insightful book Weapons of Math Destruction advances this column’s objective with practical cases.
What happens when we hand moral judgment to an opaque formula?
False Idols, True Losses
There is a great crisis across the Western countries: they are divided down the middle, at odds like brothers who can no longer stand one another. We’re only seeing the surface of the problem... You have to cut through that chaos and see beyond the divisions.
False Goods, Broken Models
Maybe the problem with AI isn’t what it does, but what we’re asking it to do. We’d better rethink the question before the journey takes us somewhere we never meant to go.
The Frame Problem
When I first began to understand what AI was, I was truly baffled by its implications for human intelligence… there is no escaping the ethical dilemma it poses.
The Blind Spot of AI
All the ethical problems of AI arise from, and are amplified by, our blind spots.
The real danger isn’t technological power; it’s moral blindness.