You can read or share this article in Spanish.
Contrary to common opinion, the most important dimension of AI is ethics, not technical achievement. Cathy O’Neil puts it insightfully in her powerful book “Weapons of Math Destruction” (written before today’s AI boom, but targeting the same opaque, high-stakes models now often powered by machine learning): “Models are opinions embedded in mathematics.” What happens when we hand moral judgment to an opaque formula?
AI is an outstanding marvel of our engineering that promises to empower our objectives and deliver our dreams. However, this efficient black box, encoded with our data (our beliefs, our biases, and our character faults), can exponentially amplify our shortcomings.
What will supercharging our biases do to our world and societies?
How can we prevent our shortsightedness from entangling every AI-driven endeavor we undertake?
I was researching real-life problems to test whether my position on ethics was pointed in the right direction, and I found an empowering vision in Cathy O’Neil’s brilliant book.
I will dive into two cases from her book, and I hope readers will come away with a clearer picture of why an ethical framework matters when organizations incorporate AI into their decision-making.
To illustrate how ethics must be embedded in complex processes, let me share…
The case of Sarah Wysocki
Sarah Wysocki was, by every human measure, a good teacher. Her students loved her. Parents praised her commitment. Her supervisors, those who actually visited her classroom, rated her performance highly. And yet, in 2011, Sarah was fired from the Washington D.C. public school system, not because of what people said about her, but because of what an algorithm decided.
The district had adopted an evaluation system called IMPACT, a hybrid of human observation and statistical modeling, to weed out underperforming teachers. Central to this system was the Value-Added Model (VAM), a mathematical formula designed to measure a teacher’s "effectiveness" by analyzing changes in student standardized test scores year over year. In theory, it was supposed to filter out the noise —poverty, trauma, instability— and isolate the pure impact of the teacher.
In practice, it did nothing of the sort. It punished teachers for the very challenges they were working to overcome. As Cathy O'Neil put it, "these models punished teachers who took on the hardest jobs." Sarah’s students, many coming from unstable or disadvantaged backgrounds, were improving, but not in ways a standardized test could capture. The model didn’t see her encouragement, her patience, her long evenings spent crafting lessons. It saw only numbers, and the numbers fell short.
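To see how this kind of arithmetic can go wrong, here is a deliberately oversimplified sketch of the value-added idea. The prediction rule, the scores, and the function names below are all invented for illustration; real VAM implementations are far more elaborate and, as O’Neil notes, proprietary.

```python
# A toy version of the value-added logic: predict each student's score from
# last year's score, then attribute the average gap entirely to the teacher.

def expected_score(prior_score: float) -> float:
    """Hypothetical district-wide prediction: similar students gain about 3 points a year."""
    return prior_score + 3.0

def value_added(prior_scores: list[float], current_scores: list[float]) -> float:
    """Average gap between actual and predicted scores, assigned to the teacher."""
    gaps = [actual - expected_score(prior)
            for prior, actual in zip(prior_scores, current_scores)]
    return sum(gaps) / len(gaps)

# If last year's baseline was inflated, or the class faced instability the model
# cannot see, the gap turns negative and the teacher absorbs the blame.
prior = [72.0, 68.0, 80.0]       # possibly inflated starting scores
current = [70.0, 69.0, 78.0]     # real but modest progress on things the test misses
print(value_added(prior, current))   # a negative, context-free "effectiveness" number
```

Nothing in that number distinguishes a weak teacher from an inflated baseline or a disrupted classroom; the formula simply assigns the gap to whoever is standing in front of the students.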
Because the model was opaque and proprietary, Sarah never really knew what she had done wrong. She was dismissed with all the mathematical detachment of a bank foreclosing on a home, a human being reduced to a statistical liability. In the world of Weapons of Math Destruction, this was not a glitch. It was the system working exactly as intended.
Why did this incredible injustice happen? The VAM’s formula extrapolated from the numbers without the context: it did not take into account what kind of students Sarah was teaching, their backgrounds, or her work and dedication. The system was not connected to the school bodies that assess children’s academic performance or teachers’ performance. Groups like an Academic Affairs Committee or an Assessment and Evaluation Team should have weighed in on the elements behind the final decision, along with the Teacher Evaluation Committees and the School Administration.
This shows that without transparency and human oversight, a mathematical model can destroy the dignity of an entire process.
Fear and shortcuts
From the outcome of Sarah Wysocki’s case we can infer two motives: avoiding legal liability and securing political cover. In a public school district like Washington D.C.’s, how much were politics and the fear of accountability factors in implementing a process where the decision was left to an algorithm?
When humans dodge responsibility,
the tail ends up wagging the dog.
You might think that simply feeding more background data into a model would fix everything. But Cathy O’Neil doesn’t offer a quick recipe; she issues a warning: any high-stakes decision system must be wrapped in ethical guardrails, or it will ruin lives. Before you let any algorithm touch someone’s career, credit, or reputation, her position can be summarized in three areas that should be covered (a sketch of what this might look like follows the list):
Gather Context: Combine AI insights with stakeholder feedback.
Assess Bias: Run fairness checks and ethical reviews.
Decide Humanely: Always reserve the final call for a person, with a clear explanation and appeal path.
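To make those three areas concrete, here is a minimal sketch of a human-in-the-loop review written in code. Everything in it, the EvaluationCase structure, the field names, the scores, is hypothetical and invented for illustration; O’Neil prescribes principles, not an implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EvaluationCase:
    """One high-stakes decision about a person (hypothetical structure)."""
    subject: str
    model_score: float                                           # output of a statistical model, e.g. a VAM-style score
    stakeholder_notes: list[str] = field(default_factory=list)   # principals, parents, peer observers
    fairness_flags: list[str] = field(default_factory=list)      # concerns raised by a bias review
    final_decision: Optional[str] = None
    rationale: Optional[str] = None

def gather_context(case: EvaluationCase, notes: list[str]) -> None:
    """Combine the model's output with on-the-ground stakeholder feedback."""
    case.stakeholder_notes.extend(notes)

def assess_bias(case: EvaluationCase, concerns: list[str]) -> None:
    """Record fairness concerns, e.g. 'score penalizes high-poverty classrooms'."""
    case.fairness_flags.extend(concerns)

def decide_humanely(case: EvaluationCase, reviewer: str, decision: str, rationale: str) -> None:
    """A named person makes the final call and records an explanation the subject can appeal."""
    case.final_decision = decision
    case.rationale = f"Decided by {reviewer}: {rationale}"

# Usage: the algorithm's score is one input among several, never the verdict itself.
case = EvaluationCase(subject="S. Wysocki", model_score=-1.7)
gather_context(case, ["Classroom observations rated highly", "Strong parent feedback"])
assess_bias(case, ["Score may reflect prior-year data irregularities, not this teacher"])
decide_humanely(case, reviewer="Evaluation committee", decision="retain",
                rationale="Model score contradicted by direct observation; flagged for data review.")
print(case.final_decision, "-", case.rationale)
```

The point is not the code itself but where the final decision gets written: by a named person, with a rationale the affected teacher could read and challenge.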
O’Neil’s observation that “Models are opinions embedded in mathematics” is disturbing because it holds even when a user’s choices hide behind AI’s mathematical black box: accountability for the outcome still rests squarely with the human.
AI, like any weapon, hinges on ethics to be wielded properly.
There is no shortcut to success.
You cannot escape accountability when pursuing a worthy goal. When power is wielded without wisdom, we unleash the hidden drives in our psyche that momentarily fuel us; without guidance, that force causes dissociation and collateral damage.
AI is the vehicle; humans must remain the drivers.
Without accountability, there can be no real progress.
If you hand your decision over to a machine, the biases programmed into AI will drive you straight into the same dead end as a misdirected vehicle.
This article is part of a series exploring the ethical dimension of artificial intelligence. Contrary to popular belief, what matters most is not AI’s technical achievements, but how individuals, organizations, and governments choose to use it. AI creates nothing; it only amplifies what we feed it. If we pursue goals without examining the levers that drive them, we risk coding the tragedy of our own lives.