You can read and share this article in Spanish.
Disclaimer: the analysis and source gathering for this piece would have taken me weeks without ChatGPT’s assistance.
There’s a quiet, ancient truth hiding beneath our technological revolution: what we reach for reveals what we believe will save us. In the previous article, False Idols, True Losses, I explored how we often build our lives around substitutes – money, security, validation – that offer a quick existential fix, only to leave us emptier than before. They are not real; they are abstractions of what we don’t know: models of what we don’t want to face, representations of the good that we substitute for the source of the Good. When we mistake them for ultimate truth, they seduce our direction, mask our emptiness, and replace the difficult journey of discernment with the illusion of clarity.
“Models are opinions embedded in mathematics”
—Cathy O’Neil
Artificial Intelligence, at its core, is just another model. A set of abstractions. But our treatment of AI often reveals that we’re not just using it – we’re worshiping it. Through it, we give power to that which in us aspires to reign and rule. We delegate moral responsibility to a machine because it “is all-powerful,” when in truth it only mirrors the biases we feed it.
What if AI isn't dangerous because it's too powerful, but because it's too obedient to the broken orientations we’ve never questioned? The most dangerous algorithm isn't one that makes a mistake. It's one that reflects our own unexamined biases with perfect efficiency – turning prejudice into policy and power into invisible chains. The original sin is not in the math, but in the modeler. The first failure is not technical; it’s spiritual… as it has always been!
This is where Weapons of Math Destruction hits hardest. Dr. Cathy O'Neil argues that harmful algorithms share three traits: opacity, scale, and damage. But underneath them all lies something even more urgent: the moral detachment of the designers. We create models that sort resumes, determine creditworthiness, predict crime, and allocate educational resources—and in the process, we encode and automate society’s idols: performance over purpose, control over understanding, prediction over wisdom.
The ethical crisis, then, begins long before the first line of code. It begins in the human heart—when we choose efficiency over discernment, abstraction over truth, and certainty over the accountability of being human, as messy as that may be.
There is a pressing question left over from our last article: what were the idols that created disparity and injustice in the case of Sarah Wysocki?
CASE: DENIED IN SILENCE
She had done everything right—or so she thought. Her bills were paid, she worked full-time, and she was trying to build a better future. When she applied for a loan to move into a safer neighborhood, she didn’t expect a problem. But the bank said no. Not only that—they gave no explanation, no appeal, no conversation, just a digital rejection.
What she didn’t know was that she’d been scored by an alternative credit model – not FICO alone, but a system that drew on her shopping habits, her ZIP code, even how often she changed phones. The model didn’t care that she paid rent on time or that she was trying to escape a dangerous area. It saw her as high-risk – because the people around her were, statistically.
And that was that.
She wasn’t just denied a loan. She was denied mobility, safety, and dignity—based on data she never agreed to share, in a system she didn’t even know existed. We accept conditions in bank processes and web pages, and know little about what idols hang from that consent.
Let’s break down the idol’s structure
The idol behind this bank’s model has a name. It even has a formula. And it’s so common, we’ve stopped questioning it.
FICO stands for Fair Isaac Corporation, “an American data analytics company known for pioneering the FICO score, a widely used credit scoring system” (CFPB). It created the original credit scoring model in 1989. The score is calculated from five factors (Investopedia):
Payment history: 35%
Amounts owed: 30%
Length of credit history: 15%
New credit: 10%
Credit mix: 10%
It looks like a reasonable structure for assessing creditworthiness. However, we humans are messy, and so are our lives. Letting numbers alone define our decisions disrupts the very fabric of those lives even further. FICO evaluates history, not context or intent; not present circumstances, vision, or the determination to change – all human dimensions of life, and important ones at that.
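To make the weighting concrete, here is a minimal sketch in Python. Since the real FICO algorithm is proprietary, everything beyond the five published weights is an assumption: the normalized sub-scores, their values, and the mapping onto the familiar 300–850 range are illustrative only, not FICO’s actual method.

```python
# Illustrative sketch only: the real FICO algorithm is proprietary.
# It shows how the five published weights could combine category sub-scores
# (each assumed to be normalized to 0.0-1.0) into a single number on the
# conventional 300-850 scale. Sub-score names and scaling are assumptions.

FICO_WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

SCORE_MIN, SCORE_MAX = 300, 850


def composite_score(subscores: dict[str, float]) -> int:
    """Weighted sum of category sub-scores mapped onto the 300-850 range."""
    weighted = sum(FICO_WEIGHTS[k] * subscores[k] for k in FICO_WEIGHTS)
    return round(SCORE_MIN + weighted * (SCORE_MAX - SCORE_MIN))


# A borrower who always pays on time but has a short, thin credit file:
borrower = {
    "payment_history": 0.95,    # pays on time
    "amounts_owed": 0.80,
    "length_of_history": 0.20,  # young credit file
    "new_credit": 0.50,
    "credit_mix": 0.30,
}

print(composite_score(borrower))  # ~675
```

Notice what the inputs cannot hold: context, intent, present circumstances, or the determination to change. Only history fits through the formula.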
To top it all off, a growing trend of opaque uses of personal data in U.S. credit scoring adds an explosive ingredient to this cocktail.
Prevalence of Alternative Credit Scoring Practices
Financial institutions and fintech companies are increasingly utilizing “alternative credit data—encompassing information like bank account transactions, utility payments, and even social media activity” to evaluate borrowers, especially those lacking extensive credit histories. (Stripe)
Similarly, the U.S. Government Accountability Office (GAO) notes that lenders are exploring alternative data, including educational background and employment history, to assess loan eligibility for individuals without traditional credit scores. (GAO) Moreover, companies like RiskSeal offer services that ‘analyze digital footprints, including social media presence and mobile phone usage, to predict credit risk.’ (RiskSeal)
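To see how such data turns into a proxy for the person, consider this deliberately crude sketch. The weights, ZIP codes, and feature names are invented for illustration; no real vendor’s model is implied.

```python
# Hypothetical illustration of proxy-based "alternative" scoring - not any
# real vendor's model. The point: once ZIP-code-level statistics enter the
# features, an individual inherits the risk profile of her neighborhood,
# regardless of her own payment behavior. All numbers are invented.

ZIP_DEFAULT_RATE = {"60617": 0.18, "60614": 0.04}  # assumed aggregate data


def alt_risk_score(zip_code: str, phone_changes_per_year: float,
                   on_time_rent_rate: float) -> float:
    """Toy linear risk score in [0, 1]; higher means 'riskier'."""
    return (
        0.6 * ZIP_DEFAULT_RATE.get(zip_code, 0.10)    # neighborhood proxy dominates
        + 0.3 * min(phone_changes_per_year / 4, 1.0)  # device churn as a proxy
        + 0.1 * (1.0 - on_time_rent_rate)             # her own behavior, barely weighted
    )


# Same person, same perfect rent record - only the ZIP code differs:
print(alt_risk_score("60617", 2.0, 1.0))  # ~0.258
print(alt_risk_score("60614", 2.0, 1.0))  # ~0.174
```

This is the mechanics behind “Denied in Silence”: the borrower’s own behavior carries the smallest weight, and the neighborhood statistic decides the outcome.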
All of this sits in a legal gray area, as the use of alternative data in credit scoring is nuanced and subject to regulatory scrutiny. In the United States, the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) govern the use of consumer data in credit decisions. These laws require that credit scoring models do not discriminate and that consumers have the right to know what data is used and to dispute inaccuracies (Stripe) – which evidently is not being done.
The Consumer Financial Protection Bureau (CFPB) has expressed concerns about the use of alternative data, particularly when such data may not directly relate to a consumer's financial behavior. (Consumer Financial Services Law Monitor) Additionally, a publication by Goodwin Law highlights that ‘while alternative credit data can improve access to credit for underserved populations, it also raises concerns about privacy and the potential for disparate impact.’ (Goodwin)
Math Model turned into a Weapon – What is at the core?
Plain and simple: according to Cathy O’Neil in Weapons of Math Destruction, algorithms or models become dangerous when they rest on a three-legged system (summarized in the short sketch after this list):
Opacity – The model is a black box. People don’t know how it works or how decisions are made.
Scale – The model operates on a massive scale, affecting millions of lives quickly and automatically.
Damage – The model causes real harm—denying jobs, loans, or freedom—often disproportionately affecting the vulnerable.
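As a thought experiment, the three legs can be written down as a crude checklist. This is a sketch under assumptions of my own – the field names, the scale threshold, and the reduction of each leg to a yes/no question are mine, not O’Neil’s.

```python
# A minimal checklist sketch of the three traits, assuming each can be
# answered about a deployed model. Names and thresholds are assumptions.

from dataclasses import dataclass


@dataclass
class ModelAudit:
    explainable_to_subjects: bool  # can affected people see how decisions are made?
    people_affected: int           # rough scale of deployment
    harms_when_wrong: bool         # does an error deny jobs, loans, or freedom?


def is_potential_wmd(audit: ModelAudit, scale_threshold: int = 100_000) -> bool:
    """Flags the combination of opacity, scale, and damage."""
    opacity = not audit.explainable_to_subjects
    scale = audit.people_affected >= scale_threshold
    damage = audit.harms_when_wrong
    return opacity and scale and damage


credit_model = ModelAudit(explainable_to_subjects=False,
                          people_affected=5_000_000,
                          harms_when_wrong=True)
print(is_potential_wmd(credit_model))  # True
```

Note what such a checklist cannot capture: the intention behind the model – which is precisely where the next point begins.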
In the cases of Sarah Wysocki and Denied in Silence, the three-legged system is transformed into a weapon of math destruction by the human factor. When that fails, the safeguards – auditing and oversight – become merely cosmetic. Most crucially, inserting a human at the end, or even at the beginning, of the decision chain does not ensure a positive outcome unless the true objective rises to the surface.
The moral dilemma with AI and other math models
When the human heart is misaligned and knows not what the real aim is, oversight becomes cosmetic. Auditing won’t fix what intention distorts. Even placing a person at the start or end of the decision chain changes nothing—unless we first ask: what was the model built to serve?
We don’t need better models until we become better modelers. Because every system we design is just a mirror—and we can’t blame the reflection for what we refuse to face.
This article is part of a series exploring the ethical dimension of artificial intelligence. Contrary to popular belief, what matters most is not AI’s technical achievements, but how individuals, organizations, and governments choose to use it. AI creates nothing; it only amplifies what we feed it. If we pursue goals without examining the levers that drive them, we risk coding the tragedy of our own lives.