The Frame Problem
The core breakthrough in cognitive science came when we tried to build intelligence from scratch.
You can also read and share this article in Spanish.
When I first began to understand what AI was, its implications for human intelligence truly baffled me. I was stumbling onto the Frame Problem, though I did not yet know its name, nor did I have a clear vision of its scope, or of how it revealed to early researchers the biggest obstacle they faced in making intelligence artificial.
Truly, the more we uncover any-one-thing…
the more there is to uncover.
The human brain filters meaning, not just facts
Daniel Dennett (1942–2024), the renowned American philosopher, cognitive scientist, and author widely recognized for his work on the philosophy of mind, pointedly stated:
“In fact, the frame problem is not just a technical problem; it is a profound philosophical problem that has not been given anything like the attention it deserves.”
The problem was first formulated by John McCarthy and Patrick J. Hayes in their 1969 paper titled Some Philosophical Problems from the Standpoint of Artificial Intelligence. The consequences of this paper for the cognitive understanding of how human intelligence works are nothing short of a monumental milestone in the history of science and philosophy.
How the “focus problem” was uncovered
In the late 1940s, Dr. W. Grey Walter created two robotic tortoises he nicknamed Elmer and Elsie. Each had a motorized chassis and a light-sensing device: when they sensed bright light, they turned away, and when their batteries ran low, they went to a dock to recharge. They appeared to act with a certain sense of “independence and intelligence.” Yet even though Elmer and Elsie were robots with orientation and “purpose,” able to move around a habitat, they lacked the capacity to discern changing, complex scenarios and to choose a response to change. For that, a robot would need to recognize changes against an internal representation and decide what to act upon – and that difference is what separates human from animal behavior.
In that respect, AI pioneers like John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon believed that intelligence could be formalized using symbolic logic. They used situation calculus, a logical formalism developed to model actions and change, to build these representations – and they hit an insurmountable wall. To perceive reality, the system needed to recognize the changes occurring, and the representation needed to account for both what changed and what remained the same. The problem was that the representation of “what remained the same” was effectively infinite: if a door opens in a room, the door itself changes position, and with it its apparent form and color. Beyond that, even the things that did not move still showed changes in the light they reflected. The amount of changing data, and the computational power needed to assess it, was staggering – so the pressing question became: what counts as relevant? That is not just a technical issue – it is philosophical and even ethical, because what we choose to attend to reflects what we value.
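The wall they hit can be sketched in situation-calculus notation. What follows is a simplified illustration, not McCarthy and Hayes’s exact formulation: one effect axiom states what an action changes, but a separate “frame axiom” must be written for every property the action leaves untouched.

```latex
% Effect axiom: opening door d makes it open in the resulting situation.
\mathit{Open}\big(d,\ \mathit{Result}(\mathit{open}(d),\, s)\big)

% Frame axioms: every unaffected property must be asserted to persist.
\mathit{Color}(x, c, s) \rightarrow
  \mathit{Color}\big(x, c,\ \mathit{Result}(\mathit{open}(d), s)\big)

\mathit{Position}(x, p, s) \rightarrow
  \mathit{Position}\big(x, p,\ \mathit{Result}(\mathit{open}(d), s)\big)
```

With roughly m actions and n properties, on the order of m × n such persistence axioms are needed, before even counting the ever-finer details (reflections of light, micro-changes) described above. The explosion lies in what must be said to stay the same, not in what changes.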
In humans, a large share of the brain’s computational power goes into seeing the world. You see, to solve the problems that simple biological systems face – predators and feeding – instinct will do. But as animals evolved to hunt more effectively, developing camouflage, skills, and pack organization, humans needed more complex social interactions to derive safety. For that complex scenario, instinct does not suffice. It required an internal representation to assess orientation and action, to predict “what will happen, what lies ahead, how will others respond?” – and that was more efficient than instinct.
All that filtering could not be handled by instinct; it required a focal hinge to sift the whole endeavor. The process demanded a value structure – and that can only be delivered by consciousness.
The moment you begin to perceive, you begin to ignore. And in ignoring, you reveal what matters most to you, even if you never chose it consciously. That’s why relevance is a moral act. It’s not that the world is made of facts. It’s made of frames. And you are always choosing – by what you look at – what reality becomes.
Perspectives on the Frame Problem
For those who don’t know, there is no clear definition or limit to anything in nature; what we see as an entity or object is a controlled illusion that permits us to move and act in the world without being overwhelmed by information.
“We can only know things as they appear to us,
not as they are in themselves.”
Immanuel Kant, Critique of Pure Reason (1781)
Literally, if we could see matter at near-atomic resolution with the most powerful microscope, gazing from rock to wood to hand, the boundaries would blur at that level – not because of size, but because nature resists clean segmentation. Everything is intimately connected. You may think, “but as things get bigger, you can tell the defining limits of what a thing or a being is.” I tell you: your vision operates on autopilot, and framing does not let you see the complete picture.
If you look at yourself in the mirror and see your image, you are watching one frame of the complete resolution of who you are: the physical aspect. What you are is far more than your physical aspect – you know this. You are your beliefs and the ideas you have of yourself; you are the product of the values you hold as true for yourself and for the groups you move in; you are your attitudes and what you expect to find when you attempt to act; you are your history and how you understand it; you are a product of your relations, your culture, and the society you live in. What you are can keep expanding in different dimensions: the emotional, the intellectual, the social, the biological, the historical, the molecular. There is nothing simple about you; it all depends on the resolution at which you look at yourself.
The spiritual perspective takes into account a wider range of who we are and what reality is than a scientific or phenomenological perspective.
Also, as the West entered the 20th century, we became more and more protected by a wall of certainty that assures us we can be at ease – though that certainty has been shaken in this century. For argument’s sake: if you enter a classroom, everything in it tells you what to do and how to behave. The seats are all oriented to the front; there is a board to write on; whoever stands at the front is going to share with the rest of the participants; there is a hierarchical structure to the physical space. If the classroom is arranged differently – say the chairs are set in a circle – that tells you about different expectations for the attendees. And while you are there, you can relax and pay attention to what is happening, unconcerned with the hundreds of possible things that could happen. You trust the architects and engineers who constructed the building, so you are not worried about it falling or the roof caving in. If there are freezing temperatures outside, you are not concerned that the electricity might fail and the temperature fall enough to threaten your survival; thousands of people work diligently to keep the flow of electricity stable.
You can be carefree in the face of all the inclemency of nature and life because a wall of protection assures your safety. You can set everything else aside and focus on what you need to continue toward your aim.
For every single unit, object, or being in the Cosmos, there is an infinite number of layers or frames of resolution that make up what it is. The amplitude of the spectrum does not end – inward or outward – and we choose a frame of reference so as not to go nuts when we need to make a decision to go forward.
Ethics amplifies your capacity to perceive reality’s spectrum
If it’s still not clear: the ethical element of an intelligent being is fundamental to its capacity to discriminate, act, grow, and be in any scenario.
If the elements that determine what we can see are who we are, then we must conclude that there are certain principles that will enhance our individual capacity to amplify and develop.
1st Principle: Know Your-Self.
There is a principle in neuroscience that states: “What you focus your attention on... expands.” There is no beating around the bush: if “you don’t see things as they are, you see them as you are” (Anaïs Nin), then who you are needs to expand in your perception and assessment of what is truly you.

2nd Principle: Expand Your Knowledge.
There is a symbiosis between knowing yourself and expanding your knowledge. The basis of “what you focus on, expands” also applies to what you seek to learn. Following your interests allows your curiosity to expand on the things that attract your attention, and thus expands your boundaries.

3rd Principle: Go Deep.
As I’ve mentioned before, a professor of mine at university used to say: if you want to become a deep thinker, choose a book and read it all your life. Everything you encounter has an outstanding depth; choose what to aim for and go deep into it. I dare mention that I chose the Bible some 35 years ago. Through it I have come to understand myself, life, people, relations, language, universal history, ethics, ancient cultures, the evolution of social rights, religion, spirituality, the world of the unseen, anthropology, paleontology, technology, neuroscience – some of those in depth – the list is staggering; I cannot make a complete one.
The Frame Problem: how we wield AI within organizations
This problem is at the heart of what can go wrong with AI in our societies, our organizations, and our individual lives. The book Weapons of Math Destruction points to a serious scenario we need to address. As much as I agree with the symptoms Dr. O’Neil so artfully shows in her book, and as much as I share some of her prescriptions for resolving the problem, they address it the way contemporary medicine approaches most diseases – by treating symptoms. We need to go beyond the symptoms if we are to prevent this illness in the social body.
The real danger with AI isn’t that it sees too much. It’s that it lacks discernment. Without a truly human frame of relevance – born of taking accountability for our vulnerability, agency, and values – we will end up following machines whose only virtue is that they never stop calculating.
I dare to share a lecture by Dr. Jordan Peterson, Reality and the Sacred, which inspired me on how to address, in the final article, the hairy problem of AI in our world today. Most times I don’t have any idea where I am going with something; I just let it guide me… and everything around me conspires to make it worth it.
I encourage you to watch this lecture.
This article is part of a series exploring the ethical dimension of artificial intelligence. Contrary to popular belief, what matters most is not AI’s technical achievements, but how individuals, organizations, and governments choose to use it. AI creates nothing; it only amplifies what we feed it. If we pursue goals without examining the leverages that drive them, we risk coding the tragedy of our own lives.