
Heidegger's Hammer: A Philosophical Inquiry into AI

Updated: Dec 21, 2025

 Introduction to Heidegger's Hammer


[At the "AI in Action" panel coming in November, we're going to talk about where AI is heading and what it means for our community. In the article "The Unfolding: How AI's Seventy-Five-Year Pattern Reveals What Comes Next", I tried to provide background and context for the transition we're living through: the emergence of artificial intelligence and consumer-friendly AI tools, and the questions around their potential impact on work, education, business, and government. But there's a deeper question underneath all of the media hype. What is intelligence, really? Not the engineering question. The philosophical one. The sidebar that follows is a deeper dive into why philosophy matters for AI.]


When Hubert Dreyfus was critiquing AI in the 1960s and 70s, he wasn't just being a curmudgeonly philosopher throwing stones from the sidelines at the engineers' parade. He was pointing to something fundamental that many AI researchers were getting wrong, or at least that Dreyfus thought they were getting wrong. Something about what it means to be, and to be a human mind aware of itself in the world. Critical assumptions and models that, if you get them wrong, likely doom your entire AI enterprise. Doom.

 

"To understand a hammer, for example, does not mean to know that hammers have such and such properties and that they are used for certain purposes—or that in order to hammer one follows a certain procedure, i.e., understanding a hammer at its most primordial sense means knowing how to hammer." — Hubert Dreyfus

 

The insight comes from Martin Heidegger's phenomenology articulated in Being and Time (1927), particularly his concept of Dasein, literally "being-there" in German, Heidegger's term for the distinctively human way of existing in the world.

 

Being-in-the-World: Heidegger's Alternative

 

For Heidegger, we don't start by being isolated minds that then have to figure out how to connect with an external world. That's Descartes' mistake, one that Baruch Spinoza challenged but never fully escaped. We're not brains in vats receiving sensory data that we then have to interpret and act upon. Rather, we're always already in the world. Already engaged. Already involved. Already coping.

 

Heidegger distinguishes two fundamental ways things show up for us. When we're absorbed in using equipment, hammering a nail, typing on a keyboard, driving a car, things have what he calls readiness-to-hand (Zuhandenheit). The hammer isn't an object with properties that we observe and then decide to use. It withdraws from conscious attention. It becomes transparent. We don't experience "a hammer" at all. We experience hammering. The distinction between us and our tools vanishes.

 

Only when something breaks down, the hammer breaks, the keyboard sticks, the car stalls, do things become present-at-hand (Vorhandenheit). Only then do we step back and observe them as objects with properties. Only then do we think about them, reason about them, solve problems.


GOFAI, that is, Good Old-Fashioned AI, assumed human intelligence meant manipulating internal symbolic representations of external objects via exhaustively codified rules. Everything was present-at-hand. Everything about the world was, or could be made, explicit. Everything was a fact with properties that could be encoded as symbols and manipulated according to rule sets. But that's not how beings exist in the world. Not beings like you and me.
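To make the contrast concrete, here is a minimal sketch in Python of the kind of explicit, present-at-hand representation GOFAI assumed. The facts and the rule are invented for illustration, not drawn from any particular historical system. Notice what's missing: nothing here knows how to hammer; it only knows that hammers have such-and-such properties and are used for certain purposes.

```python
# Illustrative only: a toy "world model" in the GOFAI style.
# Every feature of the world is an explicit fact; intelligence is a rule applied to facts.

facts = {
    ("hammer", "is_a", "tool"),
    ("hammer", "used_for", "driving_nails"),
    ("nail", "is_a", "fastener"),
    ("nail", "state", "protruding"),
}

def decide_action(facts):
    """Follow a hand-written rule: if a tool for driving nails exists and a nail protrudes, hammer."""
    has_hammer = ("hammer", "used_for", "driving_nails") in facts
    nail_protruding = ("nail", "state", "protruding") in facts
    if has_hammer and nail_protruding:
        return "grasp hammer; strike nail"
    return "do nothing"

print(decide_action(facts))  # -> "grasp hammer; strike nail"
```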


When I walked into my college math lab in 1990, I didn't explicitly represent "doorway with specific dimensions at coordinates X,Y,Z with door in open position enabling passage." I pressed into the possibility of going through. My body responded to an affordance, ecological psychologist James Gibson's term for the action possibilities the environment offers. The doorway solicited me to enter.

 

This is what Heidegger means by being-in-the-world. I don't represent the world and then act on those representations. I'm always already immersed in meaningful situations that directly draw responses from me. The world isn't a collection of meaningless facts to which I assign significance and manipulate via a rules engine. The world is significant. It's organized in terms of my concerns, my projects, my skills, my body's capabilities.

 

The Frame Problem: Why This Matters

 

Here's why this matters for AI: the frame problem. The frame problem asks: when something in the world changes, how does a system know which of its beliefs or representations need updating and which can stay the same? If I move a block from one side of the table to the other, the table doesn't move, the floor doesn't move, the walls don't move. But how does a system know that?

 

For GOFAI, this was unsolvable. Every fact is independent. Every relationship must be explicitly encoded. The problem scales exponentially. You'd need to specify that moving the block doesn't affect the temperature in Beijing, doesn't cause the moon to change phase, doesn't alter yesterday's weather.
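Here's a deliberately toy sketch of that burden. Everything in it is invented for illustration; in a real logic-based planner the loop below would correspond to hand-written frame axioms, one per fact per action, which is where the explosion comes from.

```python
# Illustrative only: after one action, a GOFAI-style system must be told
# explicitly which facts stay the same. It cannot just "see" what's irrelevant.

world = {
    "block_position": "left side of table",
    "table_position": "center of room",
    "wall_color": "white",
    "temperature_in_beijing": "15 C",
    "moon_phase": "waxing gibbous",
    # ...and thousands more facts in any realistic model
}

def move_block(world):
    new_world = {}
    new_world["block_position"] = "right side of table"  # the one genuine effect
    for fact, value in world.items():                     # the "frame axioms": everything else
        if fact != "block_position":                      # must be carried over explicitly
            new_world[fact] = value
    return new_world

after = move_block(world)
print(after["moon_phase"])  # unchanged, but only because we said so, fact by fact
```

With n facts and m possible actions, that is roughly n times m statements of non-change to encode by hand, before you even consider how facts interact.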

 

But Dreyfus, following Heidegger, showed this is only a problem if you start with isolated, meaningless facts. For embodied beings like us, the frame problem doesn't arise. Our past experience of moving things is sedimented in how the world appears to us now. We directly perceive what's relevant and what's not. We don't compute it.

 

The Current AI Breakthrough: Bypassing the Problem?

 

The recent AI breakthroughs, particularly with large language models (LLMs), are interesting precisely because they seem to bypass some of these problems. Not by solving the frame problem through better representations, but by training on such massive amounts of human language and behavior that patterns of significance emerge statistically rather than through explicit encoding.
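At toy scale, the difference in approach looks something like the sketch below: a bigram counter standing in for what transformer-based LLMs do with billions of parameters and vastly larger corpora. The corpus and names are invented for illustration. The point is that no one writes a rule about hammers; the association falls out of counting.

```python
# Illustrative only: patterns emerge from statistics over text, not from encoded rules.
from collections import Counter, defaultdict

corpus = "the hammer drives the nail . the carpenter swings the hammer .".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Predict the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "hammer": learned from frequency, never stated as a rule
```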

 

But there's still a problem. These systems still aren't in the world. They don't have bodies. They don't have needs or desires or purposes that arise from being mortal, vulnerable, biological entities navigating a physical environment. They can process information about the world at superhuman scale, but they don't dwell in the world. They don't care about anything. They don't have what Heidegger calls Befindlichkeit (Gesundheit!), the mood or attunement that reveals what matters to us. Mattering, as it were, matters.

 


 

Why Mattering Matters for the Deployment Phase

 

When we deploy AI systems to make decisions that affect human lives (medical diagnoses, loan approvals, criminal sentencing, hiring decisions), we're delegating judgment to systems that have no being-in-the-world. They have no skin in the game. No lived experience of what it means to be denied healthcare, refused credit, wrongly imprisoned, not picked for the team, or passed over for employment.

 

They're computationally smart. But they're not wise. Wisdom requires something more than pattern matching at scale. It requires the accumulated experience of being a vulnerable, finite, embodied, frail human being who must cope with a dreadfully uncertain world. Who makes mistakes and suffers consequences. Who learns what matters through living, not just through training.

 

Dreyfus saw this in the 1960s. His critique seemed cranky and obstructionist at the time. But sixty years later, as I grow more cranky and obstructionist myself and we stand in yet another AI summer, his insights feel compelling.


We can build systems that are extraordinarily competent at specific tasks. But building systems with genuine human-like intelligence, intelligence that includes common sense, contextual understanding, ethical judgment, wisdom, would require not just better algorithms, more data, or different architectures, but systems that share our form of being-in-the-world. That's not a technical challenge. That's a human challenge with significant moral and ethical implications.

 

The Takeaway for Strategy Design

 

Which is why, as we shape the future of AI, perhaps we need philosophers and poets as much as we need venture capitalists and technical co-founders. We need people asking not just "Can we build it?" but "Should we build it? And why?" There is a gulf of human experience, wisdom, and judgment that stands between "is" and "ought."

 

The philosophical backstory matters. Descartes gave us a picture of intelligence that seemed obvious but was fundamentally wrong. He said mind and body are separate things. Thinking happens in here, in some internal mental space. The world is out there, external and meaningless until we impose meaning on it through our mental representations.


This picture has consequences. If intelligence is internal representation plus rules for manipulating those representations, then building artificial intelligence becomes an engineering problem. Just formalize the rules, encode all the possible representations, give it enough processing power. That's what Good Old-Fashioned AI tried to do.


Baruch Spinoza saw immediately that Descartes had created an impossible problem. If mind and body are truly separate substances with nothing in common, how do they interact at all? How does deciding to raise your arm cause your arm to rise? His solution was radical for the 1600s. There's only one substance. What we call "mind" and "body" are just two different ways of describing the same thing. Two attributes of one reality.


This was revolutionary, and he was excommunicated for his audacity. It meant you couldn't separate thinking from being, mind from world, internal from external. But Spinoza was still working within a rationalist framework. He still believed reality had a logical structure that reason could grasp. He escaped Descartes' dualism but kept Descartes' faith in pure reason (abstraction) as the path to truth.


Three centuries later, Edmund Husserl tried to rescue Descartes by going deeper into consciousness itself. Instead of asking "how does mind connect to world?", ask "how does consciousness structure experience?" Study the phenomena directly. Bracket questions about external reality and focus on the structures of the experiences themselves.


This phenomenological turn was powerful. It shifted philosophy away from abstraction (which sometimes fell into metaphysics and onto-theology) and toward studying how things actually show up for us in the world. But Husserl still assumed consciousness was fundamentally inner and separate (some people still believe this!), that the starting point was an isolated subject trying to make sense of experience as its object.


Heidegger, Husserl's student, realized that this entire endeavor was backwards. The problem wasn't Descartes' specific answer. The problem was Descartes' question. We don't start as isolated minds wondering if there's a world out there. We're always already in the world, engaged with it, coping with it, absorbed in it. Being-in-the-world isn't a conclusion we reach through reasoning. It's the starting point that makes reasoning possible.


This is a powerful reframing. Intelligence isn't primarily about internal representations and logical rules (if only those Dartmouth professors in 1956 had read Heidegger). It's about skillful coping with meaningful situations. The world isn't raw sensory data waiting for our minds to impose meaning. The world shows up as already meaningful, organized in terms of our concerns, our skills, our bodies.


When Dreyfus encountered AI researchers at MIT in the 1960s, he recognized immediately what they'd done. They'd taken Descartes' picture, the one that assumed intelligence equals internal representation plus formal rules, and turned it into a research program. They'd inherited not just Descartes' method but his fundamental misconception about what intelligence is.


That's why GOFAI was doomed. Not because the engineers weren't smart enough or the computers weren't powerful enough. Because the entire approach was based on a philosophical error about the nature of intelligence itself.


You can't fix a problem at the level of implementation when the problem is at the level of conception.

Here's what this means for anyone working with AI today.

If we don't understand what human intelligence actually is, not as computation but as a way of being embedded in and coping with meaningful contexts, we'll keep making category mistakes. We'll build systems that can manipulate symbols brilliantly but can't grasp context, that can optimize for metrics but can't understand what matters, that can process information at superhuman scale but can't exercise judgment. And these systems will be brittle.

The question now is whether we've learned the lessons of the past. I doubt it, but let's hope. Large language models give the appearance of bypassing some of these problems by training on massive amounts of human language and behavior. Patterns of significance emerge statistically rather than through explicit encoding. That's progress. But these systems still aren't in the world. They don't have bodies, needs, or purposes that arise from being mortal beings navigating uncertain environments.

They're incredibly capable. But capability isn't wisdom. And we're deploying them to make decisions about human lives (healthcare, criminal justice, immigration, hiring, credit), all without fully grasping what they can and can't do. Are we making the same mistakes at a larger scale? With more money, more hype, more at stake?

That's the question worth dwelling in the discomfort of. Not to be pessimistic. But to be clear-eyed about what we're actually building and why and for whom and at what price.

 
 
 
