
Heidegger's Hammer: A Philosophical Inquiry into AI



 Introduction to Heidegger's Hammer


[At the "AI in Action" panel coming in November, we're going to talk about where AI is heading and what it means for our community. In the article "The Unfolding: How AI's Seventy-Five-Year Pattern Reveals What Comes Next", I tried to provide background and context for the transition we're experiencing: the emergence of Artificial Intelligence and consumer-friendly AI tools, and questions about their potential impact on work, education, business, and government. But there's a deeper question underneath all of the media hype. What is intelligence, really? Not the engineering question. The philosophical one. The sidebar that follows is a deeper dive into why philosophy matters for AI.]


When Hubert Dreyfus was critiquing AI in the 1960s and 70s, he wasn't just being a curmudgeonly philosopher throwing stones from the sidelines at the engineers' parade. He was pointing to something fundamental that he believed many AI researchers were getting wrong. Something about what it means to be, and to be a human mind aware of itself in the world. Critical assumptions and models that, if you get them wrong, likely doom your entire AI enterprise. Doom.

 

"The AI guys had inherited Descartes' failure."

 

The insight comes from Martin Heidegger's phenomenology, articulated in Being and Time (1927), particularly his concept of Dasein, literally "being-there" in German, his term for the distinctively human way of existing in the world.

 

Being-in-the-World: Heidegger's Alternative

 

For Heidegger, we don't start as isolated minds that then have to figure out how to connect with an external world. That's Descartes' mistake, which Baruch Spinoza challenged but couldn't fully escape. We're not brains in vats receiving sensory data that we then have to interpret and act upon. Rather, we're always already in the world. Already engaged. Already involved. Already coping.

 

Heidegger distinguishes two fundamental ways things show up for us. When we're absorbed in using equipment (hammering a nail, typing on a keyboard, driving a car), things have what he calls readiness-to-hand (Zuhandenheit). The hammer isn't an object with properties that we observe and then decide to use. It withdraws from conscious attention. It becomes transparent. We don't experience "a hammer" at all. We experience hammering. The distinction between us and our tools vanishes.

 

Only when something breaks down (the hammer breaks, the keyboard sticks, the car stalls) do things become present-at-hand (Vorhandenheit). Only then do we step back and observe them as objects with properties. Only then do we think about them, reason about them, solve problems.


GOFAI, that is, Good Old Fashioned AI, assumed intelligence meant manipulating internal symbolic representations of external objects via exhaustively codified rules. Everything was present-at-hand. Everything about the world could, in principle, be made explicit. Everything was a fact with properties that could be encoded as symbols and manipulated according to rule sets. But that's not how beings exist in the world. Not beings like you and me.


When I walked into my college math lab in 1990, I didn't explicitly represent "doorway with specific dimensions at coordinates X,Y,Z with door in open position enabling passage." I pressed into the possibility of going through. My body responded to an affordance, ecological psychologist James Gibson's term for the action possibilities the environment offers. The doorway solicited me to enter.

 

This is what Heidegger means by being-in-the-world. I don't represent the world and then act on those representations. I'm always already immersed in meaningful situations that directly draw responses from me. The world isn't a collection of meaningless facts to which I assign significance and manipulate via a rules engine. The world is significant. It's organized in terms of my concerns, my projects, my skills, my body's capabilities.

 

The Frame Problem: Why This Matters

 

Here's why this matters for AI: the frame problem. The frame problem asks: when something in the world changes, how does a system know which of its beliefs or representations need updating and which can stay the same? If I move a block from one side of the table to the other, the table doesn't move, the floor doesn't move, the walls don't move. But how does a system know that?

 

For GOFAI, this was unsolvable. Every fact is independent. Every relationship must be explicitly encoded. The problem scales combinatorially. You'd need to specify that moving the block doesn't affect the temperature in Beijing, doesn't cause the moon to change phase, doesn't alter yesterday's weather.
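To make the scaling concrete, here's a minimal toy sketch of a GOFAI-style world model (the facts, the action name, and the `apply` function are my own illustration, not drawn from any actual system). In this style of representation, every fact an action does not change needs its own explicit "frame axiom" licensing the system to leave it alone, so the bookkeeping grows with actions times facts:

```python
# A toy symbolic world model illustrating the frame problem.
# Every fact is an independent proposition; for each action we must
# explicitly state which facts it changes AND which it leaves alone.

facts = {
    "block_on_left": True,
    "table_position": "center",
    "wall_color": "white",
    "beijing_temp_c": 21,
    "moon_phase": "waxing",
}

# Effect axioms: what the action actually changes.
effects = {"move_block": {"block_on_left": False}}

# Frame axioms: one explicit (action, untouched-fact) pair for every
# fact the action does NOT change -- this is the part that explodes.
frame_axioms = [
    ("move_block", f) for f in facts if f not in effects["move_block"]
]

def apply(action, state):
    """Update the facts named in the effect axioms; copy every other
    fact only because a frame axiom explicitly licenses it."""
    new_state = {}
    for fact, value in state.items():
        if fact in effects[action]:
            new_state[fact] = effects[action][fact]
        else:
            assert (action, fact) in frame_axioms  # must be licensed
            new_state[fact] = value
    return new_state

state = apply("move_block", facts)
print(len(frame_axioms))  # 4 frame axioms for 1 action and 5 facts
```

With N actions and M facts, you need on the order of N times M frame axioms just to say "nothing else happened" — which is exactly the bookkeeping an embodied being never has to do.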

 

But Dreyfus, following Heidegger, showed this is only a problem if you start with isolated, meaningless facts. For embodied beings like us, the frame problem doesn't arise. Our past experience of moving things is sedimented in how the world appears to us now. We directly perceive what's relevant and what's not. We don't compute it.

 

The Current AI Breakthrough: Bypassing the Problem?

 

The recent AI breakthroughs, particularly with large language models, are interesting precisely because they seem to bypass some of these problems. Not by solving the frame problem through better representations, but by training on such massive amounts of human language and behavior that patterns of significance emerge statistically rather than through explicit encoding.

 

But there's still a problem. These systems still aren't in the world. They don't have bodies. They don't have needs or desires or purposes that arise from being mortal, vulnerable, biological entities navigating a physical environment. They can process information about the world at superhuman scale, but they don't dwell in the world. They don't care about anything. They don't have what Heidegger calls Befindlichkeit, the mood or attunement that reveals what matters to us. Mattering, as it were, matters.

 



 

Why Mattering Matters for The Deployment Phase

 

When we deploy AI systems to make decisions that affect human lives, medical diagnoses, loan approvals, criminal sentencing, hiring decisions, we're delegating judgment to systems that have no being-in-the-world. They have no skin in the game. No lived experience of what it means to be denied healthcare, refused credit, wrongly imprisoned, not picked for the team, or passed over for employment.

 

They're computationally smart. But they're not wise. Wisdom requires something more than pattern matching at scale. It requires the accumulated experience of being a vulnerable, finite, embodied, frail human being who must cope with a dreadfully uncertain world. Who makes mistakes and suffers consequences. Who learns what matters through living, not just through training.

 

Dreyfus saw this in the 1960s. His critique seemed cranky and obstructionist at the time. But sixty years later, as I become more cranky and obstructionist myself, and as we stand in yet another AI summer, his insights feel prophetic.


We can build systems that are extraordinarily competent at specific tasks. But building systems with genuine human-like intelligence, intelligence that includes common sense, contextual understanding, ethical judgment, wisdom, would require not just better algorithms, more data, or different architectures, but systems that share our form of being-in-the-world.

 

And that's not a technical challenge. It’s a human challenge. It's an ontological challenge.

 

The Takeaway for Strategy Design

 

Which is why, as we shape the future of AI, we need philosophers and poets as much as we need venture capitalists and technical co-founders. We need people asking not just "Can we build it?" but "Should we build it? And why?"

 

I think the philosophical lineage matters. Descartes gave us the wrong picture. Spinoza (the master lens craftsman) saw the error but couldn't fully escape it. Husserl tried to rehabilitate Descartes on phenomenological grounds. Heidegger showed why the entire Cartesian project was Trump Steak to begin with. And Dreyfus brought these insights to bear on artificial intelligence, showing why GOFAI (and GenAI, FWIW) was doomed from the start.


Because if we don't understand what human intelligence actually is, not as computation but as a way of being, we'll keep making the same category mistakes, just with more computational power behind them and a lot less rain forest. Worse, we'll have failed to adapt, upskill, and shape how this new technology is diffused across society.

 
 
 


