Dasein.exe Not Found: Why AI Can’t Truly Think
What 20th-century existential philosophy has to teach us about AI
In the 1960s, the philosopher Hubert L. Dreyfus was one of the first to examine artificial intelligence research through the lens of the 20th-century German existentialist philosopher Martin Heidegger. At the crux of Dreyfus’ analysis was a critique of the symbolic AI of the time, which sought to represent the world through rule-based manipulations. AI, argued Dreyfus, does not inhabit the world the way humans do, and lacks the situated perspective that gives meaning to events and processes. AI is disconnected; it lacks “being in the world”, to use Heidegger’s term, and therefore cannot meaningfully engage with things the same way humans can.
Dreyfus argued against his colleagues’ belief that symbolic AI would lead to super-intelligent machines in the near future, and was ostracized and ridiculed for it. But he was right. It would take many decades, and a whole new statistics-based deep learning paradigm, before AI saw real progress. Dreyfus’ Heideggerian analysis of AI is now as relevant as ever, and indeed, it calls out for a modern update.
In "Being and Time" (1927), Martin Heidegger revolutionized philosophical understanding of human existence by introducing the concept of Dasein – the unique mode of being that characterizes human existence. This article examines artificial intelligence through Heidegger's ontological framework, arguing that AI fundamentally lacks the essential characteristics of Dasein and therefore cannot achieve genuine human-like intelligence or understanding.
While this discussion may seem esoteric and some of the terminology abstruse, it has direct and practical implications for AI policy and governance. I implore readers not to repeat the mistake of Dreyfus’ contemporaries, and to take philosophy seriously.
The argument, which will be elaborated in detail below, is actually quite straightforward. It is impossible for AI to care about anything it does, because it is not an involved participant in the world. The disconnectedness and disembodiment of AI call into question whether we can truly have AGI. If human intelligence is much more than just processing data, and involves a careful, deep participation in a meaningful world, then we still have far to go. This existential analysis suggests that something much deeper than an engineering problem plagues AI. It suggests that natural intelligence is a mode of being, a kind of deep engagement with a world of meaning and purposes that is beyond the scope of mere computation.
Dasein and the Question of Being
Heidegger's philosophical project begins with a radical reexamination of what it means to be. Traditional metaphysics, he argues, has forgotten the question of Being (Seinsfrage) by treating Being as either obvious or indefinable. To properly approach this question, Heidegger turns to an analysis of the entity for whom Being is an issue – the human being, which he terms Dasein. This notion of an entity for whom “being is an issue” is critical, and it marks a major point of divergence between natural human intelligence and artificial intelligence.
Dasein (literally "being-there") is not simply Heidegger's word for human beings as biological entities or conscious subjects. Rather, it names our peculiar mode of being as entities who understand ourselves in terms of possibilities and for whom our own existence is an issue. We are the beings who care about our being, who question what it means to be, and who understand ourselves in terms of the possibilities available to us.
This understanding of being isn't an idle abstraction, but is built into our everyday engagement with the world. Even in our most mundane activities, we operate with an implicit understanding of being – of what it means for things to be, for others to be, and for ourselves to be. This understanding isn't primarily conceptual but is embedded in our practices, our concerns, and our ways of dealing with things and others.
To understand why artificial intelligence differs fundamentally from human intelligence, we must first grasp these essential characteristics that make Dasein unique:
Being-in-the-World (In-der-Welt-sein)
Unlike the Cartesian subject that stands apart from an objective world, Dasein is always already thrown (geworfen) into a world of meaning and possibility. This "thrownness" (Geworfenheit) is not incidental but constitutive of Dasein's being. We never encounter the world as neutral observers but as participants already engaged in meaningful practices and relationships. We encounter the world head on, embedded in a whole process of history, meaning, social context, and human goals and needs. AI is totally removed from this flow of meanings and relationships. It exists outside of our web of meanings.
Care (Sorge) and Projection (Entwurf)
Dasein's fundamental mode of being is care (Sorge). This doesn't simply mean emotional concern but rather describes how we are always already invested in possibilities and involved in the world. This care structure is intimately connected to what Heidegger calls projection (Entwurf). Dasein projects itself into possibilities that matter to it, and this projection isn't mere planning or prediction but a fundamental way of being.
In projection, Dasein "throws itself forward" into possibilities that it understands as meaningful for its existence. When an artist begins a new work, they aren't merely calculating future outcomes but projecting themselves into possibilities that matter to their being. This projection shapes how they understand their present situation, their materials, and their own capabilities. The blank canvas appears as pregnant with possibilities precisely because of this projective understanding.
Thus, when we develop an idea or a project, we are fundamentally enmeshed in “care”. We are not just abstractly processing data, as an AI does. We are deeply invested in and intertwined with our project. It lives and breathes in us, and it has stakes. AI lacks stakes in anything it does, because it is not an authentic participant in the world.
This unity of care and projection means that Dasein's engagement with the world is always already shaped by its understanding of possibilities that matter to it. A scientist's understanding of experimental data, for instance, is shaped by their projection into possibilities of discovery and their care about contributing to human knowledge. This isn't something added to their understanding but constitutive of how they understand at all.
Being-Towards-Death (Sein-zum-Tode)
Dasein is characterized by its understanding of its own finitude. This awareness of mortality isn't just knowledge of an eventual end but structures how we understand our possibilities and give meaning to our existence. Our being-towards-death makes our choices and projects meaningful precisely because we understand them as finite possibilities. AI is non-living, and therefore its inorganic intelligence can never view things as meaningful in the way we do, as conditioned by our finitude and mortality.
Temporality (Zeitlichkeit)
Dasein exists temporally, but not merely in the sense of existing at a particular moment. Rather, Dasein's temporality unifies past, present, and future in what Heidegger calls "ecstatic temporality." We understand our present possibilities in terms of our inherited past and our projected future. AI exists outside of time and does not have the historical and temporal contextualization that defines human life and situates our thinking and perceiving.
The Ontological Limitation of Artificial Intelligence
A crucial precedent to this discussion is Hubert Dreyfus’ What Computers Can’t Do (1972), which offers one of the best-known Heideggerian critiques of artificial intelligence. Dreyfus argues that AI research has been dominated by a Cartesian model of cognition, assuming that intelligence consists of explicit rule-following and symbolic manipulation. This assumption fails to account for the fundamentally embodied and contextual nature of human understanding, as described by Heidegger. AI, according to Dreyfus, lacks situatedness, or the ability to engage meaningfully with a world that is not pre-structured by explicit representations. This critique aligns with Heidegger’s notion of being-in-the-world, reinforcing the argument that AI cannot truly replicate human intelligence because it lacks the existential structures of Dasein.
Recent developments in AI have not overcome these fundamental Heideggerian challenges. While machine learning and neural networks allow AI to recognize patterns at unprecedented scales, they remain within the realm of statistical processing rather than genuine world-involvement. The core issue remains unchanged: AI does not experience care, thrownness, or projection—elements central to Dasein’s understanding of itself and the world.
The Absence of a Clearing (Lichtung) and the Subject of Knowledge
Heidegger describes reason and understanding as occurring within a clearing, a space in which Being reveals itself. Just as a forest clearing allows light to illuminate objects, making them visible, the clearing of Being enables humans to engage meaningfully with the world. This means that understanding is not just a computational process but an openness to the disclosure of Being.
AI, however, lacks this openness to Being. It processes data, but it does not encounter meaning. It does not experience the world as a clearing where truth (aletheia, or unconcealment) emerges. Instead, AI operates through statistical and algorithmic manipulation, devoid of the existential horizon necessary for meaning to arise.
Moreover, AI does not produce knowledge in the Heideggerian sense, because knowledge requires a subject—a Dasein—to experience, interpret, and appreciate it. Without a being to dwell in the clearing, to question and situate knowledge within a meaningful context, AI-generated outputs remain mere computational artifacts, not genuine insights. Knowledge, in this sense, is not just about information but about an entity's relationship to Being, which AI fundamentally lacks. Knowledge requires a subject for its certification. Without that subject, our AIs may produce strings of words and symbols, but they cannot produce knowledge.
This further reinforces the thesis that AI, no matter how sophisticated, does not engage in the disclosure of Being—it merely organizes and processes symbols without truly understanding or experiencing them.
When we examine artificial intelligence through this Heideggerian framework, we see that it lacks these essential characteristics of Dasein:
Absence of Genuine Being-in-the-World
While AI can process information about the world, it lacks genuine being-in-the-world. It doesn't encounter entities as ready-to-hand (Zuhanden) in terms of their practical significance, nor as present-at-hand (Vorhanden) when this practical engagement breaks down. Instead, it processes everything as data points without genuine involvement in a meaningful world.
The Missing Structure of Care and Projection
AI's operations lack the fundamental care structure and projective understanding that characterize Dasein. While AI can be programmed to optimize for future outcomes, this is fundamentally different from Dasein's projection into possibilities that matter to its existence. An AI analyzing possible chess moves isn't genuinely projecting itself into possibilities that matter to its being – it's executing calculations without authentic care or investment in the outcomes.
This absence of genuine projection means AI lacks true agency. While it can process vast amounts of data about potential futures and calculate optimal paths, it cannot authentically project itself into possibilities that matter to it. Its "decisions" are computations rather than genuine choices emerging from care about its existence and its possibilities.
Temporal Processing Without Authentic Temporality
While AI can process temporal sequences and make predictions, it lacks the authentic temporality that characterizes Dasein. It doesn't genuinely project itself into possibilities from an inherited past. Its "now" is merely a computational state rather than a moment unified with past and future in meaningful existence.
The Absence of Being-Towards-Death
AI lacks the finite temporal horizon that gives human existence its weight and urgency. Without being-towards-death, it cannot have authentic projects or understand possibilities as genuinely its own. Its "choices" are computations rather than existential commitments.
Implications for Understanding Artificial Intelligence
This ontological analysis reveals that the difference between human and artificial intelligence isn't merely one of complexity or processing power but of fundamental modes of being. Several key implications follow:
Understanding vs. Processing
What we call "understanding" in AI is fundamentally different from human understanding. Human understanding emerges from our being-in-the-world, care structure, and projective relation to possibilities, while AI's "understanding" is pattern recognition without genuine meaning or significance.
The Limits of Simulation
While AI can simulate human-like behaviors with increasing sophistication, it cannot bridge the ontological gap that separates it from Dasein. No amount of computational power can create genuine being-in-the-world, authentic care, or true projection into meaningful possibilities.
Ethical and Practical Implications
Recognizing this ontological difference is crucial for responsible AI development. AI should be developed as a tool to enhance human Dasein rather than as an attempt to replicate it. The unique characteristics of human existence – our being-in-the-world, care, projection, and authentic temporality – should be preserved and supported rather than replaced.
Updating Dreyfus’ Critique for the Deep Learning Age
Dreyfus was largely preoccupied with criticizing a view of AI which, if not thoroughly discredited, has since been placed on the back-burner while more rapid progress is made via the probabilistic modeling of deep learning. Dreyfus was opposed to the view that AI had to model the world internally. Instead, as roboticist Rodney Brooks put it, the “world is its own best model”: AI systems had to be more embedded and situated lest they fall prey to the “frame problem.” The frame problem was a proposed issue in representationalist AI concerning how a model should know what to keep the same (the frame) and what to update whenever it acted on a scene. Systems that modeled every part of the world as a discrete variable struggled with the frame problem, because it quickly becomes computationally intractable to track the exponentially exploding number of possible world-updates for any given action. In robotics, progress was made by relying on sensors, which distributed information processing throughout the robot’s frame, actuators, and joints, rather than on an internal representation housed in a central processing unit.
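To make the combinatorial worry concrete, here is a minimal, hypothetical sketch in Python. The toy facts, actions, and numbers are invented for illustration and do not come from Dreyfus or the classical AI literature; the point is only that, without frame heuristics, a naive symbolic system must reconsider every tracked fact after every action, and the space of action sequences it must track grows exponentially with plan depth.

```python
# Minimal illustrative sketch of the frame problem (hypothetical toy example).
# A naive symbolic system has no rule telling it which facts an action leaves
# untouched, so after every action it must reconsider every fact it tracks.

from itertools import product

# Hypothetical toy world: a handful of boolean facts.
facts = {
    "door_open": False,
    "light_on": True,
    "cat_on_mat": True,
    "window_open": False,
}

# Hypothetical action effects: each action really changes only one fact.
effects = {
    "open_door": {"door_open": True},
    "switch_light": {"light_on": False},
}

def naive_update(state, action):
    """Re-derive EVERY fact after the action, even the untouched ones."""
    return {fact: effects[action].get(fact, value)   # touches all n facts...
            for fact, value in state.items()}        # ...to change just one

# Tracking candidate plans: with b actions and plan depth k there are b**k
# action sequences, each requiring k * n fact re-checks under this scheme.
b, k, n = len(effects), 3, len(facts)
plans = list(product(effects, repeat=k))
print(f"{len(plans)} candidate {k}-step plans ({b}**{k}), "
      f"each forcing {k * n} fact re-checks.")

state = facts
for action in plans[0]:
    state = naive_update(state, action)
print(state)
```

Classical responses such as frame axioms, or Brooks-style robotic architectures that let the world itself carry most of the state, were attempts to avoid paying this bookkeeping cost explicitly.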
Dreyfus criticized classical AI for treating intelligence as a formal rule-following system that manipulates symbols based on predefined categories. He argued that:
•Human intelligence is not rule-based but rooted in embodied, context-sensitive, and intuitive engagement with the world.
•AI cannot function autonomously in open-ended, real-world environments because meaning is not just encoded in symbols—it emerges from experience and interaction (being-in-the-world).
How does modern deep learning stand up to the same critique?
Modern deep learning differs in key ways (a toy contrast in code follows the list below):
1. It learns from vast amounts of data instead of following explicitly programmed rules.
2. It can approximate intuitive, non-symbolic pattern recognition, something Dreyfus argued classical AI failed to do.
3. It uses neural networks to model complex relationships, rather than depending on rigid, pre-structured ontologies.
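To make the contrast between hand-coded rules and learned pattern recognition concrete, here is a minimal, hypothetical sketch in Python. The feature names, toy data, and tiny perceptron are all invented for illustration and are not meant to represent any real GOFAI or deep learning system; the point is only the difference between a rule written by a programmer and a decision boundary fitted to examples.

```python
# Hypothetical contrast: a GOFAI-style rule versus a learned boundary.

# --- Symbolic / rule-based: the programmer encodes the category up front.
def rule_based_is_cat(whiskers: int, barks: int) -> int:
    return 1 if whiskers and not barks else 0

# --- Statistical / learned: the boundary emerges from labeled examples.
# Toy dataset of (whiskers, barks) pairs; label 1 means "cat".
data = [((1, 0), 1), ((0, 1), 0), ((1, 1), 0), ((0, 0), 0)] * 25

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                              # a few perceptron passes
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                       # update weights only on mistakes
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

def learned_is_cat(whiskers: int, barks: int) -> int:
    return 1 if w[0] * whiskers + w[1] * barks + b > 0 else 0

print(rule_based_is_cat(1, 0), learned_is_cat(1, 0))  # both classify "cat"
```

Both functions give the same answer on this toy input, but in the second case no one wrote the rule: the weights were fitted to correlations in the data. That is the sense in which deep learning “learns” rather than follows programmed rules; a real neural network does the same thing at vastly larger scale, with many stacked layers.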
While deep learning has solved many of the shortcomings of GOFAI (Good Old-Fashioned AI), it still does not approximate Dasein’s way of being-in-the-world. The following Heideggerian critiques remain applicable:
A. Lack of Genuine “Being-in-the-World”
•DL models do not engage with the world in a direct, embodied way; they passively process massive amounts of labeled data.
•They do not develop understanding through situated experience, but through statistical correlation.
•Example: A deep learning vision model may recognize a cat in an image, but it does not perceive the cat in a meaningful way—it lacks a referential whole where the cat exists in a world of significance.
B. Absence of Skillful Coping
•Heidegger (and Dreyfus) emphasize skilled action as a non-explicit, embodied intelligence.
•Humans do not rely on rule-based computations when performing skilled activities (e.g., playing chess, riding a bike, diagnosing a patient).
•While deep learning excels in tasks that involve recognizing patterns in static data, it struggles with flexible, real-time adaptation to novel situations.
•Example: A self-driving car trained on millions of road scenarios still fails in unpredictable, unstructured environments where human intuition excels.
C. Deep Learning Lacks Meaning and Understanding
•Neural networks recognize statistical patterns but do not “understand” them.
•This is a direct continuation of Dreyfus’ critique: mere computation does not equal understanding.
•Example: GPT-4 generates human-like text, but it has no world it inhabits—it lacks situatedness, care, and projection.
•Meaning in Heidegger’s sense arises within a holistic world of references (Verweisungszusammenhang), which AI lacks.
D. AI Lacks “Care” (Sorge) and Projection (Entwurf)
•Dasein is defined by its care-structure—we are always concerned with our own Being and projecting into the future.
•AI does not care about its outputs or what they mean.
•AI does not project possibilities for itself—it simply executes statistical operations without an existential horizon.
E. No Being-Toward-Death
•Humans make choices knowing that their existence is finite.
•AI does not experience anxiety (Angst), mortality, or an existential horizon—its “decisions” are not weighted by its own finitude.
•This removes the existential significance of choice that characterizes human decision-making.
So while modern deep learning models can learn from experience and sidestep many of symbolic AI’s static representationalist shortcomings, these models still do not dwell in the world; they do not exist in the Heideggerian sense. Exposing deep learning models to countless millions of examples never quite adds up to a complete picture of the world, because that complete picture always includes a Dasein, an active subjective involvement in the creation of meaning and contexts. Deep learning never quite adds that extra sauce of meaning and world-engagement.
Tying Everything Together
From a Heideggerian point of view we can see that AI fundamentally lacks true agency and therefore can never truly replace human intelligence. The consequences of this conclusion are compelling. Would it imply that we would need artificial sentience to truly arrive at AGI? Would we need a perceiving, situated, knowing subject to truly engage with a world of meaning and possibilities, for it to actually care about what it does, and to strive for goals in a naturalistic and significant way? Existing generative AI tools are useful and powerful, but only as adjuncts of human cognition. They are extensions of our own minds, and it is the presence of our intelligence which certifies and makes true use of their outputs. To develop truly intelligent artificial systems we may need to go one step further. We may need to create a new form of life, one that participates in the world, with a Dasein of its own, and strives toward future possibilities, projects and constructions.
As it stands, AI is still very useful and relevant as an extension of our own possibilities, projects, and constructions. To the extent that it remains this way, a tool rather than a coequal participant in the world of experience and knowledge creation, we should be fine. The confusion comes from the project of launching AI agents that are, as a Heideggerian critique of AI makes clear, fundamentally incapable of meaningful agency.