Philosophical Foundations of Learning: Insights from Wittgenstein
and Heidegger on AI Cognition
Zhifei He
College of Arts — School of Humanities, University of Glasgow, Glasgow, U.K.
Keywords: Wittgenstein, Existential Engagement, Language and Thought, AI Ethics, Situated Learning.
Abstract: This paper develops a perspective on how Wittgenstein and Heidegger can be fruitfully synthesized to bear upon the questions involved in designing AI. Although current AI technologies are good at following patterns and producing records, they lack social and existential intelligibility: they fall short in comprehending meanings and intentions. Wittgenstein's concept of "language games" emphasizes that meaning is negotiated through the dynamics of social practice, while Heidegger argues that understanding arises from embodied engagement with the world. The paper treats the shortcomings of learning and adaptation in today's AI as symptoms of these missing capacities, and it explores philosophy-based methods for improving AI performance through genuine social and environmental embedding.
1 INTRODUCTION
The stunning pace of artificial intelligence has produced remarkable capacities in language generation, image recognition, and autonomous systems. However, the success of AI relies heavily on representational models: the system leans on enormous amounts of data to extract patterns and produce statistically linked outcomes. These approaches produce results, but not with the significance, context, and intention that characterize human cognition. This calls for a paradigm shift from data-driven pattern recognition toward models of engagement, in which AI systems dynamically interact with the world and its manifold complexities.
Both philosophers, Wittgenstein and Heidegger, have put forward insights as to where the future development of AI should lead, pushing beyond representational theories toward the relational, contextual, and existential aspects of being. Wittgenstein's "language games" explain how meaning is constructed through practical use in shared practices, while Heidegger, through the notion of "Being-in-the-world," stresses that understanding is acquired through an engaged and embodied relationship with an environment. These views highlight the limitations of present-day AI systems but also serve as markers on the road toward building machines that might engage with the world in a more human way. One of AI's most glaring deficits is its inability to create meaning socially and existentially. According to Wittgenstein and Heidegger, human language and thought derive fundamentally from social praxis and lived experience. Meaning, for Wittgenstein, comes from the common rules and practices of particular communities: rules that are acquired actively, not passively, through shared participation. Heidegger, within an ontological frame of reference, shows that human interpretation is impossible without context. Take, for example, the carpenter: he perceives a hammer not through abstract knowledge about how it is made but through knowledge of its use, specifically how it is employed within the work of building. AI lacks the ability to engage with social and physical contexts, making its operations rigid compared to human
cognition. Modern AI language models, such as GPT, are very good at simulating human text generation, but they lack any understanding of the intentions, the cultural norms, or the existential consequences of what they produce. A machine can give a grammatically correct and seemingly relevant response to something deeply sensitive yet miss the emotional or social nuance underpinning the conversation, because its training corpus does not account for it. This is because AI operates within static datasets and representational logics that treat the world as symbols and data points rather than engaging with it as a dynamic and relational field.

He, Z.
Philosophical Foundations of Learning: Insights from Wittgenstein and Heidegger on AI Cognition.
DOI: 10.5220/0013977700004912
Paper published under CC license (CC BY-NC-ND 4.0)
In Proceedings of the 1st International Conference on Innovative Education and Social Development (IESD 2025), pages 231-238
ISBN: 978-989-758-779-5
Proceedings Copyright © 2025 by SCITEPRESS Science and Technology Publications, Lda.
Furthermore, Heidegger's reflections highlight another key aspect: embodied interaction with the environment. While present-day artificial intelligence is increasingly abstract in nature, a Heideggerian intelligence would possess sensorial and relational sensitivities, enabling robots to understand their environments contextually. The confluence of Wittgenstein's and Heidegger's thought also surfaces substantial ethical considerations: the very design of AI must respect cultural diversity, norms, and existential values. An AI system trained to detect and make jokes must be able to appreciate the various meanings the same expressions could carry in a different setting. Likewise, in the quest for empathetic AI, it should be ensured that machines do not exploit human emotions to manipulate people. A philosophy that synthesizes such human-centered principles can therefore guide these developments, helping to build systems that improve the quality of human life rather than undermining it.
This paper examines how philosophy can be synthesized with AI to create a future in which machines interact with the world in a more meaningful way. Wittgenstein and Heidegger, through their criticism of representation, stress context, practice, and relationality; this paper relates their arguments to present limitations and to possible routes toward an AI that surpasses mere intelligence to become adaptive, context-sensitive, and even ethically grounded. A genuine integration of philosophy and technology can allow AI to evolve from a purely functional orientation toward the kind of cognition and values that humans possess.
2 IMPLICATIONS OF
PHILOSOPHICAL
FOUNDATIONS FOR AI:
LEARNING BEYOND DATA
Human learning is not just a linear process of pattern
recognition but active and reflective engagement with
the world. It is dynamic. Human cognition comes to
grips with new situations by constantly reflecting on
meaning and interpreting reality in ever-changing
ways. This makes human learners much more than
information gatherers and users in an adaptive
process of meaning seeking and reality making. The
Heideggerian "being in the world" sees purpose,
emotional states, cultural meanings, and lived
experiences all determine and be part of the learning
process through informal learning. However, much of
machine learning, especially in today’s AI, rests with
large-scale pattern processing in data. A scientific
theory of cognition and the possibility of a single
worldview have been hindered by the
phenomenology of AI and robotics, which has
ignored representational features (Gómez, Bravo,
2009). It is training machines to identify and then
output statistical correlations, based typically on
statistical relationships abstracted from any
meaningful context. These models learn rather
sophisticated syntax, picking up on n-grams
(sequences of words) and probabilities that allow
fluency text production.
The constraints on AI stem from a lack of real understanding. AI can simulate intelligence because it provides apt answers or executes tasks efficiently, yet this "knowledge" is entirely utilitarian, involving no reflective or conscious thought. An output from the AI results from an optimized algorithm, not from any reflection on the content. Humans understand things more deeply; they perceive reality and can ask
metacognitive questions about their own learning. Take, for instance, a model like GPT. Such models do very well at predicting the next word in a sequence from huge amounts of data, or at generating output that sounds convincing. But they have no deep understanding of the essence and intentions of the humans whose language use expresses thought.
For Heidegger, AI stands apart from the realm of meaning, unable to grapple with existence and what it means to be in the world. When humans use language to express empathy, humor, or sarcasm, behaviors closely linked to social practice and lived experience, language-using machines only issue token responses that simulate these same expressions. They do so without any glimmer of what those words signify or any concern for their meaning. This cognitive
limitation underscores the deeper philosophical
problem that AI systems are essentially
representational in their approach to learning. They
reduce reality to data and symbols, and thus never get
to grips with the world in a practical, intentional, or
empirical manner. Consequently, their "knowledge"
remains disconnected from that environment which
gives human understanding its depth and plasticity.
These limitations do not detract from the fact that the philosophical orientations of Wittgenstein and Heidegger can usefully be pressed into service to direct AI toward more contextually aware and meaningful learning. By studying the philosophy of meaning and the conditions of the possibility of understanding, developers may eventually engineer AI that more closely approximates the faculties of human reasoning.
3 WITTGENSTEIN’S LANGUAGE
GAMES: MEANING AS SOCIAL
PRACTICE AND CHALLENGES
FOR AI
The idea of "language games" was introduced by Wittgenstein to dismiss the old theory that language is a set of symbols operating within a fixed system of rules. His point is that meaning does not spring from fixed definitions or symbolic formalism, but from the practical use of words within concrete social relationships. On this view, language becomes an ever-changing, active engagement positioned within different "forms of life": the shared cultural practices, customs, and ways of interacting that ground the meaning of words. Take, for example, the word "apology." The term has meaning only through its use within a given culture and a specific relationship, with reference to mending a social bond broken by some offense, not from any dictionary definition. The human richness of language depends on pragmatic sensitivities such as tone, intention, and relationality to established cultural standards. Speakers naturally adjust themselves to the language games to which they belong, offering utterances that other participants expect and conventionally recognize.
Wittgenstein's philosophical psychology challenges the view that language is a device for communicating independently constituted thoughts, suggesting instead that language shapes thought rather than merely conveying it (Proudfoot, 2009). At its most fundamental level, language is a social practice: it is not "known" or "understood" through learned rules or patterns but through active participation in situations where meaning is constructed and negotiated.
Post-Wittgensteinian philosophy has shaped AI research since the 1950s, though Margaret Masterman's contributions to ordinary language philosophy have been overlooked (Liu, 2021). Given this view of language, it becomes clear that, however much modern AI might try, it does not understand language. Modern AI processes language quite differently, through data and statistics: present AI is based on recognizing patterns across giant datasets, and its outputs are probability-based, not necessarily indicative of true understanding or engagement with context.
While it might produce sentences that are grammatically and semantically correct, AI's use of language fails to achieve the dynamism of language games as posited by Wittgenstein, because it does not operate dynamically within real, ongoing human social activities. Consider a discussion of ethics: human participants regulate their speech in light of the norms of the specific culture, the moral principles involved, and the emotional tone of the conversation. A speaker will choose words cautiously on sensitive matters, reaching for metaphors or hedging expressions to avoid conflict. Such adjustments are not formulaic but arise from an intuitive understanding of the shared norms that govern ethical conversation. AI
does not have this power of interpretation to adapt
language in a socially meaningful way. It can only
simulate these through patterns that have been learned
during training, but it is not truly “aware” of the rules
of the game nor can it comprehend the deeper
significance of ethical discourse. For example, it may
provide an apparently appropriate response to a
sensitive topic, using an algorithm that lacks the
emotional or even cultural resonance needed to give
the conversation human depth. The reason for this is simple: AI models are not attached to the lived experiences or cultural practices which constitute the very foundation of Wittgenstein's forms of life and, therefore, of any meaning in language. Consequently, AI perceives language as bare information, stripped of the human relationships that normally interpret its content. For Wittgenstein, meaning is essentially coterminous with use, a standard contemporary AI cannot meet: it can only ever achieve the appearance of use.
The existing AI models have been designed to
learn from the language used in static texts that float
in a vacuum. A big leap forward would be to have
systems that can learn language use within socially
situated environments, where words are used in
actual, practical, and interactive contexts. Nelson (2009) highlighted the significance of communal activities for language acquisition by contrasting Wittgenstein's ideas on word learning and meaning in group activities with contemporary cognitive and social-pragmatic theories. Through this, a machine could attempt to replicate the kind of practical understanding, comprising diverse elements, that humans derive from actual lived interactions.
Just as humans learn by trial and error with social validation, AI can employ reinforcement learning in real environments. Through interaction with a user and explicit or implicit feedback, the AI could develop an intuitive feel for when its response corresponds to the "rules of the game." For instance, it could come to perceive that jokes are fine in informal contexts but not in the gravitas required of formal interactions, or notice that at times it should prioritize empathizing over mere factual correctness.
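The trial-and-error learning from user feedback described above can be sketched as a simple contextual bandit. This is an illustrative sketch only: the contexts, response styles, and reward scheme are hypothetical assumptions for the sake of the example, not features of any deployed system.

```python
import random

# Hypothetical contexts and response styles, assumed for illustration.
STYLES = ["humorous", "factual", "empathetic"]
CONTEXTS = ["informal_chat", "formal_request", "sensitive_topic"]

class FeedbackLearner:
    """Learns, per context, which response style draws positive feedback."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon  # exploration rate
        # Running value estimate and count for each (context, style) pair.
        self.values = {(c, s): 0.0 for c in CONTEXTS for s in STYLES}
        self.counts = {(c, s): 0 for c in CONTEXTS for s in STYLES}

    def choose(self, context):
        # Mostly exploit the best-known style, but keep exploring.
        if random.random() < self.epsilon:
            return random.choice(STYLES)
        return max(STYLES, key=lambda s: self.values[(context, s)])

    def update(self, context, style, reward):
        # Incremental mean of explicit (+1) or implicit (-1) user feedback.
        key = (context, style)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

Trained against users who reward humor in informal chat but empathy on sensitive topics, such an agent converges on context-appropriate styles without any rule about "sensitivity" ever being written down, a crude analogue of picking up the implicit rules of a game.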
4 HEIDEGGER’S BEING-IN-THE-
WORLD: MEANING AS
EXISTENTIAL ENGAGEMENT
AND CHALLENGES FOR AI
Martin Heidegger’s concept of “Being-in-the-world”
presents a fundamental challenge to traditional views
of cognition. Heidegger repudiates the
representational model, which takes understanding to
be the production of internal constructions that
somehow correspond to an external reality.
Instead, he asserts that human cognition comes into being as active engagement with the world; this approach underscores meaning as situated, contextual, and relational. Understanding, on this view, is not detached observation but a practical engagement conducted through embodiment. Humans, for Heidegger, do not "gain access" to the world by means of abstract symbols; they are always already involved in it, drawing their significance from the contexts in which their action and existence find themselves.
Imagine a carpenter with tools. In daily practice, a carpenter uses these tools without breaking their description down into individual parts. Instead of defining the hammer as a "wooden utility with a handle and a steel striking head," the carpenter gives it a role, for instance, driving in nails. Such equipment acquires its definitive sense from its practical integration into the totality of equipment within which it operates. Heidegger termed this mode of being "readiness-to-hand," in which meaning is already built into our dealings with things, stemming from their bearing on the tasks we set for ourselves. This engaged relationship stands in stark
contrast with the theoretical attitude, which naturally
demands the isolation and abstraction of objects from
their settings and reduces them to individual
components. Unlike humans, AI systems perceive their surroundings abstractly, separate from the world in which they act; humans find their way based on lived experience. Even in advanced robotics research, where machines can largely move from one place to another, those movements are determined by pre-specified algorithms rather than by the robot's built-in appreciation of its presence in the space. A robot may traverse a cluttered room successfully by relying on sensor data and pathfinding algorithms, yet remain within that space without ever truly comprehending its place and purpose in it: it cannot grasp, for example, when it would make sense to move an obstacle rather than navigate around it, or why objects in the room carry emotional or functional importance. Lacking existential grounding, AI is fundamentally unmoored from the very contexts within which it operates, which reduces any possible engagement with the world to input-output functioning.
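The kind of pre-specified navigation described above can be illustrated with a minimal breadth-first search over a grid map; the room layout and the choice of algorithm are illustrative assumptions, not drawn from any particular robotics system.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D grid; '#' cells are obstacles.

    Returns the list of (row, col) cells from start to goal, or None.
    The 'robot' navigates purely by expanding states; nothing in the
    search encodes what the obstacles are or why the goal matters.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the route by walking the parent links backwards.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

if __name__ == "__main__":
    room = ["..#.",
            "..#.",
            "....",
            ".#.."]
    path = shortest_path(room, (0, 0), (0, 3))
    print(len(path))  # 8 cells: the 7-move detour around the wall
```

The search finds an optimal route, yet at no point does the state expansion represent when moving an obstacle would be better than detouring around it, which is exactly the gap the Heideggerian critique points to.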
One of the fundamental challenges in attempting to recapture the Heideggerian notion of Being-in-the-world results from the basic nature of AI's design. Yet by addressing the Cartesian presumptions of modern cognitive science, a Heideggerian approach can act as the "conceptual glue" for future cognitive science research (Wheeler, 2005). AI is designed upon
representational architectures that treat the world as
information to be processed, not as a meaningful
relational whole. These systems are based on
statistical relationships and hard-and-fast rules that
take inputs as they appear without the possibility of
accumulating the kind of contextual and relational
understanding that arises out of human cognition. Nor
does AI have that kind of intentionality, the
background upon which human dealings with the
world rest. A more Heideggerian approach might enhance AI research and development, even though earlier Heideggerian AI failed because it lacked a comprehensive account of the human mind (Dreyfus, 2007). For humans, every action and every perception is shaped by some goal or concern, which Heidegger termed Dasein's "care," influencing
how one interacts with things and interprets their
environment. Insight is a mysterious, unconscious
phenomenon that can be explained in causal terms, as
demonstrated by the emergence of the 'cognitive
unconscious' in the 19th and early 20th centuries
(Shanker, 1995). To bring AI systems closer to Heidegger's ideal of contextual understanding, researchers need to build machines that take account of sensation and the surrounding environment; the basic conception of human activity, and its prospects for further study, are illuminated by Heidegger's philosophy of Dasein and its applicability to cognitive science (Kiverstein, Wheeler, 2012). That means moving from representational processing to a more relational and adaptive mode of engagement. There are
multiple avenues to be pursued: beyond just
immediate sensory inputs, the AI system must be able
to learn long-term patterns of interaction with the
environment and adapt its behavior over those scales.
AI systems should develop feedback loops to be able
to refine their understanding of the environment
through interaction with humans and other agents.
Preston (1993) argued that, although it needs further refinement and elaboration, Heidegger's alternative method of analyzing intelligent behavior provides valuable insight for AI and cognitive research. This draws Heideggerian thinking in AI design away from classical computational frames, prioritizing instead relational engagement, contextual awareness, and intentionality. In this way, digital systems could come a little closer to the existential and situational features of human cognition. Attaining true Being-in-the-world may remain out of reach, but such developments would give AI systems more meaningful and adaptive interactions with the environment, drawing them nearer to Heidegger's vision of understanding as a dynamic, actively contextual process.
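One way to read the feedback-loop suggestion above is as an agent that tracks its fit with the environment on two timescales and adapts when recent interaction diverges from long-term patterns. The class below is a minimal sketch under assumed names, rates, and thresholds; it is not offered as a model of Dasein.

```python
# Minimal sketch: the agent keeps a fast short-term and a slow long-term
# estimate of how well its actions fit the environment, and switches
# behaviour when the two diverge. All parameters are illustrative.

class AdaptiveAgent:
    def __init__(self, fast_rate=0.5, slow_rate=0.02):
        self.fast_rate = fast_rate  # tracks the immediate situation
        self.slow_rate = slow_rate  # tracks long-term interaction patterns
        self.fast_fit = 0.0         # recent success of current behaviour
        self.slow_fit = 0.0         # historical success of current behaviour
        self.behaviour = "default"

    def observe(self, feedback):
        """Feedback in [-1, 1] from humans or other agents after each action."""
        self.fast_fit += self.fast_rate * (feedback - self.fast_fit)
        self.slow_fit += self.slow_rate * (feedback - self.slow_fit)

    def maybe_adapt(self):
        # If recent feedback is much worse than the long-term pattern,
        # the environment has likely changed: switch behaviour and reset.
        if self.slow_fit - self.fast_fit > 0.5:
            self.behaviour = "exploratory"
            self.fast_fit = self.slow_fit = 0.0
        return self.behaviour
```

The point of the two timescales is exactly the one made in the text: beyond immediate sensory inputs, the agent accumulates long-term patterns of interaction and refines its behaviour when ongoing feedback stops matching them.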
5 CONTRASTING YET
COMPLEMENTARY
PERSPECTIVES:
WITTGENSTEIN AND
HEIDEGGER ON MEANING
AND CHALLENGES FOR AI
The conceptions of meaning in the two philosophers, Wittgenstein and Heidegger, capture different yet complementary aspects. Wittgenstein focuses primarily on the external social practices that ground any meaning emerging from shared human activities. For him, language is not a fixed structure but a dynamic process, a game that depends on the forms of life of cultured and social human beings. Wittgenstein's "Lecture on Ethics" continues his exploration of ethics by delving into ideas close to Heidegger's Being and dread (Murray, 1974). Hence, words get their meaning not from any particular circumstance alone but from their use within socially defined practices. For example, making a promise has meaning only within a culture where there is already an understanding of trust and responsibility. As Dastur (2010) notes, Wittgenstein refuses metalanguage and questions his own position, while Heidegger questions the ante-predicative foundation of propositional language and its connection to Dasein's transcendence. On the other
hand, Heidegger conveys that meaning arises in the
context of active embodied engagement with the
world. And so human understanding is based on the
actual physical, social, and existential contexts of life.
Meaning arises not only through social conventions
but through a direct, willful engagement with one's
surroundings. For instance, for a carpenter, the
concept of a hammer goes beyond what is described
in the language or how it is used within a society; it
emerges from the embodied experience of this tool as
an extension of his purpose and intentions for a
specific task. Heidegger's orientation toward Being-in-the-world shows that meaning is fundamentally relational and contextual, emerging from human existence.
Social practices and existential engagements
underscore fundamental lacunae in the AI paradigm.
Most AI systems are essentially about symbols,
abstractions, and deriving patterns from data—
largely missing out on all of what learning and
meaning-making are socially and existentially about.
For example, AI is capable of producing text in various languages or holding a simple conversation, yet its comprehension is strictly functional, without human cognitive depth. By integrating the insights of Wittgenstein and Heidegger, researchers can open up ways of bringing AI toward a richer and more cohesive understanding of meaning.
Wittgenstein and Heidegger's views on mind/action
can be combined to form a holistic view of life,
combining their views on the constitution of states
and the flow-structure of the stream of life (Schatzki,
1993). Since, for Wittgenstein, meaning is socially constituted, AI systems can be developed to gather their learning experience
from interaction across a wide range of human settings. Existing AI models are grounded in fixed datasets and hence struggle to adapt to the socially contingent and contextualized nature of practice. Dynamic and interactive frameworks could instead place artificial intelligence systems in a position to perceive and absorb the implicit rules of human interaction. One scenario would be a conversational AI that observes live users and fine-tunes its responses accordingly, gradually learning contextual norms of politeness or humor. Being-in-the-world
would be applied, as per Fecht's principle, which indicates that the embedding environment is instrumental to meaningful understanding. AI systems designed to process not only symbolic data but also physical, sensory, and relational inputs may acquire a more rounded understanding of their environment. One example is a household robot that notices the warmth coming from the sun: by combining sensory information, such as the warmth of sunlight or the sound of a particular person's voice, the robot can relate to its environment in terms of Heideggerian relationality. For the development of AI, this is exactly what the integration of embodied practices with social practices entails.
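The household-robot example can be sketched as a simple fusion of sensory channels into a situational reading. The channel names, thresholds, and labels below are purely hypothetical assumptions made for illustration.

```python
# Illustrative only: the sensor channels, thresholds, and situation
# labels are hypothetical, not drawn from any real robotics API.

def classify_situation(readings):
    """Fuse simple sensory channels into a coarse situational label.

    `readings` maps channel names to raw values, e.g. the warmth of
    sunlight in degrees Celsius or the level of a voice in decibels.
    """
    warm = readings.get("sunlight_warmth_c", 0) > 24
    voice = readings.get("voice_level_db", 0) > 40
    if warm and voice:
        return "resident present, pleasant conditions"
    if voice:
        return "resident present"
    if warm:
        return "sunlit, unoccupied"
    return "quiet and unoccupied"
```

Even this trivial fusion step moves beyond single-channel symbol processing: the label depends on how the channels combine, a faint echo of the relational reading of the environment the text describes.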
6 ADVANCING MACHINE
COGNITION: TOWARD A NEW
PARADIGM
The future of artificial intelligence requires a transition from representational models, where AI identifies patterns in data, to an engagement model: context-aware cognition driven through interaction. Today, representational models are basic to AI systems, which are built around extracting statistical patterns from large datasets and generating output based on this abstraction. Duarte (2019) showed that machine-learning techniques can successfully mimic human behaviour in games, producing results almost identical to those of trained players in traditional card games like Hearts. While excellent for advancing capabilities in natural language processing or image recognition, this approach proves poor at grasping the world in any deep way, because it lacks proper engagement. An engagement model means that AI systems do not only process data but also interact dynamically with the environment, becoming capable of adapting to concrete contexts and learning through embodied experience. Hubert Dreyfus's seminal work underscores the need for AI researchers to investigate philosophy and intricate mental models, as well as the incapacity of disembodied machines to replicate higher mental activities (Dreyfus, 1992).
An engagement model prioritizes contextual and relational knowledge over abstract symbols. Future work should thus focus on interactive games where communication in natural language is crucial for understanding semantics and where physical embodiment is essential for developing grounded meanings in neural models (Suglia, Konstas, Lemon, 2024). Information has to be embodied within real contexts. An interactional approach to conversational AI, for instance, would involve not only the capacity to generate grammatically correct responses but a further accommodation of tone, phrasing, and content to the emotional state, cultural environment, and social dynamics of the person being spoken to. Such interactional dynamics bear a striking resemblance to human communication, where meaning flows not just from words but from the network linking language use, intention, and context. At the application level, this requires adaptive feedback loops that depend on continuous learning from experience. Philosophical perspectives ground such transformations, both critiquing existing limitations and offering constructive ways to build systems capable of more meaningful cognition. Both
Wittgenstein and Heidegger could be relevant to the
development of context-aware AI. Heidegger argues
that for a basic understanding of human experience,
one must consider a complex set of interrelations
among different entities over time. Wittgenstein’s
approach through language also offers an interesting
perspective: in his later work, he placed increasing
emphasis on the social aspects of language, and the
notion that rules should be seen as emergent
phenomena evolving in response to a need within
human practice. The training of AI should keep this in mind, gradually placing more importance on datasets that reflect dynamism and contextual grounding in learning. Machine learning
techniques, such as neural networks, evolutionary
computation, and reinforcement learning, can
enhance digital game AI by improving game agent
behavior and creating more engaging and entertaining
experiences (Galway, Charles, Black, 2008). For
example, an AI system trained using Wittgensteinian
principles in datasets representing diversity across
human social practices might acquire the ability to
understand humor, politeness, or sarcasm, recognizing and adapting its responses according to contextual norms. The focus for such an AI is on developing pragmatic adaptability as it learns the evolving, dynamic "rules" of communication through real-world experience rather than through fixed algorithms. "Being-in-the-world": Heidegger's philosophy underscores embodied practice and referential context. Such a Heideggerian AI would be programmed to dynamically engage its environment, drawing knowledge not only from datasets but also from embedded sensory encounters. For instance, a
robotic system in a care home might learn how to
understand the implicit emotional and physical
signals given by the residents and adapt its responses
to become more personalized and caring. In such a
system, meaning would result from the robot's relationship with the world rather than being an isolated computational product. These philosophical intuitions underscore the impetus toward building future AI that can truly perform within lived contexts, whether through social interaction (Wittgenstein) or environmental embedding (Heidegger). This is how AI can be designed to engage with the world more in line with human understanding.
As AI systems become increasingly integrated
into human contexts, ensuring that they align with
social norms, cultural differences, and existential
values becomes essential. The thought of Heidegger
and Wittgenstein can help educators better
comprehend cognition in science classes, improve
in-class science instruction, and guide the creation of
innovative learning environments (Roth, 1997). The
move toward an engagement model makes the need
for ethical safeguards even greater, because AI
systems will come to influence and interact with
human lives directly. Philosophy has laid down
important motivations for ensuring human-centric,
ethical AI design through the principle of making
sense of the world. For example, in his claims about
language and meaning, Wittgenstein repeatedly
pressed that the rules of social practice be respected
in building an AI system. A robot trained to
recognize what is funny in one culture must
appreciate that the same utterances may be
inappropriate, or simply misunderstood, in another
setting. Heidegger, for his part, posits
"being-with-others" as a constitutive condition of being.
With consequences for ontological and
epistemological commitments, the paradigm shift in
AI towards enactive cognition centers on
embodiment, development, interaction, and emotion
(Vernon & Furlong, 2006). AI will have to be
empathetic and engaged, especially in delicate
domains such as healthcare or education. Ethical
considerations also demand careful attention as the
AI achieves greater autonomy. As AI becomes more
capable of dynamic engagement, design must ensure
that the machine's autonomy remains bounded by
human oversight and accountability. An AI system
should therefore make its decision-making processes
explicit, so that its actions can be understood and
challenged when necessary.
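The demand that an AI make its decision-making explicit can likewise be illustrated with a minimal sketch. The rule structure, field names, and the `defer_to_human` fallback below are hypothetical assumptions for illustration, not an interface described in the paper:

```python
# Toy sketch (illustrative only): decisions carry an explicit, inspectable
# rationale so humans can understand, audit, and challenge them.
def decide_with_rationale(request, rules):
    """Return (decision, rationale) so every choice can be audited."""
    for rule in rules:
        if rule["condition"](request):
            return rule["action"], {
                "rule": rule["name"],
                "input": request,
                "action": rule["action"],
            }
    # No rule fired: fall back to human oversight rather than guess.
    return "defer_to_human", {"rule": None, "input": request,
                              "action": "defer_to_human"}

rules = [
    {"name": "high_risk_needs_review",
     "condition": lambda r: r.get("risk", 0) > 0.8,
     "action": "escalate"},
]

decision, rationale = decide_with_rationale({"risk": 0.9}, rules)
print(decision, rationale["rule"])  # escalate high_risk_needs_review
```

The design choice being illustrated is twofold: each action is returned together with the rule that produced it, and any input the rules do not cover is deferred to a human rather than decided opaquely.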
7 CONCLUSION
Bringing philosophy to bear on AI design reveals
current limitations, but also untapped potential for
innovation. Read together, Wittgenstein and
Heidegger challenge the representational, data-
driven nature of AI, and point toward a construction
of AI, and a continuing development of its theory,
that can move beyond mere efficiency to meaningful
and ethical engagement with the human environment.
Current models produce fine outputs, but they miss
the deeper layers of meaning (contextual, social, and
existential) that are all components of human
understanding. In practical social usage, the
philosophical principles of interaction, adaptability,
and situational awareness are prioritized and
established by humans in a shared framework. This
means embedding AI systems in a relational
environment where meaning is no longer thought of
as some abstract piece of data, but as something that
emerges and develops through concrete interactions
with the world. Philosophically, this has pushed the
boundaries of AI from mere functionality to the
ability to adapt meaningfully in real-world complex
situations.
The next generation of AI should also go beyond
these levels of interaction to capture social
interactions, environmental contexts, and even
philosophical reflections to better fit into these
contexts. This will enable AI to achieve not only
operational efficiency, but also the richness and
adaptability of human cognition. The insights drawn
from philosophy will hopefully provide new ways to
build intelligent and meaningful AI that aligns with
human life.
REFERENCES
Dastur, F. (2010). Language and Metaphysics in Heidegger
and Wittgenstein. 319-331.
De Almeida Rocha, D., & Duarte, J. (2019). Simulating
Human Behaviour in Games using Machine Learning.
2019 18th Brazilian Symposium on Computer Games
and Digital Entertainment (SBGames), 163-172.
Dreyfus, H. (2007). Why Heideggerian AI Failed and How
Fixing it Would Require Making it More Heideggerian.
Philosophical Psychology, 20, 247-268.
Dreyfus, H. (1992). What computers still can't do: a critique
of artificial reason. Leonardo, 27, 83.
Galway, L., Charles, D., & Black, M. (2008). Machine
learning in digital games: a survey. Artificial
Intelligence Review, 29, 123-161.
Gómez, J., & Bravo, R. (2009). The Unbearable Heaviness
of Being in Phenomenologist AI.
Kiverstein, J., & Wheeler, M. (2012). Heidegger and
Cognitive Science.
Liu, L. (2021). Wittgenstein in the Machine. Critical
Inquiry, 47, 425-455.
Murray, M. (1974). A Note on Wittgenstein and
Heidegger. The Philosophical Review, 83, 501.
Nelson, K. (2009). Wittgenstein and contemporary theories
of word learning. New Ideas in Psychology, 27, 275-287.
Preston, B. (1993). Heidegger and Artificial Intelligence.
Philosophy and Phenomenological Research, 53, 43-69.
Proudfoot, D. (2009). Meaning and mind: Wittgenstein's
relevance for the 'Does Language Shape Thought?'
debate. New Ideas in Psychology, 27, 163-183.
Roth, W. (1997). Being-in-the-World and the Horizons of
Learning: Heidegger, Wittgenstein, and Cognition.
Interchange, 28, 145-157.
Schatzki, T. (1993). Wittgenstein + Heidegger on the stream
of life. Inquiry: Critical Thinking Across the
Disciplines, 36, 307-328.
Shanker, S. (1995). The nature of insight. Minds and
Machines, 5, 561-581.
Suglia, A., Konstas, I., & Lemon, O. (2024). Visually
Grounded Language Learning: A Review of Language
Games, Datasets, Tasks, and Models. J. Artif. Intell.
Res., 79, 173-239.
Vernon, D., & Furlong, D. (2006). Philosophical
Foundations of AI. 53-62.
Wheeler, M. (2005). Reconstructing the Cognitive World:
The Next Step.