From my perspective, generative artificial intelligence (GAI) cannot truly possess or create knowledge. My position rests on principles from general semantics and from epistemological frameworks, including Chisholm’s analysis of knowledge. As I see it, GAI lacks the intrinsic capacity to “know” in the way that humans do, because knowledge requires subjective understanding, intentionality, and experiential context, none of which GAI has. To explain this, I’ll look at what knowledge truly entails, how GAI functions, and why, in light of these definitions, AI cannot meaningfully “know.”
To understand why GAI can’t possess or create knowledge, we first need to define knowledge itself. Following Chisholm (1977), knowledge is justified true belief: for someone to “know” something, there must be belief, the belief must be true, and the believer must be justified in holding it, with all three conditions holding together. Additionally, as general semantics emphasizes, knowledge is rooted in human experience, context, and the evolving relationships between symbols and what they signify (Hayakawa & Hayakawa, 1990). GAI, by contrast, is limited to data processing and prediction based on patterns in data; it lacks the subjective, interpretive process that turns information into knowledge for humans.
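Schematically (this is the standard textbook rendering of the tripartite analysis, not Chisholm’s exact formulation), a subject \(S\) knows a proposition \(p\) just in case

\[
K_S(p) \iff B_S(p) \,\land\, p \,\land\, J_S(p),
\]

where \(B_S(p)\) means \(S\) believes \(p\), the bare \(p\) requires that \(p\) is true, and \(J_S(p)\) means \(S\) is justified in believing \(p\). On this essay’s view, GAI fails the analysis at every clause: it holds no beliefs that could be true or false, and nothing in its processing plays the role of justification.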
In general semantics, language and symbols are tools for humans to interact meaningfully with the world, but they don’t embody the world itself. Alfred Korzybski, the founder of general semantics, famously said, “The map is not the territory” (Korzybski, 1933). GAI can process vast maps—data sets and models—but it cannot perceive the “territory” in any meaningful way. GAI outputs depend on pattern recognition, statistical analysis, and algorithmic processes that merely approximate human-like responses. It never grasps or understands the data it processes; it only manipulates symbols in ways we interpret as knowledge-like.
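To make the “map” concrete, here is a minimal sketch in Python. The corpus, the one-word context window, and the count-based representation are all my own illustrative assumptions, far cruder than any production model, but the kind of artifact is the same: a structure built entirely out of symbols.

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; real systems use billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build each word's "map": counts of the words appearing next to it
# (a context window of one word on each side).
cooccur = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            cooccur[word][corpus[j]] += 1

# The system's entire "acquaintance" with a cat is this bag of counts,
# derived from symbols alone; nothing in it reaches the animal itself.
print(cooccur["cat"])  # Counter({'the': 2, 'sat': 1, 'ate': 1})
```

However much such a representation is refined, it remains a structure over symbols; refining the map does not put the territory inside it.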
Furthermore, Chisholm’s criterion for knowledge includes a crucial human element: knowing that we know. This reflective self-awareness is unique to conscious beings. GAI does not “know” in this way because it does not understand, or even conceive of, what it is doing. When GAI processes information, it lacks intentionality, a feature philosophers such as Searle (1980) have argued is essential for genuine knowledge. GAI operates within programmed structures, reacting to inputs according to coded rules, without any self-generated intention, purpose, or comprehension.
The limits of GAI’s capabilities are further illustrated by the question of creating knowledge. Creation implies innovation: a spontaneous synthesis of understanding shaped by experience and imagination. Humans draw on a vast reservoir of personal experiences, emotional contexts, and subjective interpretations to make sense of information, integrating it into coherent knowledge. GAI cannot do this; it can generate outputs based on learned data patterns, but it does not create new knowledge in the true sense. When GAI models like GPT generate new sentences or answer questions, they do so by recombining learned patterns, not by synthesizing novel ideas or insights that reflect personal, experiential understanding.
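A toy bigram generator makes this concrete. The sketch below is vastly simpler than GPT, and its training text and sampling scheme are my own assumptions for the demo, but it is the same in kind: it can emit a sentence that never appeared in its training data purely by chaining learned word-to-word statistics, with no grasp of what it produces.

```python
import random
from collections import defaultdict

# A tiny training text; the sentences are arbitrary, chosen for the demo.
training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Learn which words follow which word: the model's only "knowledge".
follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

# Generate by repeatedly sampling a statistically plausible next word.
word, output = "the", ["the"]
while word != "." and len(output) < 12:
    word = random.choice(follows[word])  # pure pattern-following, no meaning
    output.append(word)

# Output varies run to run and may be a sentence absent from the training
# text, e.g. "the dog sat on the mat ." -- novel-looking, yet nothing here
# "understood" a dog, a mat, or a sentence.
print(" ".join(output))
```

Scaling this idea up to billions of parameters changes the fluency of the output, not, on the essay’s view, the nature of the process: pattern-conditioned prediction rather than insight.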
In my view, GAI’s inability to possess or create knowledge underscores a fundamental distinction between human cognition and machine processing. Knowledge is deeply intertwined with human subjectivity, context, and experience, all of which GAI lacks. As Searle put it, there is a difference between simulating cognition and actually having it; no amount of complex pattern processing equates to genuine understanding (Searle, 1980). The map is not the territory, and GAI’s sophisticated maps (its models) remain ultimately detached from the lived reality required to transform information into true knowledge.
GAI’s processes, as sophisticated as they may be, are merely data-driven manipulations without the subjective experience and intentionality that true knowledge requires. While GAI can mimic knowledge-like outputs, it fundamentally lacks the grounding in experience, awareness, and self-reflection that make human knowledge possible. Therefore, GAI cannot possess or create knowledge in any genuine sense.
References
Chisholm, R. M. (1977). Theory of Knowledge (2nd ed.). Prentice-Hall.
Hayakawa, S. I., & Hayakawa, A. R. (1990). Language in Thought and Action (5th ed.). Harcourt.
Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. International Non-Aristotelian Library.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.