What it Means to Know in the Age of GenAI

Philosophy has always fascinated me, particularly when it brushes against everyday things like technology, software development, and AI. One of the philosophers I’ve been learning about, and who changed how we think about knowledge, is Edmund Gettier. With just a three-page paper published in Analysis in 1963, Gettier threw a philosophical wrench into what many thought was a settled question: what does it mean to know something? The received view held that knowledge has three components:

  • justified, deriving from evidence
  • true, not a falsehood
  • belief, a proposition held in your head

In his brief paper, Gettier challenged the prevailing view by asking, “Is Justified True Belief Knowledge?” He presented two scenarios—later dubbed “the Gettier cases”—that showed it’s possible to have a justified true belief (JTB) about something and still not genuinely know it. This earned him lasting recognition and sparked a whole body of philosophical literature in response.

Before Gettier, the dominant theory of knowledge was pretty simple: knowledge is justified true belief. If you believe something, have a good reason to believe it, and it turns out to be true, then you can say you “know” it. But Gettier pointed out that this definition has gaps. Sometimes, people arrive at the right answer for the wrong reasons. They’re justified in believing something that turns out to be true, but their justification is coincidentally flawed. Do they really know it then?

For example, let’s say you look at a clock, and it reads 3:00 PM. You believe it’s 3:00 because you trust the clock (justification), and it is 3:00 (truth), so you feel like you know the time. But if that clock stopped working exactly 24 hours ago and only happens to show the correct time, do you actually know what time it is? Gettier’s answer: not really. You just got lucky.

This little paper shook the foundations of epistemology, and in a way, it’s deeply relevant to where we are today with AI, especially generative AI like ChatGPT.

Software Development and AI

In software development, we deal with knowledge constantly. From debugging code to designing new algorithms, what we “know” has a direct impact on the outcomes of our work. We believe our software functions correctly because we’ve tested it (justification), it runs without errors (truth), and so we assume we “know” the software is solid. But as many of us have experienced, just because code works in one context doesn’t mean it will in another. Sometimes, an unforeseen bug or a missing edge case breaks everything, shattering what we thought we “knew.”
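
To make this concrete, here is a hypothetical sketch (the average function and its test are invented for this post, not taken from any real project): the passing test supplies justification, and the function really does return the right answers for those inputs, yet the belief that it “works” collapses on an input the test never exercised.

    # Hypothetical example: a justified belief that the code works,
    # backed by a passing test, can still fail for inputs we never tried.

    def average(values):
        return sum(values) / len(values)

    def test_average():
        assert average([2, 4, 6]) == 4   # passes: our justification
        assert average([10]) == 10       # passes: more justification

    # Then production calls average([]) and raises ZeroDivisionError.
    # What we "knew" about the function was luck, not knowledge of every case.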

The same goes for AI systems, especially in the realm of generative AI. These systems produce results based on vast amounts of data, pattern recognition, and prediction, but do they know anything? As developers, do we know what the model will output, or are we, like the broken clock in Gettier’s example, only accidentally correct?

Justified True Belief or Lucky Guess?

Generative AI has a fascinating relationship with knowledge. Take ChatGPT, for instance. When you ask it a question, it doesn’t know anything in the way you or I might know something. It generates responses based on statistical patterns in the data it was trained on. Sometimes, the responses are spot on (truth), and the model was “justified” by the wealth of data supporting that response. But is that knowledge?

In the Gettier sense, probably not. Generative AI often arrives at the correct answer by coincidence rather than true understanding. For example, you might ask it to solve a complex math problem. It might give you the correct result, but that’s not because it understood the math. It was simply pattern-matching responses from its training set. There’s no deeper justification at work. The “truth” is more of an accident.
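
As a toy sketch of what pattern-matching without understanding looks like (the word counts below are made up, and this is nothing like a real language model), the next word is drawn purely from observed frequencies, and a correct-sounding answer can come out with no understanding behind it:

    import random

    # Made-up frequencies standing in for patterns seen in training text.
    next_word_counts = {"blue": 7, "cloudy": 2, "falling": 1}

    def sample_next_word(counts):
        # Pick the next word purely by how often it followed "The sky is".
        words = list(counts)
        weights = [counts[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    print("The sky is", sample_next_word(next_word_counts))
    # Most runs print "The sky is blue": a correct-looking completion
    # produced by frequency, not by knowing anything about the sky.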

This is one of the biggest challenges we face in software development with AI today. We are building systems that can seem eerily smart, but they don’t really “know” anything in the philosophical sense. We need to be careful not to confuse accurate outputs with true knowledge, much like how Gettier warns us not to confuse lucky guesses with genuine understanding.

The Implications for Software Developers

As developers, this has a few implications. First, we need to be skeptical of AI’s outputs, no matter how convincing they seem. Just because an AI can generate the right answer doesn’t mean it “knows” what it’s doing, and we shouldn’t let its confident responses lull us into thinking it does. Much like a stopped clock that’s right twice a day, AI can hit the mark without true understanding.

Second, Gettier’s problem reminds us that testing and validation are vital. In software development, we often rely on justifications for why we believe something works: tests, frameworks, best practices. But justifications can fail. Our job isn’t just to build systems that work—it’s to anticipate and handle the cases where those systems might fail in unpredictable ways. AI systems, by their very nature, are unpredictable in many contexts. When we deploy them into the world, we need to keep Gettier’s lesson in mind: don’t trust that you “know” something just because it seems to work in one specific instance.
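
As a minimal sketch of that habit, assuming a hypothetical ask_model() helper standing in for whatever LLM client you actually use: instead of trusting a generated answer because it looks right, validate it before acting on it.

    from datetime import date

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call; swap in your own client.
        raise NotImplementedError

    def extract_invoice_date(document: str) -> date:
        # Have the model pull out a date, then check it rather than trust it.
        answer = ask_model(
            "Return only the invoice date from this document as YYYY-MM-DD:\n"
            + document
        )
        candidate = answer.strip()
        try:
            # Output that *looks* like a date is not the same as knowing it is one.
            return date.fromisoformat(candidate)
        except ValueError:
            raise ValueError(f"Model output is not a valid date: {candidate!r}")

The check is crude, but it turns “the output looked right” into something closer to a justification we can actually defend.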

Lastly, the rise of AI in software development makes us reconsider what it means to possess expertise. If an AI can generate code, respond to complex queries, or even fix bugs, does that mean the AI knows how to code? No, it doesn’t. But that forces us to reflect on what it means for us as developers to know how to code. Is it enough to produce working solutions, or does true expertise require something deeper, something beyond the kind of justification and truth that AI can provide?

Knowing vs. Generating

Edmund Gettier’s short paper fundamentally changed how we think about knowledge, and his ideas remain just as relevant today, particularly as AI redefines the boundaries of what it means to “know” something. As we continue to develop and deploy AI, especially in fields like software development, we need to remember that AI’s impressive outputs don’t equate to understanding. Knowledge, in the human sense, involves more than justified true belief—it requires the kind of depth that AI, no matter how advanced, still lacks.

Knowing is more than getting lucky, whether you’re a developer fixing bugs or an AI generating the next best line of code. More than most people, developers are daily faced with bizarre epistemological problems. It helps to be able to distinguish 3:00 on a clock from, well, a gettier.

Other Thoughts

Questioning the reliability of our justifications for knowledge, and staying aware of the limits of our representations of reality, might offer a way to poke a hole in Gettier’s problem through Korzybski and general semantics. Korzybski’s famous phrase, “the map is not the territory,” applied to the clock example above, emphasizes that the clock (map) is not the same as the actual time (territory). The clock might display 3:00 PM, but if it stopped working 24 hours ago, it does not accurately reflect the current time; it is only a symbol of reality’s time. Could we think contextually and critically about the justification component of Gettier’s problem and still end up with justified true belief? Clarifying our language, saying “The clock says it’s 3:00 PM” or “According to the clock, it is 3:00 PM,” makes the uncertainty explicit and forces us to ask whether our justification really tracks the truth, which could serve as a counter-argument to Gettier’s idea.
