Gettier Bugs

Edmund Gettier wrote a three-page paper, published in Analysis in 1963, that kinda threw a philosophical wrench into what we thought we knew about knowledge. But in tech, Gettier bugs are a different kind of annoying, and maybe just as philosophical if I think about it.

A Gettier bug is a situation where a system behaves as if it knows something to be true, but in reality, it only got the right answer by accident. I might think my code is flawless because it’s returning exactly what I expected, until it doesn’t. That’s when I realize the code wasn’t really doing what I thought it was.

This can happen when my logic is technically wrong, but it still leads to a correct result under specific circumstances. It’s not broken, yet. It’s working fine right up until that one variable changes, or some unexpected input comes in, and then boom! Everything falls apart. It’s hard to spot because the error only shows up when I least expect it.

I’ve run into a few of these, and let me tell you, it’s like my code is gaslighting me. I start questioning my entire setup. How can something that’s working be so fundamentally wrong? It’s like I’m living in a programming twilight zone where up is down, 1 is 0, and null is sometimes not null (but only on Thursdays).

Example of a Gettier bug

#include <stdio.h>

// Incorrect factorial function with a flawed base case
int factorial(int n) {
    // Incorrect base case: should be n == 0
    if (n == 1) {  
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

int main() {
    printf("Factorial of 5: %d\n", factorial(5));  // Output: 120 (correct)
    printf("Factorial of 0: %d\n", factorial(0));  // Never reaches the base case: infinite recursion, stack overflow
    return 0;
}

The corrected code

#include <stdio.h>

// Correct factorial function with proper base case
int factorial(int n) {
    if (n == 0) {  // Correct base case
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

int main() {
    printf("Factorial of 5: %d\n", factorial(5));  // Output: 120 (correct)
    printf("Factorial of 0: %d\n", factorial(0));  // Output: 1 (correct)
    return 0;
}

I think this is a good example because if you look at the function factorial, it will return the correct result for positive integers like 5, but it will fail for factorial(0): the call never matches the n == 1 base case, so the recursion runs off into negative numbers (0, -1, -2, …) and eventually overflows the stack, since there's no proper base case to terminate it.

I mean, sure, I can write a bunch of tests, cover my edge cases, make sure everything behaves as expected in all the “usual” scenarios, but somehow, someway, that Gettier bug sneaks in. Sometimes these bugs make me reconsider whether I really understood recursion when I first learned it.

It’s even worse in production. Everything looks fine on the surface until that one freak edge case comes in, and suddenly my app is lying to me and everyone else. And me? I’m left sitting there with a cup of coffee, trying to figure out why this code, which has been running perfectly for months, suddenly decided to betray me.

Now it’s back to bug-fixing mode, hoping something sticks. It’s not just about finding the problem; it’s about finding the truth. Does this code really know what it’s doing? Or is it just pretending? And that’s the real philosophical side of Gettier bugs, it makes me question not just the code, but whether my approach was right in the first place. An existential crisis in lines of code.

When I do finally squash it, I can tell myself that now my program actually knows something. For real this time. Maybe.

Until the next bug, anyway.
