I feel this essay synthesizes a lot of ideas I've seen floating around out there, but does so far more productively than those tweeting "stochastic parrot!" and writing off anyone excited about the latest generative models.
In particular, this essay made me reflect on how I've experienced the effect of interfaces on generative AI over the last few years:
Circa 2021, I recall seeing my tech coworkers playing with an earlier version of GPT, before it was placed within a chat interface. I remember being confused by my coworkers' enthusiasm. Perhaps because I lacked imagination and understanding. But I think also because of the interface: generating one-off text blobs in a way that felt unpredictable and black box-y struck me as uninspiring. (Indeed, I'd built something similar using n-grams for an intro computer science course my freshman year in university.)
When I first used ChatGPT3, though, the feeling was completely different... & intoxicating: I felt like I had this super knowledgeable expert I could talk to at any moment about anything, without them ever getting tired. And, most importantly, it felt like a conversation -- i.e., the information was tailored to me, and the agent even responded to my objections. It felt like magic I couldn't stop using.
I suspect the challenge here, even for someone like me who literally programs interfaces & trains models professionally, is that it takes so much energy to hold in my head what the model is, rather than anthropomorphizing it. It's so hard to acknowledge that the model is indeed outputting (often factual) text tailored to my earnest questions, while remembering that it is doing so without engaging in anything like human listening, thinking, or reasoning.
The jump between "I'm having a human-like exchange" and "I'm interacting with an entity that has the intelligence of a human" can happen so fast, it's almost imperceptible. And news outlets hungry for clicks have made it even easier to believe.
You write
"it’s as if the theater production crew believed the performers really were the characters they portray on stage"
What's interesting is that this is exactly why audiences like theater: it's an interface just good enough for us to forget it is a performance. Audiences don't require much to do so: just some props, a few stage pieces. What's remarkable, though, as you point out, is that the crew seems to have forgotten, too.