The biggest tech story of the year is shaping up around the seemingly sudden arrival of AI chatbots into mainstream attention: piggybacking off last year’s viral reception of text-to-image generators like DALL-E 2, the launch of OpenAI’s ChatGPT in November has since spurred not only widespread media coverage and netizen adoption but also an industry-wide arms race. Whatever polite corporate cap-doffing was made to AI’s thicket of ethical ramifications over the past few decades disintegrated nearly overnight in favor of Silicon Valley’s primal fear of competition, and we now live in a society where Microsoft’s newly AI-powered Bing (“Sydney,” to her friends), Google’s Bard, Meta’s LLaMA, and Snapchat’s My AI (which at least allows you the dignity of naming your chatbot yourself) seem poised to transform us all. The AI future feels nigh, if not terribly optimistic.
In an era where major breakthroughs in tech prove either inscrutable—admit it, you still don’t know what a blockchain is, do you?—or hopelessly misguided (see: the metaverse), we as a public can at least intellectually get behind talking robots. At last, honestly! We’re kind of used to it already: After spending the greater part of Web 2.0 accepting the sleight of hand that invisible, algorithmic forces exert on our day-to-day, the consumer-friendly, AI-powered machinations of driverless cars and actually efficient task assistants and decent predictive-text features have become a foregone conclusion. But now that it’s here—un-wait-listed, and off whatever leash reputational risk once provided—we as a general public appear to be girding our loins in dread.
Over the past few weeks, we’ve witnessed the first cresting of media coverage around this new generation of AI chatbots, most of which has coalesced around two categories. The first is the stunt journalism, most famously executed in the New York Times’ conversation with Sydney, that can apparently leave even the most hard-bitten tech heads in the industry not a little flustered (“In the light of day, I know that Sydney is not sentient, and that my chat with Bing was the product of earthly, computational forces—not ethereal alien ones,” NYT columnist Kevin Roose wrote, and you can almost picture him reciting it to himself in a panicked affirmation).
Taken as a whole, the genre of these test drives is revealing, too, of the unavoidable anthropomorphism, plus the general obsession with sentience, that ensues, especially amidst the classique set-up of a male journalist evaluating a female-coded bot’s abilities. Writes Ben Thompson in Stratechery, “I was interested in exploring this fantastical being that somehow landed in an also-ran search engine…. Sydney absolutely blew my mind because of her personality; search was an irritant. I wasn’t looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt.” (New game: Every time someone makes a Her reference, drink!)
I’m being rather flip here, of course; at an existential level, I find it endearing how human it is to parse a few computer-generated constellations of words for a sense of recognition; it’s not unlike the way TikTok talking heads harness front-facing angles and eye contact to activate some evolutionary script for creating trust. The instinct to hunt around for one’s personal uncanny valley moment is expected, though it should not escape our sense of irony that the most levelheaded, nonpersonified explanation of AI chatbots thus far came to us from the acclaimed sci-fi writer Ted Chiang.
At any rate, projecting sinister vibes onto a chatbot is one thing, and falling for the usual passel of cyborg clichés is another, but it’s troubling on an entirely different scale how readily experts and normies alike have forecast the potential harms these chatbots already pose. The novelty of AI being able to compose a Nick Cave song or English 101 essay has worn off; in its place remains the growing category of coverage essentially itemizing every possible thing that could go wrong with the proliferation of these generative chatbots.
We already know AI can be as biased, racist, inaccurate, potentially unlawful, deadly, and generally toxic as the human-generated inputs chatbots train themselves on; we also understand by now that any piece of technology billed as a productivity tool will only ever exist to serve corporate bottom lines over the actual quality of life of workers. Already, this stage of AI capability is being dubbed a disaster, the end of academia as we know it, an existential threat to news media (not that the average American really cares), capable of triggering the next misinformation nightmare and waging class war, to start with. As John Oliver pointed out on his most recent Last Week Tonight, this is just the harm we already know is coming: “And those are just the problems we can foresee right now.”
Even the tech giants themselves have lost faith in their own abilities to understand, much less control, the AI they’re setting loose via what’s essentially a public beta test: The tone of Microsoft’s response to many of the errors and quirks that the stunt journalism has revealed runs somewhere between “idk,” defensiveness, and a general promise to look into things; Snapchat’s announcement of My AI literally couched any consequences with: “Please be aware of its many deficiencies and sorry in advance!”
Sorry in advance? Toto, I have a feeling we’re not in the move fast, break things era anymore!
As far as tech hype cycles go, the days of gadget mania now feel utterly quaint in comparison; with each new development out of Silicon Valley, the task of media, early adopters, and experts now requires constant confrontation with the industry’s known knowns of harm as well as civilization-old issues that continue to defy a quick system patch. AI chatbots have failed the sniff test much faster than the once-utopic prophecies of social media, Web3, virtual reality, and their ilk, which should feel like a good thing. For those of us whose daily lives are continually shaped by these tech overlords, it’s one thing to be shepherded into the adoption curve of the newest shiny thing you only half understand; it’s another to hear what’s coming and feel immediate despair.