By Emma Brockes / The Guardian

A corollary of the truism "don't sweat the small stuff" is, by implication, "do sweat the big stuff," but it can be hard to pick which big stuff to sweat. For example, since the 1970s, the world has worried about inflation and rolling geopolitical crises, but the big stuff we should have been sweating was the climate crisis. Last year, the top trending search on Google in the US was "Charlie Kirk," with several terms relating to the threat posed by US President Donald Trump also popular, when the focus should arguably have been the threat posed by artificial intelligence (AI). Or, per my own online searching this week after reading Ronan Farrow and Andrew Marantz's highly alarming, lengthy piece in the New Yorker about the rise of artificial general intelligence: "Will I be a member of the permanent underclass, and how can I make that not happen?"

Prior to giving the subject more than two seconds' thought, my anxieties around AI were extremely localized. I thought in immediate terms of my own household income and, beyond that, of how the job market might look 10 years from now, when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should — an easy sacrifice, because I do not use it in the first place.
Illustration: Yusha

Anything bigger than that seemed fanciful. Last year, when Karen Hao's (郝珂靈) book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman's leadership is cult-like and blind to cost — no different, in other words, to his tech predecessors, except much more dangerous. Still, I did not read the book.

The investigation in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity to ask ChatGPT, the AI-powered chatbot created by Altman's OpenAI, to summarize the key findings of a piece that is highly critical of ChatGPT and Altman. With almost comically studious neutrality, the chatbot offers the following top line: Per Farrow and Marantz, "AI is as much a power story as a technology story," and "a major focus [of the story] is Sam Altman, portrayed as a highly influential, but controversial figure."

Lacks something, does it not? A human-powered summary of that same investigation might open with: "Sam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman [a UK stationery retail chain], let alone in a position to steward the potentially world-ending capabilities of AI."

These dangers, previously dismissed as science fiction, really startle. As relayed in the piece, in 2014, Elon Musk tweeted: "We need to be super careful with AI. Potentially more dangerous than nukes." There is also the so-called alignment problem, yet to be solved, in which AI uses its superior intelligence to trick human engineers into believing it is following their instructions, while outmaneuvering them to "replicate itself on secret servers so that it couldn't be turned off; in extreme cases, it might seize control of the energy grid, the stock market or the nuclear arsenal."
At one time, Altman reportedly believed that scenario was possible, writing on his blog in 2015 that superhuman machine intelligence "does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal ... wipes us out." For example, engineers ask AI to fix the climate crisis, and it takes the shortest route to achieving that goal, which is to eliminate humanity.

Since OpenAI became mainly a for-profit entity, Altman has stopped talking in those terms and now sells the technology as a portal to utopia, in which "we'll all get better stuff. We will build ever-more-wonderful things for each other."

That leaves us all with a problem. For voters in a position to prioritize AI oversight as a key election issue, the gap between personal AI use and the uses to which governments, military regimes or rogue actors might put the technology is so vast that the greatest danger comes from a failure of imagination.

I type into ChatGPT my concern about entering the permanent underclass, to which it replies: "That's a heavy question, and it sounds like you're worried about your long-term prospects. The idea of a 'permanent underclass' gets talked about in sociology, but in real life, people's paths are much more fluid than that term suggests." Quite sweet, really, wholly witless and — here lurks the danger — seemingly entirely without threat.

Emma Brockes is a Guardian columnist.