The short-lived firing of Sam Altman, the CEO of possibly the world’s most important AI company, was sensational. When he was sacked by OpenAI’s board members, some of them believed the stakes – the future of humanity – could not have been higher if the organisation continued under Altman. Imagine Succession, with added apocalypse vibes. In November 2023, after three weeks of secret calls and varying degrees of paranoia, the OpenAI board agreed: Altman had to go.
The drama didn’t stop there. After his removal, Altman’s most loyal staff resigned, and others signed an open letter calling for his reinstatement. Investors, including its biggest, Microsoft, got spooked. Without talent or funding, OpenAI – which developed ChatGPT and was worth billions – wouldn’t even exist. Some who had been involved in the decision to fire Altman switched sides and, within days, he was reinstated. Is he now untouchable? “Certainly he has entrenched his power,” says Karen Hao, the tech journalist whose new book, Empire of AI, details this saga in a tense and absorbing history of OpenAI. The current board is “much more allied with his interests,” she says.
Hao’s book is a gripping read (subtitle: “Inside the Reckless Race for Total Domination”), featuring the unimaginably rich, as well as people in developing countries who are paid a pittance to root out the horrific sexual and violent content in the internet data that AI is trained on. The cast of characters that make up OpenAI have brilliant minds and often eccentric behaviour. Elon Musk, after all, is one of its founders. Another founder and its chief scientist, Ilya Sutskever – who would be part of the failed attempt to remove Altman – dramatically illustrated his fears about the “unaligned” AI they had created: in 2023, while his senior colleagues stood around a firepit at a luxury resort wearing bathrobes, he burned a wooden effigy built to represent it.
At the centre of it all is Altman, OpenAI’s charismatic co-founder and CEO who is, depending on how you view him, the villain who has put humanity on the path to mass extinction, or the visionary utopian who will bring us cures for diseases and a revolution in how we work. In the less than two years it has taken Hao to write her book, Altman, 40, appears to have outmanoeuvred his dissenters and has announced plans to raise $7tn.
Hao describes Altman as a “once-in-a-generation fundraising talent” and claims OpenAI’s chances of winning the AI arms race depend on raising vast sums. “He persuades people into ceding power to him, he doesn’t seize it. The reason he’s able to do this is because he’s incredibly good at understanding what people want and what motivates them. I came to understand that … when he is talking to someone, what comes out of his mouth is not necessarily correlated as much with his own beliefs as it is with what he thinks the other person needs to hear.” It’s why, she says, he was able to recruit so many talented people and get so much investment (and also what made some on his original board, and senior employees, nervous). “It’s also why he was able to pull off something that most people would not be able to do, which is to get the public to buy into this premise that he’s doing something profoundly good for society, just long enough to get away with it.”
Within OpenAI, Hao points out, “every single person that has ever clashed with him about his vision of AI development has left – Musk has left, Dario Amodei has left, Sutskever has left [the three were early leaders in OpenAI] and a whole host of other people. Ultimately, they had a different idea of how AI should be developed. They challenged Altman, and Altman won.”
In 2021, Altman’s sister Annie made the shocking allegation on what was then Twitter that he had sexually abused her as a child (he is nine years older). In January this year, she filed a lawsuit against him. In a statement released by Altman, his mother and his two brothers, they described the allegations as “utterly untrue” (his father died in 2018). Hao had several conversations with Annie, piecing together how her life unravelled. A bright child who planned to go to medical school, she suffered from poor mental health, and then developed a series of chronic physical health issues as a young adult. After her father’s death, her health declined even further and, as her family started cutting off financial help, she became estranged from them. Annie, says Hao, “is such a perfect case study of why we need to be sceptical of what Sam Altman says about the benefits of AI”. Altman claims AI is going to solve poverty and improve healthcare, but Annie – who lives in poverty and has health issues – hasn’t seen any of the benefits, says Hao. “She’s representative of more people, and how they live in the world, than he is, and it just so happens that this perfect case study is also his sister.”
Despite agreeing to speak to Hao for her book, OpenAI pulled out when they found out she was in touch with Annie. “This should be a family thing,” says Hao. “Why is a company representative now making this their top issue? That illustrated to me how important Sam, the man, is to the company.” It highlighted to her, she says, that Silicon Valley companies, particularly when faced with criticism, “can bring their full power to bear to quash that dissent.”
Hao studied mechanical engineering at university and moved to San Francisco after graduation to work for a startup. “I thought that was going to be my career, to be in Silicon Valley and do that whole journey,” she says, when we speak over Zoom. “I pretty much realised within the first year that the incentives within Silicon Valley for technology development were not aligned with the public interest.”
So she moved into journalism and, writing for the magazine MIT Technology Review, became fascinated by AI. “I was primarily spending all my time talking with researchers within companies that were operating in academic-like environments, where they didn’t really have any commercial objectives. There was so much diversity of research happening.” There were also healthy debates. This was in 2016, around the time Donald Trump won his first election, and there had been a lot of criticism of the tech industry. “There was emerging research on AI and its impact on society. What are the harms? What are the biases embedded in models that lead to potential widespread discrimination and civil rights issues? That’s kind of where the AI world and discourse was before it got derailed by ChatGPT.”
Within days of the release of ChatGPT in late 2022, it had one million users. Within a couple of months, it had 100 million users and had become the fastest growing consumer app in history. Hao felt its dazzling success had overshadowed those kinds of debates, at least in the mainstream. “People were just buying what OpenAI and other companies were spoon-feeding in terms of narratives, like: this is going to cure cancer, this is going to solve the climate crisis, all these utopic things that you can’t even dream of.” She started working on what would become her book, looking at the history of OpenAI and its competitors. “Only when you have that context can you begin to understand that what these companies say should not be taken at face value.”
Before Hao started following OpenAI more closely, she says she had a “pretty positive impression. I was curious about them; they were not a company, they were a non-profit, and they talked about how they were going to be transparent, open their research, and were focused on benefiting society”. Then, in 2019, things started to change; OpenAI developed a “capped-profit” structure (investors would have their returns capped at a very generous 100 times), Altman became CEO, they signed a billion-dollar deal with Microsoft, and started to withhold their research. “It seemed like quite a significant shift,” says Hao.
“That is one of the reasons why OpenAI has had so much drama, but it’s also emblematic of AI development – it’s so much driven by ideology,” she says. “There’s this clash of egos and ideologies that’s happening, to try to seize the direction.” Within OpenAI, whether you were a boomer (those who want to scale as fast as possible) or a doomer (those who believe AI is a threat to humanity), the finish line was the same: to develop, and therefore control, AI first.
Does Hao think AI poses an existential threat? “The biggest and most pressing threat is that AI is going to completely erode democracy and, if you understand that, the conclusion is then we should just stop developing this technology in the way that these companies are developing it.” The funnelling of resources “is a completely different scale than previous tech companies … They’re trying to justify raising the largest private investment rounds again and again – OpenAI having just raised $40bn in the latest round”. That kind of concentration of wealth, she says, “is in and of itself a threat. We are already seeing that play out with the US government, with the takeover by unelected tech billionaires.”
The apocalyptic visions of a superintelligent AI turning against humanity have been a distraction, she thinks. “Ultimately, what’s going to cause catastrophe is people, not rogue AI, and we need to watch what the people are doing.” However, she has met people who genuinely believe AI will destroy humanity. “I spoke to people whose voices were literally quivering in fear, that is the degree to which they believe, and if you truly believe that, that’s terrifying.” Then there are those who use the idea of how AI could become so powerful as “a rhetorical tool to continue saying: ‘That’s why we good people need to continue controlling the technology and not have it be in the hands of bad people.’” But as far as Hao can see, “we’ve not gotten more democratic technologies, but more authoritarian ones.”
Neither does Hao have much sympathy for the argument that the development of AI requires huge investment. “I don’t think it needs the level of investment these companies say it needs,” she says. “They have already spent hundreds of billions of dollars on developing a technology that has yet to achieve what they said it’s going to achieve. And now you expect us to spend trillions? At what point do we decide that actually they’ve just failed?
“When I was covering AI pre-ChatGPT and the wide range of research that was happening, there were such exciting ideas … ChatGPT erased people’s imaginations for what else could be possible.” Generative AI has taken over – not just OpenAI, but at other tech companies including Google’s DeepMind – and this, says Hao, “has distorted the landscape of research, because talent goes where the money goes.”
The money doesn’t flow equally, though. Hao interviews people working for outsourced companies in Kenya, Colombia and Chile, who annotate the data that generative AI is trained on, sifting out harmful content for low pay and without much thought for their mental health. The AI, meanwhile, is powered by vast datacentres, buildings packed with computers, that require a huge amount of energy to run, and whose cooling systems require a huge amount of water. In the near future there will be even bigger datacentres known as “mega-campuses”. Just one of these could use more energy than three cities the size of San Francisco.
The premise of her book is that the AI giants are running an empire. But history shows us that empires can and do fall. Hao sees each step of the supply chain as a potential site of resistance. Artists and writers, for instance, are pushing back against their work being used to train generative AI (the Guardian has a deal with OpenAI for the use of its content). Enforced data privacy laws “are also ways to contain the empire”, as is forcing companies to be transparent about their environmental impact, from their energy consumption to where and how the minerals needed for hardware are extracted. Tech companies, says Hao, “want their tools to feel like magic” but she would like more public education to make people realise that each AI prompt uses resources and energy. Hitting these pressure points and more means, she says, “we can slowly shift back to a more democratic model of governing AI”.
Compelling though he is, this isn’t just about Altman, the reigning emperor of AI. “It will take a far more concerted effort now to remove him,” she says, but adds, “we fixate a bit too much on the individual”. If, or when, Altman chooses to step down or is successfully ousted, will his successor be any different? “OpenAI is ultimately a product of Silicon Valley.” And anybody who may one day replace Altman, says Hao with foreboding, is going to pursue the same objective: “To build and fortify the empire.”