Sam Altman may be right that a bright AI future is coming, but the benefits may not be evenly distributed.
In a rare post on his personal blog on Monday, the OpenAI CEO heralded the coming of an AI-driven “Intelligence Age.” We’re verging on this new prosperity because “deep learning worked,” he writes. And OpenAI, he argues, has proved that training large language models on more and more computing power yields predictably smarter AI.
According to Altman’s vision, we’ll each have our own team of virtual experts, personalized tutors for any subject, personalized healthcare valets, and the ability to create any kind of software we can imagine. “With these new abilities, we can have shared prosperity to a degree that seems unimaginable today,” he writes. And most amazing of all: this utopia could begin in just “a few thousand days.”
Sure, AI might accelerate human progress for the greater good. But it seems more likely to simply concentrate unprecedented intelligence in the hands of the few who have the resources and skill sets to apply it.
Altman, for his part, suggests that ensuring even distribution is just a matter of producing enough computing power, computer chips, and energy. “If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people,” he writes.
But as several people were quick to point out, OpenAI’s own models are anything but open. That is, the company doesn’t open-source its models so that other developers can modify or customize them for specific use cases, or host them within their own domains for data security. That’s partly because building huge frontier models is extremely costly, and the company and its investors have an interest in making sure the research is protected as intellectual property.
People in the open-source community, meanwhile, believe that AI models can be made better, safer, and more equitable when their raw materials (the models, parameters, code, training data, and so on) are put into the hands of many people throughout the ecosystem. Many of the AI models in production over the coming decades will indeed be open-source, but most of today’s largest and highest-performing frontier models are closed. Altman, as a member of OpenAI’s board, was the driving force behind tightly controlling access to the models and focusing the company on monetizing them. That approach doesn’t square with his company’s original “AI for all” ethos.