Gary Marcus, 54, has fervently warned on social media in recent months about the dangers of a Donald Trump return to the White House, especially with Elon Musk at his side. Now that it has happened, Marcus feels deeply discouraged. A leading voice in the US on artificial intelligence (AI) and its risks, he testified last year before a Senate subcommittee on regulating AI alongside Sam Altman, CEO of OpenAI (the company behind ChatGPT). In his latest book, Taming Silicon Valley, Marcus argues that generative AI – the technology behind tools like ChatGPT and Gemini – will make the world a worse place if left unchecked.
Marcus, professor of psychology and neuroscience at New York University, has spent decades researching the intersection of cognitive psychology and AI. He has also founded two startups. The first, Geometric Intelligence, was acquired by Uber in 2016 and became a deep learning research lab. The second, Robust.AI, which he founded together with one of the makers of the Roomba vacuum cleaner, focuses on developing open source software for autonomous robots.
Marcus is active on social media, where he is blunt about his views on Yann LeCun, Meta’s chief AI scientist: “He’s an intellectually dishonest egomaniac who did everything he could to deplatform me when I first criticized large language models (LLMs), only to make a U-turn when ChatGPT overshadowed Meta’s work.”
Question. How do you see the future after Trump’s victory in the presidential election?
Answer. Dark. Generative AI poses many risks, both in the short and long term, and I think the prospects for meaningful regulation under the Trump administration are poor. The EU has its AI law; the US has very little legislation around AI to protect its citizens, and I don’t see that changing in the coming years.
Q. There have been some attempts to regulate AI in California. Do you think it is possible that some states will pass their own AI laws?
A. California has passed some laws around issues like data transparency, but Silicon Valley lobbyists helped block SB-1047, which would have made companies liable for “catastrophic harm.” This was a mistake in my opinion. We can still hope that some states will try, but it will be an uphill battle unless citizens start shouting very loudly that they want protection. Otherwise, it could take a major disaster, like a massive AI-powered cyberattack, before anything significant happens on the legislative front.
Q. Trump has chosen Elon Musk to head the government’s Department of Government Efficiency. What do you expect from him?
A. Elon was one of the first people to warn about the risks of AI, but now he has invested heavily in AI’s success, and it’s hard to see how that wouldn’t color his recommendations to Trump. I imagine he will do everything he can to get the government to subsidize AI development, including his own companies, despite the risks he once warned about. Remember, this is the guy who signed the ‘6-month AI pause’ letter and then spent those six months assembling a massive GPU cluster for his own AI.
It’s also ironic that Musk made most of his money — and thus acquired much of his power and influence — from Tesla, which builds electric cars and is basically an environmentally friendly business, because the form of AI he is excited about is anything but: it consumes enormous amounts of power and water and produces massive emissions. And yet I expect the Trump administration will push hard to relax environmental regulations to allow more energy generation to power AI.
Q. Microsoft, Amazon, Google and Meta are all interested in using nuclear power plants, in some cases their own, to power their data centers. Some of these companies have discussed this with the Biden administration. Do you see this plan as more feasible under Trump?
A. I’m pretty sure the Trump administration will be supportive unless there’s an angle I haven’t seen yet. I actually think nuclear energy makes a lot of sense, but pouring all that energy into gigantic large language models is probably not the best use of that energy, as opposed to reducing our dependence on fossil fuels.
Q. Getting back to Musk, what do you think about the government hiring the richest man in the world? Can a member of the government own a major social media platform?
A. I don’t think Trump has any notion of a “conflict of interest,” and he has generally ignored previous norms. I don’t think this is a good idea for the nation, but I doubt this will stop Trump from moving forward. Who’s going to stop him? America 2025 will almost certainly be very different from previous years.
Q. What do you mean?
A. Trump will ignore previous norms, and to some extent laws in general. He will appoint an attorney general who will be immensely sympathetic (witness Trump’s initial nomination of Matt Gaetz), and the Supreme Court recently significantly expanded presidential immunity. Trump will take that as a mandate to do whatever he wants, regardless of what the written law says, and I don’t expect him to be significantly challenged in that regard.
Q. What about the big tech companies? Do you think they will flourish under his leadership? During his first term, Trump viewed Facebook and Twitter as liberal-leaning companies.
A. Twitter (now X) has changed enormously under Elon. I don’t think Meta has changed that much. I think the biggest problem for Big Tech is that they have invested heavily in generative AI, in the fantasy that it will evolve into “AGI” (artificial general intelligence), when in reality it is simply not a solid enough technology to support the revolution people are making it out to be. If generative AI doesn’t become profitable relatively quickly, the bubble will burst – and neither Trump nor Musk can fix that.
Q. Social media has destroyed privacy and paved the way for surveillance capitalism. What can we expect from AI?
A. Generative AI will advance surveillance capitalism. Some people are putting their deepest secrets into chatbots, and the makers of LLMs hope they can gain access to everyone’s files, emails, calendars, and even passwords. LLMs themselves, through the way they answer, can subtly shape people’s beliefs and even, according to a recent study by Elizabeth Loftus, implant false beliefs. We give the creators of LLMs extraordinary power. Meanwhile, LLMs are already being used to generate disinformation, make biased hiring decisions, fuel cybercrime, and more. There are some benefits to the technology, but it is not at all clear that it provides a net benefit to humanity. Instead, most of the profits will go to its creators, while most of the costs will be absorbed by society.
Q. You argue in your book that the tech oligarchs will gain increasing control over American society. Were you already thinking about Trump’s victory when you wrote it?
A. I was worried that Trump would win, yes, although I think we would have faced challenges either way. But the book’s fundamental point is now even more urgent: We can’t trust big tech to regulate itself, and the US government is too enthralled with big tech to give us the protections we need. The only way American citizens can protect themselves from AI is if they are very vocal – perhaps through boycotts.
Q. One of the members of that technological oligarchy will hold a government position.
A. Correct. And we can expect Musk to have an extremely strong voice in policy, much stronger than most other billionaires have ever had. It wouldn’t be surprising if Trump largely deferred technology policy to Musk, despite the enormous apparent conflicts of interest. The world I warned about has arrived. What we do about it is up to us.
Q. How can we tame Silicon Valley?
A. The people of the world must unite and say, “We don’t want AI that destroys the environment, rips off artists and writers, defames people, and enables mass propaganda, especially if its creators don’t take real responsibility for the damage they cause.” Only if we push for better technology will we see real improvement.