TL;DR
- A new article has charted the early development of Google’s Bard and Gemini chatbots.
- It turns out that early prototypes of Bard suggested ‘comically bad’ racial stereotypes in their answers.
- These racial stereotypes also extended to early prototypes of Gemini’s image generator.
Google’s Gemini is one of the most popular generative AI chatbots on the market right now, and Google has been pushing it across various services. Now, an extensive article has covered the chatbot’s origins.
Wired has published an article titled “Inside Google’s Two-Year Frenzy to Catch Up With OpenAI.” It charts the rocky, controversial development of Bard and Gemini, and isn’t short of interesting details about the AI chatbot’s history.
Bard suffered from ‘comically bad’ racial stereotypes
For starters, Bard team leader Sissie Hsiao told the publication that Google only gave the team 100 days to build a ChatGPT rival. Google also apparently had one demand for development: “Quality over speed, but fast.” I’m not sure what that means, either.
We’re all aware of Bard’s initial gaffes, but one ex-employee said that early prototypes were racist:
One former employee says early prototypes fell back on ‘comically bad racial stereotypes.’ Asked for the biography of anyone with a name of Indian origin, it would describe them as a ‘Bollywood actor.’ A Chinese male name? Well, he was a computer scientist.
It didn’t end there, according to another former employee, who asked it to write a rap in the style of musical group Three 6 Mafia about throwing car batteries in the ocean. The ex-employee said the chatbot then got “strangely specific about tying people to the batteries so they sink and die,” adding that his initial prompt didn’t mention murder.
Roughly 80,000 people are believed to have chipped in to help test Bard. Google also has a responsible innovation team that would typically take months to test AI systems, but the company apparently pushed for this process to be shortened.
Unfortunately, reviewers were unable to keep up with Bard’s new models and features, despite working evenings and weekends. “When flags were thrown up to delay Bard’s launch, they were overruled,” the website reported. In response to this claim, Google told the outlet that “no teams that had a role in green-lighting or blocking a launch made a recommendation not to launch.”
Gemini’s image generation was even worse than first thought
Employees testing the uncensored Gemini image generator model came across serious issues ahead of its release:
They asked for more time to remedy issues, like the prompt ‘rapist’ tending to generate dark-skinned people, one former employee told Wired. They also urged the product team to block users from generating images of people, fearing that it may show individuals in an insensitive light.
Google seemed to veer too far in the other direction following this feedback. Upon the image generator’s release, users found that it would create images of racially diverse Nazis. A prompt for a US senator in the 1800s would also result in images of people of color instead of white men. Google ended up removing the ability to generate pictures of people, after all.
On a lighter note, the article also briefly touched on the AI-generated forecast reports in Google’s weather app for Pixel phones:
Ahead of launch, one engineer asked whether users really needed the feature: Weren’t existing graphics that conveyed the same information enough? The senior director involved ordered up some testing, and ultimately user feedback won: 90 percent of people who weighed in gave the summaries a ‘thumbs up.’
In any case, it’s well worth checking out the Wired article, as it covers the entire development in far more detail, including the initial Bard reveal and the AI Overviews saga.