Google Turns Its AI Image Generator Back On After It Created Racially Diverse Nazi Images
A Controversial AI Glitch
Is Google keeping up with its responsibilities? In February, the company's Gemini-powered AI image generator sparked a backlash after it produced images of ethnically diverse Nazi-era German soldiers, an apparent overcorrection for the model's earlier struggles with racial bias. Google responded with a public apology.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” the company said in a statement. “We’re working to improve these kinds of depictions immediately.”
Apology Without Action
However, months passed with little visible progress. Google shut down the feature and admitted it had "gotten it wrong," but no concrete safeguards appeared to be put in place during the downtime.
Now, Google has announced that its AI image generator is coming back online, promising a revamped experience that, it hopes, avoids past mistakes.
Imagen 3 With Safeguards
Dave Citron, Senior Director of Gemini Experiences, wrote in a blog post that Google has “upgraded our creative image generation capabilities” with Imagen 3. This new model is designed to align with the company’s “product design principles” and comes with “built-in safeguards.”
According to a pre-publication research paper by Google DeepMind, Imagen 3 uses a multi-stage filtering process. This process starts by eliminating unsafe, violent, or poor-quality images. It also removes AI-generated images that might reinforce visual biases or artifacts.
The researchers used specialized “safety datasets” to prevent the generation of hostile, sexualized, or inappropriate images. As stated by Google: “We reject the production of recognizable, lifelike persons, images of children, and very explicit, violent, or sexual content.”
Still Room for Concern
Despite these promises, users remain skeptical. Will Imagen 3 really avoid creating racially diverse Nazi soldiers or unsettling clowns? The outcome remains uncertain.
Citron admitted: “Of course, as with any generative AI tool, not every image Gemini creates will be perfect. But we’ll continue to listen to feedback from early users as we keep improving.”
The Takeaway
Although Google has tried to correct its earlier missteps, questions remain about whether it has done enough. While safeguards in Imagen 3 are a positive step, only time will reveal whether they are truly effective.