
Google Turns Off AI Feature That Generated Racist-Looking Nazi Images

Has Google really done its homework this time?

In February, Google's AI image generator, powered by Gemini, drew a wave of bad press.
At the time, the tool generated images of ethnically diverse Nazi-era German soldiers, apparently overcorrecting for the software's ongoing problems with racial bias.

The tech giant was compelled to apologize as a result.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google stated in a statement at the time. “We’re working to improve these kinds of depictions immediately.”


Those changes were months in coming, though: Google abandoned the feature outright and apologized for “getting it wrong,” rather than quickly shipping more robust safeguards.

Google has now announced that its AI image generator is back up and running, and we can’t wait to explore it and possibly discover new bugs this time around.


Dave Citron, senior director of Gemini Experiences, promised in a blog post that “we’ve upgraded our creative image generation capabilities” with the release of the Imagen 3 model, which is said to follow “our product design principles” and come with “built-in safeguards.”

In a paper that has not yet completed peer review, Google DeepMind researchers say Imagen 3 relies on a “multi-stage filtering process” to “ensure quality and safety standards.”

The paper states that “this process begins by removing unsafe, violent, or low-quality images.”

“We then eliminate AI-generated images to prevent the model from learning artifacts or biases commonly found in such images.”

The researchers also employed “safety datasets” to ensure the model would not produce images that were aggressive, unfriendly, sexualized, or graphic.

“We reject the production of recognizable, lifelike persons, pictures of kids, and very explicit, violent, or sexual content,” Google stated in a blog post.

Has the company actually finished the job this time? It remains unclear whether Google‘s Imagen 3 will still produce pictures of racially diverse Nazis or horrifying clowns.

“Of course, as with any generative AI tool, not every image Gemini creates will be perfect,” Citron said, “but we’ll continue to listen to feedback from early users as we keep improving.”
