Google says its AI image generator overcompensated for potential racial bias


  • Added Google statement.
  • Google has stopped generating images of people until the issue is resolved.

Update from February 23, 2024:

Google has published a post about what went wrong with Gemini’s image generation. Google says the app, which is based on Imagen 2, is tuned to avoid generating violent or sexually explicit images, as well as depictions of real people, and aims to provide diverse results.

Google explains: “So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.”


The company also reminded users that Gemini, designed as a creativity and productivity tool, may not always be reliable, especially for current events and sensitive topics.

The company suggests using Google Search for “fresh, high-quality information.” NewsGuard points out in a new study that Google Search is also under pressure from AI-generated content.



Google has turned off the generation of images of people and intends to turn it back on only after a “significant” improvement and “extensive testing”.

Original article from February 22, 2024:

Google’s AI image generator overcompensates for racial bias, leading to historical inaccuracies

AI image generators have often been criticized for social and racial bias. In an attempt to counter this, Google may have gone a bit too far.

Racial and social bias in AI imaging systems has been demonstrated repeatedly in research and practice, sometimes even in scenarios where you wouldn’t expect it. These biases arise because AI systems absorb the biases embedded in their training data, which can reinforce and propagate existing biases or even create new ones.

Large AI companies in particular struggle with bias if they want to deploy AI responsibly. Since they cannot easily remove it from the training data, they must find workarounds.


Screenshot via X

Google admits a mistake

There is an argument to be made that the generation of historically accurate images is not what generative AI is for. After all, hallucinations — machine fantasies, if you will — are a feature of these AI systems, not a bug.

But of course, this argument plays into the hands of conservative critics who accuse AI companies of being too “woke” and of discriminating against white people in their pursuit of diversity.

Gemini product manager Jack Krawczyk speaks of “inaccuracies in some historical image generation depictions” and promises a quick remedy. Google also acknowledges a mistake. In general, it is good that Gemini generates images of people from different backgrounds, Google says. But in a historical context, “it’s missing the mark.” The company has paused the generation of images of people until the issue is resolved.

Renowned developer John Carmack, who works on AI systems himself, calls for more transparency in AI guidelines: “The AI behavior guardrails that are set up with prompt engineering and filtering should be public — the creators should proudly stand behind their vision of what is best for society and how they crystallized it into commands and code. I suspect many are actually ashamed,” Carmack writes.

Update: An earlier version of this article included a paragraph and a link to allegedly AI-generated images of Justin Trudeau in various ethnic settings. However, these were real images. The paragraph and images have been removed.
