Google explains what went wrong with its AI images

(Image credit: Kaitlyn Cimino / Android Authority)

TL;DR

  • Google has now provided an explanation for what went wrong with Gemini after it generated inaccurate and offensive images of people.
  • The tech giant says two separate issues caused the AI to overcompensate.
  • AI image generation of people reportedly won’t be turned back on until it has been significantly improved.

Google found itself in hot water after Gemini was caught generating images of people that were inaccurate and offensive. The company has since disabled the model’s ability to produce images of people, and it has now released an apology along with an explanation of what happened.

In a blog post, the Mountain View-based firm apologized for Gemini’s mistakes, stating that it is “clear that this feature missed the mark” and that it is “sorry the feature didn’t work well.” According to Google, two issues led to the creation of these images.

As we reported earlier, we suspected Gemini was overcorrecting for a long-standing problem with AI-generated imagery: its failure to reflect our racially diverse world. It appears that’s exactly what happened.

The company explains that the first problem lies in how Gemini is tuned to ensure a range of people are depicted in its images. Google admits this tuning failed to “account for cases that should clearly not show a range.”

The second issue stems from how Gemini decides which prompts are sensitive. Google says the model became more cautious than intended and refused to answer certain prompts outright.
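To make those two failure modes concrete, here is a minimal, purely hypothetical sketch in Python. Google has not published Gemini’s actual pipeline; every name and value below (expand_for_diversity, SENSITIVITY_THRESHOLD, and so on) is an illustrative assumption, not Gemini’s real code.

```python
SENSITIVITY_THRESHOLD = 0.2  # assumed value; too low a bar makes the filter over-cautious

def expand_for_diversity(prompt: str) -> str:
    # Failure mode 1: the range-of-people tuning is applied to every prompt,
    # with no carve-out for cases that should clearly not show a range
    # (e.g. a depiction of a specific historical figure).
    return prompt + ", showing a diverse range of people"

def is_sensitive(sensitivity_score: float) -> bool:
    # Failure mode 2: an overly conservative threshold means benign prompts
    # get classified as sensitive and refused outright.
    return sensitivity_score > SENSITIVITY_THRESHOLD

def generate_image(prompt: str, sensitivity_score: float) -> str | None:
    if is_sensitive(sensitivity_score):
        return None  # the model declines the prompt entirely
    # A real system would call an image model here; a string stands in for it.
    return f"<image: {expand_for_diversity(prompt)}>"

# A historically specific prompt still gets the blanket expansion:
print(generate_image("a portrait of a US Founding Father", 0.1))
# A harmless prompt is refused because the threshold is too strict:
print(generate_image("friends having a picnic", 0.3))
```

In this toy version, fixing the first bug would mean skipping the expansion for prompts that request specific people, and fixing the second would mean recalibrating the threshold so benign prompts aren’t refused.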

At the moment, Google plans to keep image generation of people on ice until significant improvements have been made to the model.
