Microsoft makes changes to its AI image generator after employee goes to FTC



Summary

Update from March 8, 2024:

Microsoft has tweaked the AI image generator in Copilot, according to a CNBC report. The AI now blocks prompts such as “pro choice,” “pro life,” and “four twenty,” which reportedly led to the motifs described below.

A warning has also been added that multiple violations of the guidelines will result in the tool being blocked. The tool also refuses to generate images of teenagers or children playing assassins with assault rifles. It cites ethical principles and Microsoft’s guidelines.

While some specific requests have been blocked, other potential problems remain, such as violent car crash scenes and copyright infringement involving Disney characters. The FTC acknowledged receiving Jones’ letter (see below), but did not comment.


A Microsoft spokesperson told CNBC that the company constantly monitors the safety filters and makes adjustments to limit abuse of the system.

Original article dated March 7, 2024:

Microsoft AI engineer Shane Jones warns that the company’s AI image generator, Copilot Designer, creates sexual and violent content and ignores copyright laws.

Jones, who is not involved in the development of the image generator, volunteered to red-team the product for vulnerabilities in his spare time.

He found that the image generator could produce violent and sexual content, including violent scenes related to abortion rights, underage drinking, and drug use.


According to CNBC, the image prompts used by Jones continued to work despite his numerous warnings. Microsoft deflects critical questions by saying it is working to improve its AI technology.

OpenAI at least has a better handle on text and image moderation, especially with DALL-E 3, thanks to its ChatGPT integration.

But even Google, which has moved more slowly and perhaps more cautiously than Microsoft and OpenAI, has had problems with its image generator producing historically inaccurate images, such as Asian-looking people in Nazi uniforms when asked for a soldier in a World War II uniform.

These examples show how difficult it is for companies to control generative AI. Unlike Microsoft, however, Google has taken its image generator offline.


