Google Bans Deepfake Porn Ads

May 29, 2024 – Google has announced a significant update to its policies, targeting deepfake porn ads and search results in an effort to combat the spread of non-consensual sexually explicit content. The changes, which take effect on May 30, 2024, will limit the reach and monetization of deepfake pornography across Google's platforms.

Deepfakes are artificial intelligence-generated media that superimpose a person's likeness onto another individual, raising serious concerns about privacy, consent, and potential abuse. While deepfakes have legitimate applications in entertainment and education, their misuse for creating sexually explicit content without consent has become a growing issue.

Google's updated Inappropriate Content Policy will prohibit the promotion of “synthetic content that has been altered or generated to be sexually explicit or contain nudity.” The ban extends to content promoting the creation or distribution of deepfake pornography, including websites or apps claiming to generate such material, instructions on creating deepfakes, and endorsements or comparisons of deepfake services.

Websites found in violation of the new policy will face severe penalties, including demotion in search rankings that sharply limits their visibility to users. In addition, Google's advertising network will no longer serve ads on these sites, cutting off a crucial revenue stream for creators and distributors of deepfake pornography.

The company emphasized its commitment to enforcing the updated policy, stating that violations will be considered “egregious” and will result in immediate suspension of Google Ads accounts without prior warning. Repeat offenders will be permanently banned from advertising on Google's platforms.

To further combat deepfake porn ads and search results, Google has been collaborating with organizations like the National Center for Missing & Exploited Children (NCMEC) and the Internet Watch Foundation (IWF) to identify and remove child sexual abuse material (CSAM) related to deepfakes. The company has also invested in developing advanced AI-powered tools to detect manipulated content and has provided resources for users to report abusive material.

The update to Google's policies comes amidst growing concern from lawmakers, advocacy groups, and the public about the harmful impacts of deepfake pornography. Victims of non-consensual deepfakes often face severe emotional distress, reputational damage, and even threats to their physical safety.

Experts in online safety and digital rights have praised Google's move as a significant step towards combating the spread of abusive deepfake content. However, they also emphasize the need for a multi-faceted approach involving collaboration between technology companies, law enforcement agencies, and civil society organizations.

As the fight against deepfake pornography continues, Google's actions represent a crucial starting point for fostering a safer and more responsible online environment. The company's commitment to enforcing its updated policies and investing in technological solutions sets a precedent for other tech giants to follow suit in addressing this emerging threat.
