FBI Warns Of Sextortion Scams Using ‘Deepfakes’


Authored by Frank Fang via The Epoch Times (emphasis ours),

The FBI is warning the public that criminals are taking social media photos and videos to create deepfakes in sextortion schemes.

An FBI logo is pictured on an agent’s shirt in the Manhattan borough of New York on Oct. 19, 2021. (Carlo Allegri/Reuters)

The agency issued the warning in a public service announcement on June 6, drawing attention to how malicious actors exploit image and video editing tools powered by artificial intelligence (AI) to create deepfakes, which are synthetic media fabricated from otherwise benign photos and videos.

“The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content,” the agency said in the alert. “The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes.”

The agency noted that, as of April this year, it had observed an uptick in reports from sextortion victims, with the fake media created using “content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats.”

According to the FBI, malicious actors typically make one of two demands: they either want victims to provide money or gift cards in exchange for not sharing the fake images or videos with family members or social media friends, or they pressure victims into providing real sexually explicit images or videos of themselves.

As a result, the FBI urges the public to “exercise caution” when posting photos and videos of themselves online, including on social media and dating apps.

“Although seemingly innocuous when posted or shared, the images and videos can provide malicious actors an abundant supply of content to exploit for criminal activity,” the agency said.

“Advancements in content creation technology and accessible personal images online present new opportunities for malicious actors to find and target victims,” the agency added. “This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”

The FBI shared a list of recommendations, including urging parents to monitor their children’s online activity and to run frequent online searches of their children’s information to see what is publicly available.

“Consider using reverse image search engines to locate any photos or videos that have circulated on the internet without your knowledge,” a recommendation says.

Other recommendations include implementing privacy settings on social media accounts and exercising caution when accepting friend requests and engaging in video chats.

Warnings

In recent years, U.S. authorities have been issuing warnings about deepfake technology.

In June 2022, the FBI warned of a rising number of complaints about the use of deepfake videos in interviews for remote job positions, a tactic aimed at gaining access to sensitive data.

“The remote work or work-from-home positions identified in these reports include information technology and computer programming, database, and software-related job functions. Notably, some reported positions include access to customer PII [personally identifiable information], financial data, corporate IT databases and/or proprietary information,” the agency said in an announcement.

The Federal Trade Commission (FTC) issued a consumer alert in March warning that criminals have been using deepfake audio to trick victims into believing their loved ones have been kidnapped.

“A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member’s voice–which he could get from content posted online–and a voice-cloning program,” the commission wrote. “When the scammer calls you, he’ll sound just like your loved one.”

In such a situation, the FTC advises people to hang up, particularly if the caller demands payment via wire transfer, cryptocurrency, or gift cards.

“Don’t trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can’t reach your loved one, try to get in touch with them through another family member or their friends,” the commission said.

Legislation

In May, Rep. Joe Morelle (D-N.Y.) introduced a bill to make sharing sexualized non-consensual deepfakes illegal.

The legislation, known as the Preventing Deepfakes of Intimate Images Act (H.R. 3106), would ban the non-consensual disclosure of digitally altered intimate images and make sharing these images a criminal offense.

“The spread of A.I.-generated and altered images can cause irrevocable emotional, financial, and reputational harm–and unfortunately, women are disproportionately impacted,” Morelle said in a statement accompanying the bill’s introduction.

“As artificial intelligence continues to evolve and permeate our society, it’s critical that we take proactive steps to combat the spread of disinformation and protect individuals from compromising situations online,” he added. “I’m proud to have introduced this legislation that takes common-sense steps to protect their privacy, and I look forward to working with my colleagues to pass it into law.”

Fran Drescher, president of SAG-AFTRA, applauded the legislation.

“This bill will help curb an incredibly destructive practice that strikes at the heart of personal privacy, safety and autonomy. Sexual abuse, whether occurring physically or digitally, should never be excused or permitted as ‘personal expression,’ and it should never be tolerated,” she said in a statement.

Drescher added: “Deepfakes are violations, objectification and exploitation, and must be made illegal and punishable by law. This bill is a powerful step to ensure that this technology is not used to cause harm.”

