The FBI’s Internet Crime Complaint Center (IC3) has issued a public service announcement highlighting the growing use of generative artificial intelligence (AI) by criminals to enhance the scale, believability, and reach of fraudulent activities. These AI tools, designed to create realistic synthetic content, are being weaponized for schemes involving social engineering, financial fraud, identity theft, and sextortion.
Generative AI learns from example data to synthesize convincing text, images, audio, and video. While these tools are legal and have legitimate uses, their capabilities allow criminals to bypass traditional warning signs of fraud, such as grammatical errors or unrealistic visuals. By leveraging AI, fraudsters can produce high-quality fake content quickly and at scale.
The FBI has identified several ways in which generative AI is being misused, including:
- AI-Generated Text: Used in spear-phishing emails, romance scams, and investment schemes; to help foreign actors overcome language barriers and produce content with fewer errors; to power fake social media profiles and fraudulent websites (including cryptocurrency scams); and to run chatbots on malicious websites that lure victims into harmful interactions.
- AI-Generated Images: Used to create realistic profile pictures for fake social media accounts; forge identification documents such as driver’s licenses and government credentials; depict celebrities or influencers promoting counterfeit products or fake charity appeals; and fabricate disaster or conflict imagery to solicit fraudulent donations.
- AI-Generated Audio: Used to clone the voices of loved ones in fabricated crisis calls demanding money, and to mimic voices in order to bypass security measures such as voice authentication for financial accounts.
- AI-Generated Videos: Used to produce deepfake videos of public figures for investment fraud or impersonation of authority figures, and to generate realistic video for real-time interactions, making scammers appear legitimate.
Protecting against AI-powered fraud
The misuse of generative AI poses a significant risk to both individuals and organizations. Targets often include vulnerable populations like the elderly, who may fall victim to confidence schemes, as well as companies and financial institutions, which are exposed to deepfake impersonations of executives or employees. The accessibility of generative AI tools makes it easy for scammers to launch these sophisticated schemes globally.
The FBI advises individuals and organizations to adopt proactive measures to safeguard against generative AI-enabled fraud:
- Establish a secret word or phrase with trusted individuals to verify identities during emergencies.
- Examine images and videos for imperfections, such as distorted features, irregular shadows, or unnatural movements.
- Verify suspicious calls by hanging up and directly contacting the organization or individual using known contact details.
- Reduce your online footprint by limiting public access to personal images and voice recordings.
- Avoid sharing sensitive information with people you have only met online or over the phone.
- Be cautious of unsolicited requests for money, cryptocurrency, or gift cards.
Victims of generative AI-related scams are encouraged to file a report with the FBI’s Internet Crime Complaint Center at www.ic3.gov. Reports should include identifying details of the fraudsters, financial transaction data, and a detailed account of the interaction.