
A social engineering campaign is targeting U.S. government officials and their contacts through AI-generated voice calls and malicious text messages, the FBI warns.
The impersonation campaign, active since April 2025, exploits trust in public figures to hijack accounts and expand access across sensitive networks.
The FBI issued a Public Service Announcement disclosing that the attackers are impersonating high-ranking U.S. federal and state officials via SMS (“smishing”) and voice messages (“vishing”) enhanced with generative AI. The campaign primarily targets current and former senior officials, aiming to exploit their authority and access for further compromise.
The malicious actors initiate contact with seemingly benign or urgent messages that claim to come from a government official. These messages typically try to lure victims onto a secondary messaging platform, where the attackers send malicious links. Once a target clicks one of those links, the attackers may gain access to personal or official accounts, putting sensitive government communications and inter-agency trust chains at risk.
The attackers use advanced AI tools to clone voices and craft persuasive voice memos, making the impersonations increasingly difficult to detect. By leveraging generative audio and common phishing techniques, the campaign elevates traditional social engineering to a new level of believability.
The targeted victims include individuals with privileged access or influence within federal and state systems, creating a ripple effect when accounts are compromised. Once inside, actors can further exploit trust relationships to reach additional targets or collect contact data for future campaigns. Even non-official contacts of victims may be manipulated if their information is exposed.
The FBI describes the campaign as an evolution of classic spear-phishing tactics, now enhanced by AI. Smishing messages are sent from spoofed numbers that cannot be traced to an identifiable subscriber, while the AI-generated voices used in vishing attempts are tailored to match public recordings of officials, family members, or colleagues. This deepfake-like precision aims to deceive even cautious recipients.
The bureau's warning includes a set of practical recommendations to mitigate the risk from smishing and vishing attacks:
- Always verify new communication channels by cross-referencing with known contact methods.
- Examine messages for subtle inconsistencies in spelling, grammar, or URLs (a simple illustrative check appears after this list).
- Be cautious of AI-generated voice messages with unnatural cadence, mismatched tone, or call lag.
- Avoid clicking on links or downloading files from unknown or unexpected sources.
- Enable multi-factor authentication on all accounts, and never share authentication codes.
- Establish secret words or phrases with trusted contacts to verify identities.
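The FBI's list stays at the level of habits rather than tooling, but the URL-inspection advice can be sketched in code. The Python snippet below is a minimal, illustrative check and not part of the FBI guidance: the trusted-domain allowlist, the similarity threshold, and the example links are assumptions chosen for demonstration, and any real deployment would rely on far more robust detection.

```python
# Illustrative sketch only: flag link domains that nearly, but not exactly,
# match a trusted domain, a common sign of a lookalike (typosquatted) URL.
# The allowlist, threshold, and sample links are assumptions for demonstration.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"fbi.gov", "irs.gov", "state.gov"}  # hypothetical allowlist

def is_suspicious(url: str, threshold: float = 0.8) -> bool:
    """Return True if the URL's host closely resembles, but does not equal,
    a trusted domain (e.g. 'fbl.gov' or 'f8i.gov' imitating 'fbi.gov')."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    # Exact matches and legitimate subdomains pass.
    if host in TRUSTED_DOMAINS or any(host.endswith("." + d) for d in TRUSTED_DOMAINS):
        return False
    # A near-miss of a trusted domain is treated as a likely lookalike.
    return any(
        SequenceMatcher(None, host, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

if __name__ == "__main__":
    samples = [
        "https://www.fbi.gov/alerts",   # legitimate
        "https://fbl.gov/verify",       # 'l' swapped for 'i'
        "https://f8i.gov/login",        # '8' swapped for 'b'
        "https://example.com/doc",      # unrelated domain
    ]
    for link in samples:
        print(link, "->", "suspicious" if is_suspicious(link) else "ok")
```

Even a crude similarity check like this catches the single-character swaps that smishing links often rely on, while letting exact matches and legitimate subdomains through; it is no substitute for verifying the sender through a known contact method.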
The FBI emphasizes that these impersonation attacks are difficult to detect due to the increasing realism of generative AI. It urges anyone who suspects fraudulent contact to report incidents to their agency's security office or submit details to the FBI's Internet Crime Complaint Center (IC3).