A group of spammers exploited OpenAI's language model to create unique spam messages that bypassed traditional filtering systems, successfully targeting over 80,000 websites in just four months, according to new research from SentinelLabs.
The spam campaign, orchestrated through a system called AkiraBot, used OpenAI's chat API with the gpt-4o-mini model to generate personalized marketing messages promoting questionable search engine optimization services. Each message was uniquely crafted to include specific details about the targeted website, making them appear legitimate and harder to detect as spam.
"The messages seemed curated because they contained personalized details about each website," noted researchers Alex Delamotte and Jim Walter from SentinelLabs. This sophisticated approach made traditional spam filtering methods largely ineffective.
The spammers configured AkiraBot to instruct the model to act as a "helpful assistant that generates marketing messages," using Python scripts to rotate domain names and tailor the content for each target. The resulting messages were delivered through website contact forms and chat widgets.
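The pattern the researchers describe, templating per-site details into a chat-style API request, can be sketched roughly as follows. This is an illustrative reconstruction, not the attackers' actual code: the helper function and site fields are hypothetical, and only the system-prompt wording comes from the report.

```python
# Illustrative sketch of per-site prompt templating (hypothetical helper;
# only the system-prompt phrasing is taken from SentinelLabs' report).
def build_request(site_name: str, site_description: str) -> dict:
    """Build a chat-completions-style payload customized for one target site."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            # System prompt framing quoted in the researchers' findings
            {"role": "system",
             "content": "You are a helpful assistant that generates "
                        "marketing messages."},
            # Per-target details make each generated message unique,
            # which is what defeated signature-based spam filters
            {"role": "user",
             "content": f"Write a short outreach message for {site_name}, "
                        f"a website about {site_description}."},
        ],
    }

req = build_request("example.com", "handmade furniture")
print(req["messages"][1]["content"])
```

Because every request embeds different site details, each generated message is unique text, leaving filters that match repeated content with nothing to key on.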
Log files revealed that between September 2024 and January 2025, the campaign successfully reached more than 80,000 websites, with only about 11,000 failed delivery attempts, a success rate of nearly 90 percent. That figure demonstrates how effectively AI-generated unique content can bypass standard spam detection systems.
After SentinelLabs disclosed its findings, OpenAI revoked the spammers' account, stating that such usage violates its terms of service. However, the fact that the activity continued undetected for four months highlights how difficult it is to proactively prevent AI misuse.
This incident illustrates how advanced AI technology can be weaponized for malicious purposes, presenting new challenges for cybersecurity professionals and website administrators in their ongoing battle against spam.