Attackers explain how an anti-spam defense became an AI weapon.
Building on an anti-spam cybersecurity tactic known as tarpitting, Aaron created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."
Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
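Nepenthes' own code isn't reproduced here, but the mechanism described above is simple enough to sketch. The following is a rough, illustrative Python stand-in, not Aaron's implementation: every request gets a slowly served page of generated babble whose only links point deeper into the maze, so a crawler that follows them never reaches an exit. The word list, paths, and port are invented for the example.

    # Illustrative tarpit sketch (not Nepenthes' actual code).
    # Every page is generated babble whose links lead only deeper into the maze.
    import random
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WORDS = ["pitcher", "nectar", "archive", "ledger", "meadow",
             "syntax", "gravel", "cipher", "lantern", "quorum"]

    def babble(n=300):
        # Stand-in for a real Markov-chain generator: plausible-looking junk text.
        return " ".join(random.choice(WORDS) for _ in range(n))

    def maze_links(n=10):
        # Links that only go deeper into the trap -- no path back to real content.
        return " ".join(f'<a href="/maze/{random.getrandbits(64):x}">more</a>'
                        for _ in range(n))

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(2)  # drip-feed responses to waste the crawler's time
            body = f"<html><body><p>{babble()}</p>{maze_links()}</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()

Swapping the word list for a Markov chain trained on real prose is what turns the output from mere filler into the "Markov babble" the article describes as poisoning data.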
It's not that we "hate them" - it's that they can entirely overwhelm a low-volume site and cause a DDoS.
I ran a few very low-traffic websites for local interests on a rural, residential line. It wasn't fast, but it was cheap, and since these sites made no money it was good enough. Before AI they'd get the odd badly behaved scraper that ignored robots.txt and, specifically, the rate limits.
But since? I've had to spend a lot of time trying to filter them out upstream. Like, hours and hours. Claudebot was the first - coming from hundreds of AWS IPs across dozens of countries, thousands of times an hour, repeatedly trying to download the same URLs - some of which didn't exist. Since then it's happened a lot. Some of these tools are just so ridiculously stupid, far more so than a dumb script that cycles through a list. But because it's AI and they're desperate to satisfy the "need for it," they're quite happy to spend millions on AWS costs for negligible gain and screw up other people.
Eventually I gave up and redesigned the sites to be static, and they're now on Cloudflare Pages. Arguably better, but a chunk of my life I'd rather not have lost.
They're framing it as "AI haters" instead of what it actually is, which is people who do not like that robots have been programmed to completely ignore the robots.txt files on a website.
No AI system in the world would get stuck in this if it simply obeyed the robots.txt files.
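For comparison, honoring robots.txt takes only a few lines with Python's standard-library urllib.robotparser; the site URL and user-agent string below are placeholders.

    # A well-behaved crawler checks robots.txt before fetching anything.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder site
    rp.read()

    url = "https://example.com/maze/"
    if rp.can_fetch("ExampleBot/1.0", url):  # placeholder user agent
        print("allowed to fetch", url)
    else:
        print("disallowed by robots.txt -- skipping", url)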
The internet being what it is, I'd be more surprised if there wasn't already a website set up somewhere with a malicious robots.txt file to screw over ANY crawler regardless of provenance.
AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck"
Maybe against bad crawlers. If you know what you're looking for and aren't just trying to grab anything and everything, this should not be very effective. Any good web crawler has limits. This seems to be targeted at Facebook's apparently very dumb web crawler.
This kind of stuff has always been an endless war of escalation, the same as any kind of security. There was a period of time where all it took to mess with Gen AI was artists uploading images of large circles or something with random tags to their social media accounts. People ended up with random bits of stop signs and stuff in their generated images for like a week. Now, artists are moving to sites that treat AI scrapers like malware attacks and degrading the quality of the images that they upload.
It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI "has been quite vigilant" and excels at detecting the "first signs of data poisoning attempts."
Despite these efforts, he concluded that data poisoning was "a serious threat to machine learning models." And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.
"A link to a Nepenthes location from your site will flood out valid URLs within your site's domain name, making it unlikely the crawler will access real content," a Nepenthes explainer reads.
Why bother wasting resources with the infinite maze instead of just doing what the old-school .htaccess bot traps do: ban any IP that hits the no-go zone defined in robots.txt?
That's the reason for the maze. These companies have multiple IP addresses and bots that communicate with each other.
They can go through multiple entries in the robots.txt file. Once they learn they are banned, they go scrape the old-fashioned way with another IP address.
But if you create a maze, they just continually scrape useless data, rather than scraping data you don't want them to get.
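For reference, the old-school bot trap mentioned in the question above works roughly like this sketch: robots.txt disallows a trap path, and any client that requests it anyway gets its IP banned. The paths and in-memory ban list are invented for illustration; real deployments usually do this in .htaccess, fail2ban, or the web server config.

    # Bot-trap sketch: /trap/ is disallowed in robots.txt, so only crawlers
    # that ignore the rules ever request it -- and those IPs get banned.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BANNED_IPS = set()
    ROBOTS_TXT = b"User-agent: *\nDisallow: /trap/\n"

    class BotTrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]
            if ip in BANNED_IPS:
                self.send_error(403)           # already banned
            elif self.path == "/robots.txt":
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(ROBOTS_TXT)
            elif self.path.startswith("/trap/"):
                BANNED_IPS.add(ip)             # rule-ignoring bot caught
                self.send_error(403)
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"real content")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), BotTrapHandler).serve_forever()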
The big search engine crawlers like Google's or Microsoft's should respect your robots.txt file. This trick affects those that don't honor the file and just scrape your website even if you told them not to.