Many small website owners can defend their sites from web-crawling robots without big tech's help. Here is how.
Lemmy newb here, not sure if this is right for this /c.
An article I found from someone who hosts their own website and micro-social network, about their experience with web-scraping robots that refuse to respect robots.txt and how they deal with them.
You know how popular VPNs are, right? And how they improve privacy and security for people who use them? And you're blocking anyone who's exercising a basic privacy right?
That's not what I'm complaining about. I'm unable to access the site because they're blocking anyone coming through a VPN. I would need to lower my security and turn off my VPN to read their blog. That's my issue.
A problem with this approach was that many readers use VPNs and other proxies that change IP addresses virtually every time they use them. For that reason, and because I believe in protecting every Internet user's privacy as much as possible, I wanted a way of immediately unblocking visitors to my website without them having to reveal personal information like names and email addresses.
I recently spent a few weeks on a new idea for solving this problem. With some help from two knowledgeable users on Blue Dwarf, I came up with a workable approach two weeks ago. So far, it looks like it works well enough. To summarize this method: when a blocked visitor reaches my custom 403 error page, he is asked whether he would like to be unblocked by having his IP address added to the website's whitelist. If he follows that hypertext link, he is sent to the robot test page. If he answers the robot test question correctly, his IP address is automatically added to the whitelist. He doesn't need to enter it or even know what it is. If he fails the test, he is told to click the back button in his browser and try again. After he has passed the robot test, Nginx is commanded to reload its configuration file (PHP command: shell_exec("sudo nginx -s reload");), which causes it to immediately accept the new whitelist entry, and he is granted access right away.

He is then allowed to visit cheapskatesguide as often as he likes for as long as he continues to use the same IP address. If he switches IP addresses in the future, he has about a one in twenty chance of needing to pass the robot test again each time he switches. My hope is that visitors who use proxies will only have to pass the test a few times a year. As the whitelist grows, I suppose that frequency may decrease. Of course, it will reach a non-zero equilibrium point that depends on the churn in the IP addresses being used by commercial web-hosting companies. In a few years, I may have a better idea of where that equilibrium point is.
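For anyone wondering what that unblock step could look like in code, here's a minimal PHP sketch of the idea. The file path, the nginx include, and the test question are my own guesses, not the author's actual code; the only part taken from the article is the shell_exec("sudo nginx -s reload") call.

```php
<?php
// Hypothetical sketch of the unblock flow described above. The whitelist
// path, the nginx include, and the test question are assumptions; only the
// `sudo nginx -s reload` call is quoted from the article.

$whitelist = '/etc/nginx/whitelist.conf';   // assumed: included from the server block
$answer    = trim($_POST['answer'] ?? '');

// A human-readable question a scraper is unlikely to answer (placeholder).
if (strtolower($answer) !== 'blue') {
    echo "Wrong answer. Use your browser's back button and try again.";
    exit;
}

// The visitor never sees or types his IP; we read it from the request.
$ip = $_SERVER['REMOTE_ADDR'];
if (!filter_var($ip, FILTER_VALIDATE_IP)) {
    exit('Could not determine your IP address.');
}

// Append an "allow" line nginx understands, then reload so it takes effect.
file_put_contents($whitelist, "allow $ip;\n", FILE_APPEND | LOCK_EX);
shell_exec('sudo nginx -s reload');   // quoted from the article

echo 'You have been unblocked. Welcome!';
```

The assumption on the nginx side is that whitelist.conf is included in the server block ahead of a deny all; directive, so each appended allow line takes effect as soon as nginx reloads.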
I have plenty of spare bandwidth and babysitting resources, so my approach is largely to waste their time. If they poke my honeypot, they get poked back and have to escape a tarpit specifically designed to waste their bandwidth above all. It costs me nothing because of my circumstances, but I know it costs them because their connections are metered. I also know it works, because they largely stop crawling the domains I employ this on. I am essentially making my domains appear hostile.
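Something in this spirit is easy to throw together, assuming the honeypot URL is routed to a script like the PHP below. The pacing and the junk payload are arbitrary choices of mine, not the poster's actual setup; the point is just to keep the connection open and slowly feed the crawler worthless bytes.

```php
<?php
// Rough sketch of a bandwidth-wasting tarpit. Assumes the honeypot URL is
// routed to this script; the drip rate and duration are arbitrary choices.

ignore_user_abort(false);   // stop as soon as the bot gives up
set_time_limit(0);          // let the connection linger
header('Content-Type: text/html; charset=utf-8');

// Disable output buffering so each chunk actually hits the wire.
while (ob_get_level() > 0) {
    ob_end_flush();
}

echo "<html><body>\n";
flush();

// Drip a little junk every couple of seconds, for up to about two hours
// or until the crawler disconnects.
for ($i = 0; $i < 3600 && !connection_aborted(); $i++) {
    echo '<p>' . bin2hex(random_bytes(16)) . "</p>\n";
    flush();
    sleep(2);
}
```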
It does mean that my residential IP ends up on various blocklists but I'm just at a point in my life where I don't give an unwiped asshole about it. I can't access your site? I'm not going to your site, then. Fuck you. I'm not even gonna email you about the false-positive.
It is also fun to keep a log of which honeypot-poking IPs have open ports, and to automate a process of siphoning information out of those ports. I've been finding a lot of hacked NVRs recently that I think are part of some IoT botnet scraping the internet.
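A rough sketch of the logging half of that, with my own assumptions about the log path and which ports are worth probing. It only records whether the ports answer; it doesn't pull anything out of them.

```php
<?php
// Sketch of the "log who poked the honeypot, then see what they expose" idea.
// The log path and port list are assumptions; this only checks whether the
// ports answer, nothing more.

$ip    = $_SERVER['REMOTE_ADDR'];
$ports = [23, 80, 554, 8000, 8080, 9000];   // telnet, http, rtsp, common NVR web ports
$open  = [];

foreach ($ports as $port) {
    $sock = @fsockopen($ip, $port, $errno, $errstr, 2);   // 2-second timeout
    if ($sock) {
        $open[] = $port;
        fclose($sock);
    }
}

$line = sprintf("%s %s open:[%s]\n", date('c'), $ip, implode(',', $open));
file_put_contents('/var/log/honeypot-pokes.log', $line, FILE_APPEND | LOCK_EX);
```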
I found a very large botnet, mainly in Brazil and several other countries. And abuseipdb.com is not marking those IPs as a threat. We need a better solution.
I think a honeypot is a good way. Another way is to require proof of work on the client side. Or we need a better place to share all these stupid web-scraping bot IPs.
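One common shape for the client-side proof-of-work idea is hashcash style: the server hands out a random challenge, the client grinds through nonces until sha256(challenge.nonce) starts with enough zeros, and the server verifies the result cheaply. Here's a sketch of the verification half only; the difficulty, the session handling, and the field names are all assumptions of mine, not anything from this thread beyond the general idea.

```php
<?php
// Hashcash-style proof-of-work check (verification half only). The client
// would hash challenge.nonce until the digest starts with DIFFICULTY zero
// hex digits; difficulty, session storage, and field names are assumptions.

const DIFFICULTY = 5;   // leading zero hex digits required (~20 bits of work)

function verify_pow(string $challenge, string $nonce): bool
{
    $digest = hash('sha256', $challenge . '.' . $nonce);
    return str_starts_with($digest, str_repeat('0', DIFFICULTY));
}

// The challenge would have been issued earlier and stored in the session
// so it cannot be reused.
session_start();
$challenge = $_SESSION['pow_challenge'] ?? '';
$nonce     = $_POST['nonce'] ?? '';

if ($challenge !== '' && verify_pow($challenge, $nonce)) {
    unset($_SESSION['pow_challenge']);   // one-time use
    echo 'Proof of work accepted.';
} else {
    http_response_code(403);
    echo 'Proof of work missing or invalid.';
}
```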
I love the idea of abuseipdb and I even contributed to it briefly. Unfortunately, even as a contributor, I don't get enough API resources to actually use it for my own purposes without having to pay. I think the problem is simply that if you created a good enough database of abusive IPs then you'd be overwhelmed in traffic trying to pull that data out.
I wouldn't even know where to begin, but I also don't think that what I'm doing is anything special. These NVR IPs are hurling abuse at the whole internet. Anyone listening will have seen them, and anyone paying attention would've seen the pattern.
The NVRs I get the most traffic from have been a known hacked IoT device for a decade, and there's even a GitHub page explaining how to bypass their authentication and pull out arbitrary files like passwd.
This is signal detection theory combined with an arms race that keeps the problem hard. You cannot block scrapers without blocking people, and you cannot inconvenience bots without also inconveniencing readers. You might figure something clever out temporarily, but eventually this truism will resurface. Excuse me while I solve a few more captchas.
The internet as we know it is dead, we just need a few more years to realise it. And I'm afraid that telecommunications will be going the same way, when no one can trust that anyone is who they say they are anymore.
I've found that many of these solutions/hacks block legitimate users who browse with the Tor Browser, as well as Internet Archive scrapers, which may be a dealbreaker for some but is probably acceptable for most users and website owners.