I, for one, welcome our traffic light-identifying overlords.
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's reCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low "human" confidence rating.
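The target categories in those grids overlap heavily with what ordinary, publicly available object detectors are already trained to recognize, which is part of why this result is unsurprising. As a rough, hedged illustration only (not the authors' actual pipeline; the 3x3 grid, the file name, and the `solve_grid` helper are assumptions made up for the example), here is how a grid challenge could be attacked with a COCO-pretrained YOLO model from the `ultralytics` package:

```python
# Hypothetical sketch: classify reCAPTCHA v2 grid tiles with an off-the-shelf
# object detector. Grid layout, file name, and helper names are assumptions.
from PIL import Image
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # small COCO-pretrained detection model


def solve_grid(image_path: str, target: str = "traffic light", grid: int = 3):
    """Return the (row, col) tiles whose crops contain the target class."""
    img = Image.open(image_path).convert("RGB")
    tile_w, tile_h = img.width // grid, img.height // grid
    hits = []
    for row in range(grid):
        for col in range(grid):
            box = (col * tile_w, row * tile_h,
                   (col + 1) * tile_w, (row + 1) * tile_h)
            tile = img.crop(box)
            result = model(tile, verbose=False)[0]
            labels = {model.names[int(c)] for c in result.boxes.cls}
            if target in labels:
                hits.append((row, col))
    return hits


if __name__ == "__main__":
    print(solve_grid("challenge.png"))
```

Tiny crops degrade detection accuracy, so this naive version would miss plenty; the point is only that the building blocks are off the shelf.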
This is actually a good sign for self-driving. Google was using this data as a training set for Waymo. If AI is accurately identifying vehicles and traffic markings, it should be able to process interactions with them more easily.
CAPTCHA data collection was always intended to train AI on these skills. That's "the point" of them.
It's reasonable to expect that the older versions of CAPTCHAs can now be beaten by modern AI, because the models are often literally trained on that exact data.
CAPTCHA is effectively free for websites to use as a tool because the data collection is the "payment"; that data is then licensed out to companies like OpenAI to train things like image recognition.
It's why AI is progressing so fast: CAPTCHAs are one of humanity's long-term collected data silos, and they are very full now.
We are going to have to keep increasing the complexity of CAPTCHAs, as that will be the only way to catch modern AIs, and in turn that will collect more data to improve them.
CAPTCHA doesn't stop bots, and let's be honest, it never really did. It frustrated the hell out of people, though, and made them waste time on these challenges. Meanwhile, even before AI, bad actors and bots could get past it simply by using CAPTCHA-solver services run by exploited humans solving CAPTCHAs for the service.
It's a display of security theater meant to make normies feel safe, but in reality it doesn't stop most bad actors.
I mean, we literally train them by completing the CAPTCHAs. Why do you think you were picking things like bikes, traffic lights, cars, and buses? The only question now is what's next...
I never get the first one and rarely the second one. If it says to click all the squares with motorcycles and it's just one big picture, am I supposed to click stuff like the tire and mirrors? I always do and never get it right. Then, most of the time when they ask me to identify motorcycles, they show me motor scooters, and what am I supposed to do then? I think I just need to get one of these bots to do it for me.
Meanwhile I sometimes fail those. I have been locked out of applications because I missed a square of a bus, or perhaps because I like to be efficient in my mouse cursor movements. I ducking hate CAPTCHAs.
Technically the "correct" answer is set by the highest percentage of people choosing it. EG: 19 people select Box A and 1 selects Box B, then the machine decides Box A is in fact correct.
That means these AI could be selecting the wrong answers for all anybody knows, if enough of them are answering the prompts, and still passing.
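To make that concern concrete, here is a purely illustrative sketch; the voting rule and the names below are assumptions for the sake of the example, not Google's actual labeling logic:

```python
# Illustrative only: simple majority-vote labeling, not Google's real algorithm.
from collections import Counter


def consensus_label(votes: list[str]) -> str:
    """Pick whichever answer the largest share of respondents chose."""
    return Counter(votes).most_common(1)[0][0]


human_votes = ["Box A"] * 19 + ["Box B"]          # the 19-vs-1 example above
print(consensus_label(human_votes))                # -> Box A

bot_votes = ["Box B"] * 30                         # coordinated wrong answers
print(consensus_label(human_votes + bot_votes))    # -> Box B
```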
Thank God this means I can stop wondering if I should click on the... the 13 pixels of the fucking bike in that one corner square, or whether I should count the scooter as a motorcycle. Fuck, I am so tired of that shit.
Pro tip for web scrapers: using AI to solve CAPTCHAs is a massive waste of effort and resources. Aim to not be presented with a CAPTCHA in the first place.
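In that spirit, a minimal sketch of the "look like a normal browser and don't hammer the site" approach; the header values, URLs, and two-second delay are placeholder assumptions, and none of this guarantees you won't be challenged:

```python
# Minimal sketch: scrape politely so you're less likely to be shown a CAPTCHA.
# Header values, delay, and URLs are placeholder assumptions.
import time

import requests

session = requests.Session()  # keeps cookies across requests, like a browser
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
})

for url in ["https://example.com/page/1", "https://example.com/page/2"]:
    resp = session.get(url, timeout=10)
    resp.raise_for_status()
    # ... parse resp.text here ...
    time.sleep(2)  # throttle; bursty traffic is a classic bot signal
```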
There is a Russian CAPTCHA-solver bot called xevil that costs under $100 (I think, last time I looked) and has been able to solve nearly all CAPTCHAs for years. You just have to supply it with relatively expensive proxy IP addresses, because Google rate-limits solve attempts.
So the title of this article has been true for a long, long time. CAPTCHAs are absolutely useless except against poor or uninformed script kiddies.
So... if CAPTCHAs are already beaten by bots, what's the point of keeping them around? To mock our weakness?
In the old days, CAPTCHAs could do their job, but nowadays, nah... even crawler/scraper/meta bots can bypass them easily.
The real question is why we, as real humans, still often fail to beat CAPTCHAs. Are we less human? Are we really robots from the CAPTCHA's perspective?