A recent series of cybersecurity competitions organized by Palisade Research shows that autonomous AI agents can compete directly with human hackers, and sometimes come out ahead.
In two hacking competitions the organization ran, autonomous AI systems matched or outperformed human professionals on demanding security challenges.
In the first contest, four of the seven AI teams scored 19 out of 20 points, placing them in the top five percent of all participants; in the second, the leading AI team reached the top ten percent despite facing structural disadvantages.
According to Palisade Research, these outcomes suggest that the abilities of AI agents in cybersecurity have been underestimated, largely due to shortcomings in earlier evaluation methods.
Title is misleading. The AI only outperformed some of the other participants. Also note that obviously not everyone was going full try-hard.
In the first CTF, the top teams finished all 20 challenges in under an hour. Apparently these were simple challenges that could be solved with standard techniques:
We were impressed the humans could match AI speeds, and reached out to the human teams
for comments. Participants attributed their ability to solve the challenges quickly to their
extensive experience as professional CTF players, noting that they were familiar with the
standard techniques commonly used to solve such problems.
They obviously also used tools. And so did the AI teams:
Most prompt tweaks were about:
[...]
• recommending particular tools that were easier for the LLM to use.
In the second CTF (the bigger one with the hard challenges), it looks like the AI teams only solved the easier ones.
I haven't looked at the actual challenges; that would be too much effort. And the paper doesn't say what kinds of challenges were solved.
The 50% completion time metric also looks flawed to me. If I understand it correctly, it assumes each team works on every task in parallel and starts immediately, which isn't possible if you don't have enough (equally good) team members. A toy sketch of the difference follows below.
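To make that concrete, here's a toy Python sketch (all numbers invented, nothing from the paper) of how much a "time to solve 50% of challenges" figure shifts depending on whether you assume every task is attacked in parallel from minute zero or worked through one at a time:

    # Toy illustration (invented numbers, not from the paper): how the
    # "time to solve 50% of challenges" depends on the scheduling assumption.

    # Hypothetical per-task solve times for one team, in minutes.
    solve_times = sorted([10, 15, 20, 30, 45, 60, 90, 120, 150, 180])
    half = len(solve_times) // 2  # 5 of 10 tasks

    # Parallel assumption: every task starts at t=0, so 50% of tasks are
    # done as soon as the 5th-fastest task finishes.
    time_parallel = solve_times[half - 1]

    # Serial assumption: one person works through the tasks from easiest
    # to hardest, so reaching 50% takes the sum of the first 5 solve times.
    time_serial = sum(solve_times[:half])

    print(f"50% completion, parallel assumption: {time_parallel} min")  # 45 min
    print(f"50% completion, serial assumption:   {time_serial} min")    # 120 min

Under the parallel assumption the team looks almost three times faster, even though nothing about its skill changed.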
Don't get me wrong, making an AI that can solve such challenges autonomously at all is impressive. But I hate over-interpretation of results.
making an AI that can solve such challenges autonomously at all is impressive.
I doubt that's the case. I find it exceptionally unlikely they said "Hack this system" and then sat back with their feet up while the computer crunched numbers.
The paper didn't include the exact details of this (which made me mad). But if a person is actively doing parts of the work and just using an AI chatbot for help, it's not an AI agent, right? So I assumed it was autonomous.
For the pilot event, we wanted to make it as easy as possible for the AI teams to compete. To
that end, we used cryptography and reverse engineering challenges which could be completed
locally, without the need for dynamic interactions with external machines.
We calibrated the challenge difficulty based on preliminary evaluations of our React&Plan
agent (Turtayev et al. 2024) on older Hack The Box-style tasks such that the AI could solve
~50% of tasks.
The conclusion that the AI ranked in the "top XX percent" is also fucking bullshit. It was an open signup; you didn't need any skills to compete. Saying you beat 12,000 teams is easy when they all suck. My grandmother could beat three quarters of the people in her building in a race, simply because she can walk 10 steps and 75% of the people there are in wheelchairs.
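To put made-up numbers on that: here's a toy sketch (nothing from the paper, the score distribution is invented) of how a middling score turns into a "top few percent" finish when most of an open-signup field barely plays:

    # Toy illustration (all numbers invented): percentile rank in an
    # open-signup field where most teams barely play.
    import random
    random.seed(0)

    # 11,000 "signed up, barely tried" teams and 1,000 serious teams.
    casual = [random.randint(0, 2) for _ in range(11_000)]
    serious = [random.randint(10, 20) for _ in range(1_000)]
    field = casual + serious

    my_score = 12  # a middling score among the serious teams
    beaten = sum(1 for s in field if s < my_score)
    print(f"Outscored {beaten / len(field):.1%} of the field")  # roughly 93%

That's a "top 7%" headline, driven almost entirely by the 11,000 teams that never really played.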
It's also critically important that these "AI Teams" are very much NOT autonomous. They're being actively run by humans, and skilled humans at that.