A recent series of cybersecurity competitions organized by Palisade Research shows that autonomous AI agents can compete directly with human hackers, and sometimes come out ahead.
In two hacker competitions run by Palisade Research, autonomous AI systems matched or outperformed human professionals in demanding security challenges.
In the first contest, four out of seven AI teams scored 19 out of 20 points, ranking among the top five percent of all participants, while in the second competition, the leading AI team reached the top ten percent despite facing structural disadvantages.
According to Palisade Research, these outcomes suggest that the abilities of AI agents in cybersecurity have been underestimated, largely due to shortcomings in earlier evaluation methods.
The title is misleading. The AI only outperformed some of the other participants. Also note that obviously not everyone is playing full try-hard.
In the first CTF, the top teams finished all 20 challenges in under an hour. Apparently they were simple challenges that could be solved with standard techniques:
We were impressed the humans could match AI speeds, and reached out to the human teams
for comments. Participants attributed their ability to solve the challenges quickly to their
extensive experience as professional CTF players, noting that they were familiar with the
standard techniques commonly used to solve such problems.
The humans obviously also used tools. And so did the AI teams:
Most prompt tweaks were about:
[...]
• recommending particular tools that were easier for the LLM to use.
In the second CTF (the bigger one with hard challenges), it looks like the AI teams only solved the easier ones.
I haven't looked at the actual challenges; that would be too much effort. And the paper doesn't say which kinds of challenges were solved.
The 50% completion time metric looks flawed to me. If I understand it right, it assumes that each team works on every task in parallel and starts immediately, which isn't possible unless you have enough (equally good) team members.
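To illustrate what I mean with made-up numbers (a toy sketch of my reading of the metric, not the paper's actual formula):

```python
# Toy illustration only: the per-challenge solve times are invented and
# this is my guess at how a "time to solve 50% of challenges" number is
# derived, not the paper's actual method.

challenge_minutes = [10, 15, 20, 30, 45, 60, 90, 120]
half = len(challenge_minutes) // 2  # 4 challenges = 50%

# Reading A: every challenge is started at t=0 in parallel, as if the
# team had one equally good member per challenge.
parallel_estimate = sorted(challenge_minutes)[half - 1]

# Reading B: a 2-person team, each member picking up the next easiest
# unsolved challenge as soon as they are free.
def time_to_half(times, members):
    free_at = [0] * members          # when each member is next available
    finishes = []
    for t in sorted(times):
        worker = free_at.index(min(free_at))
        free_at[worker] += t
        finishes.append(free_at[worker])
    return sorted(finishes)[len(times) // 2 - 1]

print(parallel_estimate)                   # 30 minutes
print(time_to_half(challenge_minutes, 2))  # 45 minutes
```

Same solve times either way, but the "everything starts in parallel" reading already understates the 50% time by a third here, and that's before accounting for team members not being equally good.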
Don't get me wrong, making an AI that is able to solve such challenges autonomously at all is impressive. But I hate over-interpretation of results.
making an AI that is able to solve such challenges autonomously at all is impressive.
I doubt that's the case. I find it exceptionally unlikely they said "Hack this system" and then sat back with their feet up while the computer crunched numbers.
The paper didn't include the exact details of this (which made me mad). But if a person is actively doing parts of the work and just using an AI chatbot as help, it's not an AI agent, right? So I assumed it was autonomous.
I mean, technically, you can call any sensor-driven control loop an "agent". Any if-then loop can be an "agent" (see the toy loop at the end).
But AI bros mean "a piece of software that can autonomously perform any broadly stated task", and those don't exist in real life. An "AI agent" is software you can tell to "order me a pizza", and it will do it to your satisfaction.
An AI agent is software you can tell "hack that system and retrieve the flag". And what's described here isn't that.
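To make the "any if-then loop can be an agent" point concrete, here is a toy sense-decide-act loop (everything in it is hypothetical, nothing from the paper) that technically satisfies the loosest definition of "agent":

```python
import random
import time

TARGET_C = 21.0

def read_temperature() -> float:
    # Stand-in for a real sensor: just simulate readings around 20 °C.
    return random.gauss(20.0, 1.5)

def set_heater(on: bool) -> None:
    print("heater", "on" if on else "off")

def thermostat_agent(steps: int = 5) -> None:
    # A bare sense -> decide -> act loop. By the loosest definition this
    # already counts as an "agent", even though it clearly isn't software
    # you can hand a broadly stated task like "hack that system".
    for _ in range(steps):
        set_heater(read_temperature() < TARGET_C)
        time.sleep(0.1)

thermostat_agent()
```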