Prompt-inject an AI chatbot with … an image!

‘What is this chatbot vulnerable to?’ ‘Yes.’
https://www.youtube.com/watch?v=Ug5kTJrKeTM&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20250822-prompt-inject-an-ai-chatbot-with-an-image - podcast
The Unicode stuff amazes me, as that's one of the few things that could actually be filtered for. They're not doing any input validation at all. It isn't low-hanging fruit, it's already on the floor. The incompetence…
It feels like the rise of LLMs has set cybersecurity back by a good decade, and my guess is it actually has.
Agents are throwing away decades of hard-learned lessons in input sanitisation (handing cybercriminals a Greatest Hits compilation of vulnerabilities); "vibe coding" is introducing vulnerabilities aplenty and hiding them under mountains of technical debt and unmaintainable code; LLM usage is damaging the coding ability of both junior and senior developers; and the entire tech field is haemorrhaging talent to burnout and layoffs. And that's just what immediately comes to mind.
As I see it, cybersec may find itself practically back to square one once the dust settles.
People are using LLMs to filter logs and alerts, which will eventually get someone hacked in a spectacular way. Which will be very funny, and will give people flashbacks to twenty years ago.
Also, buffer overflows are going to be back, because the LLM doesn't know to pass strncpy the size of the destination buffer. Gonna be funny, if people keep proper backups.