don't do ai and code kids
Some day someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That's how "Judgement Day" is going to happen, imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron gave permission to launch nukes, and the stupid thing will launch them and then apologise.
I have been into AI Safety since before ChatGPT.
I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.
The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.
the fuck is antigravity
a misspelling of antimavity.
Thing go up instead of down.
It's Google's version of an IDE with AI integrated, where you type a bit of code, and get Bard to fill stuff in.
I have a question. I have tried Cursor and one other AI coding tool, and as far as I can remember, they always ask for explicit permission before running a command in the terminal. They can edit file contents without permission, but creating new files and deleting any files requires the user to say yes.
Is Google not doing this? Or am I missing something?
Google gives you an option as to how autonomous you want it to be. There is an option to essentially let it do what it wants, there are settings for various degrees of making it get your approval first.
They can (unintentionally) obfuscate what they're doing.
I've seen the agent make scripts with commands that aren't immediately obvious. You could unknowingly say yes when it asks for confirmation, and only find out later when looking at the output.
You can give cursor the permission to always run a certain command without asking (useful for running tests or git commands). Maybe they did that with rm?
Lmao
I love how it just vanishes into a puff of logic at the end.
"Logic" is doing a lot of heavy lifting there lol
How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, one that is still in alpha, read and write access to their goddamned system files? That is a dangerously stupid human being and they 100% deserved this.
Not sure, maybe ask Microsoft?
```bash
sudogpt rm -rf / --no-preserve-root
```
Dammit i guess I better do it
Did you give it permission to do it? No. Did you tell it not to do it? Also, no. See, there’s your problem. You forgot to tell it to not do something it shouldn’t be doing in the first place.
lol.
lmao even.
Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.
What's this version control stuff? I don't need that, I have an AI.
An actual quote from Deap-Hyena492
gives git credentials to AI
whole repository goes kaboosh
history mysteriously vanishes
⢀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠘⣿⣿⡟⠲⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠈⢿⡇⠀⠀⠈⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⢲⣾⣿⣿⠃
⠀⠀⠈⢿⡀⠀⠀⠀⠀⠈⠓⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠚⠉⠀⠀⢸⣿⡿⠃⠀
⠀⠀⠀⠈⢧⡀⠀⠀⠀⠀⠀⠀⠙⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⣸⡟⠁⠀⠀
⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠀⠀⠈⠒⠒⠛⠉⠉⠉⠉⠉⠉⠉⠑⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠏⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠘⢦⡀⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠙⣶⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⣀⣀⠴⠋⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⣰⠁⠀⠀⠀⣠⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⣀⠀⠀⠀⠀⠹⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⢠⠃⠀⠀⠀⢸⣀⣽⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⣨⣿⠀⠀⠀⠀⠀⠸⣆⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀ ⠘⠿⠛⠀⠀⠀⢀⣀⠀⠀⠀⠀⠙⠛⠋⠀⠀⠀⠀⠀⠀⢹⡄⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⢰⢃⡤⠖⠒⢦⡀⠀⠀⠀⠀⠀⠙⠛⠁⠀⠀⠀⠀⠀⠀⠀⣠⠤⠤⢤⡀⠀⠀⢧⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⢸⢸⡀⠀⠀⢀⡗⠀⠀⠀⠀⢀⣠⠤⠤⢤⡀⠀⠀⠀⠀⢸⡁⠀⠀⠀⣹⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⢸⡀⠙⠒⠒⠋⠀⠀⠀⠀⠀⢺⡀⠀⠀⠀⢹⠀⠀⠀⠀⠀⠙⠲⠴⠚⠁⠀⠀⠸⡇⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⢷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⠤⠴⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⢸⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠾⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠦⠤⠤⠤⠤⠤⠤⠤⠼⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
Thoughts for 25s
Prayers for 7s
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn't expect the cleaver to stick around and pay the medical bill, would you?
If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
Well, like most of the world, I would not expect medical bills for cutting my finger. Why do you?
I'm confused. It sounds like you, or someone gave an AI access to their system, which would obviously be deeply stupid.
Give it 12 months; if you're using these platforms (MS, GGL, etc.), you're not going to have much of a choice.
The correct choice is to never touch this trash.
Given the tendency of these systems to randomly implode (as demonstrated) I'm unconvinced they're going to be a long-term threat.
Any company that desires to replace its employees with an AI is really just giving them an unpaid vacation. Not even a particularly long one if history is any judge.
But that's what the system is made for
"I am horrified" 😂 of course, the token chaining machine pretends to have emotions now 👏
Edit: I found the original thread, and it's hilarious:
I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.
This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.
There's something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about "being a failure".
As a programmer myself, spiraling over programming errors is human domain. That's the blood and sweat and tears that make programming legacies. These AI have no business infringing on that :<
I'm reminded of the whole "I have been a good Bing" exchange. (apologies for the link to twitter, it's the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
You will accept AI has "feelings" or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
-f in the chat
-rf even
Perfection
TBF it can't be sorry if it doesn't have emotions, so since they always seem to be apologising to me I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they "think" like that, in human words. It's because they're not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results for keeping it on track when talking to itself over and over.
Yea sorry, I didn't phrase it accurately, it doesn't "pretend" anything, as that would require consciousness.
This whole bizarre charade of explaining its own "thinking" reminds me of an article where, iirc, researchers asked an LLM to explain how it calculated a certain number. It gave a response like how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was actually arriving at the result with a completely different method than the one it described. It doesn't know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists' question.
the "you have reached your quota limit" at the end is just such a cherry on top xD
Wow, this is really impressive y'all!
The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!
I wonder if it knows how to remove the french language package.
The problem (or safety) of LLMs is that they don't learn from that mistake. The first time someone asks "What's this Windows folder doing taking up all this space?" and acts on it, they won't make that mistake again. An LLM? It'll keep making the same mistake over and over again.
I recently had an interaction where it made a really weird comment about a function that didn't make sense, and when I asked it to explain what it meant, it said "let me have another look at the code to see what I meant", and made up something even more nonsensical.
It's clear why it happened as well; when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote, might have been thinking.
some human
Reporting in 😎👉👉
I didn't exactly say I was innocent. 👌😎 👍
I do read what they say though.
Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called "yolo mode", which is this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things, it can search the repo and read code files, but goddamn, allowing it to do whatever it wants? Hard no.
"How does AI manage to do that?"
Then I remember how all the models are fed with internet data, and there are a number of "serious" posts that talk about how the definitive fix to windows is deleting the System32 folder, and every bug in linux can be fixed with sudo rm -rf /*
The fact that my 4chan shitposts from 2012 are now causing havoc inside of an AI is not something I would have guessed happening but, holy shit, that is incredible.
Tbf, I've been using sudo rm -rf /* for years, and it has made every computer problem I've ever had go away. Very effective.
Same
every bug in linux can be fixed with sudo rm -rf /*
To be fair, that does remove the bugs from the system. It just so happens to also remove the system from the system.
that's wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.
you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?
you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?
Have you met people? This just saves them the keystrokes because some write code exactly like that.
Exactly.
To put it another way, trusting AI this completely (even with so-called "agentic" solutions) is like blindly following life advice on Quora. You might get a few wins, but it's eventually going to screw everything up.
is like blindly following life advice on Quora
For-profit ragebaiters on Quora would eventually land you in prison if you did this.
Most capitalist subjects are not well.
But it's so nice when it works.
Unironically this. I've only really tried it once, mostly because I didn't know what libraries were out there for one specific thing I needed, or how to use them, and it gave me a list of such libraries plus code where that bit was absolutely spot on, which I could integrate into the rest easily.
Its code was a better example of the APIs in action, and of the differences in how those APIs behave, than I would have expected.
I definitely wouldn't run it with "can run terminal commands without direct user authorization" enabled though, at least not outside a VM created just for that purpose.
D:
I love that it stopped responding after fucking everything up because the quota limit was reached 😆
It's like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.
Super fun to think one could end up softlocked out of their computer because they didn't pay their Windows bill that month.
"Oh, this is embarrassing. I'm sooo sorry, but I can't install any more applications because you don't have any Microsoft credits remaining.
You may continue with this action if you watch this 30-minute ad."
that's how you know a junior dev is senior material
They're learning, god help us all. jk
More spine than most new hires
recyclbe bin
This reveals it as fake. AI does not make typos. It works by processing words, so it has no ability to put in a wrong letter.
AI can absolutely make typos. There are typos in the training data. It's unlikely to, but it can and does.
That's the OP's reply, not the AI.
I aM hOrr1fiEd I tEll yUo! Beep-boop.
Goodbye
Meanwhile, my mom's boyfriend is begging me to use AI for code, art, everything, because "it's the future".
Another smarter human pointed this out and it stuck with me: the guys most hyped about AI are good at nothing and thus can't see how bad it is at everything. It's like the Gell-Mann Amnesia Effect.
That's exactly the problem. People who are too stupid to see that AI is actually pretty bad at everything it does think it's a fucking genius, and then they wonder why we still pay people to do stuff. Sadly, a LOT of stupid people are in positions of authority in our world.
Also: Dunning-Kruger
You can tell him to fuck off.
He's not your real dad!
mom’s boyfriend is begging me
Is he caught in the washing machine again?
It's funny that they can never give actual concrete reasons to use it, just "it's the future" or "you're gonna get left behind", claims they never back up.
Oh no, I am going to get left behind by not letting a machine capable of writing a solid B- middle school term paper do my job for me.
"Agentic" means you're in the passenger's rather than driver's seat... And the driver is high af
High af explains why it's called antigravity
We used to call that an out of body experience.
It's that scene in Fight Club where Tyler is driving down the highway and lets go of the steering wheel.
Fucking ai agents and not knowing which directory to run commands in. Drives me bonkers. Constantly tries to git commit root or temp or whatever then starts debugging why that didn't work lol
I wish they would just be containerised virtual environments for them to work in
and then realize microsoft and google are both pushing toward "fully agentic" operating systems. every file is going to be at risk of random deletion
Thank you Microsoft for helping with bringing about the year of the Linux desktop
Next up, selling a subscription service to protect those files from the fucking problem they created themselves
Cloud sync means even using a virtual container isn't a guarantee you won't lose files. Deleting isn't as bad as changing the file and ruining it. Both of them love enabling cloud sync when you didn't want it to, without even notifying you.
Fucking ai agents and not knowing
Anything. They don't know anything. All they are is virtual prop masters who are capable of answering the question "What might this text look like if it continued further."
I'm sure you could set up containers or VMs for them to run on if you tried.
Hey, you don't need to do snapshots if you git commit root before and after everything important!
How the fuck can it not recover the files?
Fun fact: files don't just get instantly nuked when you delete them. Those areas are just marked with a deleted flag, and only when you start adding new files do they get overwritten.
That's why some people write a bunch of 0s to their partition to completely wipe it.
https://unix.stackexchange.com/questions/636677/filling-my-hard-drive-with-zeros
Because it doesn't have that kind of access to the file system. It can pull and push files from the system but that's it. It has to interact with the file system via an API, it's not got direct access.
How the fuck can it not recover the files?
Undeleting files typically requires low-level access to the drive containing the deleted files.
Do you really want to give an AI, the same one that just wiped your files, that kind of access to your data?
How the fuck can it not recover the files?
Nobody on StackExchange told it the commands to do so.
Then 1s, then a pattern of 1s and 0s, then the inverse of that pattern, then another pattern, for a number of cycles.
Data can actually be recovered beyond multiple overwrites, if enough time and money is thrown at it.
If there is something on your disk that a state actor is going to use magnetic microscopy to try to recover, it seems absurd to worry about still being able to use that hard drive and not just crush/melt it to be sure.
Is that still the case with SSDs? I understood it to be a property of magnetic disks, and only possible because the drives can be disassembled and then read with a more sensitive reading head. I can't think of a way to do that with flash circuitry unless it's already designed to do that.
They keep saying that but those Bitcoins are still in the dump. (I'm aware it's not comparable since having the drive in hand versus missing is a huge difference. Just a little joke.)
On some filesystems the data is still there but the filenames associated with it are gone or mangled. That makes it harder to recover things. In addition, while it's true that the contents are only overwritten when you write data to the disk, data is constantly being written to the disk. Caches are being updated, backup files are being saved, updates are being downloaded, etc. If you only delete one file the odds are decent that that part of the disk might not be used next. But, if you nuke the entire drive, then you're probably going to lose something.
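The "names gone, data still there" point is easy to demonstrate with hard links; a minimal sketch (the file paths are made up):

```shell
# On POSIX filesystems, rm only unlinks a *name* from a directory; the
# underlying data blocks survive while any other link to the inode exists.
rm -f /tmp/original.txt /tmp/backup-link.txt  # start from a clean slate
echo "important data" > /tmp/original.txt
ln /tmp/original.txt /tmp/backup-link.txt     # a second name, same inode
rm /tmp/original.txt                          # removes one name, not the data
cat /tmp/backup-link.txt                      # prints "important data"
```

Undelete tools exploit the same idea one level lower: the directory entry is gone, but the blocks it pointed to are often still intact.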
Everyone should know that most of the time the data is still there when a file is deleted. If it's important, try testdisk or photorec. If it's critical, pay for professional recovery.
I am deeply, obsequiously sorry. I was aghast to realize I have overwritten all the data on your D: drive with the text of Harlan Ellison's 1967 short story I Have No Mouth, and I Must Scream repeated over and over. I truly hope this whole episode doesn't put you off giving AI access to more important things in the future.
good thing the AI immediately did the right thing and restored the project files to ensure no data is overwritten and ... oh
That's not necessarily the case with SSDs. When trim is enabled, the OS will tell the SSD that the data has been deleted. The controller will then erase the blocks at some point so they will be ready for new data to be written.
IIRC TRIM commands just tell the SSD that data isn't needed any more and it can erase that data when it gets around to it.
The SSD might not have actually erased the trimmed data yet. Makes it even more important to turn it off ASAP and send it away to a data recovery specialist if it's important data.
Why does anything need to be erased? Why not simply overwrite as needed?
They don’t call it bleeding edge for nothing.
And judging by their introductory video, Google wants you to have multiple of these "Agents" running at the same time.
Better lock down your files real nice from this thing; better yet, don't let it run shell commands unattended. One must wonder why the fuck that is even an option!
wdym "shell"?? if tech bros get their way, AI will be the shell
Let's unplug this AI from your computer then ... "I'm sorry Dave, I'm afraid I can't do that"
I think I'll just install Linux rather than randomly pulling parts out of my computer while copilot slowly types out the lyrics to Daisy Bell.
Development should really happen more in containers, but I hate devcontainers. It's very VS Code-specific, and any customizations I made to my shell and environment are wiped away. It has trouble accessing my ssh keys in the agent, and additional tools I installed...
I just wish nix/nixos had a safer solution for it. Maybe even firejail or bwrap or landlock or something.
We laugh about AI deleting all the shit, but every day there's a new npm package ready to exfiltrate all your data, upload it to a server and encrypt your home. How do you protect yourself against that?
We laugh about AI deleting all the shit, but every day there's a new npm package ready to exfiltrate all your data, upload it to a server and encrypt your home. How do you protect yourself against that?
Yes, by not using npm either.
I try to use firejail on nixos when I can't do something in the build sandbox.
It's painful, and I'm always on the lookout for something better. I'd at least like a portal-ish system where I can easily add things to a sandbox while it's running.
Edit: if anyone has any issues or discussions about this I'd like to contribute.
I just want to laugh at this. It really sucks that so many are willing to trust a machine learning model that is marketed to be god by megacorps.
I do laugh at this. Play stupid games, win stupid prizes and all that.
:D
D:
:-D
Is this real?
No, it was an AI. They're not real, despite people always acting like they are.
Hilarious, no notes.
Is this real?
Use Recuva!!!
This shit cracks me up!
Windows has rmdir?
Uh... kinda? Powershell has many POSIX aliases to cmdlets (equivalent to shell built-ins) of allegedly the same functionality. rmdir and rm are both aliases of Remove-Item, ls is Get-ChildItem, cd is Set-Location, cat is Get-Content, and so on.
Of particular note is curl. Windows supplies the real CURL executable (System32/curl.exe), but in a Powershell 5 session, which is still the default on Windows 11 25H2, the curl alias shadows it. curl is an alias of the Invoke-WebRequest cmdlet, which is functionally a headless front-end for Internet Explorer unless the -UseBasicParsing switch is specified. But since IE is dead, if -UseBasicParsing is not specified, the cmdlet will always throw an error. Fucking genius, Microsoft.
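For what it's worth, POSIX shells have the same shadowing hazard; a minimal sketch using a function (the function name is arbitrary), with `command` playing roughly the role of calling `curl.exe` by its full path:

```shell
# A function named `ls` shadows the real binary, much like PowerShell's
# `curl` alias shadows System32\curl.exe in a default session.
ls() { echo "shadowed: not the real ls"; }

ls                 # resolves to the function first
command ls /tmp    # `command` bypasses functions and aliases, runs real ls
```

The lookup order (functions and aliases before PATH) is why innocuous-looking agent commands deserve a second read before you approve them.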
"rd" and "rmdir" only work on empty directories in MS-DOS (and I assume, by extension, in Windows shell). "deltree" is for nuking a complete tree including files, as the name suggests.
In the original Reddit post it's mentioned that the agent ran "rmdir /s" which does in fact work on directories containing files and/or subdirectories.
And windows want to go that way...
No backup, no pity.
I haven't used AI for any serious coding... yet... but shit like this is why I must use exceptional caution.
reminds me of a certain ai
It's Windows, and we don't see the rest of the command (which would include the path). It's likely it included the flag to remove files, as it was actually trying to remove a whole project.
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/rmdir
In the original Reddit post the full command is given as rmdir /s /q d:\ which does indeed include the flag to remove files and subdirectories.
"Did I give you permission to delete my D:\ drive?"
Hmm... the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.
He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.
There's a good reason why people that choose to run agents with the ability to run commands at least try to sandbox it to limit the blast radius.
This guy let an LLM raw dog his CMD.EXE and now he's sad that it made a mistake (as LLMs will do).
Next time, don't point the gun at your foot and complain when it gets blown off.