Linux file deletion is often used as an example of good software design. It has a very simple interface with little room for error, while doing exactly what the caller intended.
John Ousterhout's "A Philosophy of Software Design" has a chapter called "Define Errors Out of Existence". In Windows, "delete" is defined as "the file is gone from the disk", so the system must wait for all processes to release that file. In Linux, "unlink" is defined as "the file can't be accessed by this name anymore", so the file is gone from the filesystem immediately, and existing file handles held by other processes live on.
The trade-off here is "more errors for the caller of delete" vs. "more errors due to file handles to dead files". And as it turns out, the former creates issues for both developers and users, while the latter creates virtually no errors in practice.
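The unlink semantics described above are easy to see from Python on a Linux/macOS system — a minimal sketch: the name disappears immediately, but an already-open handle keeps working until it's closed.

```python
import os
import tempfile

# Create a file and keep a handle open to it.
f = tempfile.NamedTemporaryFile(mode="w+", delete=False)
f.write("still readable")
f.flush()
path = f.name

os.unlink(path)              # the name is gone from the filesystem immediately...
print(os.path.exists(path))  # -> False

f.seek(0)
print(f.read())              # -> "still readable": the open handle lives on
f.close()                    # last reference dropped; the storage is reclaimed now
```

The kernel only frees the underlying inode once the link count and the open-handle count both hit zero, which is exactly why "delete" never has to wait for other processes.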
Exactly. Type rm -rf / instead of rm -rf ./ and you ducked up. Well, you messed up a long time ago by having privileges to delete everything, but then again, you're human; some mistakes will be made.
OneDrive has a trash for the trash. I'm still not convinced those files are gone after the 2nd empty; I think they just don't show the other trash cans.
Outlook on Exchange is like this. You can delete stuff into the Deleted Items folder. If you delete it from there, it goes into another area called 'Recover Deleted Items'.
They usually support one, but it's generally not provided by the file manager itself. This means that, assuming the file managers use the same trash system, you can trash a file in one and recover it in another.
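The shared trash the comment refers to is the freedesktop.org Trash layout under `~/.local/share/Trash`. A minimal sketch of how "trashing" works under that spec (real code should go through gio/KIO; paths here use a temp directory as a stand-in for `$HOME`):

```python
import os
import shutil
import tempfile
from datetime import datetime

home = tempfile.mkdtemp()  # stand-in for $HOME, so this demo touches nothing real
trash = os.path.join(home, ".local/share/Trash")
os.makedirs(os.path.join(trash, "files"))  # the trashed files themselves
os.makedirs(os.path.join(trash, "info"))   # matching .trashinfo metadata

victim = os.path.join(home, "report.txt")
with open(victim, "w") as f:
    f.write("hello")

# "Trashing" = move the file + record where it came from and when.
shutil.move(victim, os.path.join(trash, "files", "report.txt"))
with open(os.path.join(trash, "info", "report.txt.trashinfo"), "w") as f:
    f.write(f"[Trash Info]\nPath={victim}\n"
            f"DeletionDate={datetime.now():%Y-%m-%dT%H:%M:%S}\n")

print(os.listdir(os.path.join(trash, "files")))  # -> ['report.txt']
```

Because the layout is a spec rather than an app feature, any compliant file manager (or `gio trash`) can list and restore the same entries — which is exactly why a file trashed in one manager shows up in another.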
Akchooly, what you're referring to as a terabyte (TB) is called a tebibyte (TiB), because window$ sucks and JEDEC made everyone believe that binary units are metric units, which is stupid. But we have the savior IEC, which KDE uses in all of their software, and I respect that.
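For the record, the gap between the two units is real and grows with each prefix — at the tera/tebi level it's already about 10%:

```python
# Decimal (SI) vs binary (IEC) units that OS user interfaces blur together:
TB = 10**12   # terabyte: SI prefix, what drive manufacturers sell
TiB = 2**40   # tebibyte: IEC binary prefix, what most OSes actually count in

print(TiB - TB)            # -> 99511627776 bytes of difference
print(f"{TB / TiB:.1%}")   # -> 90.9%: a "1 TB" drive is ~90.9% of a TiB
```

This is why a freshly bought 1 TB drive shows up as roughly 931 "GB" in Windows: Explorer divides by 2^30 but labels the result GB instead of GiB.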
They are not likely to be using the terminal. Pretty much every graphical file browser will ask for confirmation upon delete, and many will use a rubbish bin by default.
To be fair, assuming you're not using a wastebasket (which comes pre-installed in a lot of distros), you still need the right permissions to delete files that belong to the system, and if you're using rm you have to pass the -rf option to remove a folder and its contents.
I did, and it was fast. I was a complete noob, so I thought rm -rf /* would delete everything in the current folder. I hit Ctrl+C, but it was too late. It took only a few seconds to wipe out the whole system.
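The trap here is that the shell expands the glob before rm ever runs, so /* always names the root's top-level directories no matter where you are. A safe way to see it, using echo so nothing is deleted:

```shell
cd "$(mktemp -d)" && touch a b   # a scratch directory with two files

echo rm -rf ./*   # prints: rm -rf ./a ./b   (anchored to the current directory)
echo rm -rf /*    # prints: rm -rf /bin /boot ...   (root's contents, wherever you are)
```

rm itself never sees the `*`; it only receives the already-expanded list of paths, which is why Ctrl+C mid-way can't undo what's already been passed and unlinked.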
Yeah, but get this! It's not enough to just invoke cmd in Windows with Win+R (sorry, sorry... Super+R), even though you're invoking it from an admin account; no sir, it's still just a plain user as far as cmd is concerned.
And this is what you get when you want backwards compatibility all the way down to DOS.
One of my first experiences with Linux at university was watching a classmate install Slackware, and then (for a laugh) dragging everything into the recycle bin.
They got a passing grade, because the lecturer had already seen their working installation, but they learned a valuable lesson about Linux: if you delete something, it'll fucking delete it.
Also, Defender is synchronous by default (i.e. nothing gets written until it's been scanned, and scanning parallelization is limited), and it can only act asynchronously (write first, then queue a scan) on "trusted Dev Drives" (ReFS-based virtual VHDX partitions aimed at developers as a solution to horrible NTFS throughput, especially with Defender enabled).
I don't think LLMs usually make this kind of mistake.
Maybe it wasn't written by a native speaker. Also, it's a doge meme; the slightly bad grammar could be on purpose.