Obligatory DO NOT RUN THIS ON YOUR COMPUTER (or anyone else's).
You'd think fully open permissions would mean everything just works, but it's the opposite: many programs, including important low-level ones, interpret it as a sign of damage or tampering and refuse to operate instead.
If you do run it, you'd better have a backup or something like Timeshift to bail you out, and even if you do have that, it's not worth trying it just to see what will happen.
It's not quite as bad as deleting everything because you can boot from external media and back up non-system files after the fact, but the system will almost certainly not work properly and need to be repaired.
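To make the failure mode concrete (a rough sketch, not an exhaustive list): a recursive 777 doesn't just open files up, it also wipes out the setuid and sticky bits, and several core tools explicitly check for that and bail out.

```
# What a recursive chmod 777 actually does to a couple of critical paths:

ls -l /usr/bin/sudo
# healthy: -rwsr-xr-x  (the 's' is the setuid bit sudo depends on)
# after:   -rwxrwxrwx  -> sudo refuses with something like
#          "sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set"

ls -ld /tmp
# healthy: drwxrwxrwt  (the sticky 't' keeps users from deleting each other's files)
# after:   drwxrwxrwx  -> that safety net is gone

# Daemons like sshd go the other way: their private keys are now readable
# by everyone, so they treat them as compromised and won't start.
```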
One of our servers is a rotting carcass being kept alive by our collective prayers. It runs Windows 7 and custom software whose developer is dead and whose source is missing. Nothing has been updated in over a decade, and it has its own independent UPS because once it goes down it has an extremely slim chance of coming back, and we're afraid to test that. It controls the card entry system for the building, including the server room. The boss doesn't want to replace it because we'd have to replace all of the terminals and controllers too, and it hasn't catastrophically failed yet.
You're right. It's not a pet. It's like one of the Saw movies: if it dies, we're all fucked.
I'm sure there's a good reason (or at least a believable reason) but I'm curious now, why can't copies be made of the binary/data and start trying to get it running on a VM or another box?
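For what it's worth, the usual first attempt is a straight disk image into a VM (just a sketch; device names, hostnames and paths below are made up, and Windows 7 often fights back over storage drivers and activation once the "hardware" changes):

```
# Boot the old box from a live USB so the disk is quiescent, then stream an
# image of it over the network to a lab machine:
dd if=/dev/sda bs=4M status=progress | gzip -c | ssh lab-host 'gunzip -c > /vmstore/legacy-box.raw'

# On the lab machine, convert to qcow2 and try booting it under KVM:
qemu-img convert -f raw -O qcow2 /vmstore/legacy-box.raw /vmstore/legacy-box.qcow2
qemu-system-x86_64 -m 4096 -smp 2 -hda /vmstore/legacy-box.qcow2 -vnc :1
```

The catch with a box like that is often the hardware it talks to (the door terminals and controllers), not the OS image itself.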
The question I often ask clients who think this way is: "How much would it cost if it did fail? Let's say it happened today. What would it cost to replace it NOW, and on top of that to make sure the people who rely on it can keep working through the interruption?

Now how much would it cost to schedule the interruption and manage the fallout in a way that's controllable?"
For some, a catastrophic failure is a chance for a "hey, I fixed the thing!" moment. And the incentives for that kind of person are different from those of the person whose job is to mitigate risk.
It sounds like your boss is the former. In which case it's going to be fun when it fails.
I learned this relatively quickly running my own server with the intention of my family also using it. Data lives on a separate drive, backed up regularly and automatically. The system is on its own drive, dd'd once it's in its final state and backed up again before I screw around any deeper than trying out a new container. I can bring my server back up in however long it takes to transfer the data.
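Roughly what that looks like in practice, in case it helps anyone (a sketch only; device names and paths are examples, and the dd image should be taken from a live USB so nothing is writing to the system drive):

```
# One-time image of the system drive once it's in a known-good state:
dd if=/dev/nvme0n1 bs=4M status=progress | gzip -c > /mnt/data/backups/system-golden.img.gz

# Restore is just the same thing in reverse:
gunzip -c /mnt/data/backups/system-golden.img.gz | dd of=/dev/nvme0n1 bs=4M status=progress

# The data drive gets synced to another disk automatically, e.g. a nightly cron job:
# 0 3 * * * rsync -a --delete /mnt/data/ /mnt/backup/data/
```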
Someone actually ran it on a server at my workplace, trying to fix file permissions on a Samba share. It broke SSH and the Samba daemon. Thankfully I was able to fix it by tightening the permissions back down on the config files the error logs pointed to.
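For anyone who hits the same thing, the recovery was basically log-driven (a sketch; unit names, paths and modes vary by distro, so treat these as examples):

```
# The logs name the files the daemons are unhappy about:
journalctl -u ssh -u sshd -u smbd | grep -iE 'permission|too open|unprotected'

# Typical repairs after a stray recursive 777:
chmod 600 /etc/ssh/ssh_host_*_key       # sshd won't use host keys readable by others
chmod 644 /etc/ssh/ssh_host_*_key.pub
chmod 644 /etc/samba/smb.conf           # config files shouldn't be group/world-writable
systemctl restart ssh smbd
```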
Just saying, I think it was a ChatGPT idea; other people use it every day. I only use it when I'm completely stumped, and even then I only take its output as suggestions.