To be fair, ASCII (the American Standard Code for Information Interchange) was only ever meant to represent English, which doesn't care what language your name comes from.
It's a "joke" because it comes from an era when memory was at a premium and, for better or worse, the English-speaking world was at the forefront of technology.
The fact that English has an alphabet of length just shy of a power of two probably helped spur on technological advancement that would have otherwise quickly been bogged down in trying to represent all the necessary glyphs and squeeze them into available RAM.
... Or ROM for that matter. In ROM, you'd need bit patterns or vector lists describing each and every character, and that's necessarily an order of magnitude bigger than storing one value per glyph. ROM is an order of magnitude cheaper, but those two orders of magnitude basically cancel out, and you end up with a ROM that costs as much to make as the RAM would.
And when you look at ASCII's contemporary EBCDIC, you'll realise what a marvel ASCII is by comparison. Things could have been much, much worse.
It's a joke because it includes useless letters nobody needs, like that weird o with the leg, and a rich set of field and record separator characters that are almost completely forgotten, etc., but not normal letters used in everyday language >:(
Yes, I'm being sarcastic, but I also think UTF-8 is what "plaintext" means these days. I really can't spell my name in US-ASCII. As the other commenter here went into in more detail, it has its history, but it isn't suited for today's international computer users.
UTF-16 also shows up as a surprise internal representation here and there; at least that's still in the realm of Unicode. I'd also expect there's some UTF-32 still floating around somewhere, but I couldn't tell you where.
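To make the point concrete, here's a quick sketch in Python (using a hypothetical name, "Åse") of why US-ASCII can't spell every name while the Unicode encodings can, each at a different byte cost:

```python
name = "Åse"  # hypothetical name with one non-ASCII letter

# ASCII simply has no code point for Å, so encoding fails outright:
try:
    name.encode("ascii")
except UnicodeEncodeError as e:
    print("ASCII fails:", e)

# The Unicode encodings all handle it, at different byte costs:
print(len(name.encode("utf-8")))   # 4 bytes: Å takes two, s and e one each
print(len(name.encode("utf-16")))  # 8 bytes: 2-byte BOM + 2 bytes per char
print(len(name.encode("utf-32")))  # 16 bytes: 4-byte BOM + 4 bytes per char
```

The byte counts also show why UTF-8 won for interchange: ASCII-heavy text pays nothing extra, while UTF-16 and UTF-32 double or quadruple it.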
And is MySQL still doing that thing where utf8 is a noob trap and utf8_for_real_we_mean_it_this_time_honest (or whatever they actually called it) is the real UTF-8?
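For what it's worth, the real name is utf8mb4; MySQL's legacy utf8 is an alias for utf8mb3, which stores at most three bytes per character and so rejects anything outside the Basic Multilingual Plane. A small sketch of which strings fall into the trap (the helper name is my own):

```python
def fits_in_mysql_utf8(s: str) -> bool:
    """True if every character encodes to <= 3 bytes of UTF-8,
    i.e. would survive MySQL's legacy utf8 (utf8mb3) charset."""
    return all(len(ch.encode("utf-8")) <= 3 for ch in s)

print(fits_in_mysql_utf8("héllo"))  # True: é is only 2 bytes in UTF-8
print(fits_in_mysql_utf8("🙂"))     # False: emoji need 4 bytes
```

That's why the failure mode is so sneaky: everything works fine until the first user pastes in an emoji.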
Me too. To this day our national electric invoice standard uses ISO-8859-15. And that's just fine until somebody feels the need to have a look with Notepad, add a random space, and save the file.
Notepad then helpfully changes the encoding to UTF-16 and the whole batch errors out somewhere down the chain.
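The failure mode is easy to reproduce. Here's a sketch in Python (the invoice line is made up) of what the downstream reader sees when a file it expects to be ISO-8859-15 has been re-saved as UTF-16:

```python
line = "Betrag: 100€"  # hypothetical invoice line; € exists in ISO-8859-15

original = line.encode("iso-8859-15")
resaved = line.encode("utf-16")  # what the editor writes, BOM included

# The downstream reader still decodes as ISO-8859-15; it doesn't crash,
# but the result is a BOM plus NUL bytes interleaved with the text,
# which any strict parser further down the chain will reject:
garbage = resaved.decode("iso-8859-15")
print("\x00" in garbage)  # True
```

Every byte happens to be decodable as ISO-8859-15, so nothing fails at the decode step itself; the error only surfaces later, which is exactly why these bugs are so hard to trace.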
You'd think things would be simple, given the existence of UTF-8.
And yet for the last 17 years, every company I've been at has had some horrible mess involving Unicode and non-Unicode, with nobody either recognising the problem or knowing how to solve it when they did recognise it ("well, the £ turns into a ?, so we just replace any ? in the filename with a £").
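That hack is worth spelling out, because it shows exactly what's going wrong. A sketch in Python (the filename is made up) of the lossy round trip and the "fix":

```python
filename = "Q3 accounts £500.xlsx"  # hypothetical filename

# Some step in the pipeline encodes to ASCII with replacement,
# so £ becomes a literal '?':
mangled = filename.encode("ascii", errors="replace").decode("ascii")
print(mangled)  # Q3 accounts ?500.xlsx

# The "fix" then blindly maps every '?' back to '£'. It works here,
# but only by luck: it also corrupts any filename that legitimately
# contained a question mark, and any character that *isn't* £ but
# was also replaced by '?' comes back wrong.
restored = mangled.replace("?", "£")
print(restored == filename)  # True, by coincidence
```

The real fix, of course, is to stop the lossy encode from happening in the first place, not to guess at reversing it.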
In my experience things are fine while you work in a single environment, or you have control over the entire pipeline of data. Things quickly turn into a story from the Bible when different systems start trying to communicate.
Even with a single standard in a single project, things have a tendency to break down as soon as there's more than one developer and disagreement arises about what the text of the standard's specification actually means.