That's not how token ring worked. The token controlled which node was allowed to transmit over a shared medium. Every node saw every packet and made its own determination of relevance.
That's what I thought too, unless the pic (left) literally is how the cables are arranged??
My understanding was a shared medium (say, all computers in parallel on a single UTP), where they pass a virtual token "packet" that assigns the right to transmit while anyone receives if addressed, like a ball between kindergarteners sitting in a circle.
The pictured ring topology (left) makes it seem like everyone can only talk to the computer one over, which seems awful for efficiency and resilience, while the pictured star topology (right) introduces an authority figure (the MAU is like a kindergarten teacher who decides who walks around and gives the ball to whichever child they think should speak next). Both seem inherently worse than Ethernet - the left can be completely broken by disabling one or two nodes, while the right one is just a switched network with less throughput.
Back when token ring was designed, networks would normally use coaxial cables for communication. Whether it ran Ethernet, token ring, or something else, everybody would basically share a single cable. The cable would have T connectors inserted to connect each computer, and the end of the cable needed something to terminate it. It didn't need to be a single line; you could have splits and even a star-like design, although there were limitations.
And you are right, any disruption anywhere on the line meant the network would go down. That might be someone removing the termination cap on the end, or simply the line being broken somewhere. However, because computers were usually connected using T splitters, it didn't really matter whether the computer itself was connected or not. But the connection not being terminated properly could be an issue, especially if there was another cable between the T connector and the computer.
Normally in a room the cable would be laid out like a ring, although it usually wouldn't be a closed ring but would instead be terminated at one end. This meant each computer would be connected to its direct neighbors, but this wasn't an active thing: it wasn't like a computer could only transmit to its neighbors, which then needed to pass it on. It was a shared line, where everyone could transmit and every computer would receive everything transmitted.
When everything switched over to the regular twisted-pair cables we know today, it didn't really change from a communications point of view. Each computer was no longer connected to its neighbors but to a hub, yet just like before, anything anyone transmitted could be received by anyone on the network. It wasn't until much later, when things like switches became commonplace, that not everyone got all the traffic anymore.
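If it helps, here's a toy sketch in Python (all names made up) of that "everyone hears everything, the NIC filters by address" model, which applied to both the coax bus and the hub era:

```python
# Toy model of a shared medium (coax bus or dumb hub): every frame reaches
# every station, and each NIC simply drops frames not addressed to it.
from dataclasses import dataclass

@dataclass
class Frame:
    dst: str       # destination address
    src: str       # source address
    payload: str

class Station:
    def __init__(self, mac: str):
        self.mac = mac
        self.received = []

    def on_medium_activity(self, frame: Frame):
        # Every station sees every frame; only the addressee (or everyone,
        # for a broadcast) actually keeps it.
        if frame.dst in (self.mac, "ff:ff:ff:ff:ff:ff"):
            self.received.append(frame)

class SharedMedium:
    """One cable (or hub): transmitting means everyone hears it."""
    def __init__(self):
        self.stations = []

    def attach(self, station: Station):
        self.stations.append(station)

    def transmit(self, frame: Frame):
        for s in self.stations:
            s.on_medium_activity(frame)

bus = SharedMedium()
a, b, c = Station("aa"), Station("bb"), Station("cc")
for s in (a, b, c):
    bus.attach(s)

bus.transmit(Frame(dst="bb", src="aa", payload="hello"))
print([f.payload for f in b.received])  # ['hello']
print(c.received)                       # [] - saw the frame, dropped it
```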
There definitely are good reasons why Ethernet won out over token ring, but there are scenarios where token ring was better. Before modern bridges, Ethernet could struggle with collisions if a network were too highly utilized - especially if nodes were physically spread out.
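To give a feel for why, here's a rough, purely illustrative sketch of how classic Ethernet reacts to a collision: every involved sender waits a random number of slot times drawn from a window that doubles with each retry, so on a busy, physically spread-out segment more and more time goes to waiting and retransmitting instead of useful traffic:

```python
# Rough sketch of binary exponential backoff on shared Ethernet.
# Numbers are illustrative, not taken from any spec document.
import random

def backoff_slots(collisions: int) -> int:
    # After the n-th collision in a row, wait a random number of slot
    # times in 0 .. 2^n - 1 (classic Ethernet capped the doubling at 10).
    return random.randint(0, 2 ** min(collisions, 10) - 1)

for n in range(1, 6):
    print(f"after collision {n}: wait {backoff_slots(n)} slot times")
```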
As for the diagrams, it can sometimes be confusing when it's not made clear what is being represented. Physical and logical topologies (star, bus, and so on) can be mixed and matched in different ways, and diagrams don't always make clear which one they refer to.
I think token ring is a data link layer technology that controls transmission access over the physical connection. Like early non-switched Ethernet, computers are connected in parallel to the same wires, but instead of collision detection and random delays (which caused congestion and serious overhead on busy networks), a "token" is passed around and determines the right to "speak". Everyone listens at the same time and starts receiving packets when addressed. If the computers were literally wired in series like a looping daisy chain, the failure of one would destroy message propagation. Instead, if the token-bearing computer crashes or disconnects from a token ring network, the token is presumed expired after a short while and a new token-bearer is chosen. It's like a kindergarten activity where you sit around in a circle and need to hold the ball to speak, passing it around. It doesn't matter who you're addressing, you can even broadcast, but that's handled by a higher-level protocol.
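To make that mental model concrete, here's a minimal Python sketch (emphatically not the real IEEE 802.5 machinery, just the "ball in a circle plus recovery" idea): one token circulates, only the holder transmits, and if the holder vanishes the others treat the token as expired and pick a new holder:

```python
# Minimal sketch of a circulating token with "token presumed lost" recovery.
class VirtualTokenRing:
    def __init__(self, macs):
        self.order = sorted(macs)      # a fixed order everyone agrees on
        self.holder = self.order[0]    # whoever currently holds the token

    def pass_token(self):
        i = self.order.index(self.holder)
        self.holder = self.order[(i + 1) % len(self.order)]

    def holder_lost(self):
        # Token presumed expired: hand it to the next station in order
        # and drop the dead node from the ring.
        dead = self.holder
        self.pass_token()
        self.order.remove(dead)

ring = VirtualTokenRing(["aa", "bb", "cc"])
print(ring.holder)   # aa may transmit
ring.pass_token()
print(ring.holder)   # bb may transmit
ring.holder_lost()   # bb crashed mid-turn
print(ring.holder)   # cc regenerates the token and carries on
```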
As for memos, I have never used them and they seem extremely inefficient.
Edit: looks like Token Ring is actually more physical than I thought, with special cables connecting computers in series, so you may be right. That sounds really stupid as a thing to build a network on - it's easy to cut it in half by disabling just two computers, which is antithetical to the internet's resiliency principle.
Edit edit: my original understanding was right; the literal cable ring is obsolete for good reason. I still don't get the role of the MAU in the star topology, unless it's just needed for old NICs to understand virtual tokens.
My memory of token ring is vague, but I think it was originally a ring in series as you said - however, token ring switches (that isn't what they were called) also existed, which were the "modern" way of wiring up a token ring network.
Yeah, see the pic in the thread. The "switch" (MAU, Media Access Unit) seems redundant to me though; based on what I read, I would expect the network interface cards to create a functional ring on their own over a shared medium. Maybe the old cards for ring-topology networks only worked in that one mode, and the MAU made them compatible by pretending they were part of a physical ring, cutting computers out of it if they turned off.
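Something like this toy model is what I imagine the MAU doing - per-port bypass relays that keep the logical ring closed whenever a station is off or unplugged (pure speculation on my part, names made up):

```python
# Toy model of a MAU: a physical star whose ports can be bypassed,
# so the logical ring never breaks when a station goes away.
class MAU:
    def __init__(self, ports: int):
        self.inserted = [False] * ports    # relay state per port

    def insert(self, port: int):
        self.inserted[port] = True         # station powered up, joins the ring

    def bypass(self, port: int):
        self.inserted[port] = False        # station off, relay shorts it out

    def ring_order(self):
        # The logical ring only includes inserted stations.
        return [p for p, up in enumerate(self.inserted) if up]

mau = MAU(4)
mau.insert(0); mau.insert(2); mau.insert(3)
print(mau.ring_order())   # [0, 2, 3]
mau.bypass(2)             # station 2 turned off
print(mau.ring_order())   # [0, 3] - ring still intact
```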
My point is, if you have a shared medium anyway, you can get rid of the MAU by having nodes manage the (virtual) token themselves, basically taking limited-time turns in some agreed order, like ascending MAC addresses. You could then wire the cable any way you want, with unlimited junctions, taps, whatever, as long as you created a graph where all nodes are connected to each other. The entire point of a token ring is to manage a shared medium (that is, a single pair of wires, either UTP or coax, which can efficiently be wired along the shortest, possibly branching, path), because if you have to run a direct connection from every endpoint to a MAU in a star topology, you could just have an Ethernet switch anyway.
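Something like this is what I mean: if every node knows the membership list and agrees on a slot counter, each one can compute whose turn it is locally, with no token frame on the wire and no central box. (The shared slot counter is hand-waved here; real token passing uses the token itself instead of synchronized counters, so treat this as a sketch of the idea, not a workable protocol.)

```python
# Sketch of a "virtual token" with nothing passed on the wire:
# round-robin turns over ascending MAC addresses, computed locally.
def current_speaker(macs, slot_index):
    """Every node runs this independently and gets the same answer,
    as long as they agree on the current slot number."""
    order = sorted(macs)
    return order[slot_index % len(order)]

macs = ["aa:01", "cc:03", "bb:02"]
for slot in range(4):
    print(slot, current_speaker(macs, slot))
# 0 aa:01, 1 bb:02, 2 cc:03, 3 aa:01 - turns rotate with no MAU involved
```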
That is what 'automation' often is. You take a working process, then let machines do as many steps in that process as you can: harvesting crops, sending memos, robots spray-painting car parts, self-driving cars (we still have a lot to do there).
Building on that, it gets even more interesting as we try to find better, or even completely new, processes.