The fact that it allowed for all kinds of topologies, and that it served as a bus (shared medium, hence the name 'Ether') rather than a point-to-point link is what I think made the biggest difference.
Of course, now that we all use switched links they are point-to-point again, but an Ethernet 'hub' gave you the same effect as a bus, with all devices seeing all of the traffic. This made efficient broadcast protocols possible and also allowed for a historical but interesting trick: the screw-on taps that you could place on a single coaxial cable running down a department, giving instant access without pulling another cable. Zero network configuration required: just get the tap in place and assign a new address. DHCP later took care of that need as well.
This was fraught with problems; for instance, a single transceiver going haywire could make a whole segment unusable, and good luck finding the culprit. But compared to the competition it absolutely rocked.
To build on your comment, although it's been years since I studied Ethernet in depth...
- (On the bus thread) Ethernet started from an assumption of bad behavior (out-of-spec cabling, misbehaving clients, etc.) and tightened requirements just enough to construct a useful network. A much better balance between de facto ruggedness and performance than its peers.
- From the beginning, Ethernet's designers reasoned that it was cheaper to put logic in purpose-built networking hardware than in endpoints (i.e., PC network adapters). This was a better scaling tradeoff: 1x $$$ network device + 100x $ client adapters vs. 1x $$ network device + 100x $$ client adapters.
- Because of the above, you started to get really cost- and data-efficient networks when the cost of Ethernet switches plummeted. (Remember, in early Ethernet days, networks were hub/broadcast-only!)
Ethernet switches are actually pretty complex things when you think about it. They have to learn which MAC addresses are behind each port, build a forwarding table, and do table lookups in real time. The larger the switch, the more complex it is. It's hard to make that scale.
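In pseudocode terms, the learning/forwarding behavior boils down to something like this Python sketch (the port numbers, aging policy, and flood-everything-unknown rule here are illustrative, not any vendor's implementation; real switches do this in hardware at line rate):

```python
# Minimal sketch of a MAC-learning switch: learn the source MAC on the
# ingress port, then forward known unicast to one port and flood the rest.
# The aging policy and port count are illustrative only.
import time

class LearningSwitch:
    def __init__(self, num_ports, age_seconds=300):
        self.num_ports = num_ports
        self.age_seconds = age_seconds
        self.table = {}  # MAC address -> (port, last-seen timestamp)

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: the source MAC is reachable via the port it arrived on.
        now = time.time()
        self.table[src_mac] = (in_port, now)

        # Age out stale entries so hosts that move get relearned.
        self.table = {m: (p, t) for m, (p, t) in self.table.items()
                      if now - t < self.age_seconds}

        entry = self.table.get(dst_mac)
        if entry is not None:
            port, _ = entry
            return [] if port == in_port else [port]   # filter or forward
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=0))  # unknown dst -> flood [1, 2, 3]
print(sw.handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2))  # learned -> [0]
```

Doing that lookup for every frame at line rate is exactly the part that gets expensive as port counts and speeds grow.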
Around the same era, Myrinet offered switches with higher bandwidth (1.2 Gb/s, if I remember correctly) and higher density at a fraction of the port cost of slower Ethernet switches. This was possible because the Myrinet switches were dumb. The Myrinet network elected a "mapper" that distributed routes to all NICs. The NICs then prepended routing flits to the front of each packet. So to forward a packet to its destination, all a Myrinet switch had to do was strip off and read the first flit, see that it said "exit this hop on port 7", and then forward it on to the next switch. Higher densities were achieved with multiple chips inside the cabinet.
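The source-routing idea is simple enough to show in a few lines of Python (the one-byte-per-hop encoding below is a made-up illustration, not Myrinet's actual flit format):

```python
# Sketch of source routing: the NIC prepends one routing byte per hop (as
# computed by the mapper), and each switch just pops the first byte and
# sends the rest of the packet out that port. Encoding is illustrative.
def nic_send(payload: bytes, route: list[int]) -> bytes:
    """Prepend the per-hop output ports chosen by the mapper."""
    return bytes(route) + payload

def switch_forward(packet: bytes) -> tuple[int, bytes]:
    """Pop the first routing byte; return (output port, remaining packet)."""
    return packet[0], packet[1:]

# Suppose the mapper decided: port 7 on the first switch, port 2 on the next.
pkt = nic_send(b"hello", route=[7, 2])
port_a, pkt = switch_forward(pkt)   # first switch exits on port 7
port_b, pkt = switch_forward(pkt)   # second switch exits on port 2
print(port_a, port_b, pkt)          # 7 2 b'hello'
```

No per-switch tables and no lookups: all the routing intelligence lives in the mapper and the NICs.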
In the mid 2000s we even built what was, at the time, one of the world's largest Ethernet switches using (newer, faster) Myrinet internally, encapsulating the Ethernet traffic inside Myrinet. That product died due to pressure from folks who were our partners but felt threatened by our incredibly inexpensive high-density switches.
Sounds similar to InfiniBand, where each subnet has a subnet manager that calculates routing tables for the entire subnet and assigns 16-bit local identifiers (LIDs), so stations don't need to use the full 16-byte GIDs.
Also, InfiniBand packets are power-of-two sized, making fast switching easier.
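To make the division of labor concrete, here's a toy Python model of that idea (the single-switch topology, GUID values, and table layout are purely illustrative, not how a real subnet manager like OpenSM works):

```python
# Toy model: a subnet manager assigns short 16-bit LIDs to the long GUIDs it
# discovers and programs each switch with a LID -> output-port table, so the
# per-packet work in a switch is a trivial indexed lookup.
def subnet_manager(discovered_guids: list[str]) -> tuple[dict[str, int], dict[int, int]]:
    lid_by_guid = {guid: lid for lid, guid in enumerate(discovered_guids, start=1)}
    # Pretend the end node with LID N hangs off switch port N (one-switch subnet).
    forwarding_table = {lid: lid for lid in lid_by_guid.values()}
    return lid_by_guid, forwarding_table

lids, fwd_table = subnet_manager(["0x0002c9030001aaaa", "0x0002c9030001bbbb"])
dst_lid = lids["0x0002c9030001bbbb"]            # packets carry the short LID
print("switch forwards out port", fwd_table[dst_lid])
```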
At their core, most hardware evolutions seem like optimizing compute:memory:storage:functionality vs the (changing) current state of the art/economy.
When Ethernet was first released, compute was expensive. Made sense to centralize compute (in routers) and make everything else dumb-tolerant.
Now, compute is cheap and plentiful at network-calculating scales and throughput expectations are very high, so it makes sense to burn compute (including in clients) to simplify routing hardware.
Ha! But they delivered that much value (or more), so the market supported the price until supply flooded.
We could do worse for a transformative technology ranking metric than "How overpriced was this when first released?" (significant look at Nvidia cards)
I had a bunch of workloads that quite literally got cut down to about 15% or so of the original runtime (a cluster compressing a whole archive of CDs for a large broadcaster) so I happily paid up. But still... $1000 / port!!
And here I have, sitting next to me, a 48-port gigabit switch that cost 15% of what that 100-megabit switch cost in 1996 or so. IIRC it was one of the first D-Link products I bought; it definitely wasn't great (it ran pretty hot) but it worked well enough. Amazing progress.
For years I was carrying around an Ethernet splitter that would allow me to connect two devices to one Ethernet port. I last used it some 10 years ago in a place without wifi.
Using a splitter is usually a temporary solution, and I am unlikely to be sharing a port with a PoE device. Nor do I care about gigabit speed when the only reason to use a splitter is to make up for missing wifi.
Original DIX Ethernet was standardized by my manager, David Redell of Xerox. It was the bare minimum to do the job: a 6-byte destination station address, a 6-byte source address, a 2-byte packet length and a 2-byte Ethertype field (the latter two were combined for networks with hardware framing), and a 32-bit CRC. NO ARQ in the hardware. It leveraged the move to byte-based memories and small CPUs. It followed the end-to-end principle in system design just about optimally: the most minimal MAC design of all time. EASY TO BUILD UPON AND ENHANCE.
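For the curious, that framing is small enough to write down in a few lines of Python. This sketch builds a DIX-style frame (2-byte EtherType) with zlib's CRC-32; the preamble, the padding to the 64-byte minimum, and the exact on-wire bit ordering of the FCS are glossed over:

```python
# Minimal DIX-style frame: destination, source, EtherType, payload, then a
# 32-bit CRC over everything. Preamble/SFD and minimum-size padding are
# omitted; the FCS byte order on the wire is glossed over.
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    assert len(dst) == 6 and len(src) == 6
    header = dst + src + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload)            # same polynomial as 802.3
    return header + payload + struct.pack("<I", fcs)

frame = build_frame(b"\xff" * 6,                  # broadcast destination
                    b"\x02\x00\x00\x00\x00\x01",  # locally administered source
                    0x0800,                        # IPv4 EtherType
                    b"hello, ether")
print(len(frame), frame.hex())
```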
Ethernet (CSMA/CD) is a protocol that copies human speech patterns. After someone stops speaking, people hear the quiet (carrier sense multiple access / CSMA), wait a very short and randomized amount of time, and begin to speak. If two speakers collide, they hear the collision and shut up (CD - collision detection). They both pick a randomized amount of time to pause before trying again. On the second, third, etc. collision people wait longer and longer before retrying.
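That "wait longer and longer" rule is truncated binary exponential backoff; roughly, in Python (slot time shown for 10 Mb/s Ethernet):

```python
# Truncated binary exponential backoff: after the n-th collision, wait a
# random number of slot times drawn from 0 .. 2^min(n, 10) - 1, and give up
# after 16 failed attempts.
import random

SLOT_TIME_US = 51.2  # one slot time (512 bit times) on 10 Mb/s Ethernet

def backoff_delay_us(collision_count: int) -> float:
    if collision_count > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    k = min(collision_count, 10)
    return random.randrange(2 ** k) * SLOT_TIME_US

for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")
```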
The thing about original Ethernet (1981) is that it wastes 2/3 of the channel, because a highly loaded channel has too many collisions and too many backoffs. But deployment and wiring were expensive, so running a single wire throughout a building was the cheapest possible way to start (enhanced a few years later by thinwire Ethernet and twisted pair, which gave a less bulky cable). The frame design was PERFECT, and within ~10 years people were using Ethernet frames to build switched networks; today only radio networks still use the CSMA approach of original Ethernet.
I was in Xerox SDD in the early 80's. I have lots of memories of dealing with the large coax taps, which were in the ceiling.
I also remember setting up a Star demo at the NCC where someone forgot the coax cable terminators (or was short one terminator?), which was causing signal reflection issues. That was solved by cutting the cable to a precise length to get the demo working.
In the original Ethernet design, routers were not used (because 8-bit processors were too slow and 16-bit processors were just starting to emerge). So the original standard proposed repeaters as the way to extend a large network, and this was a very, very cheap analog way to grow your network. It was quite common to have a whole building or even several nearby buildings on one Ethernet and then a high speed (e.g. 56 Kbps or maybe even a T-1) link to other buildings either nearby or in other cities.
Maybe you know, why isn't the CRC at the end? Then you could stream the packet instead of needing to construct it and then go back to the header to write the CRC.