30 years after it was inserted into the architecture to work around software compatibility bugs, bootloaders still have to fool around with it every time an x86 machine is booted. Lovely.
Well, for one thing, the machines need to maintain 8086 compatibility even though it's horribly outdated. IIRC the actual processor just runs microcode that emulates an x86-compatible system, or something to that effect.
It's sort of like gcc, where layers keep getting put on top of layers and only like five wizards from MIT know how it actually works.
The word "emulation" is uncharitable. Breaking a high-level instruction into micro-operations lets the architecture present a consistent ISA to programmers while optimizing execution for the underlying microarchitecture. Yes, x86 originally did this out of necessity, but it's a little unfair to call it "duct tape". Case in point: ARM sometimes does the same thing, despite being a RISC.

In other words: design your clean, orthogonal RISC ISA however you want; at some point in the future, some processor designer is going to end up translating those instructions into something else.
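To make the micro-op idea concrete, here's a toy sketch (not any real decoder, and the micro-op names are made up): a memory-destination add is one instruction at the ISA level, but internally it cracks into a load, an ALU op, and a store. The programmer-visible instruction never changes; only the internal operations do.

```python
# Toy illustration of micro-op decomposition (hypothetical, not a real decoder).
# A single CISC-style instruction like "add [mem], reg" is cracked into
# simpler internal operations, while the ISA-level instruction stays the same.

def decode(instr):
    """Crack one ISA-level instruction into a list of micro-ops."""
    op, dst, src = instr
    if op == "add" and dst.startswith("["):      # memory-destination form
        addr = dst.strip("[]")
        return [
            ("load",  "tmp", addr),              # tmp <- mem[addr]
            ("add",   "tmp", src),               # tmp <- tmp + src
            ("store", addr,  "tmp"),             # mem[addr] <- tmp
        ]
    return [instr]                               # register-only ops pass through

uops = decode(("add", "[0x1000]", "eax"))
print(len(uops))                                 # one instruction, three micro-ops
```

The point of the sketch: a future implementation can change how `decode` works (fuse ops, split them differently) without touching the ISA contract, which is exactly why "emulation" undersells the design.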
As for layers upon layers of abstractions: welcome to computer science. ;)