Ubuntu and Fedora, though the most common desktop distros, are also notoriously NOT very stable. Fedora is Red Hat's beta-test playground, and Ubuntu is built from a Debian-unstable snapshot. OTOH, many people complain that Debian, Red Hat, or CentOS take too long to pick up the latest features, but that is the price you pay for extreme reliability. Know your needs, choose accordingly.
I just wish there were a middle ground, though, where you don't have to choose between extreme reliability + years of waiting on one side and fast turnaround + notorious instability on the other.
For desktops: Debian testing + pinning as needed to backports, sid, and experimental.
For servers: Debian stable + backports.
That last in particular is vastly preferable on servers to the usual "Oh, we've got to run Red Hat because it's the industry standard, but we're going to have to rip out the guts and install a metric shit-ton of third-party / roll-your-own / nonstandard packages, and then hope that the poor third-generation admin who inherits this cesspit can make heads or tails of it."
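For anyone who hasn't tried the stable + backports combo: it's one extra sources line, and backports are never pulled in automatically, so the base system stays stock. A minimal sketch (the release name "bookworm" is just an example; substitute whatever stable is for you):

```
# /etc/apt/sources.list — add the backports suite for the current stable release
deb http://deb.debian.org/debian bookworm-backports main

# Backports are pinned low by default, so nothing changes until you
# explicitly ask for a package from there:
#   apt-get -t bookworm-backports install <package>
```

That's the whole trick: a vanilla stable box, with newer versions of exactly the handful of packages you actually need them for.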
Debian testing is both reliable enough (I can think of no more than 2 cases in the past 8 years where an upgrade led to an unusable state) and not too far behind the bleeding edge. Or you could use a rolling-release distro such as Gentoo or Arch, or, in a related world, FreeBSD.
The problem with Debian Testing is that once something breaks badly, you usually have to wait a couple of weeks for the fix, because changes need to trickle down from Sid. In my experience, Sid is more useful on a desktop. But it's certainly not for beginners.
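One mitigation, for what it's worth: track testing but keep sid available at a low pin priority, so when a fix is stuck in the migration queue you can cherry-pick the one repaired package by hand instead of waiting. A sketch of /etc/apt/preferences (the priorities are illustrative, but the "below 100 means never auto-install" rule is real):

```
# /etc/apt/preferences — follow testing, allow pulling from sid on request
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 50

# Nothing comes from sid unless you ask:
#   apt-get -t unstable install <package>
```

You need a sid line in sources.list for this to work, of course, and every package you cherry-pick is one more thing to remember at the next dist-upgrade.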
Since my favorite Debian combo was already mentioned, I should mention my usual plan B: CentOS with the CentOS Plus repository enabled on the server. Which flavor of Unix you run on the desktop is irrelevant, although I frequently find myself in dependency hell when working on a Mac. Installing the qtconsole version of IPython is next to impossible.
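For anyone unfamiliar with it, CentOS Plus ships in the stock repo file but disabled; turning it on is a one-line flip (a sketch, assuming the default CentOS-Base.repo layout — check your own file for the exact section):

```
# /etc/yum.repos.d/CentOS-Base.repo — the [centosplus] section
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=1    # ships as 0; set to 1 to enable the repo permanently

# Or enable it for a single transaction without editing the file:
#   yum --enablerepo=centosplus install <package>
```

The repo carries modified/extended packages (the kernel-plus kernel being the best-known example), so it trades a little of stock CentOS's conservatism for features.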