Unlike Base64 or Base32, Base58 has O(N^2) complexity, because it requires iterative division and multiplication on a big integer spanning the whole input. You can't encode a gigabyte of data with Base58 in a reasonable time, but you certainly can with Base64 or Base32.
I thought Base58 ran on 8-byte blocks, since 58^11 is slightly larger than 256^8; that would keep the cost linear in the input size. Then I checked the spec, and blocking is actually not a standard requirement.
In that case, there would be padding left over in every encoded block. The size overhead would weaken the case for Base58, especially if you consider using it for arbitrarily long data.
It seems like a compiler should be able to convert the division into shifts and subtractions.
> u8 divmod 58 can be reduced to a u8->u16 multiply, a right shift, and three conditional subtractions; that's not great, but on a modern CPU it's an afterthought compared to the quadratic loop over the input size.