I agree with that reasoning, but I would word it differently. The economy would probably chug along regardless. The 1% of rich folks would happily colonize the solar system with nanomachines or whatever. The big problem is what happens to the other 99% who only have their labor to sell, if machines depress the price of labor below the minimal cost of sustaining a human body. That's why we need basic income.
People might also get psychological problems due to being idle and irrelevant, I have no idea how to solve those.
And in the long run, we need Friendly AI anyway :-)
One problem I have with basic income whenever I see it mentioned, or discussed in the context you noted: it's always phrased as "we need basic income because bad scenario X will happen". It's never phrased as "if scenario X ever happens, we will have to institute basic income".
The reason I think the distinction is important is that such proponents have yet to conclusively prove, or even argue, that scenario X will ever happen. And since I don't believe such a scenario will happen anywhere near our lifetimes, we can't and shouldn't use such a wild hypothetical as a valid reason for a huge program. To use it as a reason, and claim it as valid, would be quite disingenuous.
Granted, you do phrase it as "[...]what happens[...]if[...]. That's why we need basic income." But you have yet to prove conclusively that the "if" will ever happen.
To be fair, many people think that technological unemployment is already happening, so basic income would be a good idea even today. Many others disagree and call it the "luddite fallacy".
Personally, I think technological unemployment is not a fallacy and we'll see much more of it in our lifetimes. For example, self-driving cars might put a 50-year-old truck driver out of a job. What would you advise him to do, retrain as a neurosurgeon? What if by the time he's finished retraining, neurosurgery is also automated? Even if there were theoretically an endless supply of jobs for humans, the job market always takes nonzero time to react. When technological change becomes quicker than that, we're in trouble.