Well, all you would really need are some easily detectable signals. If "move hand", "move foot", "move eyes", etc. are all easily detectable, then you have the basis for an interface. After that it's just a matter of building the interface around those limitations. It wouldn't be mind reading (not even close), but it seems like you should be able to get a reasonably good UI going.
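To make that concrete, here is a minimal sketch of the idea: treat each reliably detectable intent signal as a discrete input and build the UI around that small vocabulary, the way a switch-access device does. The signal names and actions below are made up for illustration, not taken from any real system.

```python
from typing import Callable, Dict

def select_next() -> None:
    print("highlight next item")

def select_prev() -> None:
    print("highlight previous item")

def activate() -> None:
    print("activate highlighted item")

# Map each detectable signal to one UI action, like a switch-access device.
BINDINGS: Dict[str, Callable[[], None]] = {
    "move_hand": select_next,
    "move_foot": select_prev,
    "move_eyes": activate,
}

def on_signal(name: str) -> None:
    """Dispatch a detected intent signal to its UI action, ignoring unknowns."""
    action = BINDINGS.get(name)
    if action is not None:
        action()

# Example: a detected sequence of intents drives the UI.
for detected in ["move_hand", "move_hand", "move_eyes"]:
    on_signal(detected)
```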
I doubt it. Firstly, you would need a way to distinguish actual movements from intended (command) ones. A sensor on a few muscles might help, but sticking sensors on your skin every morning does not get you closer to a "reasonably good UI".
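For what it's worth, here is one toy illustration of what "a sensor on a few muscles may help" could mean: flag analysis windows whose RMS amplitude exceeds a calibration threshold as deliberate contractions. The signal, sample rate, and threshold below are all synthetic assumptions, not real EMG processing.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                  # assumed sample rate (Hz)
rest = rng.normal(0.0, 0.05, fs)           # 1 s of background activity
twitch = rng.normal(0.0, 0.4, fs // 4)     # 0.25 s deliberate contraction
signal = np.concatenate([rest, twitch, rest])

window = fs // 10                          # 100 ms analysis windows
threshold = 0.15                           # would come from per-user calibration

for start in range(0, len(signal) - window, window):
    rms = np.sqrt(np.mean(signal[start:start + window] ** 2))
    if rms > threshold:
        print(f"command detected at t={start / fs:.2f}s (RMS {rms:.2f})")
```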
Secondly, I am not sure one can learn to think about specific body-part movements almost unconsciously. Chances are this will keep requiring too much of one's attention.
Thirdly, I think the temporal resolution will be awful. Even if you can learn to think about, say, three movements simultaneously, I doubt you will get above a byte per second of bandwidth. Written text carries roughly a bit per character, so that works out to only about eight characters' worth of text per second, which would likely be way below slow speech.
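Here is the back-of-the-envelope version of that comparison. Every number below (number of distinct imagined movements, selection rate, entropy per character, speech rate) is an assumption for illustration, not a measurement.

```python
import math

signals = 3              # assumed number of distinct imagined movements
selections_per_sec = 2.0 # assumed rate at which one can reliably issue a command

bits_per_selection = math.log2(signals + 1)   # +1 for "do nothing"
interface_bps = selections_per_sec * bits_per_selection

entropy_per_char = 1.0      # ~1 bit/char for written English (Shannon-style estimate)
speech_chars_per_sec = 12.0 # ~150 words/min * ~5 chars/word / 60 s
speech_bps = speech_chars_per_sec * entropy_per_char

print(f"interface: ~{interface_bps:.1f} bits/s "
      f"(~{interface_bps / entropy_per_char:.1f} chars/s of text)")
print(f"speech:    ~{speech_bps:.1f} bits/s "
      f"(~{speech_chars_per_sec:.1f} chars/s)")
```

Under these assumptions the movement interface lands at a few bits per second, versus roughly an order of magnitude more for speech.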
Most of this is opinion/guessing, so feel free to correct things.