krishna shenoy has some nice papers that apply information theory to bmis (albeit tied to a T9-style text entry task). either way, they attempt to quantify the bitrate of the bmi in bits per second (which was, back then, quite low... a few bps)
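for reference, the usual back-of-envelope number in that literature is the Wolpaw information transfer rate: bits per selection times selection rate. a quick sketch (function name is just illustrative):

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate in bits/min.

    bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    where N is the number of targets and P the selection accuracy.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per pick
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min
```

e.g. a binary selector at 100% accuracy and 60 picks/min gives 60 bits/min; drop the accuracy and the rate falls off fast, which is why early bmi numbers came out to just a few bps.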
i think it gets rather muddy as there isn't really a good metric for raw signal quality (afaik). there's cell tuning and the number of spiking channels, but still no great measure of snr for bmi work. people will often apply measures to the outputs of their systems, like task performance, but the state model's quality and suitability to the task vary, so it can be difficult to disambiguate signal quality from state-model performance.
(of course, in speech recognition they don't care: the game is to minimize WER and maximize decoding speed, and whether the language model or the acoustics (at least when they were separate) get you there doesn't matter)
>i think it gets rather muddy as there isn't really a good metric for raw signal quality (afaik). there's cell tuning and the number of spiking channels, but still no great measure of snr for bmi work. people will often apply measures to the outputs of their systems, like task performance, but the state model's quality and suitability to the task vary, so it can be difficult to disambiguate signal quality from state-model performance.
This was something I picked up on back in 2017. I did manage to come up with a definition of SNR that made some sense (basically the euclidean distance between symbol means, divided by the noise level along the vector connecting the two symbols, treating the feature space as an N-dimensional QAM constellation using features 1...N instead of amplitude and phase) - but even then that didn't account for the fact that the noise was neither well-approximated by AWGN nor even constant...
And of course, as you said, you could get a bad SNR just because you're extracting the wrong features (although, to be fair, the same problem can exist in telecoms too).
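A minimal numpy sketch of that two-symbol SNR definition (names hypothetical, and baking in exactly the Gaussian/stationary noise assumptions that break down in practice):

```python
import numpy as np

def pairwise_snr(features_a: np.ndarray, features_b: np.ndarray) -> float:
    """SNR between two symbol classes in an N-dim feature space.

    Signal: euclidean distance between the two class means.
    Noise:  std of both classes' samples projected onto the unit
            vector connecting the means (i.e. noise measured along
            the decision-relevant direction).
    Assumes roughly AWGN-like, constant noise.
    """
    mu_a = features_a.mean(axis=0)
    mu_b = features_b.mean(axis=0)
    delta = mu_b - mu_a
    signal = np.linalg.norm(delta)           # distance between symbol means
    u = delta / signal                        # unit vector between the means
    proj = np.concatenate([(features_a - mu_a) @ u,
                           (features_b - mu_b) @ u])
    return signal / proj.std()
```

For a multi-symbol constellation you'd take this over all pairs (e.g. the worst pair, analogous to minimum distance in a QAM constellation).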
hah, it's funny. i come at all of this from a sensorimotor-control view (even though i was very interested in speech and language - for a cs person, discrete stuff is easier to reason about), and while i can appreciate ideas like this and the shenoy lab stuff, when i worked on this in practice we had no discrete symbols: we decoded continuous variables like positions, velocities, angles and torques, which didn't seem to have clean mappings onto comms / noisy-channel / info theory.
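for context, a typical continuous decode in that setting is a kalman filter mapping binned spike counts to kinematics. a toy sketch (all matrices and dimensions made up, not any particular lab's decoder):

```python
import numpy as np

def kalman_decode(spike_bins, A, W, H, Q):
    """One-pass Kalman filter decoding latent kinematics x_t
    (e.g. 2D cursor velocity) from binned spike counts y_t.

    x_t = A x_{t-1} + w,  w ~ N(0, W)   (state / kinematics model)
    y_t = H x_t + q,      q ~ N(0, Q)   (observation / tuning model)
    """
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    out = []
    for y in spike_bins:
        # predict step: propagate state and uncertainty forward
        x = A @ x
        P = A @ P @ A.T + W
        # update step: correct with this bin's neural observation
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(n) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

note there's no alphabet anywhere: the "channel" is a continuous linear-gaussian mapping, which is part of why the discrete noisy-channel framing never felt like a natural fit.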