We can already get full 3D pose estimation from Wi-Fi. Whether that's a good thing is a separate topic, but there's a recent paper[1] which also has a poster page[2] with a YouTube video[3] embedded on it. The video's audio quality is poor, however; there's a lot of echo.
This general line of work is what the comment from transpute[4] seems to have been implying. There's also a prior body of work on this which I'm not really familiar with.
No sane person would prefer a camera-based system at home over one built on this existing technology (see other comments).
These systems may make sense in environments where cameras are already installed, but not at home...
What are the challenges with this implementation? Assuming inexpensive depth cameras, wouldn't networking multiple cameras throughout the living space (to ensure full coverage) be feasible?
So this is an area where I can speak from experience. I was previously employed as the ML research scientist for a startup that designed and implemented an mmwave radar-based solution for the use case of seniors in independent living situations. That company fell apart for reasons unrelated to the technical side, unfortunately.
What we found is that seniors had essentially no desire to wear any sensors, which rules out the wearable inertial sensors mentioned in the paper. Also, as others have mentioned, a sensor that captures visual images is a nonstarter due to regulatory privacy protections as well as privacy concerns on a personal level.
I'd add one other set of challenges that is unfortunately never covered in the academic literature - non-ideal rooms for monitoring signals. The papers show empty conference rooms with line-of-sight between sensors and people, but real settings are much messier. Not only is there furniture to block or distort signals, but also many sources of noise like fans, metal objects, open windows (which cause breezes to move curtains and other objects), pets, visitors, etc. Not to mention the unique room configurations for every person. We overcame several of these challenges but didn't develop perfect answers for many of them.
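To make the clutter problem concrete: a common first step in radar pipelines is suppressing reflections from stationary objects so that only moving targets (people) remain. Here's a minimal sketch of exponentially weighted background subtraction on range profiles; the function name, array shapes, and the `alpha` value are illustrative assumptions, not our actual pipeline.

```python
import numpy as np

def remove_static_clutter(frames, alpha=0.05):
    """Suppress reflections from stationary objects (furniture, walls)
    by subtracting a slowly adapting background estimate.

    `frames` is a (num_frames, num_range_bins) array of radar range
    profiles. `alpha` is an illustrative adaptation rate.
    """
    background = frames[0].astype(float).copy()
    cleaned = np.empty(frames.shape, dtype=float)
    for i, frame in enumerate(frames):
        cleaned[i] = frame - background
        # Slowly absorb drifting clutter (e.g. a curtain settling into a
        # new position) while preserving fast motion from a person.
        background = (1 - alpha) * background + alpha * frame
    return cleaned
```

The catch, as noted above, is that fans, breeze-moved curtains, and pets are *not* static, so a background subtractor alone doesn't remove them; that's where the harder, room-specific work begins.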
It was a fun position that gave me a weirdly specific set of knowledge that isn't always transferable on a technical level but was still great, and I wish it could have lasted longer. I'd be happy to share more info though if you're curious!
> What we found is that seniors had essentially no desire to wear any sensors, which rules out the wearable inertial sensors mentioned in the paper.
This may well change over time as new waves of seniors become more gadget-friendly, or with cultural differences. I've got two seniors around here who thought their fall-detector watches were a great idea, and paid for them despite finding them very expensive (with dumb phones and no need for the latest computers, these are the most expensive gadgets apart from TVs they have ever owned). Or maybe it was because one got stuck in a bush for half an hour until someone found them. Opinions can change after the first fall, or the hip replacement, or even the cataract surgery.
I've thought about the cultural-change aspect too, and I agree with you. Younger generations are more used to wearing smartwatches and carrying phones, so the appetite for wearable fall- and activity-detection devices will be much higher in the future.
2. Perceived loss of independence ("I don't want a baby monitor!")
However, even a phone accelerometer in the pocket seems to yield useful, if imperfect, gait data[1]. If it's on the wrist, then it's enough for Apple to use it for fall detection[2].
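The core idea behind accelerometer fall detection is simple enough to sketch: a fall typically shows up as a near-free-fall dip in acceleration magnitude followed shortly by a high-g impact spike. This is a minimal illustration, not Apple's algorithm; the thresholds and window size are placeholder assumptions, and production systems add orientation, post-impact stillness checks, and ML classifiers on top.

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=20):
    """Flag a fall when a near-free-fall dip is followed shortly by a
    high-g impact. `samples` is a sequence of (ax, ay, az) tuples in
    units of g. Thresholds are illustrative, not tuned values.
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az)
            for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:
            # Look for an impact spike within the next `window` samples.
            if any(v > impact_g for v in mags[i + 1:i + 1 + window]):
                return True
    return False
```

A resting device reads about 1 g of magnitude, so ordinary walking stays well inside both thresholds; the dip-then-spike pattern is what distinguishes a fall from, say, setting the phone down hard.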
My understanding is that you might have something fairly effective if you:
1. Made a smartwatch look like the wristwatches seniors are used to and enjoy (no annoying screen, etc.)
2. Integrated Wi-Fi sensing data like I mentioned elsewhere in this thread[3]
3. Found and added any features the seniors perceive as useful (there might not be any, however)
I know the regulatory and privacy hurdles for using Wi-fi radar data in a health-related device are formidable. However, there's a clear use case for it: detecting stroke-related movement abnormalities.
The paper on phone gait tracking I mentioned earlier[1] seemed to show signal-quality issues for the side of the body opposite the phone. Yes, using Wi-Fi data probably requires an additional device to combine it with data from the watch. However, "BE FAST" is an acronym used in patient education on stroke care because response time is critical[4].
Wi-Fi or another radar-type device (like the one we developed) is the way to go. That approach got around people not wanting, or simply forgetting, to wear a device, but it had its own technical challenges.
A sensor-fusion approach is a great option, and wearables plus some radar-based system seems like a best-of-all-worlds solution, if people will actually use the wearables. Another big bonus you get from wearables is vitals detection, although I implemented a couple of over-the-air vitals-detection algorithms with our radar device that were sometimes very reliable.
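A minimal sketch of what that fusion could look like at the decision level, assuming each sensor emits a fall-confidence score in [0, 1]: weight the scores when the wearable is worn, and fall back to the radar alone when it isn't. The function name, weights, and threshold are all hypothetical placeholders for illustration.

```python
def fuse_fall_scores(wearable_score, radar_score,
                     wearable_worn=True,
                     w_wearable=0.6, w_radar=0.4, threshold=0.5):
    """Combine per-sensor fall confidences (each in [0, 1]) into one
    decision. If the wearable isn't being worn, rely on the radar
    alone. Weights and threshold are illustrative placeholders.
    """
    if not wearable_worn:
        return radar_score > threshold
    fused = w_wearable * wearable_score + w_radar * radar_score
    return fused > threshold
```

The fallback branch is the point: it captures exactly the scenario above where people forget or refuse to wear the device, so the radar keeps coverage while the wearable, when present, sharpens the decision and adds vitals.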