
Wow, eye tracking is not something I'd thought of... and now I want it.

I wonder if we could replace the mouse with eye tracking? I wouldn't expect it to be accurate enough, though, given the micro-movements that eyes make... and their generally erratic motion... but I'd love to be wrong.



Eye tracking is useful if you can, or want to, sit in front of a desk. I'm concerned about the lack of diversity in eye-tracking manufacturers: Tobii is the only commercial brand I'm aware of or that Talon supports, and initial setup requires Windows (I don't know whether recalibration also requires Windows).

I haven't used eye tracking, but I'd imagine that commands could be given in the short time that an on-screen element is focused... and that the rest of the time the cursor jumps around erratically.


Talon's eye tracking functions as a mouse replacement. Is there a specific demo you'd like to see? I can record one.


I've been researching eye tracking for my own project for the past year. I have a Tobii eye tracker, which is probably the best consumer eye-tracking device currently available (or really the only one). It's much more accurate than trying to repurpose a webcam.

So the core problem with eye tracking is what's called the "Midas touch" problem: everything you look at is potentially a target. If you were to simply connect your mouse pointer to your gaze, for example, any sort of hover effect on a web page would be activated simply by glancing at it. [1]
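To make that failure mode concrete, here's a minimal Python sketch of the naive coupling. pyautogui is a real input-automation library, but get_gaze_point() is a hypothetical stand-in for whatever the tracker's SDK actually exposes:

    import pyautogui  # real library; moveTo(x, y) warps the OS cursor

    def get_gaze_point():
        # Hypothetical stand-in for a tracker SDK call returning gaze (x, y) in pixels.
        raise NotImplementedError

    # Naive coupling: every raw gaze sample becomes a cursor position,
    # so anything the user merely glances at receives hover events --
    # the Midas touch problem in one loop.
    while True:
        x, y = get_gaze_point()
        pyautogui.moveTo(x, y)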

Additionally, our eyes are constantly making small movements called saccades [2]. If you track eye movement exactly, the target will wobble all over the screen like mad. The ways to alleviate this are to expand the target visually, so that the small movements are contained within a "bubble", or to delay the target slightly so the movements can be smoothed out, which naturally introduces inaccuracy and latency. [3] There are efforts to predict the eye's movements to give the user the impression of lower latency, but it's an imperfect solution.
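For illustration, here's a minimal smoothing sketch (an exponential moving average; alpha is a made-up tuning knob, not anything from Tobii's SDK). It shows the trade-off directly: a small alpha suppresses saccadic jitter but lags behind real eye movement, while a large alpha tracks quickly but wobbles:

    class GazeSmoother:
        # Exponential moving average over gaze samples.
        def __init__(self, alpha=0.2):
            self.alpha = alpha        # 0 < alpha <= 1; smaller = smoother but laggier
            self.x = self.y = None

        def update(self, raw_x, raw_y):
            if self.x is None:        # first sample: no history to smooth against
                self.x, self.y = raw_x, raw_y
            else:
                self.x += self.alpha * (raw_x - self.x)
                self.y += self.alpha * (raw_y - self.y)
            return self.x, self.y

Adaptive filters such as the One Euro filter vary the smoothing with signal speed, so slow fixations get heavy smoothing while fast movements stay responsive, which recovers some of the lost latency without needing prediction.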

Another issue is gaze activation. Computers can't read our minds, so systems that require one to stare fixedly at an object in order to activate an interface ("dwell" activation) are common. The problem with this is both the delay and the effort required: you can easily get a headache from trying to fixate your eyes on a target. Eye tracking in VR and AR has similar problems.
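Here's a sketch of the dwell idea, with made-up thresholds (DWELL_SECONDS and RADIUS_PX are assumptions, not values from any shipping product). Note the inherent tension: a longer dwell means fewer accidental activations but more of that fixation effort:

    import time

    DWELL_SECONDS = 0.8   # assumed activation delay
    RADIUS_PX = 40        # how far gaze may wander and still count as one fixation

    class DwellDetector:
        def __init__(self):
            self.anchor = None    # centre of the current fixation
            self.start = None     # when the fixation began (None once consumed)

        def update(self, x, y):
            # Returns True exactly once per completed dwell.
            now = time.monotonic()
            if self.anchor is None or self._dist(x, y) > RADIUS_PX:
                self.anchor, self.start = (x, y), now   # gaze moved: restart the timer
                return False
            if self.start is not None and now - self.start >= DWELL_SECONDS:
                self.start = None   # consume the dwell so it fires only once
                return True
            return False

        def _dist(self, x, y):
            ax, ay = self.anchor
            return ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5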

There are other forms of activation - if you open your iPhone's accessibility menu in the settings, you'll see a bunch of options including head nods, facial gestures, eye blinks and more. [4]

The future of eye tracking is definitely multimodal. A specific gaze target combined with a gesture or hotword is the way humans naturally interact with each other (you look at a person, get confirmation through eye contact or a nod, and then speak or gesture). What's amazing is the amount of redundant effort being made in this area. Some of this stuff has been known for a decade or more, and there are thousands of research papers and patents covering the topic in great detail. There is very little that hasn't already been solved.
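To illustrate that division of labor, here's a sketch where gaze continuously answers "where" and a hotword answers "when". Both callbacks are hypothetical stand-ins; neither is a real Tobii or speech-engine API:

    # Multimodal activation sketch: gaze supplies the target, voice supplies the intent.

    latest_gaze = (0, 0)

    def on_gaze_sample(x, y):
        # Called by the tracker for every sample; we only remember the latest point.
        global latest_gaze
        latest_gaze = (x, y)

    def on_hotword(word):
        # Called by a speech recognizer when a command word is heard.
        if word == "click":
            x, y = latest_gaze          # gaze answers "where", the word answers "when"
            perform_click(x, y)

    def perform_click(x, y):
        print(f"click at ({x}, {y})")   # placeholder for a real input-injection call

Because activation comes from the hotword rather than from the gaze itself, this sidesteps both the Midas touch problem and the strain of dwell fixation.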

1. https://uxdesign.cc/the-midas-touch-effect-the-most-unknown-...

2. https://en.m.wikipedia.org/wiki/Saccade

3. https://help.tobii.com/hc/en-us/articles/210245345-How-to-se...

4. https://support.apple.com/accessibility



