Probably couldn’t have chosen 2 worse people to demonstrate the range of human facial expressions


If you haven't gotten very far with the "photorealistic" part, maybe you couldn't have chosen 2 better people for the demo?


It’s to be expected with beta products. Emotions have been on the Zuckerberg 1.0 roadmap since the beginning. I don’t know about Fridman, though.


Mark Zuckerberg testifying in Congress reminds me of the Star Trek movie First Contact, where Data, starting to feel anxious around the Borg, disables his emotion chip.


Honestly, though, he handled that extremely well and made it backfire.

Congressional hearings are purely for political grandstanding. The low seat was countered with a cushion, and the dumb questions were answered with direct, unemotional answers: "We sell ads, Senator." Nothing came out of the entire process except a few politicians with egg on their faces.


"Data, there are times when I envy you"


My impression is that the device isn't able to track all of the face's subtle movements, so the avatars come across as relatively expressionless. For example, I noticed that Lex's and Mark's eyebrows don't seem to move as much as you might expect given the emotions communicated by their voices. I assume this is either because the device literally restricts the movement of the eyebrows (perhaps they're pressed down under the headband) or because it just isn't able to track them that well.
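
For what it's worth, headset face tracking is commonly structured as estimating per-channel "blendshape" weights (in the spirit of FACS action units) that then drive the avatar mesh. Here's a rough Python sketch of that idea; the channel names, the clamping, and the smoothing step are illustrative assumptions, not Meta's actual pipeline or API. It also shows why a channel the sensors can't see well (e.g. brows under the headband) would simply come through flat:

    from dataclasses import dataclass

    # Hypothetical expression channels; real systems track dozens.
    CHANNELS = ["brow_raise_left", "brow_raise_right", "blink_left",
                "blink_right", "jaw_open", "cheek_puff"]

    @dataclass
    class FaceFrame:
        """One frame of tracked expression weights, each in [0, 1]."""
        weights: dict

    def clamp_frame(raw: dict) -> FaceFrame:
        """Clamp noisy sensor estimates into the valid [0, 1] range.

        Channels with no sensor signal (occluded brows, say) default
        to 0.0, which would produce exactly the muted eyebrow motion
        seen in the demo.
        """
        return FaceFrame({c: min(1.0, max(0.0, raw.get(c, 0.0)))
                          for c in CHANNELS})

    def blend(prev: FaceFrame, cur: FaceFrame, alpha: float = 0.3) -> FaceFrame:
        """Exponential smoothing across frames to suppress jitter.

        Too much smoothing also dampens fast, subtle movements --
        another plausible source of flat-looking expressions.
        """
        return FaceFrame({c: (1 - alpha) * prev.weights[c] + alpha * cur.weights[c]
                          for c in CHANNELS})

    if __name__ == "__main__":
        prev = clamp_frame({})
        noisy = {"blink_left": 1.2, "brow_raise_left": 0.05}  # sensor noise / occlusion
        print(blend(prev, clamp_frame(noisy)).weights)

Under those assumptions, either failure mode (no signal on a channel, or heavy smoothing) would show up as exactly the kind of under-animated brows people are noticing.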


Such a negative, ad hominem comment. The technology is breathtaking, and the two people have each contributed so much.


I thought Fridman was just a podcast guy? Not that that's quite nothing, but there are a lot of podcast guys.


Fridman is a rabbit hole; you'll find detractors, defenders, and slanderers.

The debate has already been had, so I'll just link it:

https://news.ycombinator.com/item?id=32348302


https://en.wikipedia.org/wiki/Lex_Fridman

Lex Fridman is a Russian-American computer scientist, podcaster, and writer. He is an artificial intelligence researcher at the Massachusetts Institute of Technology, and hosts the Lex Fridman Podcast, a podcast and YouTube series.


Lex Fridman has also done original research on robotics and computer-vision detection of facial expressions. Here is one of his papers; there are several others in related areas.

https://ieeexplore.ieee.org/abstract/document/8751968/


That's an inadequate description.


It was a tongue-in-cheek joke referencing a meme.

You were meant to chuckle at it, not take it seriously.


They also made the same joke in the interview itself.


It's not a range-test demo. It's a real conversation between real people who aren't prone to melodrama.

As Lex mentions in the video, it's the subtleties that make all the difference. I'm astonished by the accuracy of the blinking, mouth movements, subtle cheek variations, etc. It seems more accurate than the realtime feed from my webcam. The only thing I wouldn't like about it is having to wear a headset in order to experience it.


Sure, but from a technical point of view I don't think the range of human facial expressions is that wide anyway. It's just movements of muscles.


Hi Mark


I Did Not Hit Her. I Did Not.



