Mark Zuckerberg testifying before Congress reminds me of the scene in Star Trek: First Contact where Data, starting to feel anxious around the Borg, disables his emotion chip.
Honestly, though, he handled it extremely well and made it backfire.
Congressional hearings are purely political grandstanding. The low seat was countered with a cushion; the dumb questions were met with direct, unemotional answers: "We sell ads, Senator." Nothing came out of the entire process except a few politicians with egg on their faces.
My impression is that the device can't track all of the face's subtle movements, so the avatars come across as relatively expressionless. For example, I noticed that Lex's and Mark's eyebrows don't seem to move as much as you'd expect given the emotions in their voices. I assume this is either because the device physically restricts eyebrow movement (perhaps the brows are pressed down under the headband) or because it just can't track them that well.
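If you wanted to sanity-check the eyebrow hypothesis from the couch, one rough approach is to measure how much your own brows move on an ordinary webcam, with and without the headset on. Here's a minimal sketch using MediaPipe's Face Mesh; the specific landmark indices (105 for the left brow, 159 for the upper eyelid, 10 and 152 for face height) are my assumptions about reasonable points on the standard 468-point mesh, not anything from the video or the headset's own tracking.

```python
# Rough sketch: track the left brow-to-eyelid gap, normalized by face height,
# as a proxy for visible eyebrow motion. Landmark indices are assumptions.
import cv2
import mediapipe as mp

BROW, EYE_TOP, FOREHEAD, CHIN = 105, 159, 10, 152  # assumed mesh indices

cap = cv2.VideoCapture(0)
ratios = []
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while len(ratios) < 300:  # roughly 10 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        lm = res.multi_face_landmarks[0].landmark
        face_h = abs(lm[CHIN].y - lm[FOREHEAD].y)   # normalizes for camera distance
        brow_eye = abs(lm[EYE_TOP].y - lm[BROW].y)  # brow-to-eyelid gap
        ratios.append(brow_eye / face_h)
cap.release()

# A wider min-max spread means more visible brow movement; if the headband
# really pins the brows down, you'd expect this range to shrink noticeably.
if ratios:
    print(f"brow motion range: {max(ratios) - min(ratios):.4f}")
```

Raise and relax your eyebrows while it runs; a clearly smaller range with the headset on would support the "pressed down under the headband" explanation over a pure tracking limitation.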
Lex Fridman is a Russian-American computer scientist, podcaster, and writer. He is an artificial intelligence researcher at the Massachusetts Institute of Technology and hosts the Lex Fridman Podcast, which runs as both a podcast and a YouTube series.
Lex Fridman has also done original research on robotics and on computer-vision detection of facial expressions. Here is one of his papers; there are several others in related areas.
It's not a range test demo. It's a real conversation with real people who aren't prone to melodrama.
As Lex mentions in the video, it's the subtleties that make all the difference. I'm astonished by the accuracy of the blinking, the mouth movements, the subtle cheek variations, and so on. It seems more accurate than the real-time feed from my webcam. The only thing I wouldn't like is having to wear a headset to experience it.
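Blink fidelity is actually one of the easier things to quantify, which makes the webcam comparison testable: the classic trick is the eye aspect ratio (EAR) from Soukupová and Čech, which drops sharply whenever the eye closes. A minimal sketch below counts blinks from a webcam; the six left-eye landmark indices are an assumed mapping of the EAR points onto MediaPipe's Face Mesh, and the threshold would need tuning per camera and face.

```python
# Sketch: count blinks from a webcam via the eye aspect ratio (EAR).
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it dips sharply during a blink.
import math
import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # assumed p1..p6 on the Face Mesh
EAR_THRESHOLD = 0.20                      # tune per camera/face

def ear(lm, idx):
    p = [(lm[i].x, lm[i].y) for i in idx]
    return (math.dist(p[1], p[5]) + math.dist(p[2], p[4])) / (2 * math.dist(p[0], p[3]))

cap = cv2.VideoCapture(0)
blinks, closed = 0, False
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    for _ in range(600):  # roughly 20 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        e = ear(res.multi_face_landmarks[0].landmark, LEFT_EYE)
        if e < EAR_THRESHOLD and not closed:
            closed, blinks = True, blinks + 1  # eye just closed: one blink
        elif e >= EAR_THRESHOLD:
            closed = False
cap.release()
print(f"blinks counted: {blinks}")
```

Running something like this against your own webcam feed and eyeballing how often it misses or double-counts gives a feel for why "more accurate than my webcam" is a plausible claim for dedicated inward-facing headset cameras.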