I tried something like that two decades ago with a 640x480 monocular strapped to my sunglasses; with interpolation I could use a 1024x768 resolution, in combination with an ARM-based Pocket PC with USB host support and CompactFlash video-out.
I used it for reading and 'fast' offline Wikipedia/TR3 database search, with both a FrogPad and a Twiddler2 plus some voice commands.
Seeing the foreground and depth wasn't really the problem: the monocular screen on my left eye 'merged' semi-transparently with the scene due to brain processing. I assume this is a bit worse on the XReal.
The main issue was that when walking, your head traces a slight sine wave up and down relative to the foreground. You don't normally notice this, but with a paragraph of text or code positioned in front of your eyes it becomes very distracting.
One solution is to use a mode of transport that doesn't involve moving up and down slightly, for example a bicycle or a car.
In both cases, the latter being the most problematic, it's not advisable safety-wise (or is even illegal), and a screen-reader solution is better. I had the idea of using Emacspeak for this, or doing a smart Speakup-style echo from the command line.
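As a rough illustration of that command-line echo idea, here's a minimal Python sketch that runs a command and reads its output aloud. It assumes the espeak CLI is installed and on PATH; the function names are just for illustration, and any TTS engine that reads stdin would do:

    # Minimal "speaking echo" sketch: run a command, print its output,
    # and pipe the same text to espeak, which reads from stdin when no
    # text argument is given.
    import subprocess
    import sys

    def speak(text: str) -> None:
        subprocess.run(["espeak"], input=text, text=True)

    def run_and_speak(cmd: list[str]) -> None:
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout, end="")
        speak(result.stdout)

    if __name__ == "__main__":
        # e.g. python speak_echo.py ls -l
        run_and_speak(sys.argv[1:])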
Another solution is RSVP (rapid serial visual presentation), or combining RSVP with text-to-speech.
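RSVP just flashes one word at a time at a fixed position, so your eyes never have to track a bouncing paragraph. A minimal terminal sketch in Python (the 300 wpm default is an arbitrary example):

    # Flash one word at a time at a fixed words-per-minute rate.
    import sys
    import time

    def rsvp(text: str, wpm: int = 300) -> None:
        delay = 60.0 / wpm  # seconds per word
        for word in text.split():
            # \r returns to the start of the line; padding clears the previous word
            sys.stdout.write("\r" + word.ljust(24))
            sys.stdout.flush()
            time.sleep(delay)
        sys.stdout.write("\n")

    if __name__ == "__main__":
        rsvp("RSVP shows one word at a time so your eyes never have to move")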
Samsung DeX is great though; Motorola, Huawei and recent stock Android have desktop-mode support too (if the phone supports video-out).
The latest Xreal glasses provide 3DoF tracking natively in hardware. Your eyes would then perceive the screen as stationary even as you (and it) inevitably "bounce" during motion.
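A toy sketch of why 3DoF helps: the renderer counter-rotates the screen by the tracked head orientation, so it stays put when you turn your head, but 3DoF doesn't track translation, so the vertical bob still carries the whole view with it. This is a generic illustration, not Xreal's actual pipeline:

    # Generic 3DoF compensation: counter-rotate the screen by the head's
    # tracked yaw/pitch so it appears world-fixed under rotation.
    # Translation (the walking bounce) is not tracked, so it cannot be
    # compensated.
    def screen_view_angles(anchor_yaw: float, anchor_pitch: float,
                           head_yaw: float, head_pitch: float) -> tuple[float, float]:
        """Direction (degrees) to draw the screen in view space."""
        return anchor_yaw - head_yaw, anchor_pitch - head_pitch

    # Turn your head 10 degrees right: the screen is drawn 10 degrees left,
    # so it appears anchored straight ahead in the world.
    print(screen_view_angles(0.0, 0.0, 10.0, 0.0))  # (-10.0, 0.0)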
I still don't recommend walking around with them on while reading.
I'll also +1 Samsung DeX. It really is amazing to have a desktop-like experience with just glasses on, and it feels properly cyberpunk.
That's so cool, thanks for the detailed reply. I agree that walking around with a big bouncy screen in front of you is probably not going to be very ergonomic. Still, I can't let the idea go completely. Going RSVP + TTS might indeed be a more viable approach; certainly something to test with the current wave of AI/LLM agents.
I do remember a website a long time ago about a guy walking around with a setup like this. It seemed to involve lead-acid batteries and a lot of weight.
It was really interesting back then, when desktops were as big as large pizza boxes.