Hacker News | peterept's comments

I got excited about this a few years ago when I was into digital pinball. I built an open source project called Phonemote using an iPhone to track eyes and relay telemetry over WiFi/BT and a plugin for FuturePinball.

However the result wasn’t that useful because we humans have 2 eyes.

The 3d effect is very compelling if you close/cover one eye.

But that becomes annoying quickly.

The best result I had was smearing a little hair gel in the center of my left eyeglass lens. Then it felt like I was using 2 eyes, but really my right eye was seeing the pinball table clearly and fooling my brain.


There is a Taiwanese company that has developed a sandwiched lenticular screen protector for phones that should fix this. It looks amazing, in my opinion. You should try it out with your project.


Oh yes, at the time I ordered a sheet, but it didn't align with my display panel. I should try that again.


Do you have a link to that? Sounds cool.


This is the one I saw:

https://www.optiqb.com/


Back in the day of LAN parties Descent was our favorite game.


Autopilot is not on all Teslas, as it was a paid upgrade until mid-2019. My M3 does not have Autopilot!


Thanks. Fixed.

I didn’t realize early M3s didn’t have it.


To avoid memory allocations, if you can modify the source string in place, an alternative is to return std::vector<char*> and replace the separators in the string with '\0'.

Of course, as that post suggests, use reserve() so the vector itself allocates as few times as possible. (In my strsplit call I pass the capacity hint in as an optional parameter so each caller can tune it.)


That's just strtok, and programming C++ as if you were an unreformed C programmer is always a mistake. If you want to avoid copying the strings, use string_view. We also have std::views::split, std::ranges::split_view, etc.
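For illustration, here is a non-destructive zero-copy split using std::string_view (C++17; the C++20 std::views::split range adaptor mentioned above achieves the same lazily). The function name is hypothetical; the returned views point into the caller's string, which must outlive them:

```cpp
#include <cstddef>
#include <string_view>
#include <vector>

// Split `s` on `sep` without copying or mutating the input: each element
// of the result is a view into the original character data.
std::vector<std::string_view> split_sv(std::string_view s, char sep) {
    std::vector<std::string_view> parts;
    std::size_t pos;
    while ((pos = s.find(sep)) != std::string_view::npos) {
        parts.push_back(s.substr(0, pos));  // token before the separator
        s.remove_prefix(pos + 1);           // skip past the separator
    }
    parts.push_back(s);                     // final token (may be empty)
    return parts;
}
```

Unlike the '\0'-overwriting trick, this keeps the input pristine for later diagnostics, at the cost of tokens not being NUL-terminated C strings.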


To prevent any misunderstanding: programming C using strtok is a mistake too.


As a general rule, don't overwrite your input data. That's a hardcore space optimization that can end up making your program slower. At a minimum it will lead to headaches later on. Are you sure you won't need the pristine input for diagnostics and error messages later on? Do you always have at least one separator character to overwrite in the first place?


Wouldn’t the SDK be the same as Google Cardboard's, but with the distortion disabled and the output mirrored?

Use ARKit/ARCore for tracking.

A few lines of code and you are all set.


I was under the impression there’s stereo vision, plus additional touch controls.


The Cardboard SDK already does stereo vision.

Looking at the tech specs, there are no touch controls. Input is either CV hand tracking or motion controls sent from an Apple Watch. Basically they are using Apple’s SDKs for everything.


Not a very original or clever form factor. At least design it more like the Mira Prism (https://www.mirareality.com/), where the phone position is closer to your head and there's a large open FOV. I get that they want to expose the phone's back camera for tracking - but use some fancy mirrors or something?

Ultimately this is very Google Cardboard-like and passive, so $129 is quite expensive.

I built a fun toy like this in the early 2000s for $10 using foam board and a $5 sheet of teleprompter glass.

Ultimately anything like this has failed to capture the market because people just don’t want to have their phones out of reach and/or risk their battery using the camera/tracking.


Same happened to me!


I developed NewtFTP because wireless cards were just starting to appear, the PM100D cellular cards and the early 802.11 WiFi. FTP was the most popular way at that time to distribute application packages and files.

It felt like living in the future, walking down the street in ‘96 with my Newton and accessing the internet!


I was worried about that too - because I loved the (apparent) simplicity of using Apple maps from my phone... until I actually got the Tesla and found voice navigation is way easier and flawless. (And in the very occasional situation I have an address on my phone now I can just share it to the Tesla). I don’t miss CarPlay (at least for the features I used).


I run a multiplayer VR team platform where we run a copy of Unity3d on a Linux cloud server and load the same Unity Asset Bundle on the server as on the clients ("the current world"). I chose this method because:

(1) it followed the Unity3d-documented approach using the UNET server and client architecture

(2) it meant we get to simulate the entire scene on the server (physics, meshes, networked objects, etc.)

We boot the instances on demand when a team wants to meet in VR.

Sounds like Unity's new T&Cs mean this architecture is dead in the water... unless authorized by Unity3d. :-(

