
I think they've gone about this the wrong way, architecturally. I would have streamed draw commands to the watch and received touch events back, with the watch hardware essentially being a display client with no compute power of its own. That would cut silicon real estate, remove the need for local wifi, etc - and place the development focus on a super low latency wireless command stream. That way, the watch, as a product, would last much longer between upgrades, and your UI complexity would be bound by the host phone, not the little SoC.
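Something like this is what I have in mind - a tiny command vocabulary streamed down from the phone, and touch events coming back. All the names here (DrawCommand, TouchEvent, WatchLink) are made up for illustration; it's a sketch, not any real API:

    import Foundation

    // Hypothetical wire vocabulary: the phone streams these down,
    // the watch only has to decode and blit.
    enum DrawCommand {
        case clear(r: UInt8, g: UInt8, b: UInt8)
        case fillRect(x: Int16, y: Int16, w: Int16, h: Int16, r: UInt8, g: UInt8, b: UInt8)
        case drawText(x: Int16, y: Int16, text: String)
    }

    // Touch events flow the other way.
    struct TouchEvent {
        let x: Int16
        let y: Int16
        let isDown: Bool
    }

    // Stand-in for whatever low-latency radio link carries the two streams.
    protocol WatchLink: AnyObject {
        func send(_ commands: [DrawCommand])
        var onTouch: ((TouchEvent) -> Void)? { get set }
    }

    // Phone-side usage: all layout and app logic lives here.
    func renderPayScreen(over link: WatchLink) {
        link.send([
            .clear(r: 0, g: 0, b: 0),
            .fillRect(x: 20, y: 60, w: 100, h: 32, r: 40, g: 120, b: 255),
            .drawText(x: 30, y: 68, text: "Pay")
        ])
        link.onTouch = { event in
            if event.isDown { print("tapped at \(event.x),\(event.y)") }
        }
    }

The watch then only needs enough silicon to decode that vocabulary and scan the digitizer; everything interesting stays phone-side.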


It would also make it impossible to untether the watch from the phone in future revisions, which seems likely to be the plan for the longer term, once technology improves enough to allow adding cellular connectivity without unacceptable size and battery life compromises. Here's someone's take on it:

http://stratechery.com/2014/now-apple-watch/


See wiremine's post here:

"The Watch app resides on the user’s Apple Watch and contains only storyboard and resource files; it does not contain any code. "


With regard to the WatchKit apps - I'm sure I read they consist of static resources - essentially a GUI library that runs on the watch. That's subtly different from being able to render arbitrary commands from a draw stream, say from CoreAnimation. The former gives you a storyboarded GUI built from a resource set (sliders, buttons, etc.) with static animated images (!); the latter is more flexible and I think would be less resource-intensive watch-side (rendering display commands vs a local GUI library).
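For reference, that split shows up in what a WatchKit controller looks like - the class lives in the extension (which runs on the phone under WatchKit 1.0), while the bundle on the watch only carries the storyboard and image resources it refers to. Rough sketch, class name made up:

    import WatchKit

    class StatusController: WKInterfaceController {
        // Wired to an element defined in the watch-side storyboard.
        @IBOutlet var statusLabel: WKInterfaceLabel!

        override func awake(withContext context: Any?) {
            super.awake(withContext: context)
            // No drawing happens here; setter calls are serialized over to
            // the watch, which applies them to the prebuilt storyboard UI.
            statusLabel.setText("Updated from the phone")
        }
    }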

If you wanted to go super-minimal, you could probably even stream a compressed bitmap to a screen of that resolution over a wireless protocol, moving all rendering phone-side.
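A back-of-the-envelope version of that just renders the whole frame phone-side and ships it compressed; sendToWatch here is a placeholder for whatever transport actually carries the bytes:

    import UIKit

    // Render the full watch frame on the phone, compress it, hand it to the radio.
    func pushFrame(size: CGSize, sendToWatch: (Data) -> Void) {
        let renderer = UIGraphicsImageRenderer(size: size)
        let frame = renderer.image { ctx in
            UIColor.black.setFill()
            ctx.fill(CGRect(origin: .zero, size: size))
            // ... draw the rest of the UI with CoreGraphics here ...
        }
        // At watch-sized resolutions a moderately compressed JPEG per frame stays small.
        if let payload = frame.jpegData(compressionQuality: 0.6) {
            sendToWatch(payload)
        }
    }

The trade-off, of course, is radio bandwidth and latency per frame versus the much smaller command stream above.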

There's also still the issue of the native watch apps coming in 2015.


This would introduce lag in user interactions, which is unacceptable to Apple and its users.



