We don't use NTP, but in robotics, for stereo camera synchronization we often want the two frames to be within ~10us of each other. For sensor fusion we then also need the lidar's PTP time to be translated into the same clock domain as the cameras, for which we also need <~10us.
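Roughly, the translation boils down to estimating the offset between the two clock domains and applying it to the lidar timestamps. A minimal sketch, assuming the camera timestamps live on CLOCK_MONOTONIC and the lidar timestamps live on the PTP-disciplined CLOCK_REALTIME (the clock IDs and the simple sandwich-style offset estimator are illustrative, not our actual pipeline):

```python
import time

def estimate_offset_ns(samples=9):
    """Estimate (realtime - monotonic) by sandwiching one clock read
    between two reads of the other and keeping the tightest sample."""
    best = None
    for _ in range(samples):
        t0 = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
        rt = time.clock_gettime_ns(time.CLOCK_REALTIME)
        t1 = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
        uncertainty = t1 - t0                 # read-to-read window
        offset = rt - (t0 + t1) // 2          # midpoint pairing
        if best is None or uncertainty < best[1]:
            best = (offset, uncertainty)
    return best  # (offset_ns, uncertainty_ns); want uncertainty well under 10 us

def lidar_to_camera_domain(lidar_ts_ns, offset_ns):
    """Translate a PTP/realtime lidar timestamp into the camera clock domain."""
    return lidar_ts_ns - offset_ns

offset_ns, window_ns = estimate_offset_ns()
print(f"offset={offset_ns} ns, sandwich window={window_ns} ns")
```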
We actually disable NTP entirely (run it once per day or at boot) to avoid clocks jumping while recording data.
> We actually disable NTP entirely (run it once per day or at boot) to avoid clocks jumping while recording data.
This doesn't seem right to me. NTP with default settings should be monotonic, so no jumps. If you disable it, Linux enters 11-minute mode, IIRC, and that may not be monotonic.
Pedantically, a monotonic function need not have a constant first derivative. To take it further, in mathematics it is accepted for a monotonic function to have a countable number of discontinuities, but of course in the context of a digital clock that only increments in discrete steps, that's of little bearing.
But that's all beside the point, since most sane time sync clients (regardless of protocol) generally handle small deviations (i.e. normal cases) by speeding up or slowing down the system clock, not jumping it (forward or backward).
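A toy illustration of slewing versus stepping (not any particular daemon's algorithm, and the 500 ppm cap is just a typical figure): instead of stepping a 100 ms error away and creating a discontinuity, the clock runs slightly slow until the error is absorbed, staying monotonic the whole way.

```python
# Toy simulation: the local clock is 100 ms ahead of true time.
# Rather than stepping it back (timestamps would go backwards), run it
# 500 ppm slow until the error is gone.
MAX_SLEW = 500e-6          # typical cap: 500 parts per million
error = 0.100              # seconds the local clock is ahead

true_time, clock_time = 0.0, error
dt = 1.0                   # simulate one-second ticks
while clock_time - true_time > 1e-6:
    true_time += dt
    clock_time += dt * (1 - MAX_SLEW)   # runs slightly slow, never backwards

print(f"error absorbed after ~{true_time:.0f} s of slewing")  # ~200 s
```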
You are correct: NTP prefers to jump first (if needed) and then slew afterwards (which is exactly what we want!), although it can jump again if the offset gets too large.
In our case the jumps were because we also have PTP disciplining the same system clock; when you have both PTP and NTP fighting over the same clock, you will see jumping with the default settings.
For us it was easier to just do a one-time NTP sync at the beginning/boot, and then sync the robot's local network with only PTP afterwards.
In a low-precision environment, to avoid sudden jumps, I used SetSystemTimeAdjustment on Windows (now SetSystemTimeAdjustmentPrecise) to smoothly steer the system clock to match the GPS-supplied time signal.
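A rough sketch of that approach via ctypes (illustrative only: the helper name and the example slew amount are mine, the call needs SeSystemtimePrivilege, and SetSystemTimeAdjustmentPrecise is the same idea with finer-grained 100 ns units):

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32  # Windows only

def nudge_clock(extra_100ns_per_tick):
    """Run the system clock slightly fast or slow instead of stepping it.

    Each timer interrupt normally advances the clock by `increment`
    (in 100 ns units); asking Windows to add a little more or less per
    tick slews the clock smoothly toward the reference (e.g. GPS) time.
    """
    adjustment = wintypes.DWORD()
    increment = wintypes.DWORD()
    disabled = wintypes.BOOL()
    if not kernel32.GetSystemTimeAdjustment(ctypes.byref(adjustment),
                                            ctypes.byref(increment),
                                            ctypes.byref(disabled)):
        raise ctypes.WinError()
    new_adjustment = increment.value + extra_100ns_per_tick
    if not kernel32.SetSystemTimeAdjustment(new_adjustment, False):
        raise ctypes.WinError()

# e.g. nudge_clock(10) runs the clock ~1 us fast per tick (~64 ppm with the
# default 15.6 ms tick); pass bTimeAdjustmentDisabled=True later to restore
# the system's default adjustment.
```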
Late to the party here, but you should definitely be using PyTorch 25.09 (or whatever is latest when you go to check) rather than 24.10. That's a year-old PyTorch on new hardware; I suspect a lot of these bugs have been fixed.
Also interesting that, in the example shared, o3 thought for 5 seconds for the female case and 46 seconds for the male case. Wish we had access to the chain of thought.
Robotics is sorely lacking front-end devs and UI/UX people. Most companies won't hire any until they are already pretty established, but once they are, they might hire some. You can check out companies like Foxglove, who make GUI tools that robotics companies then pay to use.
I still think there is value in chats and retaining context. But there is also value in starting clean when necessary. Giving users control and teaching people how to use it is the way IMO.
The problem with retaining context is that it gets polluted. That pollution gets you into a latent space with errors, which is probably not where you want your next-token prediction to be sourced from.
The reasonable alternative is a chat interface that lets you edit any text, whether the AI's responses or your own prompts, and regenerate from any point. This is why I use the API "playground" interfaces or something like LibreChat. DeepSeek at least has prompt editing/regeneration.
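Under the hood that's just editing the message list and resending the truncated prefix. A minimal sketch, assuming an OpenAI-compatible chat completions API (the model name and client setup are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any compatible endpoint works

history = [
    {"role": "user", "content": "Summarize the PTP vs NTP discussion."},
    {"role": "assistant", "content": "...an answer that went off the rails..."},
    {"role": "user", "content": "Now compare with GPS disciplining."},
]

def regenerate_from(history, index, new_content=None):
    """Optionally rewrite the message at `index`, drop everything after it,
    and regenerate the next assistant turn from that clean prefix."""
    trimmed = [dict(m) for m in history[: index + 1]]
    if new_content is not None:
        trimmed[index]["content"] = new_content
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=trimmed)
    trimmed.append({"role": "assistant",
                    "content": reply.choices[0].message.content})
    return trimmed

# Edit the first prompt and branch from there, discarding the polluted turns.
history = regenerate_from(history, 0, "Summarize the PTP vs NTP discussion in 3 bullets.")
```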
1. Share a cohesive and inspiring vision for the project.
2. Understand your skills, strengths/weaknesses, etc. and try to give you work that challenges you, helps you grow, or is interesting.
I think these are rare and can be hard to do (I'm now trying to do it myself!), but when it happens it's very motivating.