Hacker News

Yes to this. Furthermore:

- you can solve neural networks in analytic form with a Hodge star approach* [0]

- if you use a picture to set your network's initial weights, you can see visually how much your choice of optimizer actually moves the weights: non-dualized optimizers look like they barely change anything, whereas dualized Muon changes the weights so much that you cannot recognize the original picture [1]

*unfortunately, this is exponential in memory
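The contrast in [1] can be illustrated with a small numpy sketch (the 64x64 Gaussian "gradient" is an illustrative stand-in, not the actual Modula setup): Muon-style dualization replaces the gradient with the nearest orthogonal matrix, setting every singular value of the update to 1, so it perturbs all directions of an image-initialized weight matrix equally rather than concentrating the change in a few dominant directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gradient standing in for the backprop gradient of a weight matrix
# whose initial values were set from a picture (as in [1]).
G = rng.standard_normal((64, 64))

def dualize(G):
    # Muon-style dualization: project the gradient onto the nearest
    # orthogonal matrix U @ Vt, i.e. set every singular value to 1.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

raw_svals = np.linalg.svd(G, compute_uv=False)
dual_svals = np.linalg.svd(dualize(G), compute_uv=False)

# The raw gradient's energy is concentrated in a few singular directions,
# so a plain gradient step leaves most of the image initialization intact.
print("raw gradient sval spread (max/min):", raw_svals.max() / raw_svals.min())

# The dualized update has uniform singular values, so it perturbs every
# direction equally, which is why it erases the recognizable picture faster.
print("dualized sval spread (max/min):", dual_svals.max() / dual_svals.min())
```

(In practice Muon approximates this orthogonalization with a Newton-Schulz iteration rather than a full SVD, but the SVD version shows the idea.)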

[0] M. Pilanci — From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity https://arxiv.org/abs/2309.16512

[1] https://docs.modula.systems/examples/weight-erasure/

Thanks for the explanations and the great links!


