I was watching his lectures and saw this post while taking a break. He was talking about the ups and downs in the history of neural nets. As far as I understand from these lectures, we're on the verge of a new up phase: neural nets are meaningful when they are large and deep, and training such nets is becoming feasible, though not immediately.

