Another small correction: every instance of "inference" in your comment should probably be replaced with "training." It's the training phase that involves running gradient descent of various flavors to optimize the network parameters.
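
For concreteness, a minimal sketch of one training step (gradient descent with PyTorch; the tiny model and data here are invented for illustration):

    import torch

    # Toy data and a one-layer model (both hypothetical).
    x = torch.randn(32, 4)
    y = torch.randn(32, 1)
    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step: forward pass, loss, backward pass, update.
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()  # autograd computes gradients here -- training only
    opt.step()       # parameters move along the negative gradient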


This is an important point.

Inferencing means “to predict” (I’m not sure when this terminology became popular; a few years ago most of us were just using the word “predict”).

Once trained, a model no longer requires derivatives. It’s more or less a function evaluation, which can be done on plain CPUs.
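
To illustrate (a sketch, assuming the weights were already learned elsewhere): once the parameters are fixed, inference is just a forward evaluation, here in plain NumPy with no autograd at all:

    import numpy as np

    # Hypothetical pre-trained parameters for a tiny two-layer network.
    W1, b1 = np.random.randn(4, 8), np.zeros(8)
    W2, b2 = np.random.randn(8, 1), np.zeros(1)

    def predict(x):
        # Pure function evaluation: matrix multiplies and a ReLU.
        # No derivatives, no GPU required.
        h = np.maximum(x @ W1 + b1, 0.0)
        return h @ W2 + b2

    y = predict(np.random.randn(32, 4))  # shape (32, 1)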


It's from statistical inference, where the goal is to find the values of a model's parameters that best fit the sample. So if the model is y = f(x, params), inference gives you params, and prediction gives you y for a value of x you haven't seen before.
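
A sketch of that distinction with an ordinary least-squares fit (the data are made up for illustration):

    import numpy as np

    # Sample drawn from a hypothetical true line y = 2x + 1, plus noise.
    x = np.linspace(0, 1, 50)
    y = 2 * x + 1 + 0.1 * np.random.randn(50)

    # Inference (statistics sense): estimate params from the sample.
    X = np.column_stack([x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(X, y, rcond=None)  # roughly [2, 1]

    # Prediction: evaluate the fitted model at an unseen x.
    x_new = 1.5
    y_pred = params[0] * x_new + params[1]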

Also, shouldn't the verb be inferring?


More than that, inference usually refers to acts of decision making or evidence evaluation, like testing hypotheses or interpreting confidence/credible intervals.
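
For example (a sketch; the sample here is invented), a one-sample t-test and its confidence interval are "inference" in this sense:

    import numpy as np
    from scipy import stats

    sample = np.random.normal(loc=5.2, scale=1.0, size=40)  # hypothetical data

    # Hypothesis test: is the population mean 5.0?
    t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

    # 95% confidence interval for the mean, built from the t distribution.
    m, se = sample.mean(), stats.sem(sample)
    half = se * stats.t.ppf(0.975, df=len(sample) - 1)
    ci = (m - half, m + half)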


Except in AI terminology[1], it seems that inference doesn't mean outputting params. It means outputting y.

Yes, I believe the verb is "inferring".

[1] https://blogs.nvidia.com/blog/2016/08/22/difference-deep-lea...


You're probably right then. I find AI nomenclature to be a bit of a mess.



