
The killer feature is presumably inference at the edge, but I don't see that being used on desktop much at all right now.

Especially since most desktop applications people use are web apps. Of the native apps people use that leverage this sort of thing, almost all are GPU accelerated already (e.g. image and video editing AI tools).



What does “at the edge” mean here?


Not using AI in the cloud. So if your connection is unreliable, or you want to use your bandwidth for something else, like video conferencing or gaming. Probably the killer app is something that wants to use AI but doesn't involve paying a cloud provider. I was talking to a vendor about a chat bot they built to put into MMOs or mobile games. It would be killer to have a character hold a lifelike conversation in those kinds of experiences. But the last thing you want to do is increase your server costs the way that kind of AI would. Edge computing could solve that.
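
A rough sketch of what that could look like on the player's machine, assuming the game bundles a small model behind a llama.cpp-style server that exposes an OpenAI-compatible API on localhost (the port, model name, and helper function are placeholders, not anyone's actual product):

    // NPC dialogue generated entirely on the player's machine:
    // no cloud round trip, no per-message server cost.
    interface ChatMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    async function npcReply(persona: string, playerLine: string): Promise<string> {
      const messages: ChatMessage[] = [
        { role: "system", content: `You are ${persona}. Stay in character; keep replies short.` },
        { role: "user", content: playerLine },
      ];

      // Local endpoint served by the bundled model runtime (placeholder port and model name).
      const res = await fetch("http://localhost:8080/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "local-npc-model", messages, max_tokens: 80 }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // e.g. await npcReply("a weary blacksmith", "Heard any rumors lately?")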


Edge is doing computing on the client (browser, phone, laptop, etc.) instead of the server.


Half the definitions of "edge" I see include client devices, and half don't.

I like the latter. Why even use a new word if it's just going to be the same as "client"?


I'm guessing "the edge" is doing inference work in the browser, etc. as opposed to somewhere in the backend of the web app.

Maybe your local machine can run, I don't know, a model to make suggestions as you're editing a Google Doc, which frees up the Big Machine in the Sky to do other things.

As this becomes more technically feasible, it reduces the effective cost of inference for a new service provider, since you, the client, are now running their code.
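A minimal in-browser sketch of that idea using transformers.js (the model and task here are just illustrative assumptions, not anyone's actual feature):

    // Runs a small model in the visitor's browser via WASM/WebGPU,
    // so the service provider pays nothing for this inference call.
    import { pipeline } from "@xenova/transformers";

    // Downloads a quantized model once, caches it, then runs locally.
    const generator = await pipeline("text-generation", "Xenova/distilgpt2");

    const output = await generator("One way to phrase this more clearly is", {
      max_new_tokens: 20,
    });
    console.log(output); // [{ generated_text: "..." }]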

The Jevons paradox might kick in, opening up LLM use cases that were too expensive before.


Not in the cloud.



