Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.

I’m sure there’s exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc just seems foolish.

Anyone having success with local LLMs and browser use?



The primary risk with these browser agents is prompt injection attacks. Running it locally doesn't help you in that regard.


True, I wasn’t thinking very deeply when I wrote this comment… local models indeed are prone to the same exploits.

Regardless, giving a remote API access to a browser seems insane. Having had a chance to reflect, I’d be very wary of providing any LLM access to take actions with my personal computer. Sandbox the hell out of these things.


If each LLM sessions is linked to the domain and restricted just like how we restrict cross domain communication, this problem can be solved? We can have a completely isolated LLM context per each domain.


I'm not sure how running inference locally will make any difference whatsoever? or do you also mean hosting the MCP tools it has access to?


I imagine local LLMs are almost as dangerous as remote ones as they're prone to the same type of attacks.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: