Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.
I’m sure there are exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc. just seems foolish.
Anyone having success with local LLMs and browser use?
True, I wasn’t thinking very deeply when I wrote this comment… local models are indeed prone to the same exploits.
Regardless, giving a remote API access to a browser seems insane. Having had a chance to reflect, I’d be very wary of giving any LLM the ability to take actions on my personal computer. Sandbox the hell out of these things.
If each LLM session is linked to a domain and restricted, just as we restrict cross-domain communication, couldn’t this problem be solved? We could have a completely isolated LLM context per domain.
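Roughly what that could look like, as a minimal sketch: one conversation history keyed by origin, so content (and any injected instructions) from one site never ends up in the context used for another. `LlmClient` and its `complete` method are placeholders for whatever local inference call you actually use, not a real SDK.

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LlmClient {
  // Placeholder for a local inference backend; not any real API.
  complete(messages: Message[]): Promise<string>;
}

class PerOriginAgent {
  // One isolated conversation context per origin.
  private contexts = new Map<string, Message[]>();

  constructor(private llm: LlmClient) {}

  // Page content is only ever appended to the context keyed by its own origin,
  // so cross-origin content never mixes in a single LLM session.
  async handlePage(url: string, pageText: string, task: string): Promise<string> {
    const origin = new URL(url).origin;
    const history = this.contexts.get(origin) ?? [
      { role: "system", content: `You may only act on content from ${origin}.` },
    ];
    history.push({ role: "user", content: `Task: ${task}\n\nPage:\n${pageText}` });
    const reply = await this.llm.complete(history);
    history.push({ role: "assistant", content: reply });
    this.contexts.set(origin, history);
    return reply;
  }
}
```

It doesn’t stop a malicious page from injecting instructions into its own origin’s context, but it would at least keep that injection from reaching sessions tied to other sites.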