Because the article shows it isn't Gemini that is the issue, it is the tool calling. When Gemini can't get to a file (because it is blocked by .gitignore), it then uses cat to read the contents.
I've watched this with GPT-OSS as well. If the tool blocks something, it will try other ways until it gets it.
How can an LLM be at fault for something? It is a text prediction engine. WE are giving them access to tools.
Do we blame the saw for cutting off our finger?
Do we blame the gun for shooting ourselves in the foot?
Do we blame the tiger for attacking the magician?
The answer to all of those things is: no. We don't blame the thing doing what it is meant to be doing no matter what we put in front of it.
It was not meant to give access like this. That is the point.
If a gun randomly goes off and shoots someone without someone pulling the trigger, or a saw starts up when it’s not supposed to, or a car’s brakes fail because they were made wrong - companies do get sued all the time.
But the LLM can't execute code. It just predicts the next token.
The LLM is not doing anything. We are placing a program in front of it that interprets the output and executes it. It isn't the LLM, but the IDE/tool/etc.
So again, replace Gemini with any Tool-calling LLM, and they will all do the same.
When people say ‘agentic’ they mean piping that token to various degrees of directly into an execution engine. Which is what is going on here.
And people are selling that as a product.
If what you are describing was true, sure - but it isn’t. The tokens the LLM is outputting is doing things - just like the ML models driving Waymo’s are moving servos and controls, and doing things.
It’s a distinction without a difference if it’s called through an IDE or not - especially when the IDE is from the same company.
That causes effects which cause liability if those things cause damage.
Because it misses the point. The problem is not the model being in a cloud. The problem is that as soon as "untrusted inputs" (i.e. web content) touch your LLM context, you are vulnerable to data exfil. Running the model locally has nothing to do with avoiding this. Nor does "running code in a sandbox", as long as that sandbox can hit http / dns / whatever.
The main problem is that LLMs share both "control" and "data" channels, and you can't (so far) disambiguate between the two. There are mitigations, but nothing is 100% safe.
Sorry, I didn't elaborate. But "completely local" meant not doing any network calls unless specifically approved. When llm calls are completely local you just need to monitor a few explicit network calls to be sure.
The LLM cannot actually make the network call. It outputs text that another system interprets as a network call request, which then makes the request and sends that text back to the LLM, possibly with multiple iterations of feedback.
You would have to design the other system to require approval when it sees a request. But this of course still relies on the human to understand those requests. And will presumably become tedious and susceptible to consent fatigue.