> What’s this “it” are you talking about, exactly?
Orion's code.
LLMs facilitate the attribution-free pillaging of open-source code. This creates a prisoner's dilemma for anyone in a competitive context: anything you build will be used by others at your cost. That was technically true in the past, too. But humans tried to honor open-source licenses, and open-source projects maintained license credibility by occasionally suing to enforce their terms. LLMs make no such attempt. And the AI companies have been given no incentive to prevent vibe coders from violating licenses.
It's a dilemma I'm glad Kagi is taking seriously, and one the open-source community needs to start litigating around before it gets fully normalised. (It may already be too late. I could see this Congress legislating in favour of the AI companies over open source organisations.)
> Most code is just some complicated plumbing, not some valuable algorithmic novelty. And this plumbing is all about context it lives in
Sure. In this case, it's a WebKit browser running on Linux. Kagi is eating the cost to build that. It makes no sense for them to do that if, as soon as they have a stable build, (a) some rando uses Claude to copy their code and sell it as a competitor or (b) Perplexity straight up steals it and repackages it as their own.
You don’t need an LLM to copy their code wholesale. Copying and rebranding (plus some vendor adaptations) is a valid concern that I have already agreed about, but for the third time: it’s not what they wrote. It has nothing to do with codebase management.
And taking individual pieces may sound problematic as an abstract concern, but have you ever tried to port code from one FLOSS codebase into a different one? Especially UI code, if it’s not some isolated, purportedly reusable component? Maybe Orion's developers are wizards who wrote exceptionally portable and reusable code, but in my experience it’s usually a very painful process, where you’re constantly fighting conceptual mismatches from different design choices. And I’ve yet to see an LLM that can do architectural refactoring without messing things up badly. So I’m generally skeptical of such statements. That’s why I’m suggesting we pick a concrete example to analyze, because doing this at the highly abstract “the whole code” level is not going to succeed.