It can be as simple as "you cannot train on someone's work for commercial use without a license," or as complex as setting up a Spotify-like model based on the number of times the LLM draws on those works in what it generates. The devil's in the details, but the problem itself isn't new.
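To make the Spotify comparison concrete, here's a rough sketch of what a pro-rata split could look like; the rights-holder names, reference counts, and pool size are all hypothetical, and real royalty models would obviously be far messier:

```python
# Hypothetical sketch of a Spotify-style pro-rata payout: each rights holder's
# share of a royalty pool is proportional to how often the model's outputs
# drew on their work. All numbers below are made up for illustration.

def pro_rata_payouts(reference_counts: dict[str, int], royalty_pool: float) -> dict[str, float]:
    """Split royalty_pool proportionally to per-rights-holder reference counts."""
    total = sum(reference_counts.values())
    if total == 0:
        return {holder: 0.0 for holder in reference_counts}
    return {holder: royalty_pool * count / total
            for holder, count in reference_counts.items()}

if __name__ == "__main__":
    counts = {"Author A": 1200, "Author B": 300, "Publisher C": 4500}  # hypothetical counts
    print(pro_rata_payouts(counts, royalty_pool=10_000.00))
    # Author A: 10_000 * 1200 / 6000 = 2000.00, and so on for the others
```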
>Dividing equal share based on inputs would require the company to potentially expose proprietary information.
I find this defense ironic, given that so much of this debate revolves around defining copyright infringement. The works being trained on get infringed, but heaven forbid we reveal too many details about the tech used to siphon up all these IPs? Tragic.
>Do you know if the openAI lawsuits have laid this out?
IANAL, but my understanding is that the high-profile cases are leaning toward "you can't train on this" litigation rather than the "how do we set up a payment model" sort. If that's correct, we're pretty far from considering that.