One of the largest communities on Lemmy is [email protected], so I’m not really surprised that there are people who don’t care about copyright :)
On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?
if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing?
Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.
Said human presumably would have to purchase or borrow a book in order to read it
Borrowing a book from a library doesn’t earn the author any more profits each time it’s lent out, I don’t think. My local library just buys books off Amazon.
What if I read the CliffsNotes and make my own summary based on that? What if I read someone else’s summary and reword it? I think that’s more like what ChatGPT is doing - I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof LibGen or ZLib is being used to train it.
My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.
Training datasets should be ethically sourced through opt-in programs, which some companies, such as Adobe, are already running.
My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.
How can one prove that their content is being used to train the LLM though, rather than something that’s derivative of their content like reviews of it?
Borrowing a book from a library doesn’t earn the author any more profits

Authors do get money from the libraries that buy the books, and in some places they even get paid based on how often a book is checked out.
How can one prove that their content is being used to train the LLM though

There is already plenty of evidence that they have scraped copyrighted art and photographs for their datasets.