I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.
You can be both pro-AI and pro-advancement and still respect creators’ intellectual property rights, and the right not to have all their content stolen by megacorporations and used to generate profits while decimating entire industries.
I agree. This technology doesn’t exist in a vacuum. This isn’t some utopia where a human artist can just focus solely on creating their art and not worry about financial gain because their survival needs are always guaranteed to be met.
One of the largest communities on Lemmy is [email protected], so I’m not really surprised that there are people here who don’t care about copyright :)
On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?
Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.
Borrowing a book from a library doesn’t earn the author any more profit each time it’s lent out, I don’t think. My local library just buys books off Amazon.
What if I read the CliffsNotes and make my own summary based on that? What if I read someone else’s summary and reword it? I think that’s more like what ChatGPT is doing - I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof LibGen or ZLib is being used to train it.
Authors do get money from libraries that buy the books, and in some places (for example, through public lending right schemes) they even get paid based on how often a book is checked out.
My main point is that if people don’t want their content used to train LLMs, they should absolutely have the option to keep it out of the training data.
Training databases should be ethically sourced through opt-in programs, which some companies, such as Adobe, are already running.
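For illustration, here’s a minimal sketch of what opt-in sourcing could look like in a training pipeline. The schema and field names are hypothetical, not Adobe’s or anyone’s actual system; the point is just that consent is recorded per work and enforced by default:

```python
from dataclasses import dataclass

@dataclass
class Work:
    """One candidate training document (hypothetical schema)."""
    author: str
    text: str
    opted_in: bool = False  # explicit creator consent, off by default

def build_training_corpus(works: list[Work]) -> list[str]:
    """Keep only works whose creators explicitly opted in."""
    return [w.text for w in works if w.opted_in]

corpus = build_training_corpus([
    Work("A. Author", "My novel...", opted_in=True),
    Work("B. Writer", "My essay..."),  # never opted in, so excluded
])
assert corpus == ["My novel..."]
```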
How can one prove that their content is being used to train the LLM, though, rather than something derivative of their content, like reviews of it?
There is already plenty of evidence that they have scraped copyrighted art and photographs for their datasets.
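For what it’s worth, researchers do have one (imperfect) way to probe this: membership-inference style tests, which check whether a model assigns suspiciously high likelihood (low perplexity) to exact passages from a work compared to paraphrases of them. A rough sketch, assuming the Hugging Face transformers library; the model name and passages are placeholders:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal language model works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Mean next-token surprise: memorized text tends to score low."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # next-token cross-entropy
    return math.exp(loss.item())

# A large gap between an exact passage and a same-length paraphrase is
# (weak, circumstantial) evidence the exact text was in the training set.
print(perplexity("exact passage lifted verbatim from the book..."))
print(perplexity("a reworded paraphrase of that same passage..."))
```

That said, it’s circumstantial evidence at best, which is part of why these disputes end up in court.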
Exactly this. It’s the equivalent of me taking a movie, putting on a paid screening of it, and then being displeased when the creators demand an explanation.
It’s more like reading a book and then charging people to ask you questions about it.
AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t let people put up barriers that will keep out all but the ultra-wealthy.
No, it’s really nothing like reading at all. Your example requires a human element. This is just the consumption of data, not reading.
Humans are the ones making these models. It’s not entirely the same thing, but you should read this article by the EFF.
I don’t think that it is even remotely close to being the same thing. I’m sorry, but we shouldn’t be affording companies the ability to profit off other people’s creations without their consent, regardless of how current copyright law works.
Acting as though a human writing a summary is the same thing as a vast network of computers processing data hundreds if not thousands of times faster than a human is foolish. Perhaps it is also foolish to try to apply our current copyright laws (which already favour large corporations rather than individual creators) to this slew of new technology, but simply ignoring the fundamental difference between the two is no way to go about it. We need copyright reform, we need protections for creators, and we need to stop acting as though machine learning algorithms are remotely comparable to humans in their capabilities, responsibilities, and rights.
There is a perfectly reasonable way of doing this ethically: use content that people have provided to the model of their own volition, with consent either volunteered or paid for, not content scraped from an epub, regardless of whether you bought it or downloaded it from LibGen.
There are already companies training machine learning models ethically in this manner, and if creators do not want their content used as training data, it should not be.
But when the answers aren’t original thoughts but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.
When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work over which they own copyright, and the output replicates the copyrighted material, that’s copyright infringement.
LLMs can provide original output, but they can also make errors. You’d have to prove it meets the grounds for plagiarism, and to my knowledge no one’s been able to. It’s all been claims with no substance or merit so far.
An LLM can’t make something original; it can only make something derivative. But that derivative work isn’t the same as when a human makes a derivative work, because a human isn’t writing each word or phrase based on the likely “correct” next word or phrase through an algorithmic process. What humans do is orders of magnitude more complex, though it can at times also involve accidental or intentional plagiarism.
In short, an LLM’s output is necessarily a string of preexisting human inputs. A human’s output, while it can be informed by and reference other human inputs, doesn’t have to replicate preexisting human inputs and can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.
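To make the “fancy predictive text” claim concrete: stripped of scale, generation really is a loop that repeatedly samples a next token from a probability table. A toy sketch with invented probabilities (a real model computes them from billions of learned parameters, but the loop is the same):

```python
import random

# Toy next-token table; the probabilities are made up for illustration.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "book": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 1.0},
    "book": {"ended": 1.0},
}

def generate(start: str, max_tokens: int = 3) -> str:
    """Repeatedly sample the next token given only the previous one."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:
            break  # no known continuation for this token
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"
```

Whether scaling that loop up produces something qualitatively different is exactly what this thread is arguing about.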
You’re making a hasty generalization here, namely by making sweeping claims without evidence or examples. Also, you’re begging the question by assuming that humans are more original than LLMs, again without providing any support or justification.
Take, for example, this study that found doctors preferred Med-PaLM’s output to human doctors’. If “Everything is a Remix” holds, there’s no reason LLMs can’t meet the minimum criteria for creativity, especially absent any evidence to the contrary.
I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only create from what it’s been given. I suppose it can generate a string of characters and assign a definition to it, but that’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but to argue they’re the same radically oversimplifies how humans work. While we can program an LLM, we literally do not have the capability to replicate a human brain.
For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past, not just what it has learned but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things that we aren’t even consciously aware of, and these are things an LLM doesn’t have.
The study you’re citing is from Google, so there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, and then have a human confirm it.
For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m unclear on whether an LLM can even cite how it derived its output, because the developers haven’t revealed whether there’s any sort of derivation log.
No, it’s more like checking out every book from the library, spending 450 years training at the speed of light, and being evaluated on how well you can exactly reproduce the next part of any snippet taken from any book.
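That’s a fair description of the standard pretraining objective: the model is scored on predicting each next token of a text snippet, and training minimizes that error. A minimal sketch of the loss, with random tensors standing in for a real model’s outputs and a real tokenized snippet:

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50_000, 8
# Stand-in for the model's scores per vocabulary item at each position.
logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)
# Stand-in for the true next tokens of a snippet from some book.
targets = torch.randint(0, vocab_size, (1, seq_len))

# Cross-entropy measures "how well you reproduce the next part of the
# snippet"; training repeatedly nudges parameters to lower it.
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
loss.backward()
```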
Eventually the bad actors are going to lose a lot of money litigating over their theft of people’s art. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.