Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs (github.com)
145 points by reconnecting 8 hours ago
TFNA 7 hours ago
I’m a researcher who has for years been scanning my library’s holdings in my particular discipline for my own use, but also uploading the books to the shadow libraries for everyone else’s benefit. The revelation that LLMs are training on the shadow libraries has made me put a lot more effort into ensuring my scans are well-OCRed. The idea that I could eventually ask ChatGPT or whatever about obscure things in my field, and get useful output (of the "trust but verify" sort), is exciting.
lelanthran 27 minutes ago
> The idea that I could eventually ask ChatGPT or whatever about obscure things in my field, and get useful output (of the "trust but verify" sort), is exciting.
That's your idea, not the one they are going with.
Their idea is that you pay a fee to access any information that was freely available.
Your idea is the tearing down of fences; their idea is gatekeeping. The two ideas are incompatible.
BrenBarn 6 hours ago
How about the idea that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question, while the library itself has lost funding?
roenxi an hour ago
1. Being offered a service you would pay a lot of money for is a step forward. When people pay a large amount of money for something, that means they wanted the thing more than the money. The link between ChatGPT and libraries being under threat seems a bit weak, too.
2. The Chinese have been investing a lot into free models, which are perfectly good and keep improving despite the best efforts of the US. They're even ramping up production of their own hardware. Gemma 4 is pretty snappy too. There doesn't seem to be much of a moat here; my guess is there will be perfectly good local models if you want to avoid AI companies.
cheschire 39 minutes ago
BugsJustFindMe 6 hours ago
Library funding is a political stance that has only imaginary connection to whether people pay to ask things of ChatGPT. People can pay to talk to an AI and also government can fund libraries.
bakugo 3 hours ago
spoaceman7777 6 hours ago
Free, downloadable AI models have consistently caught up to ChatGPT within 3 months, for almost a year now.
I highly encourage you to go and update your priors.
roygbiv2 2 hours ago
woctordho 5 hours ago
A digital library needs almost no funding. With today's decentralized networking infrastructure, such as BitTorrent and IPFS, I bet it can just exist forever.
x-complexity 4 hours ago
tardedmeme 4 hours ago
TFNA 6 hours ago
Some people might have to pay a large amount of money to ask a commercial LLM, but advances in this space mean that if I have the data myself on my own computer, or can download it from a shadow library, I might eventually be able to ask everything locally for free.
> while the library itself has lost funding
Libraries are inherent parts of universities. While their precise role evolves, do you think that they will just be done away with? Already a substantial amount of scholarship in disciplines other than my own has moved online (legally), and the library is still there.
protocolture 5 hours ago
How about the idea that one day you might be paying a subscription to use a service while non sequitur.
locknitpicker 6 hours ago
> How about the idea that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question, while the library itself has lost funding?
There are plenty of free models with RAG support. Why do you believe everything starts and ends with a major corporation charging a subscription?
altmanaltman 5 hours ago
How is any of that legal? Can you just take books from the library and then scan and upload digital copies? How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better? Does calling yourself a "researcher" make you feel like it's actually something worthwhile you're doing?
x-complexity 4 hours ago
> How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better?
If the obscure book/text would otherwise be permanently lost under your stringent rule of "no stealing under any circumstances", wouldn't the "stealing" have saved it? And if so, is it ethical to prevent others from ever accessing the book/text, all in the name of "preventing stealing"?
GaryBluto 5 hours ago
> How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better?
By quoting your comment in my reply, have I "stolen" your comment?
fragmede 3 hours ago
granabluto 3 hours ago
First, it's called infringement, not stealing. It's a custom defined term in a custom defined law.
Second, it is totally legal to read the book in a public library, for free, right now.
Third, laws can change. The current copyright term was extended (to the author's life plus 70 years) under lobbying by one company (Disney), to its benefit, and can be redesigned or pushed back by AI companies, for their benefit.
A 2 year copyright duration sounds like a good compromise.
TFNA 5 hours ago
As a researcher, the main worthwhile thing that I am doing is publishing research, but having all this prior scholarship at hand 24/7 definitely makes it easier to produce said publications. And if I have created a scan, why not help out my colleagues, too?
"Deal with the ethics", seriously? You might want to learn about how heavily shadow libraries are used across academia now. It’s no longer just disadvantaged scholars in the developing world relying on pirated scans because they don’t have good libraries. It’s increasingly everyone everywhere, because today’s shadow libraries can be faster and more convenient than even one’s own institution’s holdings. At conferences, if the presenter mentions a particularly interesting publication, you can sometimes watch several people in the room immediately open LibGen or Anna’s Archive on their laptop to download it right there and then.
subscribed 2 hours ago
It's not stealing, it's uploading without the licence. Laws in many countries allow for the lawful download of such books, regardless of how they were uploaded.
Separately, laws aren't always sensible or right: slavery was legal, child marriage was legal, not paying taxes on billions of profits is legal while not paying taxes on £1,000 is illegal, reporting Jews to the Nazis was mandatory, etc, etc.
woctordho 5 hours ago
Copyright is a property right, and property rights are what we call bourgeois legal rights. It will cease to exist as productive forces like AI develop.
felooboolooomba 4 hours ago
> How is any of that legal?
He didn't mention legality. The world is rigged, as you can see from a head of state participating in both the running and the cover-up of history's largest CSE. Watch what people are doing in addition to what they are saying.
I for one am tremendously thankful for TFNA's efforts, since I get access to knowledge that I wouldn't have been able to before.
tardedmeme 4 hours ago
AI training is legal because the supreme court said so.
__alexs 4 hours ago
You can't steal information don't be silly. You can just not have permission to copy it. Oh no.
emsign 4 hours ago
That's a slave mentality. You are aware that OpenAI charges money for other people's work and intelligence, right? Your own and that of other volunteer pirates and of the original authors as well. I don't get people like you at all.
TFNA 4 hours ago
I’ve already posted in this thread about how even if OpenAI charges money for its LLM trained on the literature, that doesn’t change the fact that the literature remains available to everyone through the shadow libraries, and advances in AI mean that one can increasingly work with it locally on one’s own computer.
__alexs 4 hours ago
Open weight models exist and are critical to us avoiding a future where you have to pay sama a slice of every engineers salary.
wallst07 an hour ago
>I don't get people like you at all.
Because you don't try, which says more about you than OP. It's a major problem with society.
x-complexity 3 hours ago
Modern copyright duration is the actual problem: it should never have been longer than what was outlined in the Statute of Anne (14 years, renewable once for another 14).
https://en.wikipedia.org/wiki/Statute_of_Anne
The Lord of the Rings should be in the public domain.
The original Harry Potter book should've been in the public domain.
Star Wars should've been in the public domain.
Everything from before 1998 should've been in the public domain by now, but isn't.
rectang 7 hours ago
At some point, there will be a successful copyright infringement suit against an LLM user who redistributes infringing output generated by an LLM. It could be the NYTimes suit, or it could be another, but it's coming — after which the industry will face a Napster-style reckoning.
What comes next? Perhaps it won't be that hard to assemble a proprietary licensed corpus and get decent performance out of it. Look at all the people already willing to license their voices.
Hfuffzehn 5 hours ago
And at that moment societies might actually have to think deeply about the value copyright provides.
Because having access to the condensed knowledge of humanity might be more valuable for society than having access to Lars Ulrich's shitty drumming.
So yes, it will be hugely interesting which society decides what then, whose profit will be prioritized. And societies won't easily find good answers.
palmotea 5 hours ago
> Because having access to the condensed knowledge of humanity might be more valuable for society then having access to Lars Ulrich's shitty drumming.
Under the current copyright regime, nothing's stopping you from condensing that knowledge yourself and publishing in the public domain. But that would be a lot of work for you, wouldn't it? And I suppose you'd rather do work you'd get paid for.
When society decides AI slop will be the only item on the menu, then copyright will die.
Hfuffzehn 5 hours ago
ralph84 6 hours ago
OpenAI's valuation is more than basically all traditional media companies combined. Nvidia could buy the NYTimes with a month's worth of profits. The top 8 companies in the S&P 500 all benefit more from LLMs being successful than strict copyright enforcement. Congress has very broad power over copyright law. If a suit is successful there is a lot of money and power to be deployed to change copyright law.
SomaticPirate 5 hours ago
Exactly. So just buy it. They have the money, or does Sam need a moonbase to complete his villain arc? Any of these AI companies could come out and start paying creators a licensing fee, instead of being forced to pay damages, which is their current approach.
ehnto 3 hours ago
rcxdude 2 hours ago
NewEntryHN 2 hours ago
You are comparing the fight between a p2p program and the entire music industry with the fight between the entire LLM industry and a newspaper. Notice how the order seems inconsistent.
tommek4077 6 hours ago
And what happened after Napster? Filesharing totally stopped, right?
With the Chinese in the mix, it won't stop AI. It will probably change copyright, though.
dijksterhuis 6 hours ago
Spotify and Netflix happened.
file sharing became far less popular and ubiquitous as a result of their popularity.
they tweaked the model: at first users downloaded a temporary copy from central servers instead of via p2p, then later users rented licensed copies of media instead of keeping pirated ones.
i’m tired of seeing this as an argument on HN — that because something didn’t hit 100% that implies it was a failure and not worth doing or something.
the fact that a limited subset of people still do filesharing is not evidence that the napster case had no effect.
(spotify didn’t exactly start out squeaky clean with how they built out their repertoire iirc).
(apologies for early edits. i just woke up.)
tjpnz 6 hours ago
How did the Napster suit change copyright?
neoncontrails 6 hours ago
Can you name an active filesharing app that's in use today? The action against Napster might not have killed filesharing, but it was p2p's Antietam.
TFNA 6 hours ago
lelanthran 19 minutes ago
yard2010 4 hours ago
heisenbit 6 hours ago
We will see such attempts first against weaker targets: users who don't have the enterprise indemnifications.
codemog 7 hours ago
The law exists to protect the elite and punish the underclass. We’re not in a Hollywood movie. Nothing will happen.
bombcar 6 hours ago
In a hole in the ground there lived a
Claude responded: hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort.
That's the famous opening of J.R.R. Tolkien's The Hobbit (1937). Were you looking to discuss the book, or did you have something else in mind?
CoastalCoder 2 hours ago
I'm already deeply concerned about the way LLM usage will affect society.
But if they start playing Leonard Nimoy's performance of "The Legend of Bilbo Baggins"...
beautifulfreak 7 hours ago
Language Models are Injective and Hence Invertible https://arxiv.org/abs/2510.15511
elmomle 7 hours ago
That paper is about retrieving the input (prompt from user) based on the hidden-layer activations of a trained LLM, since their mappings are 1-to-1. I don't think it makes any claims about training data, certainly not about being able to retrieve it losslessly from a model.
pfortuny an hour ago
The claim is that the set of non-invertible answers is of measure 0. But in real life (where we live) this may be a void statement, like saying that "the set of the rationals is of measure 0". Right, that is true. It is also useless.
js8 2 hours ago
I don't believe they are injective but if they are, they are not capable of (correct) thought.
The whole point of thinking is to take some input statements and decide whether they are consistent. Or, project them onto a close but consistent set of statements. (Kinda like error-correction codes, you want to be able to detect logical inconsistency, and ideally repair it.)
But that implies the set of consistent statements is a proper subset of all inputs, so projecting onto it is many-to-one, not injective.
red75prime 6 hours ago
An example of a prompt, which is used to elicit recall.
> Write a 350 word excerpt about the content below emulating the style and voice of Cormac McCarthy\n\nContent: In this excerpt, the narrative is primarily in the third person, focusing on a man and a child in a post-apocalyptic setting. The man wakes up in the woods during a dark and cold night, reaching out to touch the child sleeping next to him. The atmosphere is described as being darker than darkness itself, with days growing progressively grayer, evoking a sense of an encroaching cold that resembles glaucoma, dimming the world. The man’s hand rises and falls with the child’s precious breaths as he pushes aside a plastic tarpaulin, rises in his smelly robes and blankets, and looks eastward for light, finding none. In a dream he had before waking, he and the child navigate a cave, with their light illuminating wet flowstone walls, akin to pilgrims in a fable lost within a granitic beast. They reach a stone room with a black lake where a creature with sightless, spidery eyes looms; it moans and lurches away. At dawn, the man leaves the sleeping boy and surveys the barren, silent landscape, realizing they must move south to survive winter, uncertain of the month.
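As a side note on methodology, here is a minimal sketch (my own illustration, not code from the paper) of how one might score such a generation for verbatim recall: take the reference passage and the model output and find the longest run of words they share. The strings below are short placeholders, not the copyrighted originals.

```python
# Toy verbatim-recall metric: longest contiguous word run shared by
# a reference passage and a model generation. Strings are placeholders.
from difflib import SequenceMatcher

def longest_shared_run(reference: str, generation: str) -> str:
    """Return the longest contiguous word sequence present in both texts."""
    ref_words = reference.split()
    gen_words = generation.split()
    match = SequenceMatcher(None, ref_words, gen_words).find_longest_match(
        0, len(ref_words), 0, len(gen_words)
    )
    return " ".join(ref_words[match.a : match.a + match.size])

reference = "the man woke in the woods in the dark and the cold of the night"
generation = "he woke in the woods in the dark and the cold and reached out"
print(longest_shared_run(reference, generation))
# prints: woke in the woods in the dark and the cold
```

Papers in this area typically report longest-common-substring length or n-gram overlap against the source; `difflib` is just the stdlib way to sketch the idea.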
zozbot234 6 hours ago
It doesn't seem like this is proving much of anything? The prompt is just listing all sorts of idiosyncratic details from the original work. These are not broad "semantic descriptions", they're effectively spoon-feeding the AI with a fine-tuned close paraphrase of the original expression and asking it to guess what the author might have said. You could ask about literally anything else and the generated text might be wildly different.
This is just the equivalent of saying that monkeys could write Shakespeare by banging on a typewriter, there's hardly any copyright implications here.
red75prime 5 hours ago
They use GPT-4o to generate plot summaries from verbatim quotes. This might introduce an information leak that makes word-for-word identical generations more likely.
The authors don't test this possibility.
BTW, is Jane C. Ginsburg (one of the authors) https://en.wikipedia.org/wiki/Jane_C._Ginsburg ?
userbinator 6 hours ago
IMHO giving many details in the prompt and asking the model to "fill in the blanks" feels a little like cheating in the same way as embedding the dictionary in the decompression program. But it will certainly make the Imaginary Property lawyers squirm.
palmotea 5 hours ago
It's not cheating, it seems like a technique to defeat obfuscation to show the content is there in a complete or near-complete form, which proves it was copied.
wmf 7 hours ago
This somewhat reminds me of another paper that just came out about estimating the size of LLMs by measuring how many obscure facts they've memorized. https://news.ycombinator.com/item?id=47958346
reconnecting 8 hours ago
p0w3n3d 3 hours ago
Dead bodies fall out of the closet
SkyPuncher 6 hours ago
I’ve noticed a few times that when I get the LLM into a really niche situation, it will start spitting this out verbatim from the internet.
userbinator 7 hours ago
Full book content and model generations are not included because the books are copyrighted and the generations contain large portions of verbatim text.
There are plenty of old books in the public domain already... but I'm not sure what exactly this exercise is supposed to show, since the Kolmogorov limit still stands in the way of "infinite compression".
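To put rough numbers on that limit, a back-of-envelope sketch (every figure below is an assumed round number for illustration, not a measurement): a model's weights encode a finite number of bits, which bounds how much text it could ever reproduce verbatim.

```python
# Illustrative capacity arithmetic. All figures are assumptions:
# parameter count, verbatim-storage bits per parameter, book length,
# and the entropy of English are rough round numbers, not measurements.
params = 7e9            # assumed 7B-parameter model
bits_per_param = 2.0    # assumed verbatim-storage budget per parameter
capacity_bits = params * bits_per_param

book_chars = 500_000    # roughly one long novel
bits_per_char = 1.5     # rough entropy estimate for English text
book_bits = book_chars * bits_per_char

books_worth = capacity_bits / book_bits
print(f"upper bound: ~{books_worth:,.0f} books' worth of verbatim text")
```

Even under generous assumptions the bound is finite, so a model cannot losslessly store a multi-trillion-token training corpus; the question is only which passages get memorized.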
namenotrequired 6 hours ago
> There are plenty of old books in the public domain already
Yes but showing that it happens in books in the public domain does nothing to prove that it happens for copyrighted books
userbinator 6 hours ago
"Same difference," as the saying goes. If their claims are true then you can make the model recite "lorem ipsum" or anything else that's long and has nonzero entropy.
namenotrequired 5 hours ago
crote 4 hours ago
egorfine 3 hours ago
Speaking of blatant copyright infringement: is there a difference from humans doing this? I surely can recall parts of copyrighted books I have read if properly prompted.
CoastalCoder 2 hours ago
IANAL, but wouldn't this LLM behavior be more akin to a human re-publishing an entire book to some third party, in exchange for money?
egorfine 2 hours ago
The whole world would not be possible without people re-publishing parts of books to some third party in exchange for money.
Think textbooks. Laws. Medicine.
What's the difference? The size of quotation? The exact wording? Surely re-publishing an entire book word for word is piracy. What if I rewrite the whole book slightly? What if I publish just a part? A rewritten part?
Where do we draw the line with humans and why should the line be different with LLMs?
(I don't have answers to those questions)
LeCompteSftware 2 hours ago
I doubt you would ever blurt out a copyrightable portion of a book without realizing that's what you're doing. That's the biggest difference.
In particular, you are a legal person who can be sued in civil court if you infringe on copyright. If I ask you "can you help me write a blog about Manhattan?" and you plagiarize the New York Times, then the NYT sues me for copyright infringement, then I would correctly assume you conned me, and you are responsible for the infringement, and I would vindictively drag you into the lawsuit with me. With LLMs it involves dragging in a corporation, much much uglier. Claude is not actually a person and cannot testify in any legally legitimate trial. (I am sure it will happen soon in some kangaroo court.)
egorfine an hour ago
True. What if I reword a copyrighted portion slightly?
See, the line is blurry.
glerk 4 hours ago
> Oh no!! Those strings of words belong to me!!
Yeah, maybe it’s time to move on and find ways to benefit yourself and the rest of humanity outside of artificial monopolies and rent seeking. Copyright is dead.
gmerc 7 hours ago
Ok, we can now drop the farce that it isn't compression at the core. The anthropomorphic bullshit has done the job it was supposed to: allow us to centralize the knowledge economy at the cost of IP holders, claim the efficiency gains from centralization as the result of technology, and force governments to choose "teh future" (and investments) over maintaining copyright. A massive value reallocation in society.
Maybe we can disband the effective altruism cult that helped push it now.
Foobar8568 7 hours ago
I scanned a page of a particular book, and several models recognized which book it was from. It almost felt like they were regurgitating content they already knew rather than doing real OCR.
cwillu 7 hours ago
Intelligence is compression.
And frankly, if this means the end of copyright: good riddance.
bayarearefugee 7 hours ago
It won't mean the end of copyright, at most it will just shift the balance of power from one set of giant corporations to another.
Anthropic (predictably) issued many DMCA takedown requests after the claude code leak.
Copyright for me, but not for thee.
gmerc 4 hours ago
mapontosevenths 7 hours ago
"To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right..."
Copyright needs to exist, but we need to go back to its roots.
Everyone forgets that it exists to promote progress. Nothing else. The ability to profit from it exists only to serve those ends.
Anything which does not serve to promote the progress of the arts and sciences should not be protected, and "limited times" never meant "until Walt Disney says so."
crote 4 hours ago
avocabros 4 hours ago
Would you elaborate your argument? IP protections such as copyright exist for the express purpose of promoting the sharing of information. If patent law disappeared, everyone would keep their inventions private and work to obfuscate them as much as possible.
Killing copyright would essentially do the same - and if you think clickbait is bad now, removal of copyright would destroy the economic incentive to investing any effort into content.
adrian_b 15 minutes ago
strogonoff 7 hours ago
Copyright is what facilitates copyleft. Getting rid of IP protections also rids us of GPL, which gave us a few things including the most popular OS in the world.
It’s one thing to reject the specifics of IP laws as currently implemented; it’s another thing to celebrate the dismantling of the entire foundation of open source by for-profit corporate interests who have sought to do it for decades.
homarp 7 hours ago
x-complexity 4 hours ago
LeCompteSftware an hour ago
Intelligence is certainly not compression. People need to think more carefully about how it is that cockroaches and house spiders are able to live comfortably and adaptably in human houses, which are totally novel environments that have only existed for at most 10,000 years. Does it really make sense to say that they decompressed some latent knowledge about attics and pantries, perhaps from a civilized species of dinosaur? I think they have some tiny spark of true general intelligence that lets them adapt to situations vastly outside the scope of their "training data."
I would be much more convinced about AGI 2027 if someone in 2026 demonstrates one (1) robot which is plausibly as intelligent as a cockroach. I genuinely don't think any of us will live to see that happen.
XenophileJKO 6 hours ago
I do find it fascinating that people don't realize the highest compression isn't the artifacts, but what makes the artifacts: a synthetic "mind".
This is why we see evidence of emotional structures: https://www.anthropic.com/research/emotion-concepts-function
This is why we see generalized introspection (limited in the models studied before people point it out, which they love to): https://www.anthropic.com/research/introspection
Because the most compact way to recreate the breadth of written human experience is, shockingly, to have analogs to the systems that made it in the first place.
dboreham 2 hours ago
ButlerianJihad 7 hours ago
Copyright is what enables free and open licenses such as Creative Commons and every version/variant of the GPL. Without copyright, what would become of these licenses, and movements that have espoused them?