Same Theft, New Tech: Music, AI, and Unpaid Creativity
Fair Use, Free Ride, or Business Models with Body Counts?
The texts arrived fast ’n’ furious when the news broke that the Trump administration axed Shira Perlmutter, head of the U.S. Copyright Office. None of the messages were optimistic, and all found the timing curious. That’s because the U.S. Copyright Office had just dropped a preliminary report on whether training generative AI on copyrighted works counts as infringement, or if it’s protected by that oft-quoted/rarely-understood legal catch-all known as “fair use.”
Spoiler: it’s complicated.
The preliminary report dives deep into the mechanics of how AI models gobble up mountains of non-public-domain data like Oreos after an all-night bong session. More importantly, it looks at the historical, contextual, and legal underpinnings of whether that kind of bulk ingestion is kosher or just an extremely polite smash-and-grab. The report isn’t empty hand-wringing around ethics, nor is it politicized. It lays out the legal thresholds and spotlights pressure points, such as the unlicensed copying that happens during LLM training, by running them through the fair use gauntlet: What’s the purpose? Is it transformative? Does it hurt the market? And crucially, are the AI companies getting rich while the artists whose work made it possible are left with the impossible task of chasing compensation from a system designed to ignore them? Or do they just expect artists to fall for the old “do it for the exposure” trick?

The report also discusses potential licensing schemes, voluntary and statutory, which could give creators a cut without throwing innovation under the bus. But the central message is clear: the future of generative AI shouldn’t be built (at least not entirely) on the creative labor of others without consent, credit, or compensation. Think of it this way: if an AI model is learning how to “improvise” by listening to thousands of hours of other people’s solos—note for note, chorus after chorus—without credit, payment, or even a tip of the hat, that’s not innovation. That’s an artificial jam session built on the innovations of others, and the only one getting paid is the electric company.
Dear Tech Genius: if your model is learning to (for instance) swing from Hank Mobley, the least it could do is cut a check to his estate.
Mobley died sick, broke, and overlooked by the same industry he helped build—like so many Black artists whose work built the foundations we still stand on. If your billion-dollar model is built on their sweat, then don’t tell me it’s “fair use.” That’s theft with better branding.
What Napster Taught Us—And What We’re Ignoring Again
Let me take you back to the hazy dial-up days of the late ’90s and early 2000s. Napster was a bigger star than any artist on the planet, though it took a minute for the music industry to wake up to the reality that the Internet wasn’t a fad or a passing inconvenience. Suddenly, every teenager with a DSL line became a pirate, and the industry—shocked and appalled that people preferred free—declared war. Lawsuits flew. Metallica made headlines. I happily used P2P networks to successfully market Creed (for which I make no apologies, so keep your snarky comments to yourself). And in the end, after the dust settled and the iPods were loaded, streaming emerged as the imperfect, uneasy truce: a compromise between access and artist rights.
Fast forward to 2025. The new threat isn’t distribution. It’s generation. Generative AI isn’t just Napster with better PR. It’s not about sharing music anymore—it’s about making it. And doing so by training on billions of copyrighted works, often without consent or compensation. The U.S. Copyright Office’s latest report reads like a polite but firm cease-and-desist to the tech industry: yes, innovation is great, but maybe don’t build your billion-dollar model on a music catalog someone spent their life creating.
The parallels are striking. Then, it was “Why should I pay for music I can download for free?” Now it’s “Why should I license music when I can train a model to simulate Coltrane in a nanosecond?” The AI companies are playing the same music, just in a different key. The familiar chorus? “Fair use.” But unlike a kid in a dorm room swapping MP3s, these players are global corporations pumping out albums by non-existent artists that might just edge a real one off the playlist. And the speed and scale mean they can create gigabytes of soulless Coltranes in minutes, or a 10-hour playlist of soundalike Beatles creations before you can finish typing the word “Beatles.”
Is Your Permanent Record Still Yours? Or Permanent?
What the Copyright Office’s preliminary report makes clear is that training on copyrighted material isn’t automatically fair use, particularly when the output competes with the real thing, and especially when it generates revenue for everyone except the original creator. There are strong calls for licensing schemes, but the AI side counters: “There’s no way to license everything in these massive datasets—it’ll kill innovation!”
Bullshit. Or, at least, a half-truth dressed up as inevitability.
The music industry already has infrastructure for large-scale licensing. ASCAP, BMI, SESAC, Harry Fox—they’ve been issuing and managing performance and mechanical licenses for decades. The entire streaming ecosystem runs on it. If AI companies genuinely wanted to license music for training, the pipes are already laid. They weren’t built for this exact use case. But they’re a foundation—not a roadblock.
Yes, licensing AI training data presents new challenges: derivative rights, consent for novel uses, economic valuation, and global scope. But none of that makes it impossible. It just makes it inconvenient—especially if your business model depends on free access to the creative labor of others.
But here’s the rub: licensing means paying, and fair use means free. So instead of coming to the negotiating table, many AI firms are whining that the very existence of copyright law is pissing in their opportunity pool. And just in case that wasn’t dystopian enough, let’s not forget what JUST happened when the Copyright Office released a report gently suggesting that maybe AI models shouldn’t be trained on billions of copyrighted works without permission. The head of the office, Shira Perlmutter, was out two days later—allegedly because the report didn’t sit well with Elon or other tech titans shopping for a legal green light. You can’t license what you’re actively trying to kill.
This isn’t a technical issue. It’s a business model with a body count.
And here’s the kicker: where Napster was about copying finished works, generative AI is about replacing the creative act itself. It’s a shift from piracy to plagiarism with plausible deniability. We’ve gone from “I downloaded your song” to “I trained on your voice, vibe, solos, and catalog, then had my bot write a new one that kinda sounds like it could’ve been yours. But different-ish.”
This is where history matters. The industry got scorched by Napster because it didn’t act until it was too late. Then it overreacted and alienated a whole generation of music fans. We can’t afford to bungle the response to AI the same way, either by ignoring it until the machines are headlining Coachella or by turning tech into the enemy. AI isn’t inherently bad, but it will do real harm if applied thoughtlessly.
What’s needed is what we always needed: balance. Innovation and compensation. Access and attribution. If we want AI to truly “learn” from human creativity, it needs to respect the people it’s learning from. Start there—and maybe, this time, we won’t have to clean up another mess built on stolen art.