IV. Applicability of the USPTO Eligibility Guidance to AI-Assisted Inventions
For the subject matter eligibility analysis under 35 U.S.C. 101, whether an invention was created with the assistance of AI is not a consideration in the application of the Alice/Mayo test and USPTO eligibility guidance and should not prevent USPTO personnel from determining that a claim is subject matter eligible. In other words, how an invention is developed is not relevant to the subject matter eligibility inquiry. Instead, the inquiry focuses on the claimed invention itself and whether it is the type of innovation eligible for patenting.
The Federal Court of Justice, Bundesgerichtshof decided in a ruling issued on June 11, 2024 (AZ X ZB 5/22) that Artificial Intelligence cannot be recognized as an inventor. Only a human can file for an AI generated invention. The DABUS cases are pro-corporate attempts brought by the Artificial Inventor Project seeking intellectual property rights for AI-generated output “in the absence of” a traditional human inventor, but the courts are not buying it, and the result is and will always be the same, you need a human name on a patent, regardless of how little input the human made in generating the invention.
Normally, to register a valid patent for an invention, you need to prove “substantial human contribution”, so even human inventors who are for hire would need to have their names on the patent. Previously, German courts were split on the issue. Now, the Bundesgerichtshof has resolved the split by removing the requirement for “substantial contribution” by a human.
What the Bundesgerichtshof is doing basically is to tell the courts to stop obsessing over the degree of contribution of human versus machine input. It is unnecessary to examine how much of the process of invention has been automated. Everyone agrees that machines cannot invent anything coherent entirely on their own and if they could, it would take a human to decide whether something was invented, so without the human, there is no invention. It matters very little what technology you use to come to the conclusion that something is an invention.
So humans will continue having their names on the patent, but they won’t need to prove they never used AI to generate parts or the whole of the invention. The requirement for a human inventor is simple, if someone uses the patent without permission, a robot cannot file a lawsuit and you can’t sue a robot for infringing on your IP. A robot can’t assign rights to anyone because it is a corporate asset. Assets are owned, they have no agency or the capacity to consent. Given that corporations are not recognized as inventors, they need at least one precedent where a corporate asset can replace the actual human inventors. There is no other goal in trying so desperately to remove the human inventor requirement. Right now, if you are for hire, you already consented to be deprived of your rights with or without AI. Even before the advent of AI, it was customary for the CEO of a corp to put their own name on the patent even though 9 other employees made the invention and all their names are not necessarily on the patent. If AI becomes an exception for the human inventor requirement, it will be another step into corporate appropriation of human work.
Luckily, DABUS is an extremely weak case, well publicized all over the media but very weak, from here, I’d say the DABUS claims begin to border on frivolous at this point. How many times in how many jurisdictions can a plaintiff lose the same case, before being declared a vexatious litigant? The fact that there seems to be unlimited money to bring an unlimited number of the same version of unsuccessful case is telling. I think the courts have better things to do right now.
It is another way to say that human users will always own the rights to AI generated output, so long they have provided the most minimal of input in a prompt and made a final call regarding the generated output. If no human was involved in the generating of an invention, then it wouldn’t be possible to register it. AI platforms are simple tools, no different from other applications you may have used to create your IP. Basically, the German courts are instructed to stop caring what tools and mediums an inventor used to create the IP, be it Microsoft Word, a gas-stove, a shovel, Ableton Live, a tractor, a fork, artificial intelligence, an X Box Kinect, a hair-brush, or any other tangible or intangible object.
Most often, tools may have not been used at all for a valid invention, humans often have an instant vision of something they need to use at a specific moment, but that doesn’t yet exist. You imagine it, you make it, you use it and if you want to make money with it, then you patent it. Otherwise, I believe the majority of existing inventions are not even patented. Conversely, the majority of patented inventions are so abstract that they may be as good as useless. Inventions come from a specific need. Patenting whatever is patentable and isolating molecules from efficient systems (i.e. things from nature) has proven time and again to be counterproductive old-world mentality, but this is a subject for another post.
What is true for inventions is even more true for music or script-writing for example (I hope coding as well, because I will need to code soon and will not hesitate to use AI.) Humans hear music in their heads and our minds create multiple scenarios faster than the speed of light. Of course we need tools to organize all this information and take it out of our heads in coherent form from time to time. AI is here to facilitate and accelerate human creation and productivity. Corporations don’t seem to like this. Before the advent of AI nobody cared whether you compose on a MacBookPro, on a phone, with a pen and a harmonica, or by recording your washer and dryer to make beats. Bottom line is, we have all these billions of machines and tools, but it takes a human to make shit and mainly to decide if it has been made at all.
On a first glance, it seems that the Australia courts keep requiring the need for a human to own and control the invention, but in the decision there is a discussion as to whether AI can be named an inventor for the sake of being named an inventor, even though only a human can file a valid patent and be a patentee, regardless of the number of “inventive steps” the machine has taken, or any thought processes a human has had. As I explained above, nobody cares how a human applicant got to the invention. Practically, it is the court thinking out loud philosophically while nothing really changes. To cite paragraph 12 from the judgment
[The commissioner’s] position confuses the question of ownership and control of a patentable invention including who can be a patentee, on the one hand, with the question of who can be an inventor, on the other hand. Only a human or other legal person can be an owner, controller or patentee. That of course includes an inventor who is a human. But it is a fallacy to argue from this that an inventor can only be a human. An inventor may be an artificial intelligence system, but in such a circumstance could not be the owner, controller or patentee of the patentable invention.
To sum it up, the Australian court says AI could be a “sole” inventor, but you still need a human to take credit for the AI’s work in order to register the invention and derive any economic benefit. After all these mental acrobatics, it looks like we are at the exact same place we started out.
It’s all great stuff, and it all points to the same place. Humans will own everything AI generates. When you see a court discussing definitions in the dictionary, it means the law is no longer of any help and everyone is completely lost. Here we have one of those moments.
I didn’t know that “computer” initially referred to a human who made computations. So, a human can be a computer, but a computer cannot be a human. Got it. What helpful information to start the day.
I try to watch a film a day. I only select movies that don’t contain gun violence or open political messages. I avoid horror films and everything war-related, be it historical, modern, or science fiction. If it is about war, I don’t watch it. End of the world and apocalypse, only if it is not war-related, like Don’t Look Up a few years back or the Obama feature with Julia Roberts. I watched the latter in part. Technically, I can’t watch 90% of the films due to concerning subject matter outlined above that I consider a danger to a free and democratic society and humanity as a whole. Everything else is on the table.
I keep handwritten notes while I watch and then compile titles in a separate List of Watchables for movies I’d like to see again. This year, less than 15 movies made the list since January. These are the movies I’d pay money to watch again, with links to trailers.
Godzilla minus One (I made a war exception for this one, because I only heard good things, it was depressing but also deeply moving and contained a strong anti-war message)
Embrassez qui vous voudrez (made in 2002 but I only accessed it this year and was so impressed, I started digging into more French movies and went through the entire Cannes line-up from last year)
Cocorico (a blast, great script, unequaled dialogue)
I would’ve absolutely loved the film La Bête . It has an Inception-ish vibe, but contains unnecessary scenes of animal abuse, without these scenes it would be in my list.
This complaint is very similar to the Udio complaint. I will address different points. Suno is the first Music AI platform I started testing last month. Others including Udio followed through word of mouth. Prior to May, there were no viable music AI platforms according to professional standards, but Suno’s latest version opened the floodgates of creativity – the industry mentions 10 new songs a second on Suno alone – and there are already a good few dozens of platforms quickly catching on.
In a way, everything we may say now about AI is at a very early stage of training, building, debugging and adjusting and is evolving as we speak through the invaluable input of millions of user pioneers. We are seeing progress unfold at the speed of light before our eyes. Everyone is learning, AI is learning and countless users who never made music in their lives are also learning about making music, with each platform providing valuable tips and tricks. There is a process of demystification and breakdown of loops, beats, melodies, and vocal flows in different languages, as well as deconstruction and re-appropriation of the music production process. It brings tears to my eyes to see so many users become creators instead of passive consumers.
Many users throughout platforms mention that since AI came along, their favorite songs are the songs they made themselves. This is fantastic for humanity. Obviously, these users have now less time to listen to commercial songs. Until now, we had to listen to everything the industry imposes on us, because there was no alternative to learn from, other than public domain. It was time-consuming, frustrating, and depressing due to violent, reductive, and misogynistic lyrics and systemic undue sexualization and dehumanization of artists by the industry. Now that AI listens to these commercial “hits”, we can protect our ears while focusing on more productive things that bring us joy. In a way AI doesn’t do anything more than we’d be doing without AI, but AI saves us time and protects our emotional well-being and integrity by ingesting and filtering the trash the industry throws at us, so that we can minimize our exposure to harmful content.
Can the music industry really stop progress and continue keeping AI for themselves?
In both complaints we see that the platforms refuse to disclose what data they trained their models on. They claim it is proprietary information. The reasoning behind refusing to disclose training particulars may be that anything related to training is a trade secret and training in itself is fair use.
Ideally a LLM should have no restrictions regarding training and they shouldn’t pay for data that is publicly available. Copyright law specifically provides a training / education exemption under its fair use doctrine which may differ from one country to another, but essentially recognizes that non-commercial and transformative activity which is good for humans and society in general justifies limiting the ability of rights-holders to derive profit from copyright. Without fair use exceptions, there would be no journalists, no standup comedians, no content creators, no Youtube or TikTok, no parodies, no criticism (i.e. pop art), etc.
I can certainly copy an entire song to break it down and learn how it was made note by note. Why can’t AI? When I need to learn a music video choreography, I copy entire videos from the internet, I break them down into sections which I then further copy (several times per section, slow then normal speed) into a myriad of little video tutorials that I watch a million times until I get the moves right. While I learn the moves, I reproduce these moves with my own body which I film (another countless times) and edit into new videos. This is a 100% fair use example (and btw it’s true, I do that every day). Why can’t AI do the same with music? What’s the difference? Why does it stop being fair use when AI does the copying for the purpose of training rather than a user trying to learn a song or a dance?
It seems that both complaints put much effort in proving that the LLMs copied entire songs for training. They are not really denying it. Training is clearly a transformative process. I think what the fuss is revolving around is whether there is such a thing as “excessive training” that should be excluded from fair use defenses.
In Para. 12, the plaintiffs suggest that music generated on AI platforms is NOT human-created work! This is a strange insult to millions of human users. I’m pretty sure this qualifies as hate speech. Last time I checked, I am human and I write my own lyrics. Yet another lowly and unfounded attack. Why do they think they are the only humans in the room. WTF!
Due to the dehumanizing characterization of human users as non-human, I am not going to read the rest of the complaint. Sorry, but I can’t deal with more hateful content. Not on Canada Day. I’ll let my bot finish the job but I won’t publish the result.
I am reading the Udio complaint right now. It is a little more than a “nothingburger” as the majority of users and IP lawyers have overwhelmingly noted. It is also an example of how to make a mockery of the justice system, beginning with basing an entire claim on self-serving evidence, more precisely all the evidence is based on intentional infringement of industry-owned lyrics. The only thing the plaintiffs are capable of proving with this lawsuit is how they hypothetically infringed their own lyrics, forced AI to further infringe their copyright through very precise instructions, and obtained a copyright infringing result. Several times.
If copyright law has ever been clear about something since the 18th century is not to copy other people’s texts without their consent. If you give AI infringing lyrics, it will come up with an infringing output, how surprising is that.
This lawsuit is a coaxing manual. How about, we copied the actual chorus from Michael Jackson’s Billy Jean lyrics and directed Udio to sound like Michael Jackson in as much detail and likeness as possible, and Udio made a song that resembles Billy Jean!!! So, the plaintiffs entered into prompt the excerpt “Billy Jean is not my lover, she’s just a girl who claims I am the one”. One can’t make this up. This is monumental bad faith and a waste of time of judicial resources.
Moving on, the plaintiffs copied word for word lyrics excerpts from All I Want For Chrismas is You (disclaimer: I can’t stand this song), inserted the infringed lyrics into the prompt and the name Mariah Carey along with other personal and artistic characteristics of the artist and again, the platform gave them exactly what they wanted, a copyright infringing result.
The exact same thing happened to other very old songs My Girl, I Get Around (Beach Boys), Dancing Queen (solely based on “we can dance we can jive”), American Idiot (interesting choice of song), as well as other holiday songs.
On pages 27, 28 we have an interesting “artist resemblance” table I deemed useful to reproduce as an example of exactly how NOT to make music with AI. I doubt that the great majority of AI users have the same desperate clinging to has-beens as the plaintiffs imagine. Don’t these overexposed artists already have thousands of copycats who have never heard of AI? The market was already saturated with these styles before the advent of AI. Also, the table doesn’t specify what lyrics were used in the prompt, so it is safe to assume that from the outset the lyrics were infringed like in the previous examples.
I hope you read that. It was quite funny. I have a few favorites in there. You ask AI to recreate a famous song by a band that rhymes with the smeetles, and OMG, AI sounds like the Beatles. Do you seriously expect a music AI platform had never heard of the Beatles or did you force the AI to go out of its way to find out about “smeetles” and which famous band rhymes with… Smeetles?!? I looked it up. It is not a word.
Words are the most important thing for LLMs. This is why you can’t ask ChatGPT or Claude to answer your emails, because they see each word in the email they need to answer as a prompt and the result is guaranteed nonsense. Each word inside the prompt (even someone else’s email) is interpreted separately as a part of an instruction, you must think like an algorithm for a minute and understand how a model interprets words.
Unless the model, like the latest Udio, is specifically programmed to ignore the artists names and rhymes thereof (eyeroll really), it will always try to reproduce as accurately as possible the instructions contained in words a human provides. This is why it will always be human users who will bear liability for AI’s output.
The complaint goes on to say that Udio copied other people’s vocals. I agree that it is the case and I agree it is not cool, but that’s the courts fault. There is little will to grant copyright to vocal performers, even in jurisdictions like Canada where vocal performances are specifically protected by the Copyright Act.
I spent 4 years in court trying to stop a label from remixing and selling my own vocal samples, and the only reason I won is because the contested vocals were attached to my own original lyrics in a distant slavic language, so it became eminently clear that the only way to enforce music copyright is to own the lyrics, something that continues being true in the field of AI.
The rest of the complaint adresses the fair use test, so that’s for the jury to decide. On a first sight, the main grievance appears to be the notion of “competition”. The industry is obviously diverting the fair use doctrine in order to enforce an anti-competitive monopole on all the musical loops in the world and trying to use the justice system to prevent any new music being made, unless they own the rights. That in my opinion is another sign this is an abusive lawsuit.
One thing I’m hearing from everywhere on this issue is that if the courts side with the music industry, nothing is in place to stop Russia and China to keep infringing the industry’s IP with the same tools, fair use or not, and they will flood us with their own commercial versions of AI generated output and will charge us for it, while our unsustainable music industry keeps dying anyway. There comes a moment when you just can’t afford to stifle innovation as a court.
Greed versus greed, you guys. The weather has been too nice these days for me to read the lawsuits yet, but I promise I will by Canada Day, because nothing could be more exciting. For now, just know that I am on nobody’s side. I do believe that training AI on all the music in world is fair use and doesn’t require a licence. As with everything copyright, you can only know from the output if something is infringing, on a case by case basis, there is no blanket solution to this situation. However, the major music gen companies (which quite hilariously are entirely financed by the music industry) have for long flaunted the idea that they automatically somehow “own” all the rights to musical compositions generated on their platforms. Now that they know they own zero rights in the generated content, it appears from here that the industry is suing itself, to basically give itself a pretext to shut down these platforms, and force licenses for thousands of years on all other platforms they don’t control, existing and yet not created.
I didn’t know the term “coaxing” until the Anthropic lawsuit, but from what I’ve heard about the evidence in the Suno and Udio lawsuits, the term strongly applies. The platforms were specifically prompted to infringe copyright from the outset, otherwise nobody in their right mind would ever ask AI to recreate Johnny B Goode or Great Balls of Fire. You can literally do it with a guitar or a piano. And people have been doing it for a hundred years already on a daily basis before computers existed. Are you going to sue the piano for playing Great Balls of Fire? The whole idea is to screw the users, and whatever opinion I may draft on the matter, it will always be user-centric.
Even before the lawsuit I noticed that the algorithms of Suno and Udio are being messed with and at times rendered quasi-useless (as in they rarely accurately respond to prompt), this is what gave me a hint that they are industry-controlled and then I found out who the first big investors are. I am keeping close tabs of algorithmic conduct across platforms, but given these lawsuits, I won’t give away any particulars on that because I have no intention helping either party. It is entirely possible that I myself end up being sued for generated content. I thought I was taking risks, but nowhere near the “evidence” I’m hearing from in these lawsuits. I am lawyering up too (as usual) and any concrete evidence on algorithmic conduct is for now litigation privilege, until I decide to lift the privilege or to use it against someone.
But for now, the best use of of my time will be to go to the beach, and I strongly advise you to do the same. The lawsuits are not going to run away, they will entertain us for years to come.
It is about time the most important progressive State gave the tone and sent a clear message that this not about politics. The bill received 65 votes and 15 abstentions, nobody voted against it. It must pass two more steps before it comes into effect. Regardless of where I stand on IP protection, I will …
Conflict of interest: I can no longer write on artificial intelligence for this blog because I started working with artificial intelligence myself and I’m already on the other side of the fence. From what I’ve seen so far, AI benefits humanity in a more productive and sustainable way than the outdated IP regimes that require …
Exactly two years after the Uvalde school massacre, families of victims Friday filed two (much needed) wrongful death lawsuits in California and Texas against social media giant Meta, Activision — the maker of the popular video game “Call of Duty” — and Daniel Defense, the manufacturer of the AR-15 which the teen gunman used to …
Noncompete agreements are a widespread and exploitative practice that prevents workers from taking a new job or starting a new business. Non-competes foster toxic work environments, by often forcing workers to either stay in a job they want to leave or bear other significant harms and costs, such as being forced to switch to a …