A List of Generative Patent Drafting Software

Patent drafting is technical, cumbersome, and time-consuming, yet there is no guarantee that your patent will be granted. One may need to file in several countries, further increasing expenses. Patent drafting AI is filling a real and urgent need to slash patent filing costs to a minimum. Removing prohibitive monetary barriers to patent filing is poised to empower inventors to file applications for as many patents as they can possibly think of, on an ongoing basis. This is good for innovation.

Here is a non-exhaustive list of patent-drafting bots, in alphabetical order. We haven’t had the opportunity to test them, so this is not an endorsement. The list excludes software that only offers search and research features:

Useful reading: https://www.americanbar.org/groups/intellectual_property_law/publications/landslide/2018-19/january-february/drafting-patent-applications-covering-artificial-intelligence-systems/

China Internet Court Attributes AI Generated Image Copyright To Human Prompt Creator

On Monday, the Beijing Internet Court held that a human plaintiff’s prompt is sufficient to invoke copyright protection in a Stable Diffusion-generated image, so long as the output qualifies as an “original” work. Copyright is determined on a case-by-case basis, so this decision is not entirely inconsistent with other AI jurisprudence trends …

AI Voice Cloning Without Consent Is Identity Theft, Human Voice Is A Biometric Identifier

You may have heard from “experts” that “there are no laws” against unauthorized voice cloning. These experts conveniently forget that identity theft is a criminal offense in every jurisdiction in this world. The human voice is a unique biometric identifier linked to human anatomy and identity, and soon inextricable from your universal Digital ID. Anyone (yes …

US FTC Memo To Copyright Office Warns Gen AI Causes Unfair Competition, Deceptive Practices, and Consumer Risk

The United States Federal Trade Commission (FTC) submitted a comment to the Copyright Office after conducting its own AI study last August. Although the FTC has no jurisdiction over copyright matters, it does have jurisdiction over consumer and competition violations, and it can indeed investigate and penalize companies for such violations independently of parallel copyright lawsuits, and in spite of standard indemnification clauses, misleadingly and inaccurately called “copyright shields” by OpenAI, for example. There is no such thing as a copyright shield in this world. The FTC is right to point out that promoting a pseudo-legal notion like a “copyright shield” is in itself a deceptive practice.

The U.S. Copyright Office has already said that creations made with generative artificial intelligence are ineligible for copyright because they don’t primarily come from a human hand. Nevertheless, Time Magazine keeps citing AI influencers who push the idea that asking for consent is unworkable and that the best alternative is opt-out schemes, stances I personally find reprehensible under all current legal frameworks. Creators should refuse to settle for opt-out schemes, and the lawsuits should keep coming. I agree with the FTC.

As much as I enjoy listening to Frank Sinatra and Aretha Franklin belt out modern rap lyrics on the popular There I Ruined It YouTube channel, I am uncomfortable when the same thing happens to living, breathing human beings like Taylor Swift, whose vocals have been ripped off on the same channel. If a performer is alive, there are provisions under privacy statutes (in Canada, also copyright law) against their voice being used by AI without their consent. I also believe that music labels should be barred from cloning and exploiting artists’ vocals, which is why I’m fine with Sinatra’s AI performances by anyone other than a label as a form of fair use. Consent should come directly from performers or be deemed prima facie invalid for copyright purposes, and a source of liability for unfair competition and deceptive practices. As of now, I am not seeing enough lawsuits over this.

The FTC was instantly attacked by “critics” armed with pseudo-legal arguments, such as Chamber of Progress CEO Adam Kovacevich, who contends that if something is fair use it cannot constitute a deceptive practice or unfair competition because “fair use is the original anti-monopoly policy. Copyright is a monopoly right… The whole range of startups who have the potential to disrupt those incumbents are not going to have the ability to pay and that’s what the principle of fair use protects here. So I think that the FTC really hasn’t thought about how fair use is anti-monopoly policy.” This is heavily misguided and downright ludicrous. I am surprised to hear such nonsense from a CEO.

Again, copyright statutes are distinct from the trade practice statutes under the purview of the FTC. You could exercise a monopoly or run a deceptive practice without touching on copyright, or concurrently with a copyright violation. A vast number of AI startups on government subsidies are currently over-inflating their promises while delivering basic software programs from the 1990s disguised as AI. I won’t name names here, but the “useful” applications of AI have been a major disappointment across the board. Consumers are being egregiously lied to and their time is being wasted. Companies are being lied to and charged for underwhelming products touted as the pinnacle of innovation. I find it totally deceptive and even fraudulent. So, it is entirely possible to violate several statutes even if you don’t violate one particular statute on technical grounds.

Finally, I think it is important to remind everyone that the fair use regime is not supposed to make you rich. Only in very limited circumstances will fair use be recognized for commercial use, and only in the US. In Canada, if you intend to make one cent from your AI, fair dealing is not on the table. In general, if you intend to make money with someone’s property, you had better ask for consent. I don’t understand the CEO’s logic at all. The FTC is not overstepping its authority.

Westlaw and Ross Intelligence Lawsuit Over Gen AI Goes To Jury Trial

This lawsuit raises the overlooked issue of legal databases charging people money for content that they don’t really own. Anyone who’s been to law school is trained to use Westlaw on a limited academic license, paid for with prohibitive student tuition, to complete research that is ultimately in the public interest. Student access to Westlaw opens many doors to employment in the legal trade, because a significant number of law firms cannot afford to pay for Westlaw access and rely instead on interns’ academic access. These paid databases hurt the public the most. There is no copyright per se on judgments, legislation, or citations. Filed proceedings, except those under non-publication orders, are considered public information, which means they should be accessible. For free.

Legal information and trial data shouldn’t be kept behind a paywall

We contend that all legal databases should be free for the public, and by extension free to train large language models on. I believe that in this day and age, when legislation changes often, with numerous reforms underway and a whole brand new world to look forward to, the public should be equipped, for free, with all the tools traditionally at lawyers’ disposal, no less, to figure out their rights and duties and become better citizens.

Monetizing a large language model that sifts through all this data and answers your questions like a lawyer is, however, justified, because it will save us thousands of hours of research (if it works at all), improve access to justice, and root out frivolous lawsuits. Thomson Reuters is training its own LLM on data it doesn’t really own, but simply aggregates into databases it calls “proprietary”. And now it wants to stop competing LLMs from training on said databases. For comparison, CanLII in Canada and parts of SOQUIJ also have proprietary elements but remain free to the public for what matters most, namely jurisprudence and case law. The parts that are not yet free, such as docket access, shouldn’t be monetized either. Docket information is completely free in states like California and in the UK.

It feels like Thomson Reuters wants to stall innovation and monopolize legal LLMs

Indeed, Thomson Reuters is accusing Ross Intelligence of unlawfully copying content from its legal-research platform Westlaw to train a competing artificial intelligence-based platform. A decision by U.S. Circuit Judge Stephanos Bibas sending the case to a jury sets the stage for what could be one of the first trials related to the “unauthorized” use of data to train AI systems.

When you pay Westlaw a salty hourly fee to access its databases, nothing precludes you from copying this information at will for whatever purpose you need it for, which evidently includes training LLMs. If anything, there should be more LLMs training on Westlaw’s databases.

This is very different from tech companies such as Meta Platforms, Stability AI, and Microsoft-backed OpenAI facing lawsuits from authors, visual artists, and other copyright owners over the use of their work to train the companies’ generative AI software. Authors, artists, and copyright owners actually own copyright over the works that have been used without their consent. The same cannot be said of Thomson Reuters. Nobody gave it a license to build those databases, because a license is not required in the first place. In theory, anyone, or a bot, can make such databases by compiling publicly available information.

The issue revolves mainly around the “headnotes”, which summarize points of law in court opinions. These are citations extracted from the opinions themselves, something of an extremely detailed bullet-point deconstruction of the legal analysis. Students do that every day. Another thing about the headnotes: handy as they are, you do need a bot to go through them all, because they end up taking more space than the entire judgment. I don’t agree that they are proprietary. I tend to agree with the defendant that they are fair use.

Ross said that the headnote material was used as a “means to locate judicial opinions” and that the company did not compete in the market for the materials themselves. Thomson Reuters responded that Ross copied the materials to build a direct Westlaw competitor.

The court decided to leave it up to the jury to decide fair use and other questions, including the extent of Thomson Reuters’ copyright protection in the headnotes. The judge noted that there were factors in the fair-use analysis favoring each side, and said he could not determine whether Ross “transformed” the Westlaw material into a “brand-new research platform that serves a different purpose,” which is often a key fair use question.

Yes, but that is not the only factor. Fair use analysis would only apply if Westlaw had copyright over the headnotes to begin with. I think the headnotes are themselves already fair use, in a sense, if we accept that judgments and papers are protected by copyright in theory, even though that is unenforceable in practice. I don’t see why you would need to prove transformative use when training models on someone else’s fair-use material in a context where there is no economic right in the core content to begin with. It is indeed an interesting case.

“Here, we run into a hotly debated question,” Judge Bibas said. “Is it in the public benefit to allow AI to be trained with copyrighted material?”

I would answer the question with a resounding: YES.

Hollywood Writers Strike Ends After 146 Days; Actors Strike Continues

On Sunday, Hollywood’s striking writers, represented by the Writers Guild of America (WGA), and the studios, represented by the Alliance of Motion Picture and Television Producers (AMPTP), reached a tentative agreement (subject to final contract language). The tentative agreement has finally been declared a “victory”. The writers’ strike began on May 2 and followed five …

Meta’s Llama 2 Runs On Open Source

This summer, the AI division of Mark Zuckerberg’s Meta unveiled its Llama 2 chatbot. Meta’s approach with Llama 2 contrasts with that of OpenAI, the company that created the AI chatbot ChatGPT, because Llama is open source, meaning that the original code is freely available and can be researched and modified. This strategy has sparked a vast …

How To Know If AI Is Sentient, Neuroscientists Provide a Checklist

In 2022, Blake Lemoine at Google stirred up a media storm by proclaiming that one of the chatbots he worked on, LaMDA, was sentient, and he was subsequently fired. In fact, most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could …