Online Harms Bill Must Address Platform Liability And Provide For Swift Banning Of Platforms

Contrary to my previous objections to the Online Harms Bill, which I criticized as “too little too late nothingburger” and “disappointing” because age verification is missing, I am now finding new ways to work with this law to arrive precisely where we need to get regarding corporate criminal liability of platforms. Given that we don’t have the sociopathic section 230 CDA here, all we need is to be bold and move fast, before the law is struck on constitutional grounds by corporate lobbies.

The Online Harms Bill creates a very welcome tool to repress rampant tech facilitated crimes, by reversing the criminal law onus, in other words we can finally say that anyone who produces and disseminates harmful content is by definition guilty until proven otherwise.

Among many things, I see a clear possibility to raise criminal sentencing for child pornographers from nothing to perpetuity through the Online Harms Bill, simply by proving that juvenile porn is according to United Nation reports a most blatant instance of hate speech and antisocial behaviour. Interference with minors is absolutely encompassed in the current hate speech definition. Moreover, we have decades of studies and reports on the societal decay and breakdown resulting from technology facilitated violence (a.k.a. hate speech) against women and children.

My understanding is that we will be setting up administrative tribunals where you don’t need to be a member of a bar, you can be a social worker and hand out life-sentences. To accelerate trials and sentencing, we can also implement AI decision-makers like in the European Court. They seem to be doing pretty well so far.

We have extensive reports on the ways that platforms knowingly encourage and perpetuate hate speech, mainly in the form of tech facilitated violence. Honestly, I don’t see how user-generated and hardcore porn (and anything that is not LGBTQ+) will get a hate-speech exemption, given the Privacy Commissioner report (that stayed hidden for as long as it possibly could) specifically on how consent of unwitting “performers” is NEVER verified on Aylo. Even the new “safeguards” Aylo brought forward include the possibility to consent for somebody else by providing a release form. As if a user couldn’t produce a fake release. I had 9 remixes commercialized to my name and someone gave a release signed by someone pretending to be me to a US publisher, so Aylo’s efforts are total bullshit in that regard. The rest is voluntary blindness by pro-Aylo officials. This is just one example of organized inefficiency.

The Online Harms Bill should also allow victims from outside of Canada to file complaints. We learned from parliamentary sessions on the status of women that intimate partner violence victims are fleeing Canada, because the criminal justice system here intentionally compromises their safety by protecting and releasing violent criminals. We saw in these sessions that reps from the current administration were antagonizing and harassing victims (survivors left in tears), which shows that officials political interests are aligned with the rise of technology facilitated violence. It is our duty to take the Online Harms Bill and use it against all the corporations and their users these officials try to protect. It is a small sacrifice to stop speech temporarily (voluntarily remain silent or shut down or pause social media accounts) until we weed out the bad apples once and for all.

I am currently examining a report from 5 years ago, called Deplatforming Mysogyny on platform liability for technology facilitated violence, and will compare it with the efforts brought forward in the Online Harms Bill. The report explains how digital platforms business models, design decisions, and technological features optimize them for abusive speech and behaviour (the current definition of hate speech) by users and examine how tech violence always results in real life violence and harm. It is funny how we’ve known all these years that tech platforms are destroying society by encouraging violence and murders, but allowed them to stay in business.

As early as 2018, the Report of the Special Reporteur on violence against women, UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018) reports that “Information and communications technology is used directly as a tool for making digital threats and inciting gender-based violence, including threats of physical and sexual violence, rape, killing, unwanted and harassing online communications or even the encouragement of others to harm women physically. It may also involve the dissemination of reputation harming lies, electronic sabotage in the form of spam and milgnant viruses, impersonation of the victim online and the sending of abusive emails or spam, blog posts, tweets or other online communications in the victim’s name. Technology facilitated violence may also be committed in the work place or in the form of so-called honour-based violence by intimate partners […]

It is therefore important to acknowledge that the Internet is being used in broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the internet and other information and communications technology. Emerging forms of ICT have facilitated new types of gender-based violence and gender inequality in access to technologies, which hinder women’s and girls’ full enjoyment of their human rights and their ability to achieve gender equality. […] 

The consequences of harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of cultural inequality, discrimination, and patriarchy. Women subjected to online violence are often further victimized through harmful and negative gender stereotypes, which are prohibited by international law.”

If intentionally sexualizing individuals or a group of people in order to deprive them of the basic enjoyment of their human rights is not hate speech, good luck proving otherwise.

Tech facilitated gender based violence is further defined as being rooted in, arising from, and exacerbated by misogyny, sexist norms, and rape culture, all of which existed long before the internet. However TFGBV in turn accelerates, amplifies, aggravates, and perpetuates the enactment of and harm from these same values, norms and institutions, in a vicious circle of technosocial oppression. (Source Jessica West)

Deplatforming misogyny gives several examples of hate speech:

  • Online Abuse: verbally or emotionally abusing someone online, such as insulting and harassing them, their work, or their personality traits and capabilities, including telling that person she should commit suicide or deserves to be sexually assaulted
  • Online Harassment: persistently engaging with someone online in a way that is unwanted, often but not necessarily with the intention to cause distress or inconvenience to that person. It is perpetrated by one or several organized persons, as in gang stalking (source Suzie Dunn)
  • Slut-shaming (100% hate-speech) can be perpetrated across several platforms and may include references to the targeted person’s sexuality, sexualized insults, or shaming the person for their sexuality or for engaging in sexual activity. This type of hate-speech has the objective to create an intimidating, hostile, degrading, humiliating or offensive environment (UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018))
    • Discussing someone else’s sexuality is kind of always a red flag and criminal defense lawyers (among many other professionals) are totally engaging in hate speech in total impunity, just saying. Something needs to change or the legal industry should be completely eliminated from enforcing a clean internet. They should have zero immunity for perpetrating hate-speech and thereby encouraging violence against women and children.
  • Non-consensual distribution of intimate images: (see Aylo’s business model) circulating intimate or sexual images or recordings of someone without their consent, such as where a person is nude, partially clothed, or engaged in sexual activity, often with the purpose of shaming, stigmatizing or harming the victim. (also known as image based abuse and image-based sexual exploitation). The UN warns against using the term “revenge porn” because it implies that the victim did something wrong deserving of revenge.
  • Sextortion: attempting to sexually extort another person by capturing sexual or intimate images or recordings of them and threatening to distribute them without consent unless the targeted person pays the perpetrator, follows their orders, or engages in sexual activity with or for them.
  • Voyeurism: criminal offense involving surreptitiously observing or recording someone while they are in a situation that gives rise to a reasonable expectation of privacy.
  • Doxing: publicly disclosing someone’s personal information online, such as their full name, home adress, and social insurance number. Doxing is particularily concerning for individuals who are in or escaping situations of intimate partner violence, or who use pseudonyms due to living in repressive regimes or to avoid harmful discrimination for aspects of their identity, such as being a transgender or sex worker. (see: The Guardian: Facebook’s real name policy hurts people)
  • Impersonation: taking over a person’s social media accounts, or creating false social media accounts purporting to be the victim, usually to solicit sex or make compromising statements.
  • Identity and Image Manipulation, i.e. Deepfake videos: use of AI to produce videos of an individual saying something they did not say or did not do. In reality, video deepfakes are kind of fringe. The current AI applications are mainly focused on sexualizing and undressing women through unauthorized use of Instagram photos.
  • Online mobbing, or swarming: large numbers of people engaging in online harassment or online abuse against a single individual (Amber Herd comes to mind)
    • The Depp and Herd trial is an example of court-enabled hate-speech. The way Herd was cross-examined on television falls within the definition of incitement of violence against victims of intimate partner violence. This trial harmed the reputation of the profession beyond any repair and resulted in uncontrollable online mobbing.
  • Coordinated flagging and Brigading are cited in the report but I am not at all convinced that they are user-perpetrated. I believe that algorithmic conduct is 100% on the platforms. Users have zero control and liability in that regard. Nice try, but nope. If a survivor is taken down, I won’t let platforms get away with “users did it”. No way. Saying otherwise is pro-corporate propaganda.
  • Technology aggravated sexual assault: group assault which is filmed and posted online. Here is where the Online Harms Bill can be used to sentence perps to life in prison, something that can’t be achieved under the criminal code.
  • Luring for sexual exploitation: i.e. grooming through social media, or through fake online ads, in order to lure underage victims into offline forms of sexual exploitation, such as sex trafficking and child sexual abuse. Here is another instance of hate speech deserving of a life-sentence.

To be continued in another post: it is a long report (or to be more precise a bundle of legal and UN reports) and the bill is also a handful. I am only skimming the surface of the most prevalent forms of hate-speech which invariably equate to incitement of gender-based and intersectional genocide (see report on missing and murdered indigenous women and how it amounts to genocide). Just to say I can work with that bill. Bring it!


Law school messed too much with my head by convincing me that I care about human rights for violent criminals and procedural safeguards for perp corps. I never did. It feels good to be my dystopian self again.

Ban Speech

to show how serious I am, I just censored my post. This is so you can exercise your telepathic skills and guess what is hiding behind the emoji. But I will give you a hint: I love what see in Brazil. It is the answer to “freedom of speech, not freedom of reach”. Until now, the only thing that’s been getting freedom of reach is interestingly porn. How about the freedom to take down a whole platform and criminalize VPNs. If all speech must go down in the process, so be it. Thank you, Justice de Moraes! Literally my most favorite judge at the moment.

Btw, Age verification and Digital ID were ideas from the times when we were nice, now we want ALL speech banned. It’s so much easier. Freedom to eff off from the internet and to never see an active user again as a platform.

I forgot to thank the 3rd circuit for recently clarifying that algorithmic curation such as recommending content is a First Amendment-protected publishing activity, therefore Section 230 immunity doesn’t apply and Tiktok can be held liable for people dying (or damages flowing) as a result of challenges that show up in users’ FYP pages. A shy first step, but a big step forward. About time to kick section 230 out of orbit. We need more judges like de Moraes.

ABBA Shows that Blanket Licenses Shouldn’t Exist and Music Labels Must Be Banned Too

Contrary to all commercial logic, I believe that artists should be the only ones who decide whether and when their music plays or not. For convenience you may want to license your stuff to a label who licenses it to a distributor who keeps licensing it to people you don’t like, but it chips away little pieces of your soul until you can’t look at yourself in the mirror. One way to protect yourself is to make sure you revoke and terminate all deals and licenses prior to elections, because in the current capitalist state of affairs, licensing things for profit doesn’t allow you to just take shit down when you feel like it. Still, I think that nothing is more important than the right to destroy what you create if you don’t consent to a licensee using it.

Takedowns are a stand against corporate slavery

Naturally I love the current heart-warming chaos of label-owned artists standing up to performance rights licenses and requesting takedowns, nothing could be more human and anti-corporate than that. In a way it is a revolution against the commerciality of music. We are brainwashed to think that commercializing stuff equates success. I couldn’t disagree more and I simply do not see money as something that equates value or remotely reflects the actual value of things.

Nothing matters more than your integrity in life, If you don’t want your music to play, do what you have to do. You may get sued to give back royalties you earned through compulsory licensing (and possibly the losses your label will incur due to your breach of contract) but you will be remembered like a hero for that breach. If someone opposed to your values plays your music, the first step is to say that you do not consent AND give back all profits, before anyone asks you. Then fire your label. Never miss an opportunity to express absence of consent loud and clear through words and conduct. Especially if you’re on the level of ABBA, you kind of have a moral responsibility to give an example to the world. Once you say you don’t consent, you need to let go of the royalties. Otherwise you push disinformation by contradicting your words while actually consenting to seemingly unacceptable things. I won’t judge people who remain silent, but I will totally cancel anyone who doesn’t walk their talk.

I am the last person who will stand in the way of total annihilation of capitalism (and its byproducts such as all dictatorships, fascism, communism, and similar slavery-based regimes). Here I see a great opportunity and momentum to begin questioning corporate control of music. Moment to moment consent should override all commercial licenses. Also, corporate interests should always come behind human beings feelings and sensitivities, such as the fundamental right to change your mind and void contractual obligations if a corp doesn’t act according to your values.

If laws cannot be adopted fast enough to amend copyright acts around the world and give artists total immunity against lawsuits, I’d be cool to limit music licensing through executive decrees and create a new expeditious takedown process with heavy daily penalties for corps who refuse to comply promptly with human requests.

Entheon By Illusionaries in London, UK

Entheon is an immersive groundbreaking immersive exhibition that brings the profound works of Alex and Allyson Grey to the UK and Europe for the first time. The exhibition is an international project, uniting technology and production teams to bring the vision to life. According to Salar Nouri, Creative Director and Curator at Illusionaries, Entheon “breaks new ground, entering a realm where art, love, and spirit converge in a unique celebration of creativity.”

Exploring Humanity and Spirituality

Entheon offers a rare opportunity to delve into the Greys’ visionary perspectives on consciousness, perception, and the human spirit. Their artwork explores the interconnectedness of the physical and spiritual worlds, providing a profound exploration of self.

360-Degree Immersive Experience

Visitors embark on a 15-minute journey through Entheon’s godly faces, encouraging exploration of inner creativity. A mirrored room features animated CG adaptations of the Greys’ paintings, transforming their art into a dynamic experience. This space, inspired by the Greys’ visionary minds, creates a labyrinth of visual and spiritual exploration.

A New Era in Immersive Art

Entheon heralds a new era in the appreciation for immersive art, pushing the boundaries of creativity and spirituality. This unparalleled experience is now open to the public at Illusionaries, London’s experiential art hub.

The sight of Alex and Allyson Grey art always takes my breath away, but this is on a whole new level. I can’t wait to see it in person. You can get your ticket here.

Also, if there ever is another pandemic, I can see this type exhibition doing extremely well on the metaverse.

NFT Scams On The Rise

NFT’s are pretty much obsolete right now, but it seems that people continue falling for numerous NFT scams. Here is a most common phishing example. First, scammers send one or several NFTs to your wallet. Then you receive an offer through email which looks like this :

Hi,

We’re thrilled to share exciting news about your  NFT portfolio!  One of your listings has attracted significant interest. Here’s a quick snapshot of the latest offer: 

  • Offer ID: 0xGo0D922p
  • Offered by: CryptoFans
  • Price: 1.93 ETH

Review Offer

Please take a moment to sign in to your account and explore this new opportunity. Should you have any queries or require support, our dedicated team is ready and eager to assist you.

Best regards,
Opensea Team

It is signed by Opensea that recently had a data breach BUT the email originates from someone called cognitosystems. I obviously removed the link associated with “Review Offer”. I just left it as a link for visual illustration. Please do not click on any link you receive by email in relation to NFT’s. This scam is a classic phishing operation, designed to steal your wallet credentials. If you really think there is an offer of any kind, login through your wallet. Do not trust any NFT offers by email.

California AG v Nudify and Other Deepfake AI

As usual, unbridled “free speech”, voluntary blindness, minimization of harm, and inexistent enforcement of laws against gender based violence invariably impact women and girls. Given that there has been little political or judicial will to stop intimate violence, it is hardly surprising to see generative AI being hijacked to produce ever more nonconsensual intimate images of women and girls, as is the case with the latest anti-social trend of “undress technology” being widely used in schools by teenage boys who undress their teachers and classmates for the purpose of causing long-lasting harm and inciting girls to commit suicide. While videos are harder to produce, the creation of images using “undress” or “nudify” websites and apps has become commonplace.

Big tech and investors are complicit and should be subjected to criminal investigations, but aren’t. An alarming report by 404 Media shows that violence through deepfake technology is intentionally promoted and knowingly encouraged by Big Tech platforms be it via targeted ads on social media or directly in app stores as it appears on the top of searches.

As if this weren’t enough, WIRED reports that Big Tech platforms further facilitate violence against women by allowing people to use their existing accounts to join the deepfake websites. For example, Google’s login system appeared on 16 such websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites. The login systems have been used despite the tech companies terms and conditions that state developers cannot use their services in ways that would enable harm, harassment, or invade people’s privacy.

Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience. We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse).

After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.

The tech company logins are often presented when someone tries to sign up to the site or clicks on buttons to try generating images. It is unclear how many people will have used the login methods, and most websites also allow people to create accounts with just their email address. However, of the websites reviewed, the majority had implemented the sign-in APIs of more than one technology company, with Sign-In With Google being the most widely used. When this option is clicked, prompts from the Google system say the website will get people’s name, email addresses, language preferences, and profile picture.

“In order to use Sign in with Google, developers must agree to our Terms of Service, which prohibits the promotion of sexually explicit content as well as behavior or content that defames or harasses others,” says a Google spokesperson, adding that “appropriate action” will be taken if these terms are broken. Other tech companies that had sign-in systems being used said they have banned accounts after being contacted by WIRED.

“We must be clear that this is not innovation, this is sexual abuse. These websites are engaged in horrific exploitation of women and girls around the globe. These images are used to bully, humiliate, and threaten women and girls”, says David Chiu, San Francisco’s city attorney.

This fiasco has prompted San Francisco’s city attorney to file a lawsuit against undress and nudify websites and their creators. Chiu says the 16 websites his office’s lawsuit focuses on have had around 200 million visits in the first six months of this year alone. The lawsuit brought on behalf of the people of California alleges that the services broke numerous state laws against fraudulent business practices, nonconsensual pornography and the sexual abuse of children. But it can be hard to determine who runs the apps, which are unavailable in phone app stores but still easily found on the internet.

The undress websites operate as shadow for profit businesses and are mainly promoted through criminal platforms like Telegram who notoriously push child porn and human trafficking worldwide under the guise of “free speech”. The websites are under constant development: They frequently post about new features they are producing—with one claiming their AI can customize how women’s bodies look and allow “uploads from Instagram.” The websites generally charge people to generate images and can run affiliate schemes to encourage people to share them; some have pooled together into a collective to create their own cryptocurrency that could be used to pay for images.

As well as the login systems, several of the websites displayed the logos of Mastercard or Visa, implying that banks are entirely on board with deepfake technology although they claim otherwise. Visa did not respond to WIRED’s request for comment, while a Mastercard spokesperson says “purchases of nonconsensual deepfake content are not allowed on our network,” and that it takes action when it detects or is made aware of any instances.

On multiple occasions, the only time tech companies and payment providers intervene is when pressured by media reports and requests by journalists. If there is no pressure, it is business as usual in the realm of violence against women and girls. And we all know it is a lucrative one.

“What is concerning is that these are the most basic of security steps and moderation that are missing or not being enforced. It is wholly inadequate for companies to react when journalists or campaigners highlight how their rules are being easily dodged. It is evident that they simply do not care, despite their rhetoric. Otherwise they would have taken these most simple steps to reduce access.” Clare McGlynn, law prof at Durham University

No, they don’t care. We must ban speech altogether and start from scratch.

USPTO Updated Guidance On AI Assisted Inventions

Yesterday, the USPTO issued an updated guidance for subject matter eligibility of AI assisted inventions with very useful subject matter examples throughout the guidance. https://www.uspto.gov/about-us/news-updates/uspto-issues-ai-subject-matter-eligibility-guidance

IV. Applicability of the USPTO Eligibility Guidance to AI-Assisted Inventions

For the subject matter eligibility analysis under 35 U.S.C. 101, whether an invention was created with the assistance of AI is not a consideration in the application of the Alice/Mayo test and USPTO eligibility guidance and should not prevent USPTO personnel from determining that a claim is subject matter eligible. In other words, how an invention is developed is not relevant to the subject matter eligibility inquiry. Instead, the inquiry focuses on the claimed invention itself and whether it is the type of innovation eligible for patenting.

My Top Films of 2024 To Date

I try to watch a film a day. I only select movies that don’t contain gun violence or open political messages. I avoid horror films and everything war-related, be it historical, modern, or science fiction. If it is about war, I don’t watch it. End of the world and apocalypse, only if it is not war-related, like Don’t Look Up a few years back or the Obama feature with Julia Roberts. I watched the latter in part. Technically, I can’t watch 90% of the films due to concerning subject matter outlined above that I consider a danger to a free and democratic society and humanity as a whole. Everything else is on the table.

I keep handwritten notes while I watch and then compile titles in a separate List of Watchables for movies I’d like to see again. This year, less than 15 movies made the list since January. These are the movies I’d pay money to watch again, with links to trailers.

  1. Fallen Leaves by Aki Kaurismäki
  2. Godzilla vs Kong, The New Empire
  3. Godzilla minus One (I made a war exception for this one, because I only heard good things, it was depressing but also deeply moving and contained a strong anti-war message)
  4. Embrassez qui vous voudrez (made in 2002 but I only accessed it this year and was so impressed, I started digging into more French movies and went through the entire Cannes line-up from last year)
  5. Marie-Line et son juge
  6. Une année difficile
  7. Entre nous (Between Us) (very moving LGBT themed)
  8. 3 jours max (Light physical comedy, you laugh from start to end)
  9. La Petite
  10. Complètement crammé (John Malkovitch speaks French lines and sounds great)
  11. La passion de Dodin Bouffant (first half is hypnotizing)
  12. La Chimère (Italian film, breathtakingly beautiful and very emotional, characters you remember, loved everything, can’t wait to see it again)
  13. Mme Mills, Une voisine si parfaite by Sophie Marceau (made in 2018, accessed on Netflix)
  14. Cocorico (a blast, great script, unequaled dialogue)

I would’ve absolutely loved the film La Bête . It has an Inception-ish vibe, but contains unnecessary scenes of animal abuse, without these scenes it would be in my list.

Suno AI Lawsuit Breakdown

This complaint is very similar to the Udio complaint. I will address different points. Suno is the first Music AI platform I started testing last month. Others including Udio followed through word of mouth. Prior to May, there were no viable music AI platforms according to professional standards, but Suno’s latest version opened the floodgates of creativity – the industry mentions 10 new songs a second on Suno alone – and there are already a good few dozens of platforms quickly catching on.

In a way, everything we may say now about AI is at a very early stage of training, building, debugging and adjusting and is evolving as we speak through the invaluable input of millions of user pioneers. We are seeing progress unfold at the speed of light before our eyes. Everyone is learning, AI is learning and countless users who never made music in their lives are also learning about making music, with each platform providing valuable tips and tricks. There is a process of demystification and breakdown of loops, beats, melodies, and vocal flows in different languages, as well as deconstruction and re-appropriation of the music production process. It brings tears to my eyes to see so many users become creators instead of passive consumers.

Many users throughout platforms mention that since AI came along, their favorite songs are the songs they made themselves. This is fantastic for humanity. Obviously, these users have now less time to listen to commercial songs. Until now, we had to listen to everything the industry imposes on us, because there was no alternative to learn from, other than public domain. It was time-consuming, frustrating, and depressing due to violent, reductive, and misogynistic lyrics and systemic undue sexualization and dehumanization of artists by the industry. Now that AI listens to these commercial “hits”, we can protect our ears while focusing on more productive things that bring us joy. In a way AI doesn’t do anything more than we’d be doing without AI, but AI saves us time and protects our emotional well-being and integrity by ingesting and filtering the trash the industry throws at us, so that we can minimize our exposure to harmful content.

Can the music industry really stop progress and continue keeping AI for themselves?

In both complaints we see that the platforms refuse to disclose what data they trained their models on. They claim it is proprietary information. The reasoning behind refusing to disclose training particulars may be that anything related to training is a trade secret and training in itself is fair use.

Ideally a LLM should have no restrictions regarding training and they shouldn’t pay for data that is publicly available. Copyright law specifically provides a training / education exemption under its fair use doctrine which may differ from one country to another, but essentially recognizes that non-commercial and transformative activity which is good for humans and society in general justifies limiting the ability of rights-holders to derive profit from copyright. Without fair use exceptions, there would be no journalists, no standup comedians, no content creators, no Youtube or TikTok, no parodies, no criticism (i.e. pop art), etc.

I can certainly copy an entire song to break it down and learn how it was made note by note. Why can’t AI? When I need to learn a music video choreography, I copy entire videos from the internet, I break them down into sections which I then further copy (several times per section, slow then normal speed) into a myriad of little video tutorials that I watch a million times until I get the moves right. While I learn the moves, I reproduce these moves with my own body which I film (another countless times) and edit into new videos. This is a 100% fair use example (and btw it’s true, I do that every day). Why can’t AI do the same with music? What’s the difference? Why does it stop being fair use when AI does the copying for the purpose of training rather than a user trying to learn a song or a dance?

It seems that both complaints put much effort in proving that the LLMs copied entire songs for training. They are not really denying it. Training is clearly a transformative process. I think what the fuss is revolving around is whether there is such a thing as “excessive training” that should be excluded from fair use defenses.

In Para. 12, the plaintiffs suggest that music generated on AI platforms is NOT human-created work! This is a strange insult to millions of human users. I’m pretty sure this qualifies as hate speech. Last time I checked, I am human and I write my own lyrics. Yet another lowly and unfounded attack. Why do they think they are the only humans in the room. WTF!

Due to the dehumanizing characterization of human users as non-human, I am not going to read the rest of the complaint. Sorry, but I can’t deal with more hateful content. Not on Canada Day. I’ll let my bot finish the job but I won’t publish the result.

Udio Complaint Entirely Based On Industry Infringing Its Own Lyrics

I am reading the Udio complaint right now. It is a little more than a “nothingburger” as the majority of users and IP lawyers have overwhelmingly noted. It is also an example of how to make a mockery of the justice system, beginning with basing an entire claim on self-serving evidence, more precisely all the evidence is based on intentional infringement of industry-owned lyrics. The only thing the plaintiffs are capable of proving with this lawsuit is how they hypothetically infringed their own lyrics, forced AI to further infringe their copyright through very precise instructions, and obtained a copyright infringing result. Several times.

If copyright law has ever been clear about something since the 18th century is not to copy other people’s texts without their consent. If you give AI infringing lyrics, it will come up with an infringing output, how surprising is that.

This lawsuit is a coaxing manual. How about, we copied the actual chorus from Michael Jackson’s Billy Jean lyrics and directed Udio to sound like Michael Jackson in as much detail and likeness as possible, and Udio made a song that resembles Billy Jean!!! So, the plaintiffs entered into prompt the excerpt “Billy Jean is not my lover, she’s just a girl who claims I am the one”. One can’t make this up. This is monumental bad faith and a waste of time of judicial resources.

Moving on, the plaintiffs copied word for word lyrics excerpts from All I Want For Chrismas is You (disclaimer: I can’t stand this song), inserted the infringed lyrics into the prompt and the name Mariah Carey along with other personal and artistic characteristics of the artist and again, the platform gave them exactly what they wanted, a copyright infringing result.

The exact same thing happened to other very old songs My Girl, I Get Around (Beach Boys), Dancing Queen (solely based on “we can dance we can jive”), American Idiot (interesting choice of song), as well as other holiday songs.

On pages 27, 28 we have an interesting “artist resemblance” table I deemed useful to reproduce as an example of exactly how NOT to make music with AI. I doubt that the great majority of AI users have the same desperate clinging to has-beens as the plaintiffs imagine. Don’t these overexposed artists already have thousands of copycats who have never heard of AI? The market was already saturated with these styles before the advent of AI. Also, the table doesn’t specify what lyrics were used in the prompt, so it is safe to assume that from the outset the lyrics were infringed like in the previous examples.

I hope you read that. It was quite funny. I have a few favorites in there. You ask AI to recreate a famous song by a band that rhymes with the smeetles, and OMG, AI sounds like the Beatles. Do you seriously expect a music AI platform had never heard of the Beatles or did you force the AI to go out of its way to find out about “smeetles” and which famous band rhymes with… Smeetles?!? I looked it up. It is not a word.

Words are the most important thing for LLMs. This is why you can’t ask ChatGPT or Claude to answer your emails, because they see each word in the email they need to answer as a prompt and the result is guaranteed nonsense. Each word inside the prompt (even someone else’s email) is interpreted separately as a part of an instruction, you must think like an algorithm for a minute and understand how a model interprets words.

Unless the model, like the latest Udio, is specifically programmed to ignore the artists names and rhymes thereof (eyeroll really), it will always try to reproduce as accurately as possible the instructions contained in words a human provides. This is why it will always be human users who will bear liability for AI’s output.

The complaint goes on to say that Udio copied other people’s vocals. I agree that it is the case and I agree it is not cool, but that’s the courts fault. There is little will to grant copyright to vocal performers, even in jurisdictions like Canada where vocal performances are specifically protected by the Copyright Act.

I spent 4 years in court trying to stop a label from remixing and selling my own vocal samples, and the only reason I won is because the contested vocals were attached to my own original lyrics in a distant slavic language, so it became eminently clear that the only way to enforce music copyright is to own the lyrics, something that continues being true in the field of AI.

The rest of the complaint adresses the fair use test, so that’s for the jury to decide. On a first sight, the main grievance appears to be the notion of “competition”. The industry is obviously diverting the fair use doctrine in order to enforce an anti-competitive monopole on all the musical loops in the world and trying to use the justice system to prevent any new music being made, unless they own the rights. That in my opinion is another sign this is an abusive lawsuit.

One thing I’m hearing from everywhere on this issue is that if the courts side with the music industry, nothing is in place to stop Russia and China to keep infringing the industry’s IP with the same tools, fair use or not, and they will flood us with their own commercial versions of AI generated output and will charge us for it, while our unsustainable music industry keeps dying anyway. There comes a moment when you just can’t afford to stifle innovation as a court.