Who knew that to “replace” one human for 5 minutes, AI would need to rob thousands of living humans of their data and IP and force dozens of other humans to produce text or code in a matter of minutes. After all, aren’t humans supposed to tremble at the idea that AI can converse, write, and program better than humans? Wasn’t AI supposed to have replaced most humans by now?

At this point it is hard to keep up with all the IP lawsuits against AI chatbots. Here is a comprehensive overview of 3 new class actions against OpenAI, GitHub, and Meta alleging that ChatGPT and Meta’s LLaMA are trained on illegally acquired datasets containing the plaintiffs’ works, obtained from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, including BitTorrent systems.

IP is not the limit: AI will also need all your personal information to credibly replace you

Today, we have another proposed class action suit against Google, filed in California, claiming that the giant’s AI product development depends on stolen web-scraped data and vast troves of private user data from Google’s own products. The 90-page lawsuit further claims that Google’s revised privacy policy purports to give it permission to take anything shared online to train and improve its AI products, including personal and copyrighted information, from which Google profits by the billions.

There are two classes under this suit, an internet-user class and a copyright class, and ten counts: (1) violation of the California Unfair Competition Law (Cal. Bus. & Prof. Code §§ 17200, et seq.) for unlawful, unfair, and deceptive practices; (2) negligence; (3) invasion of privacy under the California Constitution; (4) intrusion upon seclusion; (5) larceny / receipt of stolen property, for taking individuals’ personal information to train AI and for tracking, collecting, and sharing private information without consent; (6) conversion; (7) unjust enrichment; (8) direct copyright infringement; (9) vicarious copyright infringement; (10) violation of the Digital Millennium Copyright Act (17 U.S.C. § 1202(b)).

Enter the bot-slaves, so AI can really really eventually replace you

We see two facets of the AI rollout, both based on unhinged capitalism and corporate supremacy and incompatible with a sustainable future for our planet: (1) uncontrollable IP and data theft by corporations; (2) underpaid, overworked humans, plunging humanity back into a de facto slave regime, a.k.a. neofeudalism.

As of today, we officially learn from internal documents that Google workers are being asked to carry out complex chatbot-feedback instructions within minutes. It turns out that chatbots and the companies behind them depend on human reviewers to put together intelligible bot responses and maintain consistency. In other words, the chatbots’ intelligence was never very artificial to begin with. There is nothing more to AI than basic automation and mass exploitation of human intelligence. As The India Times puts it, “Human intelligence is complaining about doing the job of Artificial Intelligence.”

Bloomberg reports that Google Bard is highly dependent on humans who are overworked, underpaid, and frustrated. Groups of contract workers from companies such as Appen and Accenture work behind the scenes with minimal training, for poverty wages ($14 an hour!!!) during inflation, to support generative AI. Under insane time constraints, these workers evaluate the chatbot’s answers, provide feedback on errors, and remove traces of partiality. Now, their workload has increased in size and complexity amidst Google’s AI race with OpenAI.

Even without specialized knowledge, these workers were responsible for evaluating responses on a variety of topics, including medication dosages and state regulations, while following complex instructions and completing tasks on tight deadlines, some as short as three minutes, causing extreme workplace anxiety. In May, a contract worker for Appen wrote a letter to Congress stating that the speed at which they are required to review content could make Bard a faulty and dangerous product.

Last month, six contract workers at Appen who trained Google’s new AI chatbot were unjustly fired for speaking out about low pay and unreasonable deadlines, according to a complaint filed with the National Labor Relations Board. They were terminated just two weeks after warning Congress about potential dangers from the chatbot, Bard. Appen cited “business conditions” as the reason for termination and has not commented further.

The workers claimed that their work was evaluated through automated means that were difficult to understand. They could not communicate directly with Google and could only provide feedback through a “comments” section for each task. Additionally, workers had to work quickly, as an AI system flagged them and urged them to work faster. According to the workers, they came across disturbing content such as bestiality, war footage, child pornography, and hate speech as part of their job evaluating the quality of Google’s offerings.

Some workers at Accenture said they were temporarily assigned to review inappropriate and offensive prompts. However, after one worker filed an HR complaint, the project was suddenly stopped for the US team. Nevertheless, teams at other locations are still continuing this practice. Anonymous sources familiar with the situation report that Accenture employees were also asked to provide creative responses to prompts for Google’s AI chatbot, Bard. They were given tasks such as writing a Shakespearean-style poem about dragons or debugging code.

In a statement, Google said the company is not the employer of any of the workers in question. Instead, the suppliers are responsible for setting the working conditions, such as pay, benefits, hours, tasks assigned, and employment changes.