Why I Will Avoid The Substance Movie

The reason I want implementation of age verification for adult content is not because I want to access such content, but precisely because I don’t ever want to access it. If a site asks me to show my papers, I will know to avoid it. I know I cannot force other people to protect their mental health, but personally I am dreaming of a clean internet and a world without war. And I’d be cool with no internet if it can’t be cleaned up. For now, the rule is, if it is safe for the smallest of little children, it is safe for me. Otherwise, it is not worth my time. So, the answer is in the intro, but I will go on.

Although I was initially looking forward to that film, I heard so many spoilers and parental advisories that I can safely conclude, without even watching it, that its sole purpose is to depict extreme violence and extreme sexualized violence against women under the pretext of consensual medical interventions. Taking two of the most watchable female actors and making something unwatchable… how can you even!

At this point, I am convinced that AI can do a better job at writing scripts because human intelligence seems to have caught an incurable misogyny virus in Hollywood. You can’t conceal misogyny just because the person who wrote this garbage is a woman. Women directors in Hollywood clearly must show unflinching hatred for their own gender to be picked up by that boomer industry.

It reminds me of my second law internship, when I was asked to watch child porn to poke holes in child victims’ credibility and I was like “nah”, and so I don’t work in law anymore. How can I put this: I’d rather die than watch something I don’t want to watch. I also realized that if I continued representing this client, I might incidentally murder him, and then graphic movies would be made about me… I am actually about to cancel Cannes over this film too. It’s a point of no return. From now on, I will carefully filter anything out of Cannes before deciding whether or not it is safe for me to watch.

There definitely is a fake trend of body horror content out there (judging by the gazillion gory, true or not-so-true crime shows streaming right now), and it is interesting to find out why, because I did not consent to that. Maybe I am living in the wrong world, but it will continue to be a hard nope.

And by the way, I still watch hundreds of films every year (mandatory minimum of one film a day), it’s just that Hollywood fails to make the cut these days due to its obsession with war, guns, and violence against women.

Some of the official parental advisories on The Substance are the following:

  • Extensive & graphic full female nudity throughout entirety of movie.
  • Excessive body brutality exclusively to the female & nude body.
  • Fully naked females explicitly fondle their nude breasts, buttocks & bodies while the camera graphically zooms in.
  • Explicitly sexualized female imagery with close camera female body shots.
  • High impact blood and gore, injury detail and mutilation including closeups of grotesque imagery that cannot be described.
  • Confronting scenes of self-harm.
  • The most graphic and detailed drug use ever shown in a film. Frequent, highly disturbing close-ups of injections, including an instance where the injection site is infected and still injected.

At least movies come with parental advisories; on the internet, children can freely access any of the above gore, often without even trying. I am still trying to understand why the internet gets carte blanche in that regard.


Prediction: this film will be nominated for an Academy Award. That’s how unwatchable it is. Long live AI.

Online Harms Bill Must Address Platform Liability And Provide For Swift Banning Of Platforms

Contrary to my previous objections to the Online Harms Bill, which I criticized as a “too little too late nothingburger” and “disappointing” because age verification is missing, I am now finding new ways to work with this law to arrive precisely where we need to get regarding corporate criminal liability of platforms. Given that we don’t have the sociopathic section 230 CDA here, all we need is to be bold and move fast, before the law is struck down on constitutional grounds by corporate lobbies.

The Online Harms Bill creates a very welcome tool to repress rampant tech-facilitated crimes by reversing the criminal law onus: in other words, we can finally say that anyone who produces and disseminates harmful content is by definition guilty until proven otherwise.

Among many things, I see a clear possibility to raise criminal sentencing for child pornographers from nothing to perpetuity through the Online Harms Bill, simply by proving that juvenile porn is, according to United Nations reports, a most blatant instance of hate speech and antisocial behaviour. Interference with minors is absolutely encompassed in the current hate speech definition. Moreover, we have decades of studies and reports on the societal decay and breakdown resulting from technology-facilitated violence (a.k.a. hate speech) against women and children.

My understanding is that we will be setting up administrative tribunals where you don’t need to be a member of a bar; you can be a social worker and hand out life sentences. To accelerate trials and sentencing, we can also implement AI decision-makers like in the European Court. They seem to be doing pretty well so far.

We have extensive reports on the ways that platforms knowingly encourage and perpetuate hate speech, mainly in the form of tech-facilitated violence. Honestly, I don’t see how user-generated and hardcore porn (and anything that is not LGBTQ+) will get a hate-speech exemption, given the Privacy Commissioner’s report (which stayed hidden for as long as it possibly could) specifically on how the consent of unwitting “performers” is NEVER verified on Aylo. Even the new “safeguards” Aylo brought forward include the possibility of consenting for somebody else by providing a release form. As if a user couldn’t produce a fake release. I had 9 remixes commercialized under my name, and someone gave a US publisher a release signed by someone pretending to be me, so Aylo’s efforts are total bullshit in that regard. The rest is voluntary blindness by pro-Aylo officials. This is just one example of organized inefficiency.

The Online Harms Bill should also allow victims from outside of Canada to file complaints. We learned from parliamentary sessions on the status of women that intimate partner violence victims are fleeing Canada, because the criminal justice system here intentionally compromises their safety by protecting and releasing violent criminals. We saw in these sessions that reps from the current administration were antagonizing and harassing victims (survivors left in tears), which shows that officials’ political interests are aligned with the rise of technology-facilitated violence. It is our duty to take the Online Harms Bill and use it against all the corporations, and their users, that these officials try to protect. It is a small sacrifice to stop speech temporarily (voluntarily remain silent, or shut down or pause social media accounts) until we weed out the bad apples once and for all.

I am currently examining a report from 5 years ago called Deplatforming Misogyny, on platform liability for technology-facilitated violence, and will compare it with the efforts brought forward in the Online Harms Bill. The report explains how digital platforms’ business models, design decisions, and technological features optimize them for abusive speech and behaviour (the current definition of hate speech) by users, and examines how tech violence always results in real-life violence and harm. It is funny how we’ve known all these years that tech platforms are destroying society by encouraging violence and murders, but allowed them to stay in business.

As early as 2018, the Report of the Special Rapporteur on violence against women, UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018) reports that “Information and communications technology is used directly as a tool for making digital threats and inciting gender-based violence, including threats of physical and sexual violence, rape, killing, unwanted and harassing online communications or even the encouragement of others to harm women physically. It may also involve the dissemination of reputation-harming lies, electronic sabotage in the form of spam and malignant viruses, impersonation of the victim online and the sending of abusive emails or spam, blog posts, tweets or other online communications in the victim’s name. Technology-facilitated violence may also be committed in the workplace or in the form of so-called honour-based violence by intimate partners […]

It is therefore important to acknowledge that the Internet is being used in a broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the Internet and other information and communications technology. Emerging forms of ICT have facilitated new types of gender-based violence and gender inequality in access to technologies, which hinder women’s and girls’ full enjoyment of their human rights and their ability to achieve gender equality. […]

The consequences of harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of cultural inequality, discrimination, and patriarchy. Women subjected to online violence are often further victimized through harmful and negative gender stereotypes, which are prohibited by international law.”

If intentionally sexualizing individuals or a group of people in order to deprive them of the basic enjoyment of their human rights is not hate speech, good luck proving otherwise.

Tech-facilitated gender-based violence is further defined as being rooted in, arising from, and exacerbated by misogyny, sexist norms, and rape culture, all of which existed long before the internet. However, TFGBV in turn accelerates, amplifies, aggravates, and perpetuates the enactment of and harm from these same values, norms and institutions, in a vicious circle of technosocial oppression. (Source: Jessica West)

Deplatforming Misogyny gives several examples of hate speech:

  • Online Abuse: verbally or emotionally abusing someone online, such as insulting and harassing them, their work, or their personality traits and capabilities, including telling that person she should commit suicide or deserves to be sexually assaulted
  • Online Harassment: persistently engaging with someone online in a way that is unwanted, often but not necessarily with the intention to cause distress or inconvenience to that person. It can be perpetrated by one person or by several organized persons, as in gang stalking (source: Suzie Dunn)
  • Slut-shaming (100% hate-speech) can be perpetrated across several platforms and may include references to the targeted person’s sexuality, sexualized insults, or shaming the person for their sexuality or for engaging in sexual activity. This type of hate-speech has the objective to create an intimidating, hostile, degrading, humiliating or offensive environment (UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018))
    • Discussing someone else’s sexuality is kind of always a red flag, and criminal defense lawyers (among many other professionals) are totally engaging in hate speech with total impunity, just saying. Something needs to change, or the legal industry should be completely eliminated from enforcing a clean internet. They should have zero immunity for perpetrating hate-speech and thereby encouraging violence against women and children.
  • Non-consensual distribution of intimate images: (see Aylo’s business model) circulating intimate or sexual images or recordings of someone without their consent, such as where a person is nude, partially clothed, or engaged in sexual activity, often with the purpose of shaming, stigmatizing or harming the victim. (also known as image based abuse and image-based sexual exploitation). The UN warns against using the term “revenge porn” because it implies that the victim did something wrong deserving of revenge.
  • Sextortion: attempting to sexually extort another person by capturing sexual or intimate images or recordings of them and threatening to distribute them without consent unless the targeted person pays the perpetrator, follows their orders, or engages in sexual activity with or for them.
  • Voyeurism: criminal offense involving surreptitiously observing or recording someone while they are in a situation that gives rise to a reasonable expectation of privacy.
  • Doxing: publicly disclosing someone’s personal information online, such as their full name, home address, and social insurance number. Doxing is particularly concerning for individuals who are in or escaping situations of intimate partner violence, or who use pseudonyms due to living in repressive regimes or to avoid harmful discrimination for aspects of their identity, such as being transgender or a sex worker. (see: The Guardian: Facebook’s real name policy hurts people)
  • Impersonation: taking over a person’s social media accounts, or creating false social media accounts purporting to be the victim, usually to solicit sex or make compromising statements.
  • Identity and Image Manipulation, i.e. deepfake videos: use of AI to produce videos of an individual saying something they did not say or doing something they did not do. In reality, video deepfakes are kind of fringe. The current AI applications are mainly focused on sexualizing and undressing women through unauthorized use of Instagram photos.
  • Online mobbing, or swarming: large numbers of people engaging in online harassment or online abuse against a single individual (Amber Heard comes to mind)
    • The Depp and Heard trial is an example of court-enabled hate speech. The way Heard was cross-examined on television falls within the definition of incitement of violence against victims of intimate partner violence. This trial harmed the reputation of the profession beyond any repair and resulted in uncontrollable online mobbing.
  • Coordinated flagging and Brigading are cited in the report but I am not at all convinced that they are user-perpetrated. I believe that algorithmic conduct is 100% on the platforms. Users have zero control and liability in that regard. Nice try, but nope. If a survivor is taken down, I won’t let platforms get away with “users did it”. No way. Saying otherwise is pro-corporate propaganda.
  • Technology aggravated sexual assault: group assault which is filmed and posted online. Here is where the Online Harms Bill can be used to sentence perps to life in prison, something that can’t be achieved under the criminal code.
  • Luring for sexual exploitation: i.e. grooming through social media, or through fake online ads, in order to lure underage victims into offline forms of sexual exploitation, such as sex trafficking and child sexual abuse. Here is another instance of hate speech deserving of a life-sentence.

To be continued in another post: it is a long report (or, to be more precise, a bundle of legal and UN reports), and the bill is also a handful. I am only skimming the surface of the most prevalent forms of hate-speech, which invariably equate to incitement of gender-based and intersectional genocide (see the report on missing and murdered indigenous women and how it amounts to genocide). Just to say I can work with that bill. Bring it!


Law school messed too much with my head by convincing me that I care about human rights for violent criminals and procedural safeguards for perp corps. I never did. It feels good to be my dystopian self again.

ABBA Shows that Blanket Licenses Shouldn’t Exist

Contrary to all commercial logic, I believe that artists should be the only ones who decide whether and when their music plays or not. For convenience you may want to license your stuff to a label who licenses it to a distributor who keeps licensing it to people you don’t like, but it chips away …

Amazon One Seems Ideal For Age Verification For Aylo Sites

Not sure why I am only learning about this contactless biometric ID tool today (living in Canada for the past 4 years must be it), but the thing has existed since 2020 as a payment method throughout the USA, deployed in more than 500 Whole Foods locations. I am not trying to advertise for Amazon here, but palm recognition technology strikes me as a way more sophisticated approach than mobile pay, microchipping, or physical credit cards and paper ID.

I am frankly astonished that, during the pandemic, palm recognition wasn’t used to verify immunization status. I totally hated showing my QR code along with government-issued ID to randos on a power trip behind a plexiglass, carefully studying my papers while I wanted to kill them. To the extent possible, I refused to comply, rushing past QR lines like a “distracted consumer” from hell, robust EDM blasting through my headphones, occasionally displaying a “talk to the hand” sign, shouting out “do not comply!” here and there to my alienated co-citizens… Who knew that “talk to the hand” would turn out to be a literal solution. A palm scan would’ve totally saved me the humiliation (and subsequent PTSD). By 2022, I had stopped going to venues and restaurants altogether and am still unable to return due to these painful memories.

And it sucks in a way, because I love order and compliance, and all of a sudden I had no choice but to boycott venues and intentionally behave like a dork, contrary to my nature, because I couldn’t reconcile my law-abiding character with my absolute duty to oppose tyrannical bullshit (in my case, the health status disclosure was the straw that broke the camel’s back, since I was cool with distancing and still am very much into it). For a minute I embraced the idea that I may be a conspiracy theorist, although I only got acquainted with such theories for the first time in mid-2020, and in general I have a very low opinion of politics. The rule of law is above politics and division, right? (RIGHT!) Wrong. The rule of law never stood a chance next to an executive order that took 2 minutes to draft on toilet paper… now, how about a few hundred thousand executive orders!

Although it’s too late to go back and fix that entirely avoidable fiasco, here we are 3 years later, and the same government that brought us the privacy-invading mandates is now suuuuper worried about the privacy of porn consumers. I’m not here to judge, but we have a French Canadian proverb: “tu ne peux pas avoir le beurre et l’argent du beurre”. You can’t have it both ways, keeping the butter and the money from the butter. Or can you!

The good news is that palm recognition is a win-win. When you register with Amazon One, you link your palm scan (which also records your vein pattern for a unique biometric configuration) with your credit card, ID and mobile number. This data is only available to Amazon One and purportedly not shared with third parties or law enforcement (unless there is a warrant).

To access Aylo material, all you’d need to do is hold your hand in front of your device camera to ascertain you are not a minor. No names, addresses, or any personal data whatsoever would ever be disclosed or stored, and nobody would look at government-issued IDs, so privacy is fully shielded. A VPN would be useless in that respect, as would Tor, as would a fake ID. Palm recognition could also be used to block children from accessing social media and literally anything parents decide to block them from. On a side note, please don’t use Tor for porn; it slows it down for everyone else.
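To make the privacy claim concrete, here is a minimal sketch, entirely hypothetical (Amazon One publishes no such API, and every name below is made up), of how such a check could work if the biometric provider returned only a short-lived signed yes/no attestation: the adult site verifies the token and learns nothing about who the user is. A real deployment would use asymmetric signatures rather than a key shared with the site.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch only. The provider checks the palm scan against
# its own records and emits a token that says ONLY "over 18: yes/no",
# plus a timestamp and a signature. No personal data ever leaves it.

PROVIDER_KEY = b"provider-secret"  # stand-in; real systems would use public-key signatures


def issue_attestation(scan_matches_adult_record: bool) -> dict:
    """Provider side: emit a signed token containing no personal data."""
    token = {"over_18": scan_matches_adult_record, "ts": int(time.time())}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return token


def site_grants_access(token: dict, max_age_s: int = 60) -> bool:
    """Site side: check signature and freshness; learn nothing else."""
    token = dict(token)  # don't mutate the caller's copy
    sig = token.pop("sig", "")
    payload = json.dumps(token, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    fresh = int(time.time()) - token["ts"] <= max_age_s
    return bool(hmac.compare_digest(sig, expected) and token["over_18"] and fresh)
```

The point of the design is that the site never receives a name, an ID, or the palm scan itself, and a forged “over 18” claim fails the signature check.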

In the eventuality of a (cough) new pandemic, palm recognition could also carry your immunization status, and I mean your entire vaccination track record from childhood, dispensing with the need to show government-issued ID, QR codes and immunization booklets. It would also facilitate and speed up visa issuance (e.g. you need 2x polio, HepB, dengue, etc. for India). All you’d need to do is hover your hand over a scanner to get your visa, board a plane or train, or access whatever venue you need to go to.

And finally, if enough stores take up palm recognition, you wouldn’t need to carry a phone or physical wallet anymore.

Age Verification Bill Is Preferable to (too little too late) Online Harms Bill

Age verification to access adult content online is the only viable and sensible way to counter the irreparable damage pornographic platforms cause to society. The fact that Pornhub prefers to block access to its content in jurisdictions that enforce age verification is a sign that Pornhub is nothing less than a criminal platform. If all adult sites are truly “sketchy”, to cite our prime minister, and can’t be trusted to verify ID, then I don’t understand why they are allowed to legally operate. They should simply be blocked, and it would save the government a great deal of money.

Last time I checked, everyone in Canada (and many places in the US) needs to show their papers to buy alcohol, cigarettes, or government weed. Even nightclubs want to see your papers before letting you in. If you don’t want to show your papers, you don’t get in. If you’re too young, you don’t get in. Not once was I able to get into a club in our (extremely liberal) Quebec before the age of 18, or in the (more conservative) province of Ontario before the age of 19. We also hear stories from the time when porn was only available in tangible formats (magazines, videotapes, DVDs): people had to show ID to access such content. Yet online porn of the vilest kind has always been accessible to children in Canada. How does that make any sense?

I personally worked on a cannabis legalization brief during my second year of law school in 2016 (two years later, it was legalized), and age verification was always a sine qua non for legalization, given how harmful weed can be to the developing brain. In the same vein, I also recommended a system preventing the sale of cannabis to people experiencing mental health issues. It didn’t get implemented, but it should. You can hate me for it, but the science is clear: if you have a diagnosed mental health condition, weed will make you psychotic and likely a danger to yourself and others. In order to counter the overdose epidemic, I am also a proponent of the legalization of opiates, mainly pharmaceutical opiates, which should be available to all addicts, who are often patients in need of pain management let down by the health system, to be administered by certified nurses in every pharmacy of this country.

However, when it comes to porn, I believe the societal damage exceeds that of any drug. I believe that online porn (through the nonconsensual user generated model that is being pushed and rewarded on popular platforms) is the main factor behind the mental health epidemic amongst minors. Many kids never really fully get to understand how consent works. Those who believe they need to perform the violent acts depicted in porn videos, become suicidal. For many people, it is the first introduction to heterosexual relations and it makes kids hate society and their biological sex. It is not a coincidence that so many kids refuse to conform to their gender.

Online porn tends to obfuscate the notion of consent for profit and in itself promotes content depicting self-harm and assault; studies prove time and again that it is the main driver of nonconsensual content, antisocial behaviour, intimate partner violence, criminal harassment, and cyberbullying (to name a few), and now identity theft via deepfakes.

This is not an ideological or political issue. I don’t understand why online pornographers in Canada should be exempt from age checks. Even less do I understand why the federal government keeps giving these platforms a free pass to make their content available to everyone, for free (a paywall would fix a few issues).

But this is the feeling I get when reading the Online Harms Bill, five years in the making, with its convoluted system of takedown enforcement, as if Canada ever enforced anything. I myself spent 4 years in court to take down commercial nonconsensual material, and it only worked out when the adverse party corporation declared bankruptcy and briefly went out of business, and their international distributor finally caved, because even Google intervened before the courts reluctantly did. Canadian courts in general are mildly useless, as they seem to spend most of their efforts further sexualizing survivors and siding with the adverse parties’ commercial interests (just as the government consistently sides with Pornhub).

Nobody can tell us how Canada, under the Online Harms Bill, will enforce “hefty” fines on platforms that operate in Sweden, South Korea, Morocco, or Iceland, for example. In my case, I had to take down over 5,400 pieces of online content spread over 50 countries, and an extraterritorial interlocutory injunction wasn’t enough. It was only the beginning. But oh, age verification has nothing to do with Digital ID (something that will happen anyway, don’t worry). It has to do with common sense.

Not once in my life have I heard an argument saying that parents should be the ones to enforce a ban on cigarettes or cannabis, rather than the state imposing age verification at the stores. Not once have I heard the argument that age verification to access cannabis infringes on the privacy of old farts who want to buy legal cannabis. And don’t get me started on the times we needed to disclose our health status AND show government ID to buy food at Costco or Walmart, a trauma that feels like yesterday… (will not forget, neither forgive). Why is online porn so different and important to the federal government that it should be accessible for free to children at all times?


Update: Although Australia failed to follow up on introducing age checks last year, given its unique landscape of single-user sex workers and (not human-trafficked) entrepreneurs, the UK is already surprisingly advanced in determining “trusted and secure digital verification services” with a focus on “layered” checks. It is encouraging to know that government ID alone won’t be enough to access adult sites in the UK, and that users will need to submit at least one instant selfie (timestamped at the moment of access) to prove they really are who they say they are. If the photo on the ID doesn’t match the selfie, the user’s access to the site will be blocked. This is easily enforceable through third-party facial recognition AI that will not store any personal information, face scans, or selfies, and will only assess age on a moment-to-moment basis. Contrary to banks, which regularly leak users’ personal information for the simple reason that they need to store such data, it won’t be possible for porn sites to leak anything, because they won’t have access to any personal information, and the third-party AI verifying it won’t be allowed to store it.
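As a thought experiment, the “layered” check described above could be sketched as follows. Everything here is illustrative, not any real verifier’s API: `face_match` stands in for an actual facial-recognition model, the thresholds are made up, and in a real system none of the inputs would be persisted after the decision.

```python
from dataclasses import dataclass

# Hypothetical sketch of a layered age check: a fresh timestamped
# selfie, an ID/selfie face match, and an age estimate must ALL pass.


@dataclass
class Evidence:
    id_face: tuple        # embedding extracted from the ID photo
    selfie: tuple         # embedding extracted from the live selfie
    selfie_ts: float      # when the selfie was captured (epoch seconds)
    estimated_age: int    # age estimate produced from the selfie


def face_match(a: tuple, b: tuple) -> float:
    """Toy similarity score in [0, 1]; a real system would compare
    face embeddings from a trained model (e.g. cosine similarity)."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))


def layered_check(e: Evidence, now: float, *,
                  match_threshold: float = 0.8,
                  max_selfie_age_s: float = 30.0,
                  min_age: int = 18) -> bool:
    """Every layer must pass: fresh selfie, ID/selfie match, of age.
    Nothing in `e` is stored after the boolean decision is returned."""
    fresh = (now - e.selfie_ts) <= max_selfie_age_s
    matches = face_match(e.id_face, e.selfie) >= match_threshold
    return fresh and matches and e.estimated_age >= min_age
```

The freshness window is what makes the selfie “instant”: a recycled photo or a stale capture fails the check even if the face matches.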

If we worry so much about porn sites handling sensitive information, then we should bar them from taking users’ credit cards for their premium content. As it stands, they hold large databases of credit cards. A credit card is sufficient to perform a full credit check on the holder, so it is pretty damn sufficient at identifying a user.

Canada should follow in the steps of the UK and rewrite the Online Harms Act:

  • First, remove the bizarre ideological sections regarding hate speech (we already have hate speech offenses in the Criminal Code and more than enough caselaw on the matter), as well as the bizarre life sentence for vague ideological thought crimes, since it has nothing to do with protecting children. I wouldn’t mind a life sentence for child porn producers and pedophiles, however, who currently get off with a slap on the wrist.
  • Second, borrowing from the UK Online Safety Act, mandate the use of trusted and secure digital verification services, including real-time facial recognition, face scans, digital wallets, government ID, selfies, and combinations thereof. Of course, the cost will be passed on to platforms. This will put Bill S-210 and Bill C-63 on the same footing.
  • Third, similar to the UK Act, exempt Twitter, Reddit, and other mainly text-based platforms.
  • Fourth, keep the 24-hour takedown requirements, but create an expeditious appeal process for affected users to reinstate content that doesn’t fall under the purview of the Act, and impose dissuasive fines, including payment of attorney fees, for frivolous takedown requests (à la DMCA, by analogy).
  • Fifth, to err on the safe side, mandate all mobile providers to automatically block porn sites, so that only computer cameras would be used for real-time face scans and face video.

Another reason to block adult mobile apps is that all mobile apps are specifically designed to collect and store personal information even when you are not using them. Mobile operating systems also regularly take photos, videos and recordings of users for the purpose of improving their experience. It has been standard practice to collect extensive personal information on mobile users for as long as smartphones have existed. Cybersecurity experts are able to decrypt such data packets, while hackers (or law enforcement, with or without a warrant) are able to intercept and use them. If you access porn on your phone, you can safely expect that your most intimate and biometric details are stored in many, many places, and you would be even more surprised to learn that you automatically consented to all of it. Age verification would be the least of your problems. There are tons of applications capable of accurately guessing your age based on what you do with your phone.

Finally, we should never leave it to parents to protect children, because if you read criminal jurisprudence, parents, and especially foster parents (and other family members), are often implicated in child abuse and child pornography in this country, for the reason that they have unfettered access to these children. Abusive parents also get away with a slap on the wrist. Since we don’t trust parents to respect children’s choice of gender, it would be a little hypocritical to trust them to safeguard their kids from porn. I wouldn’t.


Second update: after wasting a few hours on Online Harms Bill scenarios, I predict the bill has no future other than to target speech criticizing the bill (like this post) and to ban survivor speech (already going on without the help of the bill). So basically, if the bill ever comes to exist, it will achieve the exact opposite of its apparent intended purpose. As Australia has shown, nothing concrete will happen in the sphere of child protection anywhere. These bills are all for show, as corporate commercial interests will always trump child safety and consent. Even the UK will only apply age checks from 2025. Why 2025? Because the UK will likely also bail before the promised deadline and drop the checks altogether shortly before 2025. Comparative law should be renamed comparative inefficiency.

Just like electric car promises are flopping all over the place, because you can’t tell people to choose between doing their laundry and charging their car to go to work, you also can’t authorize a mega-polluting, wetland-destroying Swedish project on unceded Mohawk territory and pretend to care about the environment or ancestral rights in the same sentence. And very obviously, you can’t make porn accessible to children for free at all times and pretend to be a good person just because you wrote another fake bill (which is not quite written yet).

The point is, do not wait for a bill or a court to save you. As I previously said, the only way to enforce anything in the realm of nonconsensual material is to arm yourself with patience and look for ways, in and out of court, to apply pressure on local courts via foreign legal mechanisms, file police reports and Interpol reports, seek injunctions, sue platforms, sue banks that continue to work with rogue platforms, use the takedown and delisting mechanisms of search engines, make videos, hit film festivals, write open letters to ministers… and whatever other grassroots ideas you may come up with. If you sue for damages, sue in the US, not Canada. The important thing is to take action every single day. I love how in the US people pick up the phone and call their state rep or senator. The only way out is to let the whole world know that you did not consent. Don’t stop until everything is taken down to the ground.

Is the Ideological Decentralization of AI Good For Innovation

It is likely a sign of evolution when global superpowers begin competing for digital innovation rather than outdated, old-world weaponry. If we are officially in a cold war, then it means that inventors and other talent will ultimately have the choice to work for the side that treats them the best. It means there will …

Shopping For a Non-Intelligent Phone Without GPS, Camera, and Wifi

Do flip phones emit less radiation? Yes, flip phones and dumb phones are objectively better for those looking to reduce radiation exposure, particularly models without Bluetooth and GPS capabilities. Flip phones without multiple apps running in the background that require constant internet access are also speculated to be safer. (Source) The SAR value, also known as …

Illinois Flood of Class Actions: Twitter, Snap, Tinder, Home Depot Collected, Stored, Used Biometric Data Without Users Consent

A proposed class action lawsuit claims X Corp., which owns and operates Twitter, has wrongfully captured, stored and used Illinois residents’ biometric data, including facial scans, without consent. The suit more specifically alleges that Twitter has run afoul of the Illinois Biometric Information Privacy Act by capturing and storing users’ biometric information without notice or express consent …

Bill C-18: Compared To The Australian Model

I thought C-18 follows the Australian model, but the more I look at it, the less I know what this bill means for Canadians. I understand that the CRTC (a federal entity) now has a new jurisdiction to make up criteria for provincial businesses and local matters, which appears unconstitutional, or as Facebook has pointed …

GDPR: Meta Hit With Dissuasive Fine For Illegal Data Transfers From Europe To US Servers

This decision comes in the wake of Meta’s and 5000 other companies willful ignorance of EU data protection law and persistent illegal transfers of European users’ sensitive data, such as names, email and IP addresses, messages, viewing history, geolocation data and other information, to US servers. Meta will of course appeal the ruling, but if …