
Deepfakes & Privacy (F)laws


Last week (January 22-26) was Data Privacy Week in Canada. The theme of this year’s event was “take control of your data”.


This week, we’re witnessing an alarming example of how we not only lack control of our data, but also how the laws meant to protect us are flawed or non-existent. They don’t go far enough to address the concerns being raised by artificial intelligence.


Deepfake intimate images of Taylor Swift have circulated on social media, garnering over 45 million views. They have now been removed, but platforms like X were ill-equipped to take the content down quickly enough to contain its spread.

Deepfake technology is not new, but thanks to advances in AI it has become more realistic, more widely available and easier to use. In most cases, women are the victims of this crime...which is currently not a crime, because no law makes it illegal. Nor are there civil laws that can easily be used to redress the harm.


When it comes to deepfakes, current privacy law does not apply. Since a deepfake is by definition fake, it doesn’t necessarily meet the standard of personal information…even if your actual face is part of the fake content. That feels both shocking and very wrong!


This isn’t just a problem for celebrities. The CBC documentary Another Body outlines the plight of a student who experienced a similarly horrific situation. The student (who, coincidentally, chose to go by the pseudonym Taylor in the doc) recounts discovering that her likeness had been used in a series of hardcore videos. When she reported the issue to authorities, she learned that the person who did this had not broken the law, because this issue falls into a gap between the laws that should protect people.


A 2021 story in the MIT Technology Review outlined how few legal options exist for fake images:


“If a victim’s face is pulled from a copyrighted photo, it’s possible to use IP law. And if the victim can prove the perpetrator’s intent to harm, it’s possible to use harassment law. But gathering such evidence is often impossible…leaving no legal remedies for the vast majority of cases.” - MIT Technology Review

In a paper entitled “We are not ready for deepfakes”*, Canadian law student Keita Szemok-Uto explains how deepfakes challenge privacy by impacting our right to be left alone, undermining control over our information, and harming dignity and autonomy. Szemok-Uto also calls out “structural misogyny” in our culture, framing deepfakes as a continuation of age-old patterns of gender-based violence. Despite these harms, no law adequately addresses these issues. The problem is not fully covered by any of the existing torts (laws that redress wrongs), such as copyright, defamation and false light, harassment or privacy.


Even the newer Nova Scotia Intimate Images and Cyber Protection Act (written before deepfakes) is inadequate, because a person must “be nude”, as opposed to “be depicted as nude”, for an image to be considered an intimate image. Szemok-Uto ends with an appeal to courts to “be proactive” and to include a person’s social media images as part of their “private facts and affairs”, or to “create a new statutory tort” that would directly address this issue.


The latter course of action is what the province of BC has taken. BC has just implemented the Intimate Images Protection Act, which aims to protect victims. This civil law streamlines the process for reporting an incident and taking action, with fines as high as $10,000 for an individual and $100,000 for an internet company.


“The act requires perpetrators to destroy the images and remove them from the Internet, search engines and all forms of electronic communication. It also covers threats to distribute intimate images.” - CBC

The US is currently looking into a “Taylor Swift AI Fraud Act” in light of what has happened to the pop star. But right now in Canada, if you are not a BC resident, there is little legal recourse.


How can we “take control of our data” when the legal support to do so isn’t there?



By Katrina Ingram, CEO, Ethically Aligned AI

 

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com     

© 2024 Ethically Aligned AI Inc. All rights reserved.

