Frankly Fake: The ethics of keeping it 'not real'
- Katrina Ingram
- Jun 13
- 4 min read
Updated: Jun 30

The other week, Google launched Veo 3, its latest video generation tool. The realistic quality of the tool spawned headlines like this one in PC Magazine: "I Tested Out Google's Veo 3 AI Video Generator. The Internet Is Not Prepared for What's Coming."
"It might mean the final death knell for truth on the internet. Veo 3 already poses a major threat right now, but just one minor update could revolutionize deepfake creation, online harassment, and the spread of misinformation." (PC Magazine)
I was giving a talk that same week and I played a video that was reportedly generated by Veo 3. Yet, there was also discussion about videos that people CLAIMED were Veo 3 generated but were actually not AI generated at all. In other words, fake fake generated videos…as opposed to real fake generated videos. Why would anyone do that? To prove that their AI generation tool produces the most realistic-looking output. The goal is realism.
The whole thing is pretty confusing, especially since generated images and video are losing their tell-tale signs, such as hands with too many or too few fingers. So, what’s the deal with making it look realistic? Why not keep AI-generated content identifiably and frankly fake?
To ‘keep it real’, we need to keep it fake
There's an ethical case to be made for keeping AI content looking like it was AI-generated versus seeking to up the game towards realism. Realistic-looking AI content, simply by seeking to be realistic, includes an element of deception. That is the unstated goal of making it look real: to convince people it is real.
Bad actors will use these tools to increase their ability to cause harm, using deepfakes to perpetrate fraud, scam people, spread mis- or disinformation and produce ‘evidence’ to support false claims. Courts will have a hard time knowing what evidence is actually real. No good comes of these uses and, in fact, there is a cost borne by the people and organizations left dealing with these impacts.
There is also the potential for a 'liar's dividend' - as legal scholars Bobby Chesney and Danielle K. Citron point out in their paper Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security:
"Deep fakes will prove useful in escaping the truth... liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes.... As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes." (Chesney and Citron, 2019)
What about the upside?
The more beneficial use case for deceptively real content is improving the creative process to make it more efficient and affordable. It might also provide for new forms of artistic expression, as well as open the door to who can be a creator. In doing so, there is a threat to jobs - this is how it reduces costs - but one could argue the benefits are worth it.
However, given the immense downside risks to society from not being able to tell if content is fake or real, challenging the notions of truth, including spin off impacts on processes like democracy, the case for efficiency and cost reduction doesn’t seem like much of an upside.
As the tools improve, this is a growing concern. This AI-generated video looks like it could be a set of 'streeter'-style news interviews at a local auto show. If you look closely there are some tells...like, what self-respecting Hells Angel doesn't know how to spell Hell? (see the badge at 0:33). But the point is, it's becoming more difficult to know for sure.
Data by-products
Similar to processed meat by-products or engineered wood, AI-generated content should be clearly identified as something other than content directly produced by human creators. It contains remnants of human-produced content, but it has so thoroughly processed those data points that it can no longer legitimately be considered the original thing. In other words, I know my floor isn't real hardwood and that chicken nuggets contain a lot of things that are not actually chicken. Product labelling laws ensure I understand what I'm getting.
Yet, unlike meat or wood by-products, human produced digital content is already a signifier, removed from the thing it represents. Images are a representation, as Magritte reminds us in the famous painting, The Treachery of Images. In that respect, AI-generated video feels twice removed - a pastiche of a representation.
[Image: René Magritte, The Treachery of Images (via Wikipedia)]
Perhaps we should not rush towards the real, but instead, celebrate the fake. Here’s to AI avatars that look like AI avatars! This is not a human.
Script adapted from my post The AI cure for Baumol's cost disease
By Katrina Ingram, CEO, Ethically Aligned AI
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com
© 2025 Ethically Aligned AI Inc. All rights reserved.