All online video is now suspect

WE MUST, BY DEFAULT, DOUBT THE AUTHENTICITY OF ALL POSTED VIDEOS

In 1997, I wrote a joke job application for my website and posted it to some humor newsgroups to drive a little traffic. Later, someone stripped out my intro explaining it was a joke, claimed it was real, and reposted it to a large jokes mailing list. It went viral under that claim, and multiple urban legend sites now debunk it.

This is getting worse in the age of AI. People are making really interesting AI photos and videos that then get appropriated and reposted by others in ways that claim or imply they’re real. And it’s getting harder to determine whether they’re real or fake without close inspection or even specialized tools.

I know, I post a lot of music I created with Suno. But when I post it, I acknowledge the music is all or mostly AI. There have been a few songs where I sang or whistled part of them into my mic so Suno could use it as a guide, thus incorporating my original music; when that’s the case, I note that too. AI isn’t going anywhere, which makes disclosure all the more important. And when opportunists can “borrow” and repurpose your AI-assisted content without carrying that disclosure along, your content can end up being misconstrued as real far too easily.

The Wooden African Jet Plane

A friend of mine, a former NASA engineer, shared someone’s LinkedIn post of a video of African men building a full-scale model of a jumbo jet out of wood with hand tools. My friend praised their model-building skills. Here’s the video; when it appeared on LinkedIn, someone had added a little caption balloon saying “this is amazing!”

[Embedded TikTok from @zambianaitv: “African man builds his own Jumbo Jet Plane using wood. #Ichasulilwe” ♬ Ichasulilwe – Bnell ft. Afunika – Zambian Ai Tv]

Because the OP never credited the builders or the source, and never explained why anyone would take on such an ambitious and expensive project, I Googled it. It took a while to get past all the results from people reposting the same video before I found anything about its origin… which debunked it as AI, but mostly because the oldest post they could find was the one above, from the TikTok account ZambianAITV.

I went back and watched it closely. Around 13 seconds in, a ladder rung flickered in and out of existence; wood chips went flying from places they shouldn’t; and a scene of a man sawing a log produced plenty of sawdust but left no mark on the log.

It was very realistic. In my first run through it, it looked legit. It was only because the poster didn’t cite their source and it seemed too ambitious to be a hobby project that I dug further, saw the claim it was AI, and then went back to catch the signs of AI I missed the first time through.

I don’t believe the ability to make these almost-real videos and images is the problem, or that the people who make them always intend to deceive. It’s the third parties who copy the content and post it without credit or provenance, to support their messaging and generate engagement, who seem to be the worst offenders.

If you call them on it, they’ll get defensive and ask, “how does that change the point I made?” Simple… You’re known by the company you keep, and your truth suffers when you pair it with an undisclosed lie.

We must demand disclosure, demand provenance

If someone posts video they claim they created, we must demand they disclose whether it’s AI. YouTube and TikTok both have those disclosures in their upload processes, but they’re optional and/or not well enforced. YouTube’s upload process even requires you to expand a collapsed section of the upload form to find its weirdly worded disclosure section; I posted a few songs before I found it.

The part of YouTube’s upload form that hides the AI disclosure behind a button.
YouTube’s AI content disclosure that requires you to read *closely*.

We must demand that YouTube, Facebook, TikTok, LinkedIn, and other sites that let you post video become much more blatant and vigilant about requiring posters to disclose AI content. If someone posts AI content without disclosing it, there should be a maximum of one or two warnings before the account is suspended and any revshare money paid to the creator for the video is clawed back.

If someone posts video they didn’t create, we must demand they disclose where they got it (i.e. provenance). If the place they got it does not have strict rules related to AI disclosure, the post should be rejected automatically. We have to reward the honest and punish the dishonest, and do it with the same level of transparency we demand from them.
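To make that concrete, here’s a rough sketch in Python of the decision logic I’m arguing for. Everything in it is hypothetical: the field names, the “trusted source” list, and the two-warning threshold are my own assumptions for illustration, not any platform’s actual API or rules.

```python
# Hypothetical sketch of the disclosure/provenance policy described above.
# Nothing here maps to a real platform API; the fields, the trusted-source
# list, and the two-warning threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

TRUSTED_SOURCES = {"youtube.com", "tiktok.com"}  # assume these require AI disclosure
MAX_WARNINGS = 2  # "one or two warnings" before suspension

@dataclass
class Account:
    name: str
    warnings: int = 0
    suspended: bool = False
    clawed_back: float = 0.0

@dataclass
class VideoPost:
    account: Account
    created_by_poster: bool       # did the poster make the video?
    is_ai: bool                   # was AI content detected or reported?
    disclosed_ai: bool            # did the poster check the AI-disclosure box?
    source: Optional[str] = None  # provenance for reposted video
    revshare_earned: float = 0.0

def moderate(post: VideoPost) -> str:
    acct = post.account

    # Reposts must carry provenance, and the source must itself enforce disclosure.
    if not post.created_by_poster:
        if post.source is None or post.source not in TRUSTED_SOURCES:
            return "rejected: no acceptable provenance"

    # Undisclosed AI content: warn first, then suspend and claw back revshare.
    if post.is_ai and not post.disclosed_ai:
        acct.warnings += 1
        if acct.warnings > MAX_WARNINGS:
            acct.suspended = True
            acct.clawed_back += post.revshare_earned
            return "suspended: repeated undisclosed AI content, revshare clawed back"
        return f"warning {acct.warnings} of {MAX_WARNINGS}: disclose AI content"

    return "accepted"
```

The point isn’t the code itself; it’s that the rules described here are simple, mechanical checks a platform could run at upload time.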

AI video is as dangerous as it is cool

Is it the viral videos that pose the biggest danger? The more eyes on a video, the more likely someone calls bullshit, but virality also helps the lie circle the globe before the truth has its boots on.

Is it the malicious videos, the ones that intend to deceive, that are the biggest danger? As I noted above, I never claimed my snarky job application was real, and this was 25 years before ChatGPT debuted. But as it circulated, someone else decided it was funnier, or more effective, to claim it was real.

Did this video cause harm? Not that I know of. But a video that is innocuous on its own could be repurposed into something malicious. The company that made the rental truck Timothy McVeigh used in his bombing of a federal building never intended it to be used that way.

Should the onus be on the viewer to detect AI video? No. The average internet user shouldn’t need the expertise or tools to detect AI video any more than the average janitor should have needed the expertise or tools to figure out that Bernie Madoff was running a Ponzi scheme.

What about the industry regulating itself? The tech industry will fight tooth and nail against laws forcing them to detect and label AI, claiming the industry should self-regulate. We shouldn’t need to spend years in court to get token compensation after the damage is done. Sadly, the EU’s legislative overreach will tackle this in its usual ham-fisted manner before the US Congress sends anything to Trump. Meanwhile, lawless nations that sponsor APTs or turn a blind eye to cybercrime will keep poking holes in the tech industry’s tissue-thin self-regulation to cause mayhem.

AI video isn’t going away and it shouldn’t, but…

I don’t want AI video to go away. I think it can be used to create amazing art and help great new voices to emerge. And we’re just not going to get that toothpaste back in the tube.

You and I don’t need to be told not to take a blow dryer into the shower because of the electrocution hazard, but there are those who do. The warning labels need to be legally mandatory, clear, and prominent, and we MUST give the agencies tasked with enforcing the disclosure rules the funding to do it properly.
