As Truth Becomes Rare, Next Year Will Be All About Authenticity
Perhaps the advent of AI is a necessary tipping point, one that finally makes us realize what we've lost, bit by bit, in our ongoing infatuation with digital content.
When the year started, I was playing with LoRA (Low-Rank Adaptation) on Replicate to create pictures of myself as Goku or a Beatle. It's a fiddly, multi-step process: you feed the model multiple pictures of yourself so it learns to place your face on somebody else's body. Now nobody needs a LoRA for casual self-inserts. A single prompt to Nano Banana Pro lets you hang out with the celebrity of your choice or the Teenage Mutant Ninja Turtles.
When Veo 3, Google's generative video model, launched around May, I posted on my social feed about the end of easily separating truth from fiction. Now, Redditors on r/interestingasf*** have to scrutinize whether posts on the board are real. Sometimes, it's downright impossible to tell.
When the year started, I was working with many writers. As the year comes to a close, and AI adoption has become the rule rather than the exception, I find myself working with few. Too few.
The technology is evolving so fast it's hard to believe the first horrific Will Smith spaghetti-eating video appeared only in 2023; the latest generation of videos is virtually indistinguishable from reality.
But three years after ChatGPT launched, we still don't know how to wield this great power that fell into our laps. How do we actually use it for our benefit? Making fan-edit trailers and putting Britney Spears, Taylor Swift, and Cher in our homes to eat our food can't be it. And how do we resist the temptation to use it for more nefarious ends, like faking receipts and resumes, or entire applicant identities, when the incentive to do so is massive? To illustrate: a company called Arcads raised $16M by showing how to burn the entire user-generated content (UGC) industry to the ground with AI influencers. Why anybody would buy products based on fake reviews, and why anyone would fund such a business, are beyond me.
The only thing I am sure of is that more and more people are growing weary of the state of things. We're looking for a reprieve, and so next year will be all about authenticity: verifiable, originally authored, and, when necessary, expert truth that lets us make the best decisions.
And maybe just in time. Because in terms of digital content, one way to sum up the last several years is, quite simply, the willful distortion of truth. As soon as social algorithms were fine-tuned to cater to individual wants and biases, the suspicion became inescapable that some people were consuming lies while others were seeing facts, when in reality the whole well everyone drinks from is poisoned with half-truths. This has been the norm for too long, and if you're on Facebook, YouTube, TikTok, or X, you know it only too well. Almost nothing can be taken at face value anymore.
Adding generative AI to these truth-bending social algorithms practically turns the digital world into a realm of falsehoods.
But with every piece of stolen media and every barefaced lie floating on our social feed, I think we are relearning how to get disgusted, which is a positive sign of things to come.
I predict, and hope, that next year we will regain a healthy sense of distrust. For many things, I want us to get back to the old rule: if we don't see it with our own eyes, we don't believe it.
For "news," if it's not reported or corroborated by a credible organization, it didn't happen; and for advice, if it's not by a verifiable, credentialed professional, we don't lap it up.
More importantly, if somebody is intentionally exploiting AI capabilities to commit fraud, we're demanding consequences. Legal ones.
While it's good to know that we're inching towards more responsible AI use (or at least the EU is), there's a sense that everything needs to come to a head before we course-correct towards significant changes.
Perhaps the advent of AI is a necessary tipping point, one that finally makes us realize what we've lost, bit by bit, in our ongoing infatuation with digital content. As industry after industry cries out for regulation, as people lose their livelihoods, and as we live in fear of governments weaponizing untruths against their own people so that a handful can stay in power, maybe we can get back to discussing how to tell truth from fabrication.
On the consumer side: is it too much to ask to listen to legitimate experts again, rather than voices driven primarily by engagement? On the business side: shouldn't we be incentivizing products and services that promote expertise and originality instead of the opposite? And shouldn't politicians be finding ways to protect their constituents from fraud instead of running their own disinformation content mills? When verifiable, authoritative content becomes rare in a sea of AI-generated inaccuracies, the market itself may create a solution in which truth, not lies, is financially rewarded. That might be the light at the end of the tunnel.
Truth has become cheap over the past several years. But now everyone holds a piece of it and has realized that when its stock is split ad infinitum, each share is a worthless knock-off. The value of the real thing is bound to rise again.