Historical Authenticity and Deepfakes

The first time I remember feeling a genuine sense of dread about advancing technology’s effects on the preservation of authentic historical records, Twitter was on fire.

This was not unusual. I’m not an avid Twitter user, but even I know that weekly meltdowns were pretty much the norm on Twitter in 2019. It was more cause for concern if several weeks went by without some verified person being locked out of their account.

But this time it felt different. Mostly because the President of the United States had tweeted out a doctored video of House Speaker Nancy Pelosi, and a large swath of people thought it was real.

Fox News said it was real.

It was not.

But it also sort of was.

And I suppose that was what I found so unsettling.

The doctored video of Pelosi had been edited to pitch-shift and slow down her speech, an effect that made her sound inebriated. It spread like wildfire across right-wing news sites in the United States. While technically not a deepfake, it reignited the mainstream debate about deepfakes and the spread of not just misinformation but disinformation in the media.

Deepfakes are synthetic media that use machine learning and AI technology to superimpose the likeness from one photo or video onto another. May of 2019 was hardly the first time they had made the news; the year before, they were in headlines for their use in revenge pornography and their banning from Reddit. But the Pelosi video was the first edited video featuring a significant political figure that was genuinely convincing.

It was the first time I had seen the technology progress far enough that so many people found a doctored video more convincing than the original.

Maybe my ignorance of the topic betrays my age, but before then, when I thought of the ways documents or digital media could be altered, my mind went to poorly rendered Photoshop jobs or Stalin’s photographic erasure campaigns.

Erasure was blatant; Photoshop in my youth meant something was visibly altered; the Pelosi video was nearly seamless.

Obviously, discussions erupted about all the things this technology could make political figures appear to do or say. And while the immediate consequences of this technology were, without question, very concerning, I was more focused on how much more urgent the ability to differentiate between real and fake had just become for people in historical professions.

It already felt like archivists, particularly those working in assessment, would need a level of digital savviness that many would not associate with the profession. But to identify footage doctored to this level? Was that even possible?

Microsoft made it public in September of last year that it is developing a deepfake detection tool. And much of the academic literature on the topic focuses on detection over creation, though there is pushback against that emphasis, since one can feed into the other.

The more sophisticated the detection technology becomes, the more those creating the footage can learn from those tools and patch the cracks in their own systems. All the while, those of us interested in preserving the digital space are left further behind by the widening gap between the technology and our knowledge.
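In machine-learning terms, this is an adversarial feedback loop: a detector’s output becomes a training signal for the creator. The toy Python sketch below is purely illustrative, not real deepfake or detection code; the names detector_score and creator_step are hypothetical. It only shows how a creator that can query a detector will drift toward whatever that detector cannot flag.

```python
import random

# Toy illustration of the feedback loop described above, NOT a real deepfake
# pipeline. "Real" footage is reduced to a single statistic, the detector
# flags anything far from that statistic, and the creator uses the detector's
# own scores to make its fake less detectable.

REAL_STATISTIC = 0.0  # stand-in for some measurable property of genuine footage


def detector_score(sample: float) -> float:
    """Higher score = more likely fake (distance from the 'real' statistic)."""
    return abs(sample - REAL_STATISTIC)


def creator_step(fake: float, step_size: float = 0.1) -> float:
    """Probe the detector on either side and move wherever it scores lower."""
    lower, higher = fake - step_size, fake + step_size
    return lower if detector_score(lower) < detector_score(higher) else higher


fake = random.uniform(2.0, 3.0)  # an initially obvious fake
for _ in range(40):
    fake = creator_step(fake)

print(f"fake statistic: {fake:.2f}, detector score: {detector_score(fake):.2f}")
# After enough rounds the fake sits right next to the real statistic, and it
# was the detector's own feedback that guided it there.
```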

This is, of course, the worst-case scenario for how deepfakes will affect the GLAM sector, though also the most immediately relevant one. Preservation is already up against the sheer deluge of content in the digital space. To expect historians, archivists, and other preservation enthusiasts to also become experts at detecting minute signs of manipulation in that content is an exercise in futility.

There is also a case to be made that this technology could ‘bring history to life’ in a way never seen before. As many of us shift toward telling more personal, grounded histories in an effort to connect with our audiences, couldn’t deepfakes open the door to making our subjects come off the page even more?

There’s certainly an ethical conversation to be had about ‘raising the dead’ by applying deepfake technology to the subjects that exhibits are designed around, but there’s plenty of evidence to suggest that people care more about historical narratives when those narratives feel personal.

So should we embrace it, then? Or should we pour our efforts into resisting a technology that could seriously poison the authenticity of archives?

It’s not a question I have the answer to. But I do know that I’ll be watching Trotsky and Stalin sing “Video Killed the Radio Star” while I ponder it.