Deepfakes, Technology, and Truth by Sarah Darer Littman
That’s until the school’s anonymous gossip site, Rumor Has It, posts a video of Dara accusing Will of paying someone to take the SAT for him.
The video goes viral, and suddenly Will is being investigated. There’s just one problem. Dara swears she never said any of those things, despite the evidence in the video.
So did Will cheat?
Is it really Dara saying he did?
Who’s lying, and who’s telling the truth?
The inspiration for Deepfake, which comes out on October 6, 2020, came from two careers I had during my long and winding road to becoming a novelist: technology analyst and journalist. As a tech analyst, I learned to look at new technology with a critical eye. As a journalist, I’ve tried to uncover and write about the truth in order to hold elected officials accountable. So when I started reading about deepfakes, I wondered: What happens when advances in technology make it increasingly difficult to know what is true and what isn’t?
At this point you might be asking yourself, “What are deepfakes?” The word deepfake is a portmanteau of deep learning and fake.
Have you ever used FaceSwap on Instagram? Or my new favorite time-waster app, My Talking Pet? The technology behind deepfakes is like that, but different.
Deep learning is a kind of machine learning in artificial intelligence. It involves algorithms inspired by the structure and processes of the human brain, which are capable of teaching themselves to recognize and categorize patterns from massive quantities of unlabeled or unstructured data.
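To make "teaching itself from data" a little more concrete, here is a minimal sketch of the core idea: a tiny neural network repeatedly adjusts its internal weights until it reproduces a pattern it was never explicitly programmed to know. Everything here (the layer sizes, the learning rate, the XOR pattern itself) is an illustrative toy, nothing like the scale of real deepfake models.

```python
import numpy as np

# A tiny two-layer neural network learns the XOR pattern by trial and
# error: make a guess, measure the error, nudge every weight against
# its share of the blame, repeat thousands of times.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR of the two inputs

W1 = rng.uniform(-1, 1, (2, 8))   # input -> hidden layer weights
b1 = np.zeros(8)
W2 = rng.uniform(-1, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)          # forward pass
    out = sigmoid(hidden @ W2 + b2)
    err = out - y                          # how wrong is each prediction?
    # Backpropagation: push each weight a little in the direction
    # that reduces its contribution to the error.
    dW2 = hidden.T @ err
    db2 = err.sum(axis=0)
    dhidden = (err @ W2.T) * hidden * (1 - hidden)
    dW1 = X.T @ dhidden
    db1 = dhidden.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

No line of that code says "how XOR works"; the rule emerges from the data, which is exactly what makes deep learning powerful and, in the wrong hands, dangerous.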
As with any technology, deep learning is a tool that can be used for good purposes, such as diagnosing illness, detecting fraud, or restoring old films, like the amazing job Peter Jackson did with WWI footage in They Shall Not Grow Old. But as with many technologies, it has also been used for nefarious purposes, such as harassing and exploiting women by convincingly superimposing their faces onto bodies in compromising videos. In April 2020, it was used to mimic Jay-Z’s voice, leading to copyright controversies.
All of this got me thinking, especially in light of our nation’s intelligence agencies’ assessment that misinformation and disinformation influenced the 2016 election. How could deepfakes be used to disrupt our lives and our democratic process? We’ve already seen how quickly a “cheapfake” video of Nancy Pelosi spread through social media. It wasn’t a deepfake, just footage slowed down by 25% to make her appear intoxicated. Still, people who wanted to believe it spread it.
Deepfake looks at how this technology could be used to upend a high school student’s future. I hope that it will be a useful vehicle for starting discussions about media literacy, but also about how we can think critically about technology and the role technology companies play in a democratic republic.
It’s also a great way to start discussions about how a society that is increasingly reliant on algorithms in so many different applications might be building in bias and discrimination. A 2019 study found that a widely used risk-prediction algorithm, used by health insurance companies and hospitals to determine which patients would benefit from “high-risk care management” programs, perpetuated racial inequities because of the metric used to determine risk. In December 2019, EdWeek reported that the National Institute of Standards and Technology, which tested nearly 200 facial recognition algorithms using photos of more than 8 million people, found “African-Americans and Asian-Americans can be between 10 and 100 times more likely to be misidentified by the technology than white people, and women are more likely to be falsely identified than men.” A study published in March 2020 found that speech recognition software, an increasingly widespread tool, misidentifies words 35% of the time on average for black speakers vs. 19% for white speakers.
I took a computer science course in college, back when PCs were still a novelty. One thing that stuck with me was “Garbage In, Garbage Out.” Lack of diversity in the tech field is resulting in algorithms that have real and dangerous consequences for people of color.
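"Garbage In, Garbage Out" can be made concrete with a small sketch. Here, a single decision threshold is tuned on data in which one group is heavily underrepresented, so the rule ends up fitted to the majority group and fails far more often on the minority group. All of the numbers and the scoring setup are invented for illustration; no real system or dataset is being modeled.

```python
import numpy as np

# Toy illustration: a recognition-style "match score" threshold is tuned
# on pooled data that is 90% one group and 10% another. The minority
# group's scores run lower (e.g., the model saw few such examples in
# training), so the one-size-fits-all threshold misclassifies them more.
rng = np.random.default_rng(42)

# Invented score distributions: higher scores should mean "match."
maj_match    = rng.normal(2.0, 1.0, 900)   # majority group, true matches
maj_nonmatch = rng.normal(0.0, 1.0, 900)   # majority group, non-matches
min_match    = rng.normal(1.0, 1.0, 100)   # minority group, true matches
min_nonmatch = rng.normal(-1.0, 1.0, 100)  # minority group, non-matches

scores = np.concatenate([maj_match, min_match, maj_nonmatch, min_nonmatch])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# Choose the single threshold that minimizes overall error on pooled data.
candidates = np.linspace(-2, 3, 501)
errors = [np.mean((scores > t) != labels) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

def error_rate(match, nonmatch, t):
    """Fraction of one group's samples the threshold gets wrong."""
    wrong = np.sum(match <= t) + np.sum(nonmatch > t)
    return wrong / (len(match) + len(nonmatch))

maj_err = error_rate(maj_match, maj_nonmatch, threshold)
min_err = error_rate(min_match, min_nonmatch, threshold)
print(f"threshold={threshold:.2f}  "
      f"majority error={maj_err:.0%}  minority error={min_err:.0%}")
```

The algorithm isn't "trying" to discriminate; it is faithfully optimizing on skewed inputs, and the skew comes out the other end as unequal error rates.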
Silicon Valley has sold us the vision that their products will lead us to a more equitable and utopian society, but the reality hasn’t always lived up to the hype.
The best way to protect our society from moving forward in a way that reinforces and perpetuates structural inequity is to educate our students and create discussions around these issues in the classroom. I hope Deepfake will help to spark those discussions.
Sarah Darer Littman is the critically acclaimed author of Deepfake; Backlash; Want to Go Private?; Anything But Okay; In Case You Missed It; Life, After; and Purge. She is also an award-winning news columnist and teaches writing at Western Connecticut State University and with the Yale Writers’ Workshop. Sarah lives in Connecticut with her family, in a house that never seems to have enough bookshelves. You can visit her online at sarahdarerlittman.com.