A video shows a gubernatorial candidate admitting to accepting bribes while he was a state legislator. The video makes the rounds on social media and eventually the story breaks on the evening news. The candidate’s political aspirations are scuttled, and lawmakers open an investigation into the alleged illegal activity.

The problem is, the video is a fake—a “deepfake.”

Most of us realize that photos can easily be manipulated with software. Videos, however, still have the ring of truth. Hollywood spends millions of dollars on special effects, so how could anyone afford to doctor a political video?

What is a deepfake and how hard is it to create? It turns out it’s not that difficult or expensive. Artificial intelligence (AI) makes it possible to create phony images, sound recordings, and videos that are highly realistic, and the technology needed to create these deepfakes is increasingly accessible.

In addition to political disinformation campaigns, these automated video creation techniques are being used for extortion and other criminal activity. Experts warn that deepfakes could erode confidence in political and legal institutions and cause people to distrust any form of video evidence.

The Technology Behind Deepfakes

Deep learning techniques make deepfake videos possible. Machine learning uses algorithms to discover meaningful patterns in data and improve at a task through experience. Deep learning is a subset of machine learning that works with much larger data sets and uses multilayered neural networks to "learn" richer, more complex representations.

Neural networks loosely mimic the way the human brain processes information. For example, neural networks can recognize visual and audio patterns based on similarities and distinct features, and classify those patterns by comparing them to previous data sets. Thus, they can identify a specific individual or extract a portion of an image such as a license plate.
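To make the classification idea concrete, here is a toy sketch of a single artificial neuron (a perceptron) learning to separate two clusters of features. The two clusters and the two "feature" dimensions are hypothetical stand-ins for real image features, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "visual patterns" as 2-D feature clusters (hypothetical toy
# features, e.g. brightness and edge density -- not real image data).
a = rng.normal([1.0, 1.0], 0.3, size=(50, 2))   # pattern A
b = rng.normal([3.0, 3.0], 0.3, size=(50, 2))   # pattern B
x = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)               # class labels

# One artificial neuron: weighted sum plus threshold (a perceptron).
w, bias = np.zeros(2), 0.0
for _ in range(20):                              # training passes
    for xi, yi in zip(x, y):
        pred = int(w @ xi + bias > 0)            # current guess
        w += 0.1 * (yi - pred) * xi              # nudge weights on error
        bias += 0.1 * (yi - pred)

preds = (x @ w + bias > 0).astype(int)
print((preds == y).mean())  # clusters are well separated, so near 1.0
```

Real deepfake systems stack many layers of such units, but the principle is the same: adjust weights until the network's outputs match the training data.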

The most sophisticated deepfakes are made using generative adversarial networks (GANs). A GAN pits two neural networks against each other: a generator that produces fake content and a discriminator that tries to tell fake from real. As the two compete, the generator's output becomes increasingly convincing. Trained on videos of a particular person, a GAN can produce new footage that looks and sounds remarkably like the real thing.
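The adversarial loop can be sketched in miniature. The following is a minimal, plain-NumPy GAN on 1-D numbers rather than video: the "real" data is a Gaussian centered at 4, the generator is a single affine transform of noise, and the discriminator is logistic regression. Every design choice here is a simplifying assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from a Gaussian centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

w_g, b_g = 1.0, 0.0   # generator: noise z -> w_g*z + b_g
w_d, b_d = 0.0, 0.0   # discriminator: x -> sigmoid(w_d*x + b_d)

lr, n = 0.01, 64
for step in range(3000):
    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w_d -= lr * grad_w
    b_d -= lr * grad_b

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, n)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    ds = -(1 - d_fake) * w_d          # gradient of loss w.r.t. x_fake
    w_g -= lr * np.mean(ds * z)
    b_g -= lr * np.mean(ds)

print(b_g)  # the generated mean drifts toward the real mean of 4
```

Production deepfake systems replace these one-parameter models with deep convolutional networks trained on hours of footage, but the generator-versus-discriminator game is the same.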

This deepfake technology is increasingly accessible. A number of open source deepfake generators are available for download, and AI labs are constantly improving their techniques and publishing the results online. It doesn’t take a particularly powerful computer to run these applications, although making an AI video with a consumer-grade PC could take a week or more.

Why Deepfakes Are Scary

In a deepfake, someone can be made to seem to say something they never did, as in a widely circulated deepfake of President Obama. It suggests an alarming future in which disinformation campaigns dominate politics and tear at the very fabric of democracy.

Lawmakers are so alarmed that the Senate recently passed the Deepfake Report Act, S.2065, by unanimous vote. The bill would require the Department of Homeland Security to assess and report on the impact of deepfake technology and how foreign governments and nongovernmental entities might use deepfakes to harm national security, commit fraud, or violate civil rights.

The federal bill has not advanced, but at least two states have passed laws criminalizing certain malicious deepfakes. California Assembly Bill 730 makes it illegal to distribute materially deceptive audio or video of a political candidate within 60 days of an election. Texas has enacted a similar law.

The Texas law also bans the creation of videos depicting someone "engaging in sexual conduct" without that person's consent. Beyond these targeted statutes, however, the question of whether deepfakes are illegal remains unsettled.

Deepfake technology makes it possible to put a person's face on someone else's body, such as placing a celebrity in a pornographic film. According to a report by Deeptrace, 96% of deepfake videos online are pornographic, primarily depicting female celebrities. While most pornographic deepfakes are posted online for entertainment purposes, they are also used for harassment and extortion.

Evidentiary Issues of Deepfakes

Deepfakes raise serious legal questions. Fabricated video that is admitted into evidence could be used to convict the innocent, exonerate the guilty, or influence the verdict in a civil matter. In a recent U.K. child custody battle, the mother used a manipulated audio recording in an attempt to show that the father had made violent threats. Some legal experts are concerned that existing evidentiary rules are inadequate to address the issue of deepfakes.

Under Rule 902 of the Federal Rules of Evidence, certain types of documents are self-authenticating, meaning that no additional testimony or evidence outside of the document itself is needed for admissibility. As amended in 2017, the rule includes records “generated by an electronic process or system that produces an accurate result,” and “data copied from an electronic device, storage medium, or file.” Both types of electronic records must be certified by a “qualified person,” but no other authentication is needed.

However, when an electronic record’s authenticity is challenged, the burden shifts to the party introducing the evidence to prove that it’s real. This can be an expensive and time-consuming proposition. Moreover, once the veracity of evidence is called into question, jurors are inclined to disregard it. Overzealous or unscrupulous attorneys could raise the specter of deepfakes to cast doubt on legitimate video evidence.

The Path Forward

Despite the terrifying potential of deepfakes, some experts say the threat is overblown. In “Deepfakes: A Grounded Threat Assessment,” Tim Hwang, from Georgetown’s Center for Security and Emerging Technology, notes that most disinformation campaigns continue to use rudimentary photo editing techniques rather than AI-generated videos. Traditional techniques are effective without the cost and potential risk of creating deepfakes.

Furthermore, Hwang says, the machine learning algorithms used to create deepfakes leave distortions that can be traced across multiple videos—a sort of “fingerprint” that can be used to identify the creator. The public can then be alerted and the individual or group responsible for the deepfakes can be banned from social media and otherwise sanctioned.

Digital media forensics experts are working on techniques for detecting deepfakes. One approach looks for digital artifacts and inconsistencies in the expected behavior of cameras. Some such artifacts can be linked to known GAN models. Researchers are also fighting fire with fire, using deep learning and neural networks to create models for detecting deepfakes.
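One detectable artifact reported by researchers is that GAN upsampling layers can leave periodic, grid-like patterns that stand out in an image's frequency spectrum. The sketch below illustrates the idea on synthetic data: a smooth stand-in for a "natural" image versus the same image with a faint checkerboard pattern added as a stand-in for GAN artifacts. The data, threshold, and band cutoff are all illustrative assumptions, not a production detector:

```python
import numpy as np

def high_freq_ratio(img):
    # Fraction of spectral energy in the high-frequency band.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # distance from DC component
    return spec[r > min(h, w) * 0.35].sum() / spec.sum()

# Stand-in "natural" image: smooth, low-frequency content only.
t = np.linspace(0, 3, 64)
natural = np.add.outer(np.sin(t), np.cos(t))

# Stand-in "GAN output": same content plus a faint checkerboard,
# mimicking periodic upsampling artifacts.
checker = 0.2 * ((np.add.outer(np.arange(64), np.arange(64)) % 2) * 2 - 1)
fake = natural + checker

print(high_freq_ratio(natural) < high_freq_ratio(fake))  # True
```

Real forensic detectors are far more sophisticated, but many follow this pattern: define a statistic that real cameras rarely produce and flag media where it is anomalously high.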

Existing laws can also play a role in combating deepfakes. States can prosecute individuals who use deepfakes for extortion or harassment. People who are harmed by deepfakes could have claims for invasion of privacy, defamation, or right of publicity, among other torts.

Deepfakes raise troubling questions about the reliability of video evidence and confidence in political and legal institutions. However, new laws, legal procedures, and forensic techniques show promise for detecting deepfakes and bringing those who abuse them to justice.

Expand Your Legal Education Online With Concord Law School

Concord Law School at Purdue University Global is the nation’s first fully online law school, and we are accredited by the State Bar of California.* We offer several online options, including:

  • Juris Doctor program for those who wish to become a practicing attorney in California.*
  • Executive Juris Doctor program for those who want an advanced legal education but have no intention of becoming a practicing attorney. The program also offers tracks in law and technology, education law, business law, or health law.
  • Individual law courses for professionals working in business, IT, health care, government, human resources, education, or other relevant fields who wish to brush up on their skills in certain areas.

Interested in learning more? Reach out today.