The “fake news” and misinformation phenomenon is not new, but advances in technology, particularly “deepfakes”, have exposed the severity of the threat as never before. Deepfakes have advanced significantly in recent years, and the tell-tale signs (e.g., odd hand or mouth movements, or odd pronunciation) that once betrayed the technology are becoming harder and harder to spot. Deepfakes are also now extremely easy to create. It is time to put regulations in place to prevent negative uses of the technology and to create an environment in which positive use cases can flourish.
What is a deepfake?
The technology behind deepfakes is complicated, but in basic terms, deepfakes use a form of artificial intelligence to manipulate faces and voices in videos. The software is trained on many images of an individual taken from different angles (usually, the more data available, the better, which is why celebrities are often targeted: vast numbers of images of them are available online) and overlays the individual’s face onto that of an actor, like a digital mask. The result is a video of a character who sounds and looks like the subject of the deepfake, but says and does whatever the creator of the deepfake decides.
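For readers curious about the mechanics, the “digital mask” described above is often built with a shared-encoder autoencoder: one encoder learns to compress any face into a compact description of pose and expression, and a separate decoder is trained for each identity. Below is a minimal, illustrative sketch of that architecture only — the weights here are random placeholders, not a trained model, and the dimensions are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions chosen for illustration; real systems work on
# full-resolution image tensors, not 64-element vectors.
FACE_DIM, LATENT_DIM = 64, 16

# One SHARED encoder: compresses any face into a latent description
# (pose, expression, lighting). In a real deepfake this is learned
# from thousands of frames; here it is a random linear map.
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM))

# One decoder PER IDENTITY: reconstructs that identity's face from
# the shared latent description.
decoder_subject = rng.standard_normal((FACE_DIM, LATENT_DIM))
decoder_actor = rng.standard_normal((FACE_DIM, LATENT_DIM))

def swap_face(actor_frame: np.ndarray) -> np.ndarray:
    """Encode the actor's frame, then decode it with the SUBJECT's
    decoder -- so the subject's face adopts the actor's expression."""
    latent = encoder @ actor_frame
    return decoder_subject @ latent

actor_frame = rng.standard_normal(FACE_DIM)  # stand-in for one video frame
fake_frame = swap_face(actor_frame)          # same shape as the input face
```

The swap happens at the last step: because both identities share the same latent space, feeding the actor’s encoding through the subject’s decoder produces the subject’s face performing the actor’s movements — which is why abundant footage of the subject makes the result more convincing.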
Channel 4 (a UK television broadcaster) recently provided an example of a deepfake during the UK Christmas season when it posted a video in which ‘The Queen’ delivered an ‘alternative Christmas message’. This, of course, was not Her Majesty, but a deepfake. The video received a lot of criticism from the UK public, and UK media watchdog Ofcom received over 200 complaints. A spokesman for the network responded that the video was intended as a “strong reminder” of the spread of misinformation in the digital age, and the video itself ends with the actress used to create the deepfake revealing herself. There are countless other examples on the web.
Some find this technology amusing and a bit of fun. But it can have dire consequences both for the public and for those who are the subjects of deepfakes.
One of the original uses of the technology was in pornography. Members of the public and even celebrities found their faces superimposed on the bodies of adult entertainers on pornographic websites. Such content can have, and has had, serious personal and professional effects on these individuals.
A number of politicians have also been the subject of deepfakes. Check out this video created by Framestore (the visual effects company that developed the Queen’s deepfake mentioned above) in which “Boris Johnson” and “Donald Trump” demystify deepfakes, or this video of ‘Boris Johnson’ promoting an event for Framestore. The dangers of this type of use are obvious: imagine browsing social media and watching a series of videos of briefings from government officials on the coronavirus. Now imagine that of the 10 videos you quickly flipped past and watched, one is a deepfake and contains dangerous misinformation. The question arises as to how easy it is to identify that video as a fake, and what the consequences of such misinformation might be.
As we can see, deepfakes pose a serious misinformation threat. They can also have serious consequences for those who are deepfaked.
What protection does the law provide to combat these problems?
In the UK, the answer is that English law is currently wholly inadequate to deal with deepfakes. The UK has no laws specifically targeting deepfakes, and there is no “deepfake intellectual property right” to invoke in a dispute. Likewise, there is no specific law in the UK protecting a person’s “image” or “personality”. This means that the subject of a deepfake must rely on a patchwork of rights that were neither designed for, nor are adequate to protect, the individual in this situation. Hosts and intermediaries who provide the infrastructure are largely protected from legal claims under the EU E-Commerce Directive, as implemented in English law.
As mentioned above, deepfake creators use video, audio, or images of people in the public domain to create deepfakes. In many cases, celebrities do not own the copyright in these images or audiovisual works, so they may have difficulty bringing claims for copyright infringement themselves, and must rely on those who do own the copyright (e.g., film studios and photographers) to take action and apply for an injunction.
If a celebrity is portrayed endorsing a product they did not actually endorse, the celebrity may be able to bring a claim in passing off. If the celebrity has registered their name as a trademark and the deepfake video uses that name, the celebrity can take action for trademark infringement. If a celebrity is depicted engaging in lewd, abusive, or illegal behaviour in a deepfake, they may be able to bring a claim in defamation. Celebrities may also be able to exercise their data protection rights to prevent misuse of their likeness (as their personal data). Harassment claims are also possible.
However, the legal options currently available to celebrities and other public figures may not produce the desired result. Once a deepfake is on the internet, it is likely to be difficult to successfully find and delete all copies of the deepfake.
As pointed out by industry commentators like Robert Wegenek, the UK should take swift action to regulate deepfakes, as current law is insufficient to deal with this new technology. Misuse of the technology fuels public distrust, ruins reputations, creates openings for fraud, and hinders progress in the field. Proper regulation, on the other hand, could unlock the technology’s benefits. There are positive use cases for deepfake technology, and one of the industries that could benefit is entertainment. For example, in France, when an actress was unable to film due to coronavirus restrictions, deepfake technology was used in a soap opera with her consent. There are also innumerable possibilities in the film industry: a lip-synced deepfake could be used to create a dubbed movie, so that Daniel Craig’s James Bond suddenly speaks every language. And imagine a fake Winston Churchill teaching history to children in museums!
The government has recognized the importance of investigating this matter, at least in the context of criminal law. In 2018, the government asked the Law Commission to look into deepfakes in the context of pornography. The Commission was asked to “review the response of criminal law to abuse of online privacy and, in particular, consider whether the harm caused by emerging technologies such as ‘deepfake’ pornography is adequately addressed by criminal law.”
As usual, the law lags two steps behind technology, which we believe is deeply detrimental to society.