July 5, 2020

Volume X, Number 187

“Deepfake” Technology: Very Real Marketing Value … and Risks

As COVID-19 lockdowns continue to restrict in-person production, advertisers are increasingly turning to digital technologies to produce new creative assets. Recently, there has been increased interest in using “deepfake” technologies to repurpose archival footage. A “deepfake” is essentially a video or audio recording that has been manipulated in a way that is undetectable to people viewing or listening, resulting in a piece of media that appears authentic.

A recent example is a commercial featuring a digitally altered depiction of the SportsCenter anchor Kenny Mayne, currently 60. In the spot, a much younger Mr. Mayne appears in SportsCenter footage from 1998 speaking seemingly prophetically about the year 2020. This was accomplished by layering video of Mr. Mayne’s mouth onto footage of his 38-year-old face.

“Deepfake” technology was already on advertisers’ radars before the pandemic. Last year, David Beckham appeared to speak nine languages through the use of “deepfake” technologies in an ad for Malaria No More, a U.K. charity encouraging people to help the fight against malaria. And in January, Doritos launched a campaign with the app Sway, which employs AI technology to allow users to create a video that appears to show them performing Lil Nas X’s dance moves from Doritos’ Super Bowl commercial.

“Deepfakes” Can Present A Valuable Personalization Tool for Brands

Studies show that a consumer with a more personalized experience is more likely to complete a purchase. AI company Tangent.ai is seeking to capitalize on this with an algorithm designed to help consumers determine what products will look like on them. For example, a consumer may change a model’s lipstick, hair color, ethnicity or race.

Zao, an app released in China last fall, shows how advertisers may take “deepfake” personalization even one step further. Zao allows users to “star” in their favorite movies by seamlessly superimposing their face onto actors’ bodies in well-known movie scenes. The application of this technology to advertising can create the ultimate targeted advertisement, by placing the consumer in the ad. Instead of hiring an expensive celebrity to appear in their commercial, advertisers can hire anyone and use “deepfake” technology to replace them with the consumer in the final ad. A common advertising goal is to make consumers visualize themselves using a product, and the use of “deepfake” technology will make that easier than ever, as people will be able to literally see themselves using a product.

Potential for Abuse

With the uncertainty around how long lockdowns will continue to prevent in-person production, the use of digital technologies like “deepfake” is likely to become increasingly prevalent. This technology is not without its issues, however. The use of “deepfake” technology raises various ethical issues around disinformation and consent, and certainly poses risk of misuse and fraud, especially in the areas of politics and government. This may leave consumers with competing feelings when viewing a “deepfake” ad: a mixture of wonder at the technology’s capabilities and concerns about misuse and fraud. In many cases, it is difficult to distinguish between ads featuring actual people and those that have been digitally altered, which may leave viewers feeling as though they’ve been “tricked.”

Advertisers should therefore be cautious in employing “deepfake” technology and be transparent about the manipulation, making it clear to viewers that what they are viewing is not real. And advertisers should of course always obtain consent from those appearing in their “deepfake”-powered ads.

With Great Power Comes Great Responsibility

As “deepfake” technology continues to advance, some experts say that we will soon be unable to tell the difference between real humans and those that have been digitally altered. There are algorithms currently in development that will help viewers tell the difference, and the big tech companies are getting involved — Google recently released a database of thousands of “deepfake” videos to public institutions to assist them in training systems that detect altered media. Some experts have floated the idea of using a digital watermarking system as a way of ensuring people are not defrauded by “deepfakes.” Blockchain has also been considered as a potential solution, as it can be used as a ledger that authenticates the source of a media asset and tracks any time the original has been altered.

As always, the law is struggling to keep pace with technology. Because “deepfakes” are still an emerging technology, legal safeguards are currently scarce, but this is likely to change. Maine lawmakers, for example, are currently considering a law that would prohibit the use of “deepfake” technology in political advertising.

In the end, while the use of “deepfake” technology in advertising is not without risk, this revolutionary technology could create a myriad of valuable opportunities for marketers if used in the right way.

Copyright © 2020, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume X, Number 122

About this Author

Genevieve Perez, Sheppard Mullin Law Firm, Entertainment and Digital Media Attorney
Associate

Genevieve Perez is an associate in the Entertainment and Digital Media Practice Group in the firm's New York office. Genevieve’s practice focuses on transactional matters in the entertainment, technology, media, fashion and advertising fields.

212-653-8700
Jason Mueller, Sheppard Mullin Law Firm, Intellectual Property Attorney
Partner

Jason Mueller is a partner in the Intellectual Property Practice Group in the firm's Dallas office. He serves on the Diversity and Inclusion committee and leads the Veterans at Sheppard affinity group within the firm.

Jason provides strategic business counseling on intellectual property and advertising issues, and has extensive trial and appellate experience in copyright, trademark, trade secret, patent and false advertising cases. As lead trial lawyer, he has tried cases to verdict before juries in the Eastern, Northern and Southern Districts of Texas. Outside of Texas, Jason has served as lead trial counsel in courts in California, New York and several other states, representing some of the world’s leading entertainment, energy, healthcare and building products companies.

Jason's trial experience informs the practical advice and counsel he delivers to clients on intellectual property and advertising issues. Beyond the courtroom, Jason leads an IP due diligence team, represents clients before the U.S. Patent and Trademark Office and the Trademark Trial and Appeal Board, and has defended claims or investigations initiated by the Federal Trade Commission, the Texas Attorney General’s Office, the New York Attorney General’s Office, as well as several other state AG offices.

Jason has substantial experience handling all aspects of advertising and marketing law, including sweepstakes, contests, raffles and other promotions. He has performed regulatory compliance reviews and developed and instituted compliance programs to manage risk related to advertising, social media initiatives, and content management and acquisition. His team reviews national print, radio and television advertising campaigns, and issues opinions relating to the adequacy of claim substantiation. The advertising review team has experience serving as primary, overflow or escalation counsel for the review of advertisements.

469-391-7402