2019: A Bot Odyssey
Tuesday, August 27, 2019

Hal: Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it, I can sing it for you.

Putting aside HAL’s murderous tendencies, 2001: A Space Odyssey did a pretty good job of complying with a new California law that went into effect last month.

Beginning July 1, 2019, it is unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election. The new law provides a safe harbor for bots that disclose that they’re bots.

The statute defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” That’s a pretty broad definition.

If you run a Twitter or Facebook bot account that encourages people in California to vote for a candidate, the bot should disclose that it’s a bot. The disclosure should “be clear, conspicuous, and reasonably designed” to inform persons with whom it communicates that it is a bot.

While the statute does not provide guidance on what makes such a disclosure sufficiently clear and conspicuous, the FTC’s guidance on endorsement disclosures may be a useful analog. When a paid influencer endorses a product on social media, she must clearly and conspicuously disclose material connections that might materially affect the weight or credibility of the endorsement (e.g., that the endorser was paid for endorsing the product). Many influencers comply with this by hashtagging the endorsement with #ad, #advertisement, or something similar. If an influencer squirrels the disclosure away in her bio, it’s not sufficiently clear and conspicuous, and the influencer runs the risk of an FTC enforcement action for deceptive acts or practices. If we assume that similar rules apply to the new California law, bots may need to hashtag their tweets with “#bot” or similarly disclose that the message is automatically generated.
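By way of illustration only (the statute does not prescribe any particular mechanism), an operator of an automated account could append such a disclosure to every outgoing post before publishing it. The sketch below is hypothetical; the helper name and 280-character limit are assumptions, not part of any platform’s API.

```python
# Illustrative sketch only: append a "#bot" disclosure to every
# automatically generated post before it is published.
# The 280-character limit and helper name are assumptions for
# illustration, not part of any real platform API.

DISCLOSURE = "#bot"
MAX_POST_LENGTH = 280  # assumed platform limit


def add_bot_disclosure(post_text: str) -> str:
    """Return the post text with a trailing #bot disclosure, trimming the body if needed."""
    suffix = " " + DISCLOSURE
    if len(post_text) + len(suffix) > MAX_POST_LENGTH:
        post_text = post_text[: MAX_POST_LENGTH - len(suffix) - 1].rstrip() + "…"
    return post_text + suffix


if __name__ == "__main__":
    print(add_bot_disclosure("Remember to vote on election day!"))
    # Remember to vote on election day! #bot
```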

While the bill was passed primarily to address the proliferation of social media bots and their impact on politics, the vaguely worded statute may also capture businesses employing “chatbots” to engage prospective customers. Chatbots are computer programs designed to simulate human conversation and are regularly used on websites to answer frequently asked questions and convert website visitors to leads and customers.

Businesses that use chatbots should review their customer experience to make sure the chatbot does not appear to be a human. While enforcement of the law may raise First Amendment concerns, a conservative approach is for the chatbot to disclose that it’s a bot. In short: be more like HAL.
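For chatbots, the disclosure can simply lead the conversation. The sketch below is a minimal, hypothetical example of a website chat handler that introduces itself as a bot before answering; the function names and canned messages are assumptions for illustration, not a prescribed implementation.

```python
# Illustrative sketch only: a hypothetical website chat handler that
# opens every session with a clear, conspicuous bot disclosure.
# The function names and canned responses are assumptions.

BOT_DISCLOSURE = (
    "Hi! I'm an automated assistant (a bot), not a human. "
    "I can answer common questions or connect you with a person."
)


def start_chat_session(send_message) -> None:
    """Open the session by disclosing the bot's artificial identity first."""
    send_message(BOT_DISCLOSURE)


def handle_visitor_message(text: str, send_message) -> None:
    """Very simple canned-response logic for a FAQ-style chatbot."""
    if "human" in text.lower():
        send_message("Sure -- connecting you with a human representative.")
    else:
        send_message("Thanks! Here's the closest match from our FAQ: ...")


if __name__ == "__main__":
    start_chat_session(print)                      # disclosure goes out first
    handle_visitor_message("What are your hours?", print)
```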

 
