“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Richard S. Eisert, partner and co-chair of the advertising, marketing & promotions practice group at Davis & Gilbert.
The influencer marketing industry continues to grow rapidly. Some expect influencers to be worth $10 billion collectively by 2020.
The industry’s explosive growth has, on the one hand, fostered great innovation, including the increasing prevalence of virtual influencers, which are powered by artificial intelligence (AI). On the other hand, it has drawn increased legal oversight and enforcement, particularly from the Federal Trade Commission (FTC). But in a sign that the FTC has yet to focus on the proliferation of AI influencers, California has stepped into the breach.
The ‘Blade Runner Law’
In September, Gov. Jerry Brown signed into law California’s SB 1001, which has come to be known as the anti-bot law or the “Blade Runner Law.” SB 1001 makes it illegal to use bots to communicate with individuals online to incentivize the sale of a product or influence a vote in an election, unless the party behind the bot discloses that it is, in fact, a bot.
This law clearly stems in part from the widespread use of bots during the 2016 presidential election to disseminate disinformation and sow discord among the electorate. While that has made this law newsworthy, what is noteworthy for marketers is that it specifically addresses the use of bots for commercial purposes.
SB 1001 takes effect July 1, 2019. The California attorney general can seek a $2,500 penalty per violation and shut down the bots. Private citizens can also seek injunctive relief and restitution for harm caused by bots.
It is important to note what California’s anti-bot law does not cover.
The law’s focus on commercial and political bots excludes bots that don’t incentivize purchases or votes, such as customer service bots that help people pay a bill or obtain product assistance.
The law also covers only “public-facing” bots, meaning that automated services that are targeted and visible only to specific consumers likely would not fall within its scope. This likely means, for example, that spam email services are not covered. Further, the disclosure requirement doesn’t completely prohibit the use of bots to incentivize purchases. Bots may be used for commercial purposes as long as consumers know – or at least are not misled about – the fact that they’re engaging with a bot.
A step beyond
Regardless of its intent, the law goes above and beyond what the FTC requires for influencer disclosures. Under the FTC’s Endorsement Guides, there is no distinction between a bot and a human influencer. A bot doesn’t need to self-identify as a bot when it is speaking on behalf of a brand or encouraging a purchase – it simply needs to include FTC-approved disclosures, such as #ad or #sponsored.
Under SB 1001, however, an automated influencer is required to disclose that it is a bot, not just that its post is brand-related. The law is not clear about what that disclosure might look like, but at the very least it will need to include more than just #ad.
So what are the ramifications of this new law once it goes into effect in 2019? Well, the fact that it is specific to California will not be of much solace, given that it is directed at online activity – which, by its very nature, is accessible to citizens in any state. That means that marketers intending to use AI influencers or other public-facing automated services for commercial purposes must comply with California’s law or somehow prevent California consumers from accessing or seeing the bot, which may prove difficult.
Whether California actually intends to enforce this law is another story. Many believe that it’s more of a policy statement than the basis for active enforcement. For one, the California attorney general’s office likely has far more pressing issues to deal with than sniffing out undisclosed AI influencers. And even if the AG’s office takes up the task, enforcement will be no easy endeavor. Recent attempts to uncover the full extent of bots’ influence in the 2016 elections showed how complicated it is to differentiate automated online accounts from merely anonymous ones.
Marketers, however, should not bet on a lack of enforcement. California’s anti-bot law is now the law of the (internet) land, so to speak, and its private right of action means that there is a risk of legal action, even if the AG does not actively enforce the law. Marketers should rethink how and the extent to which they use AI influencers, as well as how they intend to disclose to consumers that these influencers are automated.