Machine Learning and Sua Sponte Discrimination
Thursday, July 11, 2019

It violates the Fair Housing Act to advertise in ways that deny particular segments of the housing market information about housing opportunities. It also violates New York law. But what happens when you use an advertising medium that discriminates on its own? We may just find out.

New York Governor Andrew Cuomo recently directed the New York Department of Financial Services to investigate reports of housing providers harnessing Facebook’s ability to target users with precision. Leveraging the unnerving amount of information that Facebook holds on its users, advertisers can select who sees their ads (and who is blocked from seeing them) based on any number of protected characteristics, including race, religion, sexual orientation, and disability. When a regulated landlord makes advertising or leasing choices based on these characteristics, it’s clearly illegal.

However, we are faced with a reality where computer programs built on machine learning algorithms may discriminate on their own. Facebook allegedly uses machine learning to predict users’ responses to a given ad, and those predictions may create groupings that track protected classes. As a result, an advertiser with no intent to discriminate can still end up with a skewed audience: Facebook’s machine learning may withhold the ad from users of a particular protected class and display it only to prospects in the majority ethnic, racial, or religious group.
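To see how this can happen without anyone intending it, consider a minimal sketch. Everything here is hypothetical: the data, the feature names, and the engagement model are invented, and this is not Facebook’s actual system. The point is only that a model trained to predict engagement can pick up a protected class through a correlated proxy feature, even though the protected attribute is never an input.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected class membership -- never shown to the model.
protected = rng.integers(0, 2, n)

# A proxy feature correlated with the protected class (think of a
# ZIP-code-derived score), plus an unrelated interest score.
zip_score = protected * 1.5 + rng.normal(0, 1, n)
interest = rng.normal(0, 1, n)
X = np.column_stack([zip_score, interest])

# Historical engagement happens to differ by group (past targeting,
# economics, anything). The model sees only the two features.
engaged = (0.8 * zip_score + 0.3 * interest + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(X, engaged)
served = model.predict(X)  # True = the ad is shown to this user

# The audience skews sharply toward one group, even though "protected"
# was never an input: the proxy feature carried the signal.
for g in (0, 1):
    print(f"group {g}: shown the ad {served[protected == g].mean():.0%} of the time")
```

Run the sketch and the two groups see the ad at very different rates. That is exactly the pattern regulators worry about when delivery decisions are delegated to a learned model.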

But what can be done? Does our increasing reliance on AI doom us to a more discriminatory society? Maybe not. It turns out there’s reason for hope: deployed correctly, AI and machine learning may lead us to a more just society. In May 2019, several Berkeley professors published an academic paper comparing fintech lending to face-to-face lending. The paper reached two conclusions: (i) fintech lending results in one-third less price discrimination than face-to-face lending, and (ii) fintech lending does not discriminate in accept/reject decisions.
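The paper’s two findings correspond to two measurable quantities: the gap in approval rates between groups and the gap in pricing among approved borrowers. A minimal, hypothetical audit sketch of those comparisons follows; the data, function name, and field layout are invented for illustration and are not the paper’s methodology.

```python
import numpy as np

def lending_disparity(decisions, rates, group):
    """Report the approval-rate ratio and average-rate gap between two groups."""
    a, b = (group == 0), (group == 1)
    approval_ratio = decisions[b].mean() / decisions[a].mean()
    # Price discrimination: difference in average rate among approved borrowers.
    rate_gap = rates[b & decisions].mean() - rates[a & decisions].mean()
    return approval_ratio, rate_gap

# Synthetic example data: True = approved, rates in percent.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
decisions = rng.random(1000) < 0.7          # approvals, independent of group
rates = 5.0 + rng.normal(0, 0.5, 1000)      # pricing, independent of group

ratio, gap = lending_disparity(decisions, rates, group)
print(f"approval ratio: {ratio:.2f}  (1.00 = parity)")
print(f"avg rate gap:   {gap:+.3f} percentage points")
```

In a real audit, parity would mean an approval ratio near 1.00 and a rate gap near zero; persistent deviations across groups are the kind of evidence the paper quantifies.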

Two lessons are clear from the Facebook fiasco: (1) companies like Facebook that deploy artificial intelligence must understand how decisions are being made and carefully design the decisioning process to avoid claims of discrimination, and (2) attorneys representing companies that advertise on Facebook must be involved not just in reviewing the ads, but also in overseeing which groups receive them.

 
