3 minute read | June 22, 2022
On June 21, the United States Department of Justice announced that it had secured a “groundbreaking” settlement resolving claims brought against a large social media platform for allegedly engaging in discriminatory advertising in violation of the Fair Housing Act. The settlement is one of the first significant federal actions involving claims of algorithmic bias, and it may illustrate the complexity of applying “disparate impact” analysis under the anti-discrimination laws to complex algorithms, an area of increasingly intense regulatory focus.
The complaint alleges that the social media company’s ad targeting and delivery system relied on vast troves of data that the company had collected about its users, including data regarding users’ race, religion, sex, disability, national origin, or familial status, and allowed advertisers to target ads based on protected characteristics or proxies for protected characteristics. The complaint alleges that the company discriminated in three related ways:
The complaint follows the Department of Housing and Urban Development’s charge of discrimination filed in 2019, which the company elected to adjudicate in federal district court. It also follows the settlement of private litigation in 2019, which addressed some but not all of the conduct alleged in yesterday’s complaint.
The company denied all wrongdoing but, to resolve the claims, agreed to the entry of an order requiring it to change its advertising practices. The settlement, if entered by the court, would require the company to:
The settlement is contingent on the parties reaching agreement regarding the sufficiency of the company’s new system for personalizing ads for consumers by December 31, 2022. If they are unable to, the settlement will be void, and the parties will once again be in litigation.
Agencies across the federal government have been warning regulated institutions that the use of machine learning to crunch reams of data could lead to discriminatory decisions if the data itself relies on protected characteristics or close proxies for such characteristics. In March 2021, federal financial regulators issued a Request for Information regarding the use of artificial intelligence, including machine learning, by financial institutions. The agencies have not yet issued comprehensive guidance regarding the use of artificial intelligence. Accordingly, regulated financial institutions subject to fair lending laws should consider how the principles underlying DOJ’s claims, and the negotiated resolution of those claims, would apply to their own use of algorithmic decision-making.
If you have any questions regarding the settlement, please contact John Coleman or an Orrick attorney with whom you have worked in the past.