HUD warns on AI-fueled housing discrimination
The Department of Housing and Urban Development confirmed that protected characteristics like race and national origin are covered against discrimination by AI algorithms.
The Department of Housing and Urban Development is monitoring how artificial intelligence applications can violate protections under the Fair Housing Act, releasing two new documents that outline how the emerging technology can discriminate against individuals seeking housing.
The documents tackle the misuse of AI algorithms in two contexts: tenant screening and housing advertisements. Research cited by HUD indicates that AI software can introduce bias, and potentially discrimination, into both activities.
“Under this administration, HUD is committed to fully enforcing the Fair Housing Act and rooting out all forms of discrimination in housing,” said HUD Acting Secretary Adrianne Todman in a Thursday press release. “Today, we have released new guidance to ensure that our partners in the private sector who utilize artificial intelligence and algorithms are aware of how the Fair Housing Act applies to these practices.”
The tenant screening guidance released by HUD explains that housing providers cannot discriminate against applicants based on race, color, religion, sex (including gender identity), national origin, disability or familial status, and that providers will be held accountable for discriminatory decisions whether those decisions are made internally or by third-party algorithms.
“A housing provider or a tenant screening company can violate the Fair Housing Act by using a protected characteristic — or a proxy for a protected characteristic — as a screening criterion,” the document reads. “This is true even if the decision for how to screen applicants is made in whole or in part by an automated system, including a system using machine learning or another form of AI.”
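To make the proxy concern concrete, the sketch below shows one common way auditors check an automated decision system for disparate outcomes: comparing approval rates across applicant groups against the widely cited "four-fifths" rule of thumb. The data, group labels and threshold are hypothetical and purely illustrative; HUD's guidance does not prescribe this or any particular test.

```python
# Illustrative only: a minimal audit of a hypothetical automated
# tenant-screening model's outcomes, using the four-fifths rule of
# thumb for potential disparate impact. All data here is made up.

from collections import defaultdict

# Hypothetical screening results: (applicant_group, approved)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in outcomes:
    total[group] += 1
    approved[group] += ok

# Approval rate per group, compared against the highest-approving group.
rates = {g: approved[g] / total[g] for g in total}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    # Flag groups approved at less than 80% of the top group's rate,
    # a common screening heuristic for potential disparate impact.
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this only surfaces disparities in outcomes; it cannot by itself establish that a model used a protected characteristic or a proxy for one, which is why the guidance focuses on how screening criteria are chosen as well as on their effects.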
Similarly, HUD stipulates that those who run targeted advertisements for housing, meaning entities or individuals posting ads for housing opportunities, properties and services covered by the Fair Housing Act, can be held liable if the ads are found to discriminate based on protected characteristics.
Discriminatory advertising practices include denying customers housing opportunities, targeting vulnerable populations for predatory products, deterring certain populations from opportunities or steering consumers to particular areas based on protected characteristics.
“Discriminatory advertising can contribute to, reinforce, and perpetuate residential segregation and other harms addressed by the Fair Housing Act,” the guidance reads.
The release of the documents fulfills commitments in the Biden administration's Blueprint for an AI Bill of Rights, mandates within President Joe Biden's 2023 executive order on AI and an April joint statement in which HUD and seven other federal agencies pledged to protect civil rights as automated systems become commonplace.
The issue of algorithmic accountability has also been featured in at least one bill, with Sen. Ron Wyden, D-Ore., introducing the Algorithmic Accountability Act of 2023 to help protect consumers from the misuse of AI and machine learning models in areas like housing.