Using AI to Promote Equitable and Affordable Housing
A proper governance framework covering risk management, data governance, monitoring, transparency, and human oversight can protect against discriminatory AI outputs.
The deployment of artificial intelligence (AI) and machine learning technologies in housing is underway and poised to accelerate in the coming years. Although the new technology offers policymakers a significant opportunity to make the housing system more efficient and responsive to residents' needs, ensuring that these technologies are designed and used in ways that advance equity, fair housing, affordability, and other housing-related goals requires deliberate effort on the part of all stakeholders. In December 2023, the National Council of State Housing Agencies hosted a symposium to discuss some of the challenges, opportunities, and policy considerations surrounding AI as this technology becomes increasingly integrated into the housing ecosystem.
AI, like all technologies, is neither inherently good nor bad; its value comes from how it is used. For example, AI could prove a powerful tool for detecting subtle patterns of racial discrimination or other bias in housing systems, including in mortgages, appraisals, and investment. On the supply side, Michael Akinwumi, chief responsible AI officer at the National Fair Housing Alliance, sees the potential for deploying AI to analyze and suggest reforms to outdated zoning codes that stifle needed new development. Akinwumi described three ways that AI systems are improving housing affordability: AI-based platforms are helping to streamline the permitting process, reducing delays and associated costs; AI is optimizing construction and design; and AI is identifying optimal locations for affordable housing development.
Phebe Vayanos, codirector of the Center for Artificial Intelligence in Society and associate professor at the University of Southern California, discussed her group's work to develop an AI system to help mitigate homelessness in Los Angeles. Their objective was to optimally match clients with services to improve both the fairness and efficiency of the city's homelessness services. However, equity and efficiency are not always in alignment and can conflict. By grounding the AI in current research on fair practices, Vayanos' team produced measurably better outcomes more often than state-of-the-art approaches that did not use AI.
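The equity-efficiency tension Vayanos described can be made concrete with a toy example. The sketch below is not the USC system; the clients, groups, resources, and predicted success scores are all invented for illustration. It brute-forces a tiny client-to-resource matching problem and shows that the assignment maximizing total predicted success can differ from the one maximizing the worst-off group's average outcome.

```python
# Toy sketch of the equity-efficiency tension in matching clients to
# services. NOT the USC system: all names, groups, and scores are invented.
from itertools import permutations

clients = ["A", "B", "C"]
groups = {"A": "g1", "B": "g1", "C": "g2"}  # hypothetical demographic groups
resources = ["rapid_rehousing", "permanent_supportive", "prevention"]

# Hypothetical predicted probability of a successful outcome per pairing.
score = {
    ("A", "rapid_rehousing"): 0.9, ("A", "permanent_supportive"): 0.8, ("A", "prevention"): 0.1,
    ("B", "rapid_rehousing"): 0.8, ("B", "permanent_supportive"): 0.7, ("B", "prevention"): 0.1,
    ("C", "rapid_rehousing"): 0.6, ("C", "permanent_supportive"): 0.3, ("C", "prevention"): 0.1,
}

def efficiency(assign):
    """Total predicted success across all clients."""
    return sum(score[c, r] for c, r in assign.items())

def equity(assign):
    """Average predicted success of the worst-off group."""
    by_group = {}
    for c, r in assign.items():
        by_group.setdefault(groups[c], []).append(score[c, r])
    return min(sum(v) / len(v) for v in by_group.values())

best_eff = best_fair = None
for perm in permutations(resources):  # each client gets a distinct resource
    assign = dict(zip(clients, perm))
    if best_eff is None or efficiency(assign) > efficiency(best_eff):
        best_eff = assign
    # Equity-first: maximize the worst group's outcome, then efficiency.
    if best_fair is None or (equity(assign), efficiency(assign)) > (equity(best_fair), efficiency(best_fair)):
        best_fair = assign

print("efficiency-only:", best_eff)   # concentrates good outcomes in g1
print("equity-first:   ", best_fair)  # a different, fairer assignment
```

In this toy instance, the efficiency-only matching leaves group g2 with the weakest service, while the equity-first matching gives up some total predicted success to raise the worst group's average, which is precisely the tradeoff a system designer must decide how to weigh.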
Challenges of AI
AI and machine learning are powerful analytic tools with enormous potential, panelists explained, although they carry risk. Their power derives from their ability to discern patterns in existing data. They "train" themselves to detect these patterns by analyzing datasets in pursuit of an established goal, such as efficiency, equity, or some other metric. According to Michael Neal, senior fellow at the Urban Institute's Housing Finance Policy Center, one challenge of particular concern for housing is that when the underlying training data reflects bias, the decisionmaking protocol of the AI incorporates those biases. Biases in the training data might result from biased measurements or reflect the ongoing consequences of past biased policies, such as exclusionary zoning, added Akinwumi.
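Neal's point about biased training data can be illustrated with a deliberately simple sketch. The "historical" loan decisions, ZIP codes, and frequency-based predictor below are all invented for illustration: a model fit to past decisions that penalized one ZIP code reproduces the disparity for new applicants with identical qualifications, even though it never sees a protected attribute, because the ZIP code acts as a proxy, as in redlining.

```python
# Toy illustration (invented data): a model trained on biased historical
# decisions reproduces the bias. ZIP code acts as a proxy for group.
from collections import defaultdict

# Historical loan decisions: (zip_code, credit_ok, approved).
# Past policy denied most applicants in zip "B" regardless of credit.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
    ("B", True, True),
]

# "Train": tally approval rates per (zip, credit_ok) cell of the history.
counts = defaultdict(lambda: [0, 0])  # cell -> [approvals, total]
for z, ok, approved in history:
    counts[z, ok][1] += 1
    counts[z, ok][0] += approved

def predict(z, ok):
    approvals, total = counts[z, ok]
    return approvals / total >= 0.5  # approve if the cell mostly approved

# Two new applicants with identical credit but different ZIP codes:
print(predict("A", True))  # True
print(predict("B", True))  # False -- the inherited bias
```

Nothing in the training step is malicious; the disparity enters entirely through the data, which is why panelists stressed auditing training data rather than only the algorithm.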
Equity concerns also exist in access to AI's benefits, said Neal. For example, smaller institutions serving historically marginalized communities are less likely to have the resources needed to implement AI, thereby depriving their communities of the technology's benefits. Across the spectrum of development and deployment, AI should not be used solely as a tool for improving the efficiency of systems, and stakeholders should ensure that AI promotes equity goals, panelists urged.
The panelists argued that stakeholders will need well-crafted policies to ensure that AI systems are promoting socially responsible goals. James Tassos, deputy director of tax policy and strategic initiatives at the National Council of State Housing Agencies, outlined three focus areas: how AI systems are helping to expand the affordable housing supply, how AI systems can reduce costs through better data and tools, and how AI empowers community-based organizations and low-income communities.
In addition, responsible AI will incorporate equity goals into its algorithm; preserve privacy; produce reliable, safe, and valid results; produce explainable decisions; and be governable, said Akinwumi, describing principles aligned with the White House's AI Bill of Rights. Kim Phan, a partner at Troutman Pepper, described the European Union's (EU's) early efforts to craft a regulatory framework for AI. Under EU regulations, housing, along with health and financial information, is considered a particularly high-risk domain for AI deployment given the privacy and equity concerns involved. Accordingly, the EU is creating rules covering risk management, data governance, monitoring, transparency, and human oversight of AI systems.
In the United States, policy development is already underway. The National Institute of Standards and Technology has created a voluntary framework for AI risk management, and the Consumer Financial Protection Bureau (CFPB) has released guidance on AI applications in home appraisals to protect against discriminatory AI output. The CFPB also requires that adverse credit decisions, including landlord decisions to reject prospective tenants, be explainable. In addition, states such as California and Illinois have begun implementing their own consumer and employee protection laws governing AI.
The incorporation of AI tools into housing-related activities is underway. To balance the risks and benefits of this technology, well-crafted and intentional policy choices will be crucial, panelists argued. Significant work to promote housing equity, reduce bias in housing systems, and move the needle toward more affordable housing is already in progress.