Crystal Marie McDaniels had done everything right. She and her husband Eskias had nearly six-figure incomes, credit scores of 805 and 725, and more saved for a down payment than they would ever need.
The $375,000 four-bedroom home in Charlotte had 2,700 square feet, a lush lawn, and a neighborhood pool for their son Nazret. Their monthly mortgage payment would be lower than the rent they had paid in Los Angeles. Prequalification, she said, was easy. Then, two days before closing, the phone rang.
| Category | Details |
|---|---|
| Topic | Algorithmic Redlining in U.S. Mortgage Lending |
| Original Redlining Era | 1930s–1960s, Home Owners’ Loan Corporation (HOLC) drew racially coded maps |
| Governing Law | Fair Housing Act of 1968 (Title VIII, Civil Rights Act) |
| Key Dataset | Home Mortgage Disclosure Act (HMDA) Data |
| Denial Rate Disparity (Black vs. White applicants) | 80% more likely to be denied under similar financial profiles — The Markup, 2021 |
| Latino Applicant Disparity | 40% more likely to be denied than comparable White applicants |
| Asian/Pacific Islander Disparity | 50% more likely to be denied |
| Native American Disparity | 70% more likely to be denied |
| Notable Case | Crystal Marie & Eskias McDaniels, Charlotte, NC — denied despite credit scores of 805 and 725 |
| Key Regulatory Body | Consumer Financial Protection Bureau (CFPB) |
| Technical Mechanism | Biased training data, proxy variables (ZIP codes, employment type), feedback loops |
| Study Scope | Analysis of over 2 million conventional mortgage applications (2019) |
| Industry Critics | American Bankers Association, Mortgage Bankers Association |
| Research Voice | José Loya, Assistant Professor of Urban Planning, UCLA |
The loan officer submitted their application to underwriting somewhere between twelve and seventeen times. Each time, the answer came back the same: no. The couple had already paid $6,000 in nonrefundable fees. They had reserved moving trucks. “It seemed like it was getting rejected by an algorithm,” Crystal Marie said afterwards, “and then there was a person who could step in and decide to override that or not.”
The official explanation was that she was a contractor rather than a full-time employee, even though her employer confirmed her job was not at risk. Her white coworkers, also contractors, had mortgages.

One story can be written off as anecdotal. Two million are much harder to ignore. In 2021, The Markup carried out one of the most thorough statistical analyses of mortgage lending data ever published, controlling for seventeen financial variables, including loan-to-value and debt-to-income ratios. The findings were stark: lenders were 80 percent more likely to reject Black applicants than white applicants with nearly identical financial profiles.
Latino applicants were 40 percent more likely to be denied. Asian and Pacific Islander applicants, 50 percent. Native American applicants, 70 percent. These were not regional anomalies; the rates came from nationwide HMDA data.
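The approach behind numbers like these is, at its core, binary logistic regression: model the odds of denial as a function of race while holding the financial variables constant. Here is a minimal sketch of that kind of controlled comparison; the file name and column names are hypothetical stand-ins for fields in the public HMDA data, not The Markup’s actual code.

```python
# A controlled-comparison sketch in the spirit of The Markup's methodology:
# binary logistic regression on loan denial, holding financial variables
# constant. The file name and column names are hypothetical stand-ins for
# fields in the public HMDA data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hmda_2019_conventional.csv")  # hypothetical cleaned extract

# denied: 1 if the application was denied, 0 if originated.
# Controls stand in for the kinds of variables the analysis adjusted for
# (debt-to-income, loan-to-value, income, loan amount, and so on).
results = smf.logit(
    "denied ~ C(race, Treatment(reference='white'))"
    " + dti + ltv + np.log(income) + np.log(loan_amount)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: a value of 1.8 on the
# Black-applicant indicator would mean 80 percent higher odds of denial
# than a white applicant with the same measured financial profile.
print(np.exp(results.params))
```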
The machinery causing the harm is what distinguishes this moment from past civil rights struggles over housing. In the 1930s and 1940s, loan officers, neighborhood associations, and federal agencies literally drew red lines around Black communities on city maps, designating them as “hazardous” and denying them access to credit for generations.
For decades, redlining had a human face. The Fair Housing Act of 1968 outlawed the practice, but the inequality it produced never truly went away: segregated neighborhoods, stunted home equity, wealth gaps that widened over time. It simply found new infrastructure.
Today’s mortgage underwriting systems are built on machine learning models trained on historical data, and this is where the problem becomes subtler and easier to overlook. A model does not need to know an applicant’s race in order to discriminate. A ZIP code will do. An employment classification. Proximity to certain school districts.
These are proxy variables, and racial and economic history is ingrained in them. Train an algorithm on decades of lending decisions shaped by redlining, and it learns those patterns as if they were objective financial knowledge. Discrimination gets encoded as risk assessment.
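The mechanism is easy to demonstrate. In the toy simulation below (synthetic data, purely illustrative, not any real underwriting model), race is never shown to the model; yet because geography is segregated and the historical labels are biased, a “race-blind” model trained only on credit score and ZIP code reproduces the disparity anyway.

```python
# Toy demonstration of proxy-variable leakage. Synthetic data, purely
# illustrative: no real lender's model or data appears here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Segregated geography: group membership strongly predicts ZIP code.
group = rng.integers(0, 2, n)                       # hidden from the model
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)

# Creditworthiness is identically distributed across both groups.
credit = rng.normal(0.0, 1.0, n)

# Historical decisions: denial driven by credit AND by bias against group 1.
p_deny = 1 / (1 + np.exp(credit - 1.2 * group))
denied = (rng.random(n) < p_deny).astype(int)

# Train a "race-blind" model on credit score and ZIP code only.
model = LogisticRegression().fit(np.column_stack([credit, zip_code]), denied)

# Compare predicted denial rates by (hidden) group at identical credit:
probe = np.column_stack([np.zeros(n), zip_code])
pred = model.predict_proba(probe)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted denial {pred[group == g].mean():.3f}")
```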
There is a persistent sense that the industry understands this better than it admits. When the Consumer Financial Protection Bureau first proposed extending HMDA data collection to include credit scores, debt-to-income ratios, and loan-to-value figures (exactly the data that lending trade groups had long claimed would explain away racial disparities), those same trade groups opposed the disclosure. They objected to the government collecting it at all.
They cited cybersecurity concerns. The irony is that once researchers did gain access to some of that data, the racial disparities did not shrink. The CFPB’s own analysis indicates that accounting for credit scores does not eliminate lending disparities for applicants of color.
José Loya, an assistant professor of urban planning at UCLA who reviewed The Markup’s methodology, put it bluntly: “Lenders used to tell us, ‘It’s because you don’t have the lending profiles; the ethno-racial differences would go away if you had them.’ Your work demonstrates that’s untrue.” High-earning Black applicants carrying less debt were still rejected more often than high-earning white applicants carrying more. At some point that stops looking like financial risk modeling and starts looking like something else entirely.
It is hard to ignore how neatly the new language of algorithmic objectivity serves the same purpose the old maps did: providing cover. The algorithm is not biased, the argument goes. It is just data. But data about a society shaped by prejudice is not itself objective. It never was. A ZIP code carries no financial risk in the abstract.
It carries the specific history of which neighborhoods were starved of investment for fifty years, which schools were underfunded, and which communities were denied credit. Feed that history into a machine learning model and call the output a risk score, and you have not eliminated discrimination. You have automated it.
The industry’s response to this evidence has mostly been to question the methodology, arguing that the public data is too incomplete to support conclusions, while simultaneously lobbying against making that data more complete. The American Bankers Association acknowledged that disparities exist but offered little to explain them. None of the major trade associations identified a specific error in The Markup’s calculations.
What happens next is genuinely unclear. New regulatory frameworks, including parts of the EU AI Act, are beginning to impose fairness requirements on automated decision systems. The CFPB and Congress are debating algorithmic transparency mandates, audit requirements, and bias testing. And some lenders have quietly begun experimenting with fairness-aware machine learning, training models under explicit constraints designed to reduce disparate outcomes, as in the sketch below.
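One common form of such a constraint caps the gap in denial rates between groups during training. The following sketch uses the open-source Fairlearn library with toy synthetic data; it illustrates the general technique, not any particular lender’s system.

```python
# Illustrative fairness-constrained training with the open-source Fairlearn
# library. Toy synthetic data: not any lender's actual model or features.
import numpy as np
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                          # protected attribute (toy)
X = rng.normal(size=(n, 3))                           # financial features (toy)
y = (rng.random(n) < 0.25 + 0.10 * race).astype(int)  # biased historical denials

# The protected attribute is used only to enforce the constraint during
# training; it is never a model input.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(difference_bound=0.02),
)
mitigator.fit(X, y, sensitive_features=race)

# The result is a randomized ensemble whose denial rates across groups
# should differ by roughly no more than the bound set above.
preds = mitigator.predict(X)
for g in (0, 1):
    print(f"group {g}: denial rate {preds[race == g].mean():.3f}")
```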
Whether any of that will be sufficient, and how quickly it will move, remains to be seen. The policy process typically takes years; the families waiting on answers most likely cannot afford to.
In the end, Crystal Marie McDaniels got her home in Charlotte. It took months more of fighting, paperwork, and stress that no homebuyer should have to endure. Her son got his neighborhood pool. But the algorithm that rejected her seventeen times is, in all likelihood, still running, and it isn’t weighing any of that.
