Three hundred twenty-four. That was the score Mary Louis was given by an AI-powered tenant screening tool. The software, SafeRent, didn’t explain in its 11-page report how the score was calculated or how it weighed various factors. It didn’t say what the score actually signified. It just displayed Louis’s number and determined it was too low. In a box next to the result, the report read: “Score recommendation: DECLINE”.
Louis, who works as a security guard, had applied for an apartment in an eastern Massachusetts suburb. At the time she toured the unit, the management company said she shouldn’t have a problem having her application accepted. Though she had a low credit score and some credit card debt, she had a stellar reference from her landlord of 17 years, who said she consistently paid her rent on time. She would also be using a voucher for low-income renters, guaranteeing the management company would receive at least some portion of the monthly rent in government payments. Her son, also named on the voucher, had a high credit score, indicating he could serve as a backstop against missed payments.
But in May 2021, more than two months after she applied for the apartment, the management company emailed Louis to let her know that a computer program had rejected her application. She needed to have a score of at least 443 for her application to be accepted. There was no further explanation and no way to appeal the decision.
“Mary, we regret to inform you that the third party service we utilize to screen all prospective tenants has denied your tenancy,” the email read. “Unfortunately, the service’s SafeRent tenancy score was lower than is permissible under our tenancy standards.”
A tenant sues
Louis was left to rent a more expensive apartment. Management there didn’t score her algorithmically. But, she learned, her experience with SafeRent wasn’t unique. She was one of a class of more than 400 Black and Hispanic tenants in Massachusetts who use housing vouchers and said their rental applications were rejected because of their SafeRent score.
In 2022, they came together to sue the company under the Fair Housing Act, claiming SafeRent discriminated against them. Louis and the other named plaintiff, Monica Douglas, alleged the company’s algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. They alleged the software gave weight to irrelevant account information in judging whether they’d be good tenants – credit scores, non-housing-related debt – but did not factor in that they’d be using a housing voucher. Studies have shown that Black and Hispanic rental applicants are more likely than white applicants to have lower credit scores and to use housing vouchers.
“It was a waste of time waiting to get a decline,” Louis said. “I knew my credit wasn’t good. But the AI doesn’t know my behavior – it knew I fell behind on paying my credit card but it didn’t know I always pay my rent.”
Two years have passed since the group first sued SafeRent – so long that Louis says she has moved on with her life and all but forgotten about the lawsuit, though she was one of only two named plaintiffs. But her actions may still protect other renters who make use of similar housing programs, known as Section 8 vouchers for their place in the US federal legal code, from losing out on housing because of an algorithmically determined score.
SafeRent has settled with Louis and Douglas. In addition to making a $2.3m payment, the company has agreed that, for five years, it will not use a scoring system or make any kind of recommendation for prospective tenants who use housing vouchers. Although SafeRent did not legally admit any wrongdoing, it is rare for a tech company to accept changes to its core products as part of a settlement; the more common outcome of such agreements is a purely financial one.
“While SafeRent continues to believe the SRS Scores comply with all applicable laws, litigation is time-consuming and expensive,” Yazmin Lopez, a spokesperson for the company, said in a statement. “It became increasingly clear that defending the SRS Score in this case would divert time and resources SafeRent can better use to serve its core mission of giving housing providers the tools they need to screen applicants.”
Your new AI landlord
Tenant-screening systems like SafeRent are often used as a way to “avoid engaging” directly with applicants and to pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company.
The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted.
Still, even for people involved in the application process, the workings of the algorithm are opaque. The property manager who showed Louis the apartment said she couldn’t see why Louis would have any problem renting it.
“They’re putting in a bunch of information and SafeRent is coming up with their own scoring system,” Kaplan said. “It makes it harder for people to predict how SafeRent is going to view them. Not just for the tenants who are applying, even the landlords don’t know the ins and outs of SafeRent score.”
As part of Louis’s settlement with SafeRent, which was approved on 20 November, the company can no longer use a scoring system or recommend whether to accept or decline a tenant if they’re using a housing voucher. If the company does come up with a new scoring system, it is obliged to have it independently validated by a third-party fair housing organization.
“Removing the thumbs-up, thumbs-down determination really allows the tenant to say: ‘I’m a great tenant,’” said Kaplan. “It makes it a much more individualized determination.”
AI spreads to foundational parts of life
Nearly all of the 92 million people considered low-income in the US have been exposed to AI decision-making in fundamental parts of life such as employment, housing, medicine, schooling or government assistance, according to a new report about the harms of AI by attorney Kevin de Liban, who represented low-income people as part of the Legal Aid Society. The founder of a new AI justice organization called TechTonic Justice, De Liban first started investigating these systems in 2016 when he was approached by patients with disabilities in Arkansas who suddenly stopped getting as many hours of state-funded in-home care because of automated decision-making that cut human input. In one instance, the state’s Medicaid dispensation relied on a program that determined a patient did not have any problems with his foot because it had been amputated.
“This made me realize we shouldn’t defer to [AI systems] as a sort of supremely rational way of making decisions,” De Liban said. He said these systems make various assumptions based on “junk statistical science” that produce what he refers to as “absurdities”.
In 2018, after De Liban sued the Arkansas department of human services on behalf of these patients over the department’s decision-making process, the state legislature ruled the agency could no longer automate the determination of patients’ allotments of in-home care. De Liban’s was an early victory in the fight against the harms caused by algorithmic decision-making, though its use persists nationwide in other arenas such as employment.
Few regulations curb AI’s proliferation despite flaws
Laws limiting the use of AI, especially in making consequential decisions that can affect a person’s quality of life, are few, as are avenues of accountability for people harmed by automated decisions.
A survey conducted by Consumer Reports, released in July, found that a majority of Americans were “uncomfortable about the use of AI and algorithmic decision-making technology around major life moments as it relates to housing, employment, and healthcare”. Respondents said they were uneasy not knowing what information AI systems used to assess them.
Unlike in Louis’s case, people are often not notified when an algorithm is used to make a decision about their lives, making it difficult to appeal or challenge those decisions.
“The existing laws that we have can be useful, but they’re limited in what they can get you,” De Liban said. “The market forces don’t work when it comes to poor people. All the incentive is in basically producing more bad technology, and there’s no incentive for companies to produce low-income people good options.”
Federal regulators under Joe Biden have made several attempts to catch up with the quickly evolving AI industry. The president issued an executive order that included a framework intended, in part, to address national security and discrimination-related risks in AI systems. However, Donald Trump has made promises to undo that work and slash regulations, including Biden’s executive order on AI.
That may make lawsuits like Louis’s a more important avenue for AI accountability than ever. Already, the lawsuit has garnered the interest of the US Department of Justice and the Department of Housing and Urban Development – both of which handle discriminatory housing policies that affect protected classes.
“To the extent that this is a landmark case, it has a potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said.
Still, keeping these companies accountable in the absence of regulation will be difficult, De Liban said. Lawsuits take time and money, and the companies may find a way to build workarounds or similar products for people not covered by class action lawsuits. “You can’t bring these types of cases every day,” he said.