Sunday, December 3, 2023

“The machine has labeled you as high-risk”

Is Pittsburgh’s “child welfare” predictive analytics algorithm running amok? Inquiring minds (at the US Dept. of Justice) want to know! 

The Allegheny Family Screening Tool slaps an invisible scarlet number “risk score” on every child whose parents or other caretakers have been accused of neglect.

(For a comprehensive examination of the dangers of this kind of computerized racial profiling, see our publication Big Data Is Watching You.)

The most highly touted, most far-reaching example of computerized racial profiling in family policing (more accurate terms than “predictive analytics” in “child welfare”) is the one in Pittsburgh. Called the Allegheny Family Screening Tool (AFST), it slaps an invisible scarlet number “risk score” between 1 and 20 on any child who is the subject of a hotline call alleging neglect.*

In response to allegations of bias, the designers and supporters of AFST offer three defenses:

1. The algorithm is used only at the screening stage, to decide which allegations are most in need of investigation.
2. The investigators can’t be biased by this because they don’t even know the risk score.
3. The algorithm doesn’t tell investigators when to tear children from the arms of their families and consign them to foster care; that’s left to humans.

We’ve always known that the claim about workers not knowing the risk score was b.s. They may not know the precise risk number, but if they’re told to rush out and investigate a particular family, they know the algorithm has labeled the family “high risk.” So of course that’s going to bias the investigator.

But now we know about a case in Pittsburgh that calls into question all the claims about AFST’s supposed lack of bias, and about who -- or what -- makes the decisions. And the U.S. Department of Justice is doing some of the questioning. These revelations come in a story from the American Bar Association’s ABA Journal. That story builds on outstanding reporting by the Associated Press.

Both stories center on Andrew and Lauren Hackney, who followed their doctor’s advice and took their infant to the ER when the baby wasn’t eating enough.  They believe the hospital called the Allegheny County family police agency, which immediately took away the child.  

Here’s where we find out who’s really calling the shots: the humans or the algorithm. According to the Hackneys’ lawyer,

when the Hackneys asked their caseworker what they could do to get their daughter back, “the caseworker said, ‘I’m sorry. Your case is high-risk. The machine has labeled you as high-risk.’” [Emphasis added.] 

That was the first the Hackneys knew about AFST.  

As for why the machine labeled them high-risk: nobody knows. Allegheny County brags that the data elements in AFST are public, but the county doesn’t say how much weight each factor gets, or even whether a given factor counts for or against a family. And it doesn’t say what led to the risk score in any particular case.
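To see why those undisclosed weights matter, here is a minimal, entirely hypothetical sketch of how a screening score of this general kind can be built. The factor names, weights, and 1-to-20 clamping below are invented for illustration only; they are not the county’s actual data elements and do not reflect how AFST is really computed.

```python
# Hypothetical illustration only -- NOT the actual AFST model, whose weights
# and scoring method are not public.

# Made-up weights for made-up factors. The sign and size of each weight
# decide whether a factor counts for or against a family -- exactly the
# information the county does not disclose.
WEIGHTS = {
    "prior_hotline_calls": 1.2,       # counts against the family -- but how much?
    "receives_public_benefits": 0.8,  # a proxy for poverty, not for danger
    "parent_disability_record": 0.9,  # the kind of factor the DOJ is asking about
    "years_at_current_address": -0.3, # maybe stability counts *for* a family
}

def risk_score(family: dict) -> int:
    """Weighted sum of factors, clamped to a 1-20 band like the one AFST reportedly uses."""
    raw = sum(WEIGHTS[k] * family.get(k, 0) for k in WEIGHTS)
    return max(1, min(20, round(raw)))

# A family flagged "high risk" has no way to tell which of these numbers
# drove the score, or whether any of them should have.
example = {"prior_hotline_calls": 2, "receives_public_benefits": 1,
           "parent_disability_record": 1, "years_at_current_address": 4}
print(risk_score(example))  # -> 3
```

The point of the sketch is simply that publishing a list of “data elements,” as the county does, tells a family nothing about which elements pushed their score up, which pushed it down, or by how much.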

The Hackneys and their lawyer suspect that the algorithm, and the humans who may be slavishly following it, discriminated against them because of disabilities.  As the ABA Journal story explains: 

Lauren Hackney has attention-deficit/hyperactivity disorder that can cause memory loss, and Andrew Hackney has some resultant damage from a stroke. 

That’s why the Justice Department is involved. In this case, and in at least two others, it is investigating whether humans and machines alike in Pittsburgh violated the federal Americans with Disabilities Act.

The ABA Journal article quotes two experts on disability law and family policing, Prof. Robyn Powell of the University of Oklahoma School of Law and Prof. Sarah Lorr, co-director of the Disability and Civil Rights Clinic at Brooklyn Law School.  They say that 

Not only are some of the criteria [used in the AFST algorithm] explicitly disallowed … pointing to disability-related factors, but long-standing racial biases also are implicitly included. 

“The ADA explicitly says that state and local government entities cannot discriminate based on disability, and within that requirement is the idea that you cannot use screening tools or eligibility criteria that would [point to] people with disabilities,” Powell says. 

And in Colorado, where at least one county is using a similar algorithm, family defender Sarah Morris says 

“All this does is launder human biases through the mirage of some kind of transparent nonbiased machine calculation.” 

Notwithstanding the specific comments allegedly made by the caseworker in the Hackney case, AFST co-designer Emily Putnam-Hornstein still claims that the risk score is just one of many tools and that it’s “advisory only.”

But even if that’s true, should we trust the “advice” of an algorithm designed by someone who denies the field has a racism problem, demeans Black activists, and takes pride in being part of a group that defends a self-described “race realist” law professor who hangs out with Tucker Carlson?

She has said: "I think it is possible we don’t place enough children in foster care or early enough" and signed onto an extremist agenda that proposes requiring anyone reapplying for public benefits, and not otherwise seen by "mandatory reporters," to produce their children for a child abuse inspection.  And she helped design another algorithm that explicitly uses race as a risk factor.  (Designers of that one say it’s only supposed to be used to target “prevention.”) 

It's been almost two years since the machine said the Hackneys’ infant was “high risk.” She’s still in foster care.

* In Pennsylvania, the state runs the child abuse hotline but counties do everything else. Counties are allowed to screen most “neglect” allegations but not allegations of abuse or “severe neglect.”