From Crime to Poverty and Kids — AI Finds Applications Across Issues and Datasets.
In my last post I talked about the great work the Rapid Growth Institute has been doing in the world of statistical policing. By using the rich data available thanks to location services, social media, and other more traditional police surveillance methods, we can help predict and identify possible high-risk crime situations. This allows law enforcement to intervene rapidly, sometimes even before a crime has been committed.
Applying this same approach in other problem spaces has clear strategic value for RGI and for the partners we work with. Crime and poverty have overlapping datasets, so when Keep Kids Safe reached out we knew it could be a great opportunity to apply our AI methods from policing to their problem of reducing child abuse!
The Allegheny Family Screening Tool can be considered Version 1 of this concept: get data, find patterns, predict neglect. It's that simple. Essentially, the algorithm scores families against standardized criteria, with higher scores representing worse situations. Field agents intervene once a family's score rises above the cutoff threshold. By removing decision making from humans, who can never achieve the objectivity of an AI algorithm, we ensure the decision-making process is applied fairly to everyone. It's this kind of work that makes me proud to be at a company like RGI, which is having a real impact on the world.
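To make the score-and-threshold idea concrete, here is a minimal sketch of that flow. The actual criteria, weights, and cutoff used by the Allegheny Family Screening Tool are not described in this post, so every feature name and number below is an invented placeholder for illustration only.

```python
# Hypothetical score-and-threshold screening sketch. The criteria, weights,
# and cutoff are invented assumptions, NOT the real AFST model.

# Hypothetical standardized criteria and their weights.
WEIGHTS = {
    "prior_referrals": 2.0,
    "missed_checkups": 1.5,
    "housing_instability": 1.0,
}

CUTOFF = 5.0  # hypothetical intervention threshold


def risk_score(record: dict) -> float:
    """Weighted sum over the criteria; higher scores represent worse situations."""
    return sum(weight * record.get(key, 0) for key, weight in WEIGHTS.items())


def flag_for_intervention(record: dict) -> bool:
    """Field agents intervene once a family's score rises above the cutoff."""
    return risk_score(record) > CUTOFF


family = {"prior_referrals": 2, "missed_checkups": 1, "housing_instability": 0}
print(risk_score(family))             # 5.5
print(flag_for_intervention(family))  # True
```

The key design point is that the whole decision collapses to a single number compared against one cutoff, which is exactly what makes the process uniform across cases.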
There is some pushback on this approach, but to be honest, I think the critics just don't understand the algorithms. An algorithm can't be bad, right? Maybe our data could be better, but you can only work with what you've got, so you can't blame us for bad data. Can you?
Keep making the world a better place and stay tuned for more great stories from the world of AI.
Marcus