Opinions are mixed on whether AI is helping child-protection organizations identify high-risk children. But when built carefully, predictive algorithms are showing some success. When someone calls a child-abuse hotline to report concerns about a child’s home situation, some organizations use AI to comb through hundreds of data points related to that child’s circumstances and then predict the child’s level of in-home risk.
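To make the idea concrete, here is a minimal sketch of the kind of risk-scoring model described above. Everything in it is invented for illustration: the feature names, weights, and 1–20 bucket scale are assumptions, not the workings of any real screening tool, and production systems are trained on historical case records rather than hand-set weights.

```python
import math

# Hypothetical feature weights (positive values raise the estimated risk).
# Real tools weigh hundreds of data points; these four are illustrative only.
WEIGHTS = {
    "prior_hotline_calls": 0.8,
    "prior_out_of_home_placement": 1.2,
    "caregiver_substance_use_flag": 0.9,
    "household_size": 0.1,
}
BIAS = -3.0  # keeps the baseline (no risk factors) estimate low


def risk_score(features):
    """Return a 0-1 risk estimate from a logistic (sigmoid) combination."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def risk_bucket(features, buckets=20):
    """Bucket the raw score into a 1-20 scale, as a call screener might see it."""
    return min(buckets, max(1, math.ceil(risk_score(features) * buckets)))


# Example referral with several invented risk factors present:
referral = {
    "prior_hotline_calls": 3,
    "prior_out_of_home_placement": 1,
    "caregiver_substance_use_flag": 1,
    "household_size": 4,
}
print(risk_bucket(referral))  # a high bucket on the 1-20 scale
```

The point of the sketch is that the model turns a referral into a single number, which is exactly what a screener's professional judgment then has to be weighed against.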
Sometimes the prediction is at odds with professional judgment. On one hand, AI can help organizations reach beyond their workers’ biases; on the other hand, the predictive algorithms carry biases of their own, which is also likely true. The best solution is probably finding the right balance between human judgment and AI.