CMU AI researchers present a new study on ensuring the fairness and accuracy of machine learning systems for public policy


The rapid rise of machine learning applications in criminal justice, hiring, healthcare and social service interventions is having a huge impact on society. This widespread adoption has heightened concerns among machine learning and artificial intelligence researchers about how fairly these systems behave. New methods have been developed, and theoretical limits established, to improve the fairness of ML systems. With such progress, it becomes necessary to understand how these methods and limits translate into policy decisions and affect society. Researchers continue to strive for unbiased and accurate models that can be used across these fields.

A deeply ingrained assumption is that there is a trade-off between accuracy and fairness when using machine learning systems. Accuracy here refers to how well the model predicts the outcome of interest for the task at hand, rather than to any specific statistical property. An ML predictor is considered unfair if it treats people inconsistently on the basis of sensitive or protected attributes (for example, membership in a racial minority or economic disadvantage). To mitigate this, adjustments are made to the data, the labels, model training, scoring systems and other aspects of the ML pipeline. However, such changes are widely believed to make the system less accurate.
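To make the setting concrete, one simple way to surface this kind of unfairness in a resource-constrained allocation problem is to compare, across protected groups, what share of truly needy individuals a top-k policy actually reaches. The Python sketch below does this with small, invented numbers; the scores, labels and groups are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical data (for illustration only): model risk scores, true
# outcomes and a protected attribute for ten people, with only k = 4
# intervention slots available.
scores = np.array([0.91, 0.85, 0.80, 0.74, 0.66, 0.60, 0.55, 0.42, 0.30, 0.12])
labels = np.array([1,    1,    1,    1,    0,    1,    0,    1,    0,    0])
group  = np.array(["A",  "A",  "A",  "B",  "A",  "B",  "A",  "B",  "B",  "B"])
k = 4

# The usual policy: serve the k highest-scoring individuals.
selected = np.zeros(len(scores), dtype=bool)
selected[np.argsort(-scores)[:k]] = True

# Recall per group: of the truly needy in each group, what share is reached?
for g in ("A", "B"):
    needy = (group == g) & (labels == 1)
    print(f"group {g}: recall = {(selected & needy).sum()}/{needy.sum()}")
```

With these made-up numbers, the score-ranked policy reaches all three of group A's needy members but only one of group B's three, even though both groups contain the same number of people in need; that gap is the kind of disparity the study measures.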

Researchers at Carnegie Mellon University argue that this trade-off is negligible in practice across a range of policy areas, based on their study published in Nature Machine Intelligence. The study focuses on testing the assumed fairness-accuracy trade-off in resource allocation problems.

The researchers focused on settings in which the resources in demand are scarce and machine learning systems are used to allocate them. Emphasis was placed on the following four areas:

  • Prioritizing limited mental health care outreach based on a person’s risk of returning to jail, in order to reduce re-incarceration.
  • Predicting serious housing safety violations to better target a city’s limited inspectors.
  • Modeling the risk that students will not graduate from high school on time, to identify those in need of additional support.
  • Helping teachers reach crowdfunding goals for classroom needs.

In each of these contexts, the researchers observed that models optimized for accuracy could effectively predict the outcomes of interest, but produced considerable disparities in which groups were recommended for intervention. However, once adjustments were applied, disparities based on race, age or income could be removed with little to no loss of accuracy.
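The article does not spell out the adjustment mechanism, but one common post-processing approach consistent with its description is to allocate the scarce slots so that recall is equalized across groups, for instance via group-specific score cutoffs tuned on historical data. Below is a minimal greedy sketch of that idea; the function name, data layout and selection rule are assumptions for illustration, not the authors' code.

```python
import numpy as np

def equalize_recall_topk(scores, labels, group, k):
    """Greedy sketch: fill k intervention slots, each time taking the
    highest-scoring unselected candidate from whichever group currently
    has the lowest recall among its truly needy (label == 1) members.
    In practice `labels` would be historical outcomes used to tune the
    policy, since true outcomes are unknown at decision time."""
    selected = np.zeros(len(scores), dtype=bool)
    for _ in range(k):
        # Current recall per group (groups with no needy members count as 0).
        recalls = {}
        for g in np.unique(group):
            needy = (group == g) & (labels == 1)
            recalls[g] = (selected & needy).sum() / max(needy.sum(), 1)
        # Walk groups from lowest recall upward until one has candidates left.
        for g in sorted(recalls, key=recalls.get):
            candidates = np.where((group == g) & ~selected)[0]
            if candidates.size:
                selected[candidates[np.argmax(scores[candidates])]] = True
                break
    return selected
```

On the hypothetical data from the earlier snippet, this allocator reaches two of the three needy people in each group while still serving four truly needy people in total, the same as the purely score-ranked policy: the disparity disappears at no cost in overall accuracy, a toy version of the pattern the study reports.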


Together, these results suggest that, contrary to common belief, achieving fairness requires neither new and complex machine learning methods nor a large sacrifice of accuracy. Instead, setting equity goals in advance and making design decisions around them are the first steps toward achieving that goal.

This research aims to show fellow researchers and policymakers that the commonly assumed trade-off does not necessarily hold when systems are deliberately designed to be fair and equitable.

The machine learning, artificial intelligence and computing communities need to start designing systems that maximize both accuracy and fairness, and to embrace machine learning as a decision-support tool.

Paper: https://www.nature.com/articles/s42256-021-00396-x

Reference: https://techxplore.com/news/2021-10-machine-fair-accurate.html

