We need to watch out for AI biases before it’s too late


Cognitive bias leads to AI bias, and the garbage-in/garbage-out axiom applies. Experts offer advice on how to limit the fallout from AI bias.


Artificial intelligence (AI) is the ability of computer systems to simulate human intelligence. It didn’t take long for AI to become indispensable in most aspects of human life, with the field of cybersecurity being one of the beneficiaries.

AI can predict cyber attacks, help create enhanced security processes to reduce the likelihood of attacks, and mitigate their impact on IT infrastructure. AI can also free cybersecurity professionals to focus on more critical tasks within the organization.

However, alongside these advantages, AI-based solutions (for cybersecurity and other applications) also have drawbacks and challenges. One of those concerns is AI bias.

SEE: Digital Transformation: A CXO Guide (Free PDF) (TechRepublic)

Cognitive biases and AI biases

AI bias is a direct result of human cognitive bias, so let's look at cognitive bias first.

Cognitive bias arises from an evolved decision-making system in the mind that is intuitive, fast, and automatic. "The problem arises when we allow our fast, intuitive system to make decisions that we should really pass on to our slow, logical system," Toby Macdonald writes in the BBC article How do we really make decisions? "This is where mistakes creep in."

Human cognitive biases can color decision making. And, equally problematic, machine learning models can inherit human-created data tainted with those cognitive biases. This is where AI bias comes in.

Cem Dilmegani, in his AIMultiple article Bias in AI: what it is, types and examples of bias and tools to correct it, defines AI bias as "an anomaly in the output of machine learning algorithms. This could be due to discriminatory assumptions made during the algorithm development process or to bias in training data."

SEE: AI can be unintentionally biased: data cleansing and awareness can help prevent the problem (TechRepublic)

Where AI bias most often comes into play is in the historical data used. "If the historical data is based on previous prejudicial human decisions, it can have a negative influence on the resulting models," suggested Dr. Shay Hershkovitz, managing director and vice president of SparkBeyond, an AI-based problem-solving company, during an email conversation with TechRepublic. "A classic example is the use of machine learning models to predict which candidates will be successful in a position. If past hiring and promotion decisions are biased, so too will the model be."
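To make that hiring example concrete, here is a minimal Python sketch (not from the article), using synthetic data and scikit-learn as an assumed setup: a model trained on historically skewed hiring labels ends up putting weight on group membership rather than on skill alone.

# Minimal sketch (illustrative only): a toy model trained on biased
# historical hiring data learns to reproduce that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two candidate features: skill score (legitimate) and group membership
# (0 or 1, a protected attribute that should be irrelevant to hiring).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical "hired" labels: past decisions favored group 1 regardless of skill.
hired = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model assigns substantial weight to group membership -- it has
# inherited the historical prejudice rather than judging skill alone.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))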

Sadly, Dilmegani also said AI isn't likely to become unbiased anytime soon: "After all, humans create the biased data, while humans and human-made algorithms verify the data to identify and remove bias."
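As a rough illustration of what such a review of the data might look like (an assumption, not a method described in the article), the sketch below compares selection rates across groups in synthetic training labels against the common 80% ("four-fifths") rule of thumb before the data is used for training.

# Minimal sketch (illustrative only): audit training labels for adverse
# impact across a protected group before fitting a model. Data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=5000)  # synthetic protected attribute
hired = (rng.random(5000) < np.where(group == 1, 0.45, 0.25)).astype(int)

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = {g: round(float(hired[group == g].mean()), 2) for g in (0, 1)}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact -- review labels before training.")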

How to mitigate AI bias

To reduce the impact of AI bias, Hershkovitz suggests:

  • Building AI solutions that provide explainable predictions/decisions, so-called "glass boxes" rather than "black boxes" (see the sketch after this list)
  • Integrating these solutions into human processes with an appropriate level of oversight
  • Ensuring AI solutions are properly benchmarked and frequently updated
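As a rough illustration of the "glass box" idea (not SparkBeyond's implementation), the sketch below fits a shallow decision tree with scikit-learn and prints its rules, so a human reviewer can trace every prediction to explicit, named thresholds rather than an opaque score.

# Minimal sketch of a "glass box" model: a shallow decision tree whose
# decision rules are human-readable, unlike an opaque black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Each prediction can be traced back to explicit thresholds on named features.
print(export_text(tree, feature_names=list(data.feature_names)))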

Taken together, these solutions underscore that humans must play an important role in reducing AI bias. As to how that is accomplished, Hershkovitz suggests the following:

  • Businesses and organizations need to be fully transparent and accountable for the AI systems they develop.
  • AI systems must allow human monitoring of decisions.
  • The creation of standards for the explainability of decisions made by AI systems should be a priority.
  • Businesses and organizations should educate and train their developers to include ethics in their algorithm development considerations. A good place to start is the 2019 edition of the OECD Council recommendation on artificial intelligence (PDF), which discusses the ethical aspects of artificial intelligence.

Final thoughts

Hershkovitz's concern about AI bias doesn't mean he's anti-AI. In fact, he cautions that we need to recognize that cognitive biases are often helpful: they represent relevant knowledge and experience, but only when they are based on fact, reason, and widely accepted values such as equality and parity.

He concluded: "In our time, when intelligent machines powered by powerful algorithms determine so many aspects of human existence, our role is to ensure that AI systems do not lose their pragmatic and moral values."
