Growing threat of adversarial machine learning attacks



Big Data-powered Machine Learning (ML) and Deep Learning have made impressive strides in many areas. Recent years have seen a rapid increase in the use of machine learning, whereby computers are programmed to identify patterns in data and make increasingly accurate predictions over time. Machine learning tools allow organizations to quickly identify new opportunities as well as potential risks, and in cybersecurity they have had a lasting impact. However, while machine learning models have many advantages, they can also be vulnerable to manipulation. This risk is known as “adversarial machine learning (AML)”. Let’s understand this in detail …

What is an adversarial machine learning attack?

The term “adversary” is used in computer security to describe a person or machine that attempts to penetrate or corrupt a network or computer program. An adversarial machine learning attack is a technique that attempts to fool a model by feeding it deliberately misleading input, most commonly to cause the model to malfunction.

Adversarial machine learning was studied as early as 2004, but the threat has grown dramatically in recent years with the increasing use of AI and ML. Most artificial intelligence researchers would agree that one of the main concerns in machine learning is adversarial attacks that cause unwanted behavior in trained models.

How do adversarial attacks work?

Adversarial attacks exploit the sensitivity of machine learning algorithms to small changes in their input data. By adding tiny, unobtrusive patches of pixels to an image, a malicious actor can trick a machine learning algorithm into classifying it as something it is not.
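
To make this concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way of crafting such perturbations. It assumes a differentiable PyTorch image classifier; the model, image tensors, and epsilon value are placeholders, not any specific production system.

```python
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Craft a small adversarial perturbation with the fast gradient
    sign method (FGSM): nudge every pixel by +/- epsilon in the
    direction that increases the model's loss on the true labels."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # The change per pixel is bounded by epsilon, so with a small
    # epsilon the perturbed image looks unchanged to a human.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```

With an epsilon of around 1–2% of the pixel range, the perturbed image is usually indistinguishable from the original to the human eye, yet the change can be enough to flip the model’s predicted class.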

For example, in 2018, a group of researchers showed that by adding stickers to a ‘Stop’ sign, they could trick an autonomous car’s computer vision system into misclassifying it as a ‘Speed Limit 45’ sign.

Researchers have also managed to trick facial recognition systems by pasting a printed pattern onto glasses or hats. In addition, there have been successful attempts to trick speech recognition systems into hearing phantom phrases by inserting noise-like patterns into the audio.

What is an adversarial example?

An adversarial example is a specially crafted input that is designed to appear “normal” to a human but causes a machine learning model to misclassify it.
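
As an illustration, the small helper below (a hypothetical check, assuming a PyTorch classifier and batched tensors) tests the two defining properties of an adversarial example: the change from the clean input is tiny, yet the model’s prediction no longer matches the true label.

```python
def is_adversarial(model, x_clean, x_adv, true_labels, epsilon=0.03):
    """Return True if x_adv is an adversarial example for x_clean:
    the per-pixel change stays within epsilon (imperceptible) while
    the model's prediction no longer matches the true label."""
    imperceptible = (x_adv - x_clean).abs().max().item() <= epsilon
    predictions = model(x_adv).argmax(dim=1)
    misclassified = bool((predictions != true_labels).all())
    return imperceptible and misclassified
```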

Types of adversarial attacks

There are two main categories of adversarial attacks: black-box attacks and white-box attacks. White-box adversarial attacks describe scenarios in which the attacker has full access to the target model’s internals, such as its architecture, parameters, and gradients, while black-box adversarial attacks describe scenarios in which the attacker has no such access and can only query the model. A black-box attack therefore often uses a different (substitute) model, or no model at all, to generate adversarial images in the hope that these will transfer to the target model.
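
The sketch below illustrates that transfer idea in a black-box setting, assuming two PyTorch classifiers: a substitute model the attacker fully controls and a target model the attacker can only query for predictions. All names are illustrative, and the crafting step is the same FGSM idea sketched earlier.

```python
import torch.nn.functional as F

def black_box_transfer_attack(substitute, target, images, labels, epsilon=0.03):
    """Craft adversarial images on a substitute model (white-box access)
    and test whether they also fool the target model, which the attacker
    can only query for its predicted labels (black-box access)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(images), labels)
    loss.backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

    # The attacker never sees the target's weights or gradients,
    # only its output on the crafted input.
    target_predictions = target(adv_images).argmax(dim=1)
    transferred = bool((target_predictions != labels).any())
    return adv_images, transferred
```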

Defense against adversarial machine learning

Researchers have proposed a multi-step approach to building defenses against AML …

  • Threat modeling: Estimating the attacker’s goals and capabilities provides an opportunity to prevent attacks, for example by building variants of the same ML system that can resist the anticipated attacks.
  • Attack simulation: Simulating attacks according to the attacker’s likely strategies can reveal flaws before a real adversary finds them.
  • Attack impact assessment: Assessing the total impact an attacker could have on the system ensures the organization is prepared should such an attack occur.
  • Information laundering: By modifying the information an attacker can extract from the model, this type of defense can render the attack useless.
  • Adversarial training: In this approach, the engineers of the machine learning algorithm retrain their models on adversarial examples to make them robust against input perturbations (a minimal sketch of one training step follows this list).
  • Defensive distillation: This trains a second model on the softened probability outputs of the original model, smoothing its decision surface and making it harder to exploit.
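
As a minimal sketch of the adversarial-training step referenced above, the function below trains a PyTorch model on both clean and FGSM-perturbed batches. It assumes the fgsm_perturb helper sketched earlier in this article, plus a model, optimizer, and batch of labelled images; it is an illustration of the idea, not a complete training recipe.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One adversarial-training step: craft adversarial versions of the
    batch (with FGSM) and train on clean and perturbed data together so
    the model learns to resist small input perturbations."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)  # helper sketched earlier

    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Each step teaches the model the correct label for both the original image and its adversarially perturbed twin, which is the essence of adversarial training.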

Wrapping up

With machine learning quickly becoming central to organizations’ value propositions, organizations need to protect themselves against the associated risks. Understanding adversarial machine learning will therefore remain essential to protecting ML systems.

(The author Neelesh Kripalani is the CTO at Clover Infotech and the opinions expressed in this article are his own)

