Machine Learning Threat Taxonomy

Read Failure Modes in Machine Learning – Security documentation

In the last two years, more than 200 papers have been written on how Machine Learning (ML) can fail because of adversarial attacks on the algorithms and data; this number balloons if we incorporate non-adversarial failure modes as well. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers, and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of the system, will only become more pressing. The purpose of this document is to jointly tabulate both of these failure modes, adversarial and non-adversarial, in a single place.
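To make the adversarial case concrete, here is a minimal sketch of an evasion attack in the style of the Fast Gradient Sign Method, using a hypothetical toy logistic-regression classifier (the weights, inputs, and function names below are illustrative assumptions, not taken from the document). Each input feature is nudged by a small amount in the direction that increases the model's loss, flipping a confident prediction:

```python
import numpy as np

# Hypothetical toy model: logistic regression with fixed weights,
# standing in for any deployed ML classifier (illustrative values).
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def predict_proba(x):
    """Probability that x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, epsilon):
    """FGSM-style perturbation: nudge each feature of x by epsilon
    in the direction that increases the log-loss for true label y."""
    p = predict_proba(x)
    grad = (p - y) * w          # d(log-loss)/dx for logistic regression
    return x + epsilon * np.sign(grad)

x = np.array([1.0, -1.0, 0.5])   # a benign input, confidently positive
y = 1                            # its true label
x_adv = fgsm_perturb(x, y, epsilon=1.5)

print(predict_proba(x))      # high confidence in the correct class
print(predict_proba(x_adv))  # confidence collapses after perturbation
```

The same gradient-following idea scales to deep networks, where the perturbation can be small enough to be imperceptible to a human while still flipping the model's output.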

Understanding this threat becomes important as more cyber-security functions, especially security operations, become dependent on machine learning algorithms.
