Manipulating Machine Learning Systems by Manipulating Training Data

Posted on November 29th, 2019 in General
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, by Panagiota Kiourti, Kacper Wardega, Susmit Jha, and Wenchao Li

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
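The basic poisoning idea can be sketched in a few lines: an attacker stamps a small trigger pattern into a fraction of the training inputs and relabels those examples to a target class, so the trained model behaves normally on clean inputs but misclassifies any input bearing the trigger. The sketch below is a generic, illustrative version of this technique, not the TrojDRL method itself; the function name and parameters are hypothetical.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   patch_size=3, poison_fraction=0.1, seed=0):
    """Stamp a bright square trigger into a random subset of training
    images and relabel those examples to the attacker's target class.
    (Illustrative sketch of a Trojan data-poisoning attack.)"""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(n * poison_fraction), replace=False)
    # Trigger: a patch in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Example: 100 synthetic 8x8 grayscale "images" with 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_label=7)
```

A model trained on `(X_poisoned, y_poisoned)` learns to associate the corner patch with class 7, which is the backdoor the defender must detect.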

Via Bruce Schneier.

