Manipulating Machine Learning Systems by Manipulating Training Data

TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents by Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li.

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
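To make the training-time threat concrete, here is a minimal sketch of the classic BadNets-style poisoning idea for classifiers: stamp a small trigger patch onto a fraction of the training images and relabel them to an attacker-chosen class. The function name, trigger shape, and poisoning rate below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Illustrative backdoor poisoning: stamp a trigger patch on a
    random fraction of images and relabel them to target_label."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # 3x3 "trigger" in the bottom-right corner, set to max intensity
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Example: 100 fake 28x28 grayscale images, labels in 0..9
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-stamped input to the target class; TrojDRL's contribution is showing an analogous attack on deep RL agents.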

Via Bruce Schneier.
