Manipulating Machine Learning Systems by Manipulating Training Data

TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents by Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li (arXiv.org)

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
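To make that attack surface concrete, here is a minimal sketch (not taken from the TrojDRL paper) of the classic trigger-based data poisoning it builds on: an attacker stamps a small trigger pattern onto a fraction of the training images and relabels them to a target class, so the trained model behaves normally on clean inputs but misclassifies any input carrying the trigger. The trigger shape, poison fraction, and target label below are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, seed=0):
    """Stamp a trigger patch onto a random fraction of training images
    and relabel them to the attacker's target class (a standard
    Trojan/backdoor poisoning scheme; parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Hypothetical trigger: a 3x3 white square in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels

# Toy usage on random arrays standing in for a real image dataset.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
```

TrojDRL's contribution is extending this style of training-time attack from classifiers to deep reinforcement learning agents, where the poisoned experience shapes the learned policy rather than a one-shot label.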

Via Bruce Schneier.
