Manipulating Machine Learning Systems by Manipulating Training Data

TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents by Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li (arXiv.org)

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
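To make the idea concrete, here is a minimal, hypothetical sketch of the kind of training-time poisoning described above: stamp a small trigger patch on a fraction of the training images and flip their labels to an attacker-chosen target class, so the trained model learns to misclassify any input carrying the trigger. This is a generic backdoor illustration, not the TrojDRL method itself (which targets deep reinforcement learning agents); the function and parameter names are invented for this example.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Hypothetical trojan-style poisoning sketch (not TrojDRL):
    stamp a trigger patch on a random fraction of training images
    and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 bright square in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Flip the labels of poisoned samples to the target class.
    labels[idx] = target_label
    return images, labels, idx

# Toy data: 100 all-zero 8x8 grayscale "images" with labels 0-9.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
```

A model trained on `(Xp, yp)` would tend to predict class 7 for any input containing the 3x3 trigger, while behaving normally on clean inputs, which is what makes such attacks hard to detect.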

Via Bruce Schneier.

Author: Khürt Williams

A human who works in information security and enjoys photography, Formula 1 and craft ale. #nobridge