Attacking ML systems by changing the order of the training data

Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained on it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model, but instead change only the order in which data are supplied to the model during training.
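To make the threat model concrete, here is a minimal sketch (my own illustration, not the paper's construction): the adversary sits in the data pipeline and controls only the permutation of example indices handed to SGD, while every training example stays untouched. The toy dataset, the `sgd_train` helper, and the class-grouping reordering policy below are all illustrative assumptions.

```python
# Sketch: an ordering-only adversary against plain SGD.
# The attacker never edits X or y; it only chooses `order`.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (unchanged by the attacker).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sgd_train(order, lr=0.5, batch_size=10):
    """One epoch of SGD for logistic regression, visiting examples
    in the permutation `order` chosen by the data pipeline."""
    w = np.zeros(2)
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        p = 1.0 / (1.0 + np.exp(-X[idx] @ w))      # sigmoid predictions
        w -= lr * X[idx].T @ (p - y[idx]) / len(idx)  # cross-entropy gradient step
    return w

# Honest pipeline: a uniformly random shuffle.
honest = rng.permutation(len(X))

# One hypothetical adversarial policy: group all class-0 examples at
# the end of the epoch, so the final SGD steps see only one class and
# drag the weights toward it. Same data, different order.
adversarial = np.concatenate([np.where(y == 1)[0], np.where(y == 0)[0]])

print("honest weights:  ", sgd_train(honest))
print("attacked weights:", sgd_train(adversarial))
```

Because SGD's trajectory depends on which gradients arrive last, the two runs end at visibly different weights even though they consume identical examples; the paper's attacks exploit exactly this order sensitivity, with far more deliberate reordering policies than the crude class-grouping used here.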