Evading Botnet Detectors Based on Flows and Random Forest with Adversarial Samples
Apruzzese, G., & Colajanni, M., IEEE International Symposium on Network Computing and Applications (NCA), 2018 [BEST STUDENT PAPER AWARD]
One-liner: The first paper to use adversarial examples against botnet detectors (yes, the published title has a typo).
Abstract. Machine learning is increasingly adopted for a wide array of applications, due to its promising results and autonomous capabilities. However, recent research efforts have shown that, especially within the image processing field, these novel techniques are susceptible to adversarial perturbations. In this paper, we present an analysis that highlights and experimentally evaluates the fragility of network intrusion detection systems based on machine learning algorithms against adversarial attacks. In particular, our study involves a random forest classifier that uses network flows to distinguish between botnet and benign samples. Our results, derived from experiments on a public dataset of real, labelled network flows, show that attackers can easily evade such defensive mechanisms by applying slight and targeted modifications to the network activity generated by their controlled bots. These findings pave the way for future techniques that aim to strengthen the performance of machine learning-based network intrusion detection systems.
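The core idea, evading a flow-based random forest by slightly padding the bot's traffic, can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the flow features (duration, packets, bytes), their distributions, and the perturbation step sizes are all assumptions made for the example.

```python
# Hypothetical sketch: train a random forest on synthetic "flow"
# features, then apply small, targeted increments to a bot flow
# until the classifier flips its verdict to benign.
# Feature names, ranges, and step sizes are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic benign flows: longer durations, varied packet/byte counts.
benign = np.column_stack([
    rng.uniform(0.1, 60, 500),       # duration (s)
    rng.integers(1, 200, 500),       # packets
    rng.integers(60, 100_000, 500),  # bytes
]).astype(float)

# Synthetic bot flows: short, small, regular beaconing traffic.
bot = np.column_stack([
    rng.uniform(0.01, 1.0, 500),
    rng.integers(1, 5, 500),
    rng.integers(60, 600, 500),
]).astype(float)

X = np.vstack([benign, bot])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def evade(flow, clf, step=(0.5, 2.0, 500.0), max_iter=50):
    """Greedily pad duration/packets/bytes until the flow is labelled benign.

    Models an attacker who can only make the bot's traffic slightly
    longer and heavier, never alter the detector itself.
    """
    x = np.asarray(flow, dtype=float).copy()
    for _ in range(max_iter):
        if clf.predict([x])[0] == 0:
            return x
        x = x + np.array(step)  # slight, attacker-controllable increment
    return x

original = np.array([0.05, 2.0, 120.0])  # a typical synthetic bot flow
adversarial = evade(original, clf)
print("before:", clf.predict([original])[0], "after:", clf.predict([adversarial])[0])
```

A few small increments are enough to push the flow out of the region the forest learned to associate with bots, which mirrors the paper's finding that slight modifications to flow-level statistics suffice for evasion.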