When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS

Apruzzese, G., Fass, A., & Pierazzi, F., ACM Workshop on Artificial Intelligence and Security (AISec), 2024
Oneliner: What happens when two popular phenomena in ML security join forces?

Abstract. We scrutinize the effects of “blind” adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker, unable to access the ML-NIDS and hence unaware that it is weakened by concept drift, attempts to evade it with data perturbations. It is currently unknown whether the cumulative effect of such adversarial perturbations and concept drift leads to a greater or lesser impact on the ML-NIDS. In this “open problem” paper, we investigate this unusual, but realistic, setting; we are not interested in perfect-knowledge attackers.

We begin by retrieving a publicly available dataset of documented network traces captured in a real, large (>300 hosts) organization. Overall, these traces include several years of raw traffic packets, both benign and malicious. Then, we adversarially manipulate the malicious packets with problem-space perturbations, representing a physically realizable attack. Finally, we carry out the first exploratory analysis comparing the effects of our “adversarial examples” with those of their unperturbed malicious counterparts in concept-drift scenarios. Through two case studies (a “short-term” one spanning 8 days and a “long-term” one spanning 4 years) encompassing 48 detector variants, we find that, although our perturbations induce a lower detection rate in concept-drift scenarios, some perturbations yield adverse effects for the attacker in intriguing use cases. Overall, our study shows that the topics we covered remain an open problem that requires re-assessment by future research.
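
The evaluation described above rests on two ingredients: a temporal train/test split (so that the detector is tested on drifted, "future" traffic) and a comparison of detection rates on unperturbed versus perturbed malicious samples. The sketch below is a minimal illustration of that kind of evaluation loop, not the paper's pipeline: it uses synthetic flow features in place of the real traces, and it abstracts a problem-space perturbation as a small shift applied to a few features of the malicious samples. All names and parameters here are hypothetical.

```python
# Minimal sketch (not the authors' code): emulate concept drift via a temporal-style
# split, then compare detection rates on unperturbed vs. perturbed malicious flows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_flows(n, malicious_frac, drift_shift=0.0):
    """Synthetic flow features; `drift_shift` crudely emulates distribution drift."""
    y = (rng.random(n) < malicious_frac).astype(int)
    X = rng.normal(loc=y[:, None] * 1.5 + drift_shift, scale=1.0, size=(n, 8))
    return X, y

# "Past" traffic for training, drifted "future" traffic for testing.
X_train, y_train = make_flows(5000, malicious_frac=0.2, drift_shift=0.0)
X_test, y_test = make_flows(2000, malicious_frac=0.2, drift_shift=0.7)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

X_mal = X_test[y_test == 1]

# Abstract stand-in for a problem-space perturbation (e.g., junk bytes or padding
# that change a few derived features while preserving the attack's function).
X_mal_adv = X_mal.copy()
X_mal_adv[:, :2] += rng.normal(0.5, 0.2, size=(X_mal.shape[0], 2))

dr_plain = clf.predict(X_mal).mean()    # detection rate on unperturbed malicious flows
dr_adv = clf.predict(X_mal_adv).mean()  # detection rate on perturbed malicious flows
print(f"Detection rate under drift: unperturbed={dr_plain:.2f}, perturbed={dr_adv:.2f}")
```

In the paper, the analogous comparison is carried out on real traffic across 48 detector variants and the two temporal case studies mentioned above, rather than on synthetic data.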

Paper PDF · Cite · Code · ACM Digital Library