Event description
Machine Learning (ML) systems are increasingly integrated into critical applications, making their security a fundamental concern. However, these models are vulnerable to adversarial manipulations, i.e., deliberate attacks designed to exploit weaknesses in the learning process, which can compromise their accuracy, integrity, and long-term sustainability. In this talk, I will introduce the foundational principles of ML security and explain why protecting these systems is essential for safeguarding consumers.

We will then explore the main threat vectors, focusing on two major categories of adversarial attacks: evasion attacks, in which carefully crafted inputs deceive a model at test time, and poisoning attacks, in which an adversary manipulates the training data to degrade the model's performance in the long run.

Finally, the seminar will discuss proactive strategies for enhancing ML robustness by design. These include systematic testing frameworks that assess model security against fast and reliable adversarial attacks, as well as advanced training methodologies that reduce model sensitivity to malicious perturbations. The goal is to outline best practices for integrating security into the ML development pipeline and to identify promising research directions for building more secure and resilient ML systems.
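As an illustrative aside for readers unfamiliar with the topic (standard textbook notation, not necessarily the formulation used in the talk): an evasion attack on a model $f$ with loss $L$ can be sketched as finding a small perturbation $\delta$, bounded in norm by a budget $\epsilon$, that maximizes the loss on a test input $x$ with true label $y$:

\[
\max_{\|\delta\|_p \le \epsilon} \; L\bigl(f(x + \delta),\, y\bigr)
\]

Robust training methods of the kind mentioned above can then be viewed, roughly, as minimizing this worst-case loss over the training data instead of the standard loss, which is what reduces the model's sensitivity to malicious perturbations.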