Coordinator: Prof. Antonio Vicino



Adversarial Machine Learning


Battista Biggio
Università di Cagliari
Course Type
Type B
Week 1
Tuesday, Sept. 14, 2021: 9-12 (3 hours)
Wednesday, Sept. 15, 2021: 15-18 (3 hours)
Thursday, Sept. 16, 2021: 15-18 (3 hours)
Friday, Sept. 17, 2021: 9-12 (3 hours)

Week 2
Wednesday, Sept. 22, 2021: 15-18 (3 hours)
Thursday, Sept. 23, 2021: 15-18 (3 hours)
Friday, Sept. 24, 2021: 10-12 (2 hours) – Final assessment
Today, machine-learning algorithms are used in many real-world applications, including image recognition, spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm may face intelligent and adaptive attackers who carefully manipulate data to purposely subvert the learning process. As machine-learning algorithms were not originally designed under such premises, they have been shown to be vulnerable to well-crafted attacks, including test-time evasion (namely, adversarial examples) and training-time poisoning attacks. In particular, the security of cloud-based machine-learning services has been questioned through the careful construction of adversarial queries that can reveal confidential information about the machine-learning service and its users.

This course introduces the fundamentals of machine-learning security, the related field of adversarial machine learning, and techniques to assess the vulnerability of machine-learning algorithms and to protect them from adversarial attacks. Application examples include object recognition in images, biometric identity recognition, and spam and malware detection.
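To give a flavour of the test-time evasion attacks mentioned above, the following is a minimal sketch (not course material) of a gradient-sign perturbation against a toy linear classifier: since the gradient of a linear score with respect to the input is just the weight vector, stepping against its sign within a small L-infinity budget can flip the predicted class. All weights and data values below are illustrative assumptions.

```python
import numpy as np

def predict(w, b, x):
    """Linear decision function: positive score -> class +1."""
    return np.dot(w, x) + b

def evasion_attack(w, x, eps):
    """FGSM-style evasion: perturb x against the sign of the score
    gradient (which, for a linear model, is simply w) to push a
    positively classified sample toward the negative class, under
    an L-infinity budget eps."""
    return x - eps * np.sign(w)

# Toy classifier and sample (illustrative values).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([2.0, -1.0, 1.0])   # score = 4.6 -> class +1

x_adv = evasion_attack(w, x, eps=2.0)
print(predict(w, b, x), predict(w, b, x_adv))  # score flips sign
```

Real attacks on non-linear models (deep networks, kernel machines) follow the same idea but compute the input gradient numerically or via backpropagation, and the course covers such formulations in detail.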




Dip. Ingegneria dell'Informazione e Scienze Matematiche - Via Roma, 56 53100 SIENA - Italy