Coordinator: Prof. Antonio Vicino



Optimization for Learning


Anders Hansson
Linköping University, Sweden
Course Type: A/B
April 2, 11:00-13:00 and 14:30-16:00, room 103
April 4, 11:00-13:00 and 14:30-16:00, room F
April 8, 11:00-13:00 and 14:30-16:00, room 103
April 10, 11:00-13:00 and 14:30-16:00, room 103
April 11, 11:00-13:00 and 14:30-16:00, room F

In this course we will study optimization methods suitable for learning, both unsupervised and supervised. The optimization methods and concepts covered include gradient methods, subgradient methods, stochastic gradient methods, and backpropagation. The learning concepts discussed include entropy, graphical models, maximum likelihood estimation, the expectation-maximization algorithm, the Boltzmann machine, mutual information, regression in finite-dimensional spaces and Hilbert spaces, Gaussian processes, classification, the perceptron, support vector machines, and artificial neural networks. The classes of optimization problems covered include least-squares problems, quadratic programs, conic optimization, and rank optimization, including heuristics such as the nuclear norm and the log-determinant.
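To give a flavor of one of the topics listed above, here is a minimal sketch of a stochastic gradient method applied to a least-squares problem: at each step the gradient of the loss on a single randomly chosen sample is used in place of the full gradient. The function name, step size, and data below are illustrative assumptions, not material from the course.

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.05, epochs=200, seed=0):
    """Minimize ||X w - y||^2 by stochastic gradient descent.

    Each step samples one row i and moves against the gradient of the
    single-sample loss (x_i^T w - y_i)^2, which is 2 (x_i^T w - y_i) x_i.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Visit samples in a fresh random order each epoch.
        for i in rng.permutation(len(y)):
            residual = X[i] @ w - y[i]       # scalar prediction error on sample i
            w -= lr * 2.0 * residual * X[i]  # single-sample gradient step
    return w

# Illustrative usage: recover the weights of an exactly realizable linear model.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([3.0, -2.0])
w = sgd_least_squares(X, y)
```

Because the data here are noiseless, the single-sample gradients all vanish at the true weights, so the iterates converge to them; with noisy data one would typically decay the step size over the epochs.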




Dip. Ingegneria dell'Informazione e Scienze Matematiche - Via Roma, 56 53100 SIENA - Italy