April 26 – May 2, 2017, 10.00–12.00 and 14.30–17.00, Room 103
May 3, 2017, 10.00–12.00, Room 103
May 3, 2017, 14.30–19.00, Room 18
A tensor of order $d$ can be thought of as a multidimensional array with $d$ indices that represents a multilinear operator with $d$ arguments in coordinates. Matrices are tensors of order $d = 2$. Just as matrices have found use in a myriad of applications, tensors are becoming one of the focal points in Big Data applications, as much real-world data is inherently multidimensional. Unfortunately, the complexity of representing a tensor scales exponentially with its order $d$, which is commonly referred to as the curse of dimensionality. One powerful stratagem that circumvents this curse consists of imposing additional structure on these tensors so that they may be represented much more economically. This leads to the concept of tensor decompositions. Several tensor decompositions are in vogue today, most of which generalize the singular value decomposition of a matrix.
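The curse of dimensionality mentioned above can be made concrete with a minimal sketch (in Python/NumPy, chosen here only for illustration): a dense order-$d$ tensor with $n$ values per index requires storing $n^d$ numbers, which grows exponentially in $d$.

```python
import numpy as np

# An order-3 tensor: a multidimensional array with d = 3 indices.
n, d = 10, 3
T = np.random.rand(n, n, n)

# A dense order-d tensor with n values per index stores n**d numbers:
# this exponential growth in d is the "curse of dimensionality".
print(T.size)        # n**d = 1000 entries for n = 10, d = 3
print(n**10)         # already 10 billion entries at order d = 10
```

Tensor decompositions sidestep this growth by storing only a small set of factors rather than all $n^d$ entries.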
Like their matrix counterparts, tensor decompositions have found application in a variety of fields. The tensor rank decomposition determines the computational complexity of evaluating multilinear operators, such as matrix multiplication. Strassen's well-known $O(n^{\log_2 7}) \approx O(n^{2.81})$ algorithm could have been discovered from a tensor rank decomposition of the matrix multiplication tensor. In fluorescence spectroscopy, the constituents of a chemical mixture of fluorophores can be identified by computing the tensor rank decomposition of a tensor obtained from physical measurements. A similar application is blind source separation, in which a mixture of observed signals is decomposed into the original source signals. The (hierarchical) Tucker decomposition is an excellent tool for data compression; it has found application in image and video compression, and it can be employed for certain classification tasks such as handwritten digit classification.
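To illustrate why a tensor rank (CP) decomposition represents a tensor economically, the following hedged sketch (again NumPy, for illustration only) builds an order-3 tensor of rank $r$ as a sum of $r$ outer products: storing the three factor matrices takes $3rn$ numbers instead of $n^3$ for the dense array.

```python
import numpy as np

# A rank-r CP representation of an order-3 tensor:
#   T = sum_{k=1}^{r} a_k (outer) b_k (outer) c_k,
# where a_k, b_k, c_k are the columns of the factor matrices A, B, C.
n, r = 50, 5
rng = np.random.default_rng(0)
A, B, C = rng.random((n, r)), rng.random((n, r)), rng.random((n, r))

# Reconstruct the dense tensor from its factors via a sum of outer products.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

dense_storage = n**3            # 125000 numbers for the full array
factored_storage = 3 * r * n    # 750 numbers for the CP factors
print(dense_storage, factored_storage)
```

The same counting argument explains why low-rank tensor formats scale to problems where the dense tensor could never be stored.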
In this course, we will investigate the fundamentals of tensors and their decompositions. The two main decompositions are covered: the tensor rank decomposition (also known as CANDECOMP, PARAFAC, and CP decomposition) and the (hierarchical) Tucker decomposition. Some of their theoretical properties relevant to applications are reviewed. Their usefulness will be illustrated by means of several applications in which they can be employed, namely computational complexity, blind source separation, document clustering, data compression, and handwritten digit classification. Many of these applications will be demonstrated using the Tensorlab v3 software package for Matlab.