Summer Workshop on Computer Vision, July 3-7
Part 1. Statistical Decision Theory in Computer Vision, Structural Prediction and Learning
A starting toolbox for statistical recognition: modeling with uncertainty, making decisions under uncertainty, the role of the loss function, estimating probabilistic models, and making predictions under uncertainty. This classical framework is illustrated on simple computer vision examples: detection, tracking, and noisy image recovery with nuisance parameters.
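The core of decision making under uncertainty can be sketched in a few lines: given a posterior over hypotheses and a loss function, choose the action with the smallest expected loss (Bayes risk). The posterior and loss values below are hypothetical illustration numbers, not from the course.

```python
import numpy as np

# Posterior probabilities over two hypotheses, e.g. "object absent" vs "present"
posterior = np.array([0.3, 0.7])

# Loss matrix L[a, c]: cost of taking action a when the true class is c.
# Here a miss (declaring "absent" when the object is present) is penalized
# more heavily than a false alarm -- the choice of loss shapes the decision.
loss = np.array([
    [0.0, 5.0],   # action 0: declare "absent"
    [1.0, 0.0],   # action 1: declare "present"
])

# Expected loss (risk) of each action under the posterior
risk = loss @ posterior
best_action = int(np.argmin(risk))
print(risk, best_action)
```

Note that with a symmetric 0/1 loss the same rule reduces to picking the most probable hypothesis; an asymmetric loss can flip the decision even when the posterior stays fixed.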
Probabilistic models of structured objects: from hidden Markov models to Markov random fields. These models are well suited to problems such as reading a license plate (a sequence of digits), image segmentation, and much more.
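For a chain-structured model such as an HMM, the most probable hidden sequence can be computed exactly by dynamic programming (the Viterbi algorithm). Below is a minimal sketch on a hypothetical toy model with two hidden "digits" and noisy binary observations; all probabilities are illustrative.

```python
import numpy as np

log = np.log
start = log(np.array([0.5, 0.5]))          # initial state probabilities
trans = log(np.array([[0.9, 0.1],
                      [0.1, 0.9]]))        # P(next state | state)
emit = log(np.array([[0.8, 0.2],           # P(observation | state)
                     [0.2, 0.8]]))

def viterbi(obs):
    """MAP sequence of hidden states for the observed symbol indices."""
    T, S = len(obs), len(start)
    delta = start + emit[:, obs[0]]        # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers
    for t in range(1, T):
        scores = delta[:, None] + trans    # scores[i, j]: come from i, go to j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + emit[:, obs[t]]
    # backtrack from the best final state
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1]))
```

The sticky transition matrix makes the decoder smooth over a single noisy observation, which is exactly the behavior one wants when reading a digit sequence with a noisy per-character detector.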
We consider learning of structured models: learning conditionally independent models with maximum likelihood (image segmentation), and learning structured models by empirical risk minimization with the structured support vector machine approach.
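As a reference point, the structured SVM replaces the likelihood with a margin-based surrogate of the task loss. In standard notation (a sketch, with $\Delta$ the task loss and $\phi$ a joint feature map), the objective is:

```latex
\min_{w}\; \frac{1}{2}\lVert w\rVert^2
+ \frac{C}{n}\sum_{i=1}^{n} \max_{y \in \mathcal{Y}}
\Big[ \Delta(y_i, y) + \langle w, \phi(x_i, y)\rangle
      - \langle w, \phi(x_i, y_i)\rangle \Big]
```

The inner maximization (loss-augmented inference) is itself a structured prediction problem, which is why efficient inference in the model is a prerequisite for learning it this way.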
Part 2. Generative Models. Latent Representation Learning and Analysis
Unsupervised/semi-supervised learning: autoencoders, variational autoencoders (VAEs), generative adversarial networks (GANs), and semi-supervised learning with ladder networks.
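The autoencoder idea, compress to a low-dimensional code and reconstruct, can be illustrated without a training loop: for a linear autoencoder the optimal encoder/decoder are given by PCA. A minimal sketch with synthetic data (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# 3D data that actually lives near a 1D line (plus a little noise)
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))
X = X - X.mean(axis=0)

# Optimal linear encoder: the top principal direction (via SVD)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:1]          # 1 x 3 encoder; the tied decoder is W.T
code = X @ W.T      # 200 x 1 latent code
X_hat = code @ W    # reconstruction from the code

mse = np.mean((X - X_hat) ** 2)
print(mse)
```

The reconstruction error is tiny because one latent dimension suffices for this data; nonlinear autoencoders, VAEs, and GANs generalize this idea with deep encoders/decoders and, for VAEs/GANs, an explicit generative model.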
Bridging the gap between VAEs and GANs: Adversarial Variational Bayes; the family of f-divergences for training generative models.
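For orientation, the f-divergence family that unifies these objectives is defined, for a convex $f$ with $f(1) = 0$, as:

```latex
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
```

Choosing $f(t) = t \log t$ recovers the KL divergence used by maximum likelihood and VAEs, while the original GAN objective corresponds (up to constants) to the Jensen-Shannon divergence; f-GAN-style training derives a discriminator objective for any member of the family.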
Representation learning: mutual information and the information-theoretic perspective; learning disentangled representations; interpretability of deep visual representations.
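Mutual information $I(X; Z)$ measures how much a representation $Z$ retains about the input $X$. For discrete variables it can be computed directly from the joint distribution; the joint tables below are hypothetical examples.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Z) in nats for a joint probability table P(x, z)."""
    px = joint.sum(axis=1, keepdims=True)   # marginal P(x)
    pz = joint.sum(axis=0, keepdims=True)   # marginal P(z)
    mask = joint > 0                        # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ pz)[mask])))

# Independent variables carry no information about each other
indep = np.outer([0.5, 0.5], [0.5, 0.5])
# Perfectly correlated binary variables: I = H(X) = log 2 nats
corr = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
print(mutual_information(indep), mutual_information(corr))
```

For continuous deep representations this quantity is intractable and must be bounded or estimated, which is where the variational estimators discussed in this part come in.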
Visualization of hidden states and losses for embedding learning: t-SNE, triplet loss, center loss.
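The triplet loss pulls an anchor embedding toward a positive of the same class and pushes it away from a negative, up to a margin. A minimal sketch with squared Euclidean distances (the margin and the example points are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative distances."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, close to the anchor
n = np.array([1.0, 0.0])   # different class, already far away
print(triplet_loss(a, p, n))
```

When the negative is already farther than the positive by more than the margin the loss is zero and contributes no gradient, which is why triplet mining (choosing hard negatives) matters in practice.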
Prerequisites: Python, machine learning basics, neural networks. Familiarity with at least one deep learning framework is recommended.
Alexander Shekhovtsov, PhD
Researcher at the Center for Machine Perception (CMP), Czech Technical University in Prague.
Engineer at Augmented Pixels.
Research engineer at sprint-42.
Head of Face Recognition Department at RingLabs.