MIA'16

- Program -

Monday 18 January

08:45am-09:00am - Welcome Message
09:00am-09:45am -
09:45am-10:30am - Michael Bronstein (Università della Svizzera Italiana) (abstract) (slides)
10:30am-11:00am - Coffee break
11:00am-11:45am - Boaz Nadler (Weizmann Institute of Science) (abstract) (slides)
11:45am-12:30pm - Julie Delon (Paris 5 University) (abstract) (slides)
12:30pm-02:00pm - Lunch break
02:00pm-02:45pm - Michael Hintermüller (Humboldt-Universität zu Berlin) (abstract) (slides)
02:45pm-03:30pm - Simon Masnou (University of Lyon 1) (abstract) (slides)
03:30pm-04:00pm - Coffee break
04:00pm-04:45pm - Jan Modersitzki (Lübeck University) (abstract) (slides)
04:45pm-05:30pm -

Tuesday 19 January

09:00am-09:45am - Anna Gilbert (University of Michigan) (abstract) (slides)
09:45am-10:30am - Felix Krahmer (Technical University of Munich) (abstract) (slides)
10:30am-11:00am - Coffee break
11:00am-11:45am -
11:45am-12:30pm - Pierre Weiss (CNRS and Toulouse University) (abstract) (slides)
12:30pm-02:00pm - Lunch break
02:00pm-02:45pm - Amir Beck (Technion) (abstract) (slides)
02:45pm-03:30pm - Emilie Chouzenoux (Paris-Est University) (abstract) (slides)
03:30pm-04:00pm - Coffee break
04:00pm-04:45pm - Antonin Chambolle (CNRS and Ecole Polytechnique) (abstract) (slides)
04:45pm-05:30pm -

Wednesday 20 January

09:00am-09:45am -
09:45am-10:30am - Martin Burger (Muenster University) (abstract) (slides)
10:30am-11:00am - Coffee break
11:00am-11:45am - Don Geman (Johns Hopkins University) (abstract) (slides)
11:45am-12:30pm - Ron Kimmel (Technion) (abstract) (slides)
12:30pm-02:00pm - Lunch break
02:00pm-02:45pm - José Bioucas-Dias (Instituto Superior Técnico) (abstract) (slides)
02:45pm-03:30pm - Lorenzo Rosasco (Universita di Genova and MIT) (abstract) (slides)
03:30pm-04:00pm - Coffee break
04:00pm-04:45pm - Pina Marziliano (Nanyang Technological University) (abstract) (slides)
04:45pm-05:30pm - Tuomo Valkonen (Cambridge University) (abstract) (slides)

- Abstracts -

Amir Beck - Primal and Dual Predicted Decrease Approximation Methods

We introduce the notion of predicted decrease approximation (PDA) for constrained optimization, a flexible framework which includes as special cases known algorithms such as generalized conditional gradient, proximal gradient, greedy coordinate descent for separable constraints and working set methods for linear equality constraints with bounds. This allows us to provide a unified convergence analysis for these methods. We further consider a partially strongly convex nonsmooth model and show that dual application of PDA-based methods yields new sublinear convergence rate estimates in terms of both primal and dual objectives. As an application, we provide an explicit working set selection rule for SMO-type methods for training the support vector machine with an improved primal convergence analysis.
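
As a concrete instance of the framework, the following numpy sketch shows the classical conditional gradient (Frank-Wolfe) method over an l1 ball, using the predicted decrease <grad f(x), x - s> as a stopping certificate; names, step rule and stopping threshold are illustrative, not the paper's notation.

    import numpy as np

    def cond_grad_l1(grad_f, x0, radius=1.0, tol=1e-6, max_iter=1000):
        """Conditional gradient over the l1 ball {x : ||x||_1 <= radius}.
        Stops when the predicted decrease <grad_f(x), x - s>, a duality
        gap for this problem, falls below tol."""
        x = x0.copy()
        for k in range(max_iter):
            g = grad_f(x)
            i = np.argmax(np.abs(g))           # linear minimization oracle:
            s = np.zeros_like(x)               # a signed vertex of the ball
            s[i] = -radius * np.sign(g[i])
            if g @ (x - s) < tol:              # predicted decrease
                break
            x += (2.0 / (k + 2)) * (s - x)     # classical step size
        return x

    # Toy usage: least squares over the l1 ball.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
    x = cond_grad_l1(lambda z: A.T @ (A @ z - b), np.zeros(50))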

José Bioucas-Dias - Inverse Problems in Interferometric Phase Imaging

Interferometric phase imaging is a class of inverse problems aimed at the estimation of phase from sinusoidal and noisy observations. These degradation mechanisms (sinusoidal nonlinearity and noise) render interferometric phase imaging a quite challenging problem. In this talk I address two paradigmatic inverse problems of this class: a) interferometric denoising, which is the denoising of modulo-2pi phase images, and b) absolute phase estimation, which aims at recovering the original phase, including its 2pi-multiples. Interferometric denoising is tackled by reformulating the original estimation problem as a non-local patch-based sparse regression in the complex domain. Absolute phase estimation is tackled in two alternative ways: 1) via convex relaxation of the original problem; 2) via phase unwrapping, which is an integer optimization problem applied to the denoised interferometric images. The estimates produced by the addressed methods are characterized and their effectiveness illustrated in a series of experiments with simulated and real data.
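
The key reformulation behind the denoising part can be shown in a few lines: filter the phase in the complex domain, where the 2pi wrap-arounds disappear. In this hedged sketch a separable Gaussian blur stands in for the non-local patch-based regression of the talk.

    import numpy as np

    def denoise_wrapped_phase(phi_noisy, sigma=2.0):
        """Denoise a modulo-2pi phase image by smoothing exp(1j*phi),
        which is insensitive to phase wrap-arounds, then taking the
        angle of the result."""
        z = np.exp(1j * phi_noisy)                  # lift to the unit circle
        r = int(3 * sigma)
        t = np.arange(-r, r + 1)
        ker = np.exp(-t**2 / (2 * sigma**2))
        ker /= ker.sum()
        for axis in (0, 1):                         # separable filtering
            z = np.apply_along_axis(
                lambda v: np.convolve(v, ker, mode="same"), axis, z)
        return np.angle(z)                          # back to (-pi, pi]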

Michael Bronstein - Deep learning on geometric data

The past decade in computer vision research has witnessed the re-emergence of "deep learning" and in particular convolutional neural network techniques, which make it possible to learn task-specific features from examples and have achieved breakthrough performance in a wide range of applications. However, in the geometry processing and computer graphics communities, these methods are practically unknown. One of the reasons stems from the fact that 3D shapes (typically modeled as Riemannian manifolds) are not shift-invariant spaces, hence the very notion of convolution is rather elusive. In this talk, I will show some recent works from our group trying to bridge this gap. Specifically, I will show the construction of intrinsic convolutional neural networks on meshes and point clouds, with applications such as finding dense correspondence between deformable shapes and shape retrieval.

Martin Burger - Nonlinear Spectral Decomposition of Images

Spectral decompositions in Hilbert spaces such as the Fourier transform are standard tools in signal and image processing. Due to the specific structure of images, a treatment in Banach spaces (bounded variation, Besov spaces, ...) has proven more appropriate in many instances and hence raises the question of a (generalized) spectral decomposition. This talk will discuss a variational approach to the derivation of a nonlinear spectral decomposition. The decomposition is obtained from the history of certain gradient flows with respect to the norm or some dual functional, and the original image can be recovered from it through a linear reconstruction formula. This immediately opens the possibility of constructing novel filters on images, some applications of which will be discussed. Moreover, the connection to nonlinear eigenvalue problems will be discussed. Based on joint work with Martin Benning, Lina Eckardt, Guy Gilboa, Michael Möller.
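
A minimal 1D sketch of the construction, under simplifying assumptions: the gradient flow is approximated by iterated TV proximal steps, and differences of successive iterates play the role of spectral bands (a crude stand-in for the second-derivative spectral density of the talk). By construction the bands plus the residual sum back to the input, mimicking the linear reconstruction formula.

    import numpy as np

    def tv_prox_1d(f, lam, n_iter=200):
        """prox of lam*TV in 1D: min_u 0.5*||u-f||^2 + lam*||Du||_1,
        by projected gradient on the dual (Chambolle-style)."""
        D = lambda u: np.diff(u)
        Dt = lambda p: np.concatenate([[-p[0]], -np.diff(p), [p[-1]]])
        p = np.zeros(len(f) - 1)
        tau = 0.25 / lam                     # safe dual step for this update
        for _ in range(n_iter):
            p = np.clip(p + tau * D(f - lam * Dt(p)), -1.0, 1.0)
        return f - lam * Dt(p)

    def tv_spectral_bands(f, lam=0.5, n_bands=5):
        """Differences of successive TV-flow iterates act as 'bands'."""
        u, bands = f.copy(), []
        for _ in range(n_bands):
            u_next = tv_prox_1d(u, lam)
            bands.append(u - u_next)
            u = u_next
        return bands, u                      # f == sum(bands) + u

Filtering then amounts to rescaling individual bands before summing them back, which is one way to realize the novel filters mentioned above.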

Antonin Chambolle - Remarks on the acceleration of some alternating minimisation methods

We consider Dykstra-like algorithms, and show that in the simplest cases their structure is similar to forward-backward descent schemes. We show how this basic remark allows us to implement Nesterov/FISTA-like acceleration and improve their theoretical, and in some cases empirical, rate of convergence.
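
The remark translates into code almost verbatim: once an alternating scheme is recognized as a forward-backward map, the usual FISTA extrapolation wraps around it. A hedged sketch follows; the toy problem (accelerated alternating projections onto a box and a halfspace) is illustrative, and works because projecting on C2 then C1 is exactly a forward-backward step for minimizing 0.5*dist(x, C2)^2 over C1.

    import numpy as np

    def fista(fb_step, x0, n_iter=200):
        """Wrap a forward-backward map x -> fb_step(x) with
        Nesterov/FISTA extrapolation."""
        x_prev, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(n_iter):
            x = fb_step(y)
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)
            x_prev, t = x, t_next
        return x_prev

    # Toy usage: accelerated alternating projections.
    a, b = np.array([1.0, 1.0]), 1.5
    proj_box = lambda x: np.clip(x, 0.0, 1.0)
    proj_half = lambda x: x - (max(0.0, a @ x - b) / (a @ a)) * a
    x = fista(lambda z: proj_box(proj_half(z)), np.array([2.0, 2.0]))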

Emilie Chouzenoux - Accelerated dual forward-backward algorithms: Application to video restoration

Inverse problems encountered in video processing often require minimizing criteria involving a very high number of variables. Among available optimization techniques, proximal methods have shown their efficiency in solving large-scale, possibly nonsmooth problems. When some of the proximity operators involved in these methods do not have closed-form expressions, they may constitute a bottleneck in terms of computational complexity and memory requirements. In this talk, we address this problem and propose new accelerated dual forward-backward techniques for solving it. The numerical performance of our approaches is assessed through application examples in the restoration of old video sequences.
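
The typical bottleneck is a proximity operator with no closed form, such as that of x -> lam*||Lx||_1. A dual forward-backward iteration computes it using only a clip (the prox of the l1 norm's conjugate) and matrix-vector products with L. A basic, non-accelerated sketch with illustrative parameters:

    import numpy as np

    def prox_l1_of_L(y, L, lam, gamma, n_iter=300):
        """prox of x -> lam*||L x||_1 at y, by forward-backward on the
        dual: u <- clip(u + gamma*L(y - L^T u), -lam, lam), and then
        x = y - L^T u.  Requires gamma < 2/||L||^2."""
        u = np.zeros(L.shape[0])
        for _ in range(n_iter):
            u = np.clip(u + gamma * (L @ (y - L.T @ u)), -lam, lam)
        return y - L.T @ u

    # Toy usage: total variation along one (e.g. temporal) dimension.
    rng = np.random.default_rng(0)
    n = 100
    L = np.diff(np.eye(n), axis=0)               # first-difference matrix
    y = np.sign(np.sin(np.linspace(0, 6, n))) + 0.3 * rng.standard_normal(n)
    x = prox_l1_of_L(y, L, lam=0.5, gamma=0.45)  # ||L||^2 <= 4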

Julie Delon - A hyperprior approach for inverse problems in imaging

Patch models have proven successful in solving a variety of inverse problems in image restoration. Recent methods, combining patch models with a Bayesian approach, achieve state-of-the-art results in several restoration problems. Different strategies are followed to determine the patch models, such as a fixed number of models to describe all image patches or a locally determined model for each patch. Local model estimation has proven very powerful for image denoising, but it becomes seriously ill-posed for other inverse problems such as interpolation of random missing pixels or zooming. In this work, we present a new framework for image restoration that makes it possible to use local priors for these more general inverse problems. To this aim, we make use of what is known as a hyperprior on the model parameters: this allows us to overcome the ill-posedness of the local estimation and to obtain state-of-the-art results in problems such as interpolation, denoising and zooming. Experiments conducted on synthetic and real data show the effectiveness of the proposed approach. Moreover, we present an application of the proposed restoration method to the generation of HDR (high dynamic range) images from a single image captured using spatially varying exposures.
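
To see why a hyperprior helps, consider the Gaussian patch model: with few similar patches the sample covariance is singular, so restoration from partial observations is ill-posed; a conjugate (inverse-Wishart-style) hyperprior turns the covariance estimate into a shrinkage estimate, and the restored patch is a Gaussian conditional mean. A hedged sketch, not the paper's exact estimator:

    import numpy as np

    def restore_patch(y_obs, mask, patches, sigma=0.1, nu=10.0):
        """Restore a patch with missing pixels under a Gaussian model
        whose covariance is stabilized by hyperprior-style shrinkage.

        patches: (n, d) array of similar patches used to fit the model.
        mask:    boolean (d,), True where the pixel was observed.
        y_obs:   observed values, shape (mask.sum(),)."""
        n, d = patches.shape
        mu = patches.mean(axis=0)
        S = (patches - mu).T @ (patches - mu)
        # Shrinkage toward a scaled identity: without it, S is singular
        # whenever n < d and the local estimation is ill-posed.
        sigma0 = np.trace(S) / (n * d)
        cov = (S + nu * sigma0 * np.eye(d)) / (n + nu)
        # Gaussian conditional mean given the observed pixels.
        M = np.flatnonzero(mask)
        K = cov[np.ix_(M, M)] + sigma**2 * np.eye(len(M))
        w = np.linalg.solve(K, y_obs - mu[M])
        return mu + cov[:, M] @ w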

William Freeman - Measuring and visualizing small motions

We have developed a "motion microscope" to visualize small motions by synthesizing a video with the desired motions amplified. The project began as an algorithm to amplify small color changes in videos, allowing color changes from blood flow to be visualized. Modifications to this algorithm allow small motions to be amplified in a video. I will describe the algorithms, and show color-magnified videos of adults and babies, and motion-magnified videos of throats, pipes, cars, smoke, and pregnant bellies. The motion microscope lets us see the world of tiny motions, and it may be useful in areas of science and engineering. Having this tool led us to explore other vision problems involving tiny motions. I will describe recent work in analyzing fluid flow and depth by exploiting small motions in video or stereo video sequences caused by refraction of turbulent air flow (joint work with the authors below and Tianfan Xue, Anat Levin, and Hossein Mobahi). We have also developed a "visual microphone" to record sounds by watching objects, like a bag of chips, vibrate (joint with the authors below and Abe Davis and Gautam Mysore). Collaborators: Michael Rubinstein, Neal Wadhwa, and co-PI Fredo Durand.
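
In its simplest Eulerian form, the color-magnification algorithm is a per-pixel temporal bandpass filter whose output is amplified and added back. The sketch below uses an ideal bandpass in the Fourier domain and skips the spatial pyramid of the published method; the band (0.8-1.5 Hz, roughly a resting heart rate) and gain are illustrative.

    import numpy as np

    def magnify_color(video, fps, f_lo=0.8, f_hi=1.5, alpha=50.0):
        """Amplify subtle temporal color variations.
        video: (T, H, W) float array; fps: frames per second."""
        T = video.shape[0]
        F = np.fft.rfft(video, axis=0)              # per-pixel spectra
        freqs = np.fft.rfftfreq(T, d=1.0 / fps)
        F[(freqs < f_lo) | (freqs > f_hi)] = 0.0    # ideal bandpass
        bandpassed = np.fft.irfft(F, n=T, axis=0)
        return video + alpha * bandpassed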

Don Geman - Scene Interpretation by Entropy Pursuit

The grand challenge of computer vision is to build a machine which produces a rich semantic description of an underlying scene based on image data. Mathematical frameworks are advanced from time to time, but none clearly points the way to closing the performance gap with natural vision. Entropy pursuit is a sequential Bayesian approach to object detection and localization. The role of the prior model is to apply contextual constraints in order to determine and coherently integrate the evidence acquired at each step. The evidence is provided by a large family of powerful but expensive high-level classifiers (e.g., CNNs) which are implemented sequentially and adaptively. The order of execution is determined online, during scene parsing, and is driven by removing as much uncertainty as possible about the overall scene interpretation given the evidence to date. The goal is to match, or even exceed, the performance obtained with all the classifiers by implementing only a small fraction.
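
The selection rule driving the sequential process can be made concrete with a toy discrete model: given a posterior over K scene interpretations and a pool of classifiers described by conditional answer distributions, run the classifier whose answer is expected to remove the most uncertainty (maximal mutual information). The model below is a hedged cartoon, not the paper's formulation.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def next_classifier(prior, likelihoods):
        """prior: (K,) current posterior over scene interpretations.
        likelihoods: list of (K, A) arrays P(answer | scene), one per
        candidate classifier (a toy stand-in for CNN outputs).
        Returns the index of the classifier with the largest expected
        entropy reduction, i.e. the mutual information."""
        best, best_gain = None, -1.0
        h_now = entropy(prior)
        for j, Lk in enumerate(likelihoods):
            p_ans = prior @ Lk                       # marginal over answers
            exp_post = 0.0
            for a in np.flatnonzero(p_ans > 0):
                post = prior * Lk[:, a] / p_ans[a]   # Bayes update
                exp_post += p_ans[a] * entropy(post)
            if h_now - exp_post > best_gain:
                best, best_gain = j, h_now - exp_post
        return best, best_gain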

Anna Gilbert - Sparse Approximation, List Decoding, and Uncertainty Principles

We consider list versions of sparse approximation problems, where, unlike the existing results in sparse approximation that consider situations with unique solutions, we are interested in multiple solutions. We introduce these problems and present the first combinatorial results on the output list size. These generalize and enhance some of the existing results on the threshold phenomenon and uncertainty principles in sparse approximation. Our definitions and results are inspired by similar results in list decoding. We also present lower bound examples that bolster our results and show they are of the appropriate size. Joint work with Atri Rudra, Hung Ngo, Mahmoud Abo Khamis.

Michael Hintermüller - Optimal Selection of the Regularization Function in a Generalized Total Variation Model

A generalized total variation model with a spatially varying regularization weight is considered. Existence of a solution is shown, and the associated Fenchel-predual problem is derived. For automatically selecting the regularization function, a bilevel optimization framework is proposed. In this context, the lower-level problem, which is parameterized by the regularization weight, is the Fenchel predual of the generalized total variation model and the upper-level objective penalizes violations of a variance corridor. The latter object relies on a localization of the image residual as well as on lower and upper bounds inspired by the statistics of the extremes. Numerical results using a projected gradient method for image denoising, deblurring, as well as Fourier and wavelet inpainting are shown.
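
The role of the predual in the computation is easy to see in 1D: a spatially varying weight alpha enters the dual problem only as a pointwise box constraint |p_i| <= alpha_i, so each projected gradient step is just a clip. A hedged sketch (the weight is taken as given here; in the talk it is chosen by the bilevel problem):

    import numpy as np

    def weighted_tv_denoise_1d(f, alpha, n_iter=300):
        """min_u 0.5*||u-f||^2 + sum_i alpha_i*|(Du)_i| via projected
        gradient on the (pre)dual: u = f - D^T p with |p_i| <= alpha_i.
        alpha: (len(f)-1,) nonnegative weight array."""
        D = lambda u: np.diff(u)
        Dt = lambda p: np.concatenate([[-p[0]], -np.diff(p), [p[-1]]])
        p = np.zeros(len(f) - 1)
        tau = 0.25                           # safe: < 2/||D D^T||, which is <= 4
        for _ in range(n_iter):
            p = np.clip(p + tau * D(f - Dt(p)), -alpha, alpha)
        return f - Dt(p)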

Ron Kimmel - A Spectral Perspective on Shape Analysis

In this talk I will briefly present my perspective on the development of Intel’s RealSense geometry sensor, the intrinsic and extrinsic transformations applied to Alice in Wonderland, and the benefit of having an axiomatic model when one is valid. As for computational/analytical tools, the differential structure of surfaces captured by the Laplace-Beltrami operator (LBO) can be used to construct a space for analyzing visual and geometric information. The decomposition of the LBO at one end, and the heat operator at the other, provide us with efficient tools for dealing with images and shapes. Denoising, matching, segmenting, filtering, and exaggerating are just a few of the problems for which the LBO provides a convenient operating environment. We will review the optimality of a truncated basis provided by the LBO, and a selection of relevant metrics by which such optimal bases are constructed. A specific example is the scale-invariant metric for surfaces, which we argue is a relevant choice for the study of articulated shapes and forms in nature.
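
A minimal illustration of the spectral toolbox, with a k-nearest-neighbor graph Laplacian standing in for the cotangent discretization of the LBO: project the coordinate functions of a point cloud on the first eigenfunctions, i.e. low-pass filter the geometry in the truncated basis discussed in the talk.

    import numpy as np

    def knn_laplacian(X, k=8):
        """Unnormalized graph Laplacian on a point cloud X of shape
        (n, 3); a crude stand-in for the discrete LBO."""
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        n = len(X)
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(D2[i])[1:k + 1]    # skip the point itself
            W[i, nbrs] = W[nbrs, i] = 1.0
        return np.diag(W.sum(1)) - W

    def spectral_smooth(X, n_modes=20, k=8):
        """Low-pass filter the shape in the Laplacian eigenbasis."""
        _, U = np.linalg.eigh(knn_laplacian(X, k))
        B = U[:, :n_modes]                       # truncated basis
        return B @ (B.T @ X)                     # smoothed coordinates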

Felix Krahmer - Empirical Chaos Processes and their application in Blind Deconvolution

The motivation of this talk is the deconvolution of two unknown vectors w and x, each of which is sparse with respect to a generic (but known) basis. That is, one seeks to recover w and x from their circular convolution y = w*x. In this talk, we discuss a restricted isometry property for this problem, which then entails convergence guarantees for the non-convex sparse power factorization algorithm via recent work by Lee et al. A key ingredient of our proof is a set of tail bounds for specific random processes, which can be interpreted as the empirical processes corresponding to a chaos process. We analyze such processes in terms of Talagrand's gamma_2 functional. For the blind deconvolution application, our results yield convergence guarantees whenever the sparsity of the signals is less than cL/log^6(L), where L is the dimension and c is an absolute constant. This is joint work with Justin Romberg (Georgia Tech) and Ali Ahmed (MIT).
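
To fix notation, here is the measurement model in numpy: both factors are sparse in known (here random orthonormal) bases, and only their circular convolution is observed. The recovery step, sparse power factorization, is beyond this sketch.

    import numpy as np

    rng = np.random.default_rng(1)
    L, k = 256, 4                                # dimension, sparsity level

    # Known generic bases (random orthonormal via QR).
    B = np.linalg.qr(rng.standard_normal((L, L)))[0]
    C = np.linalg.qr(rng.standard_normal((L, L)))[0]

    def k_sparse(L, k):
        h = np.zeros(L)
        h[rng.choice(L, k, replace=False)] = rng.standard_normal(k)
        return h

    w = B @ k_sparse(L, k)                       # unknown filter
    x = C @ k_sparse(L, k)                       # unknown signal

    # Observed data: circular convolution, computed via the FFT.
    y = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)).real

The guarantee quoted above then says that recovery from y succeeds roughly as long as the sparsity k stays below cL/log^6(L).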

Carole Le Guyader - Joint Segmentation/Registration Model by Shape Alignment Via Weighted Total Variation Minimization and Nonlinear Elasticity

This presentation falls within the scope of joint segmentation-registration using nonlinear elasticity principles. Saint Venant-Kirchhoff materials being the simplest hyperelastic materials (hyperelasticity being a suitable framework when dealing with large and nonlinear deformations), we propose viewing the shapes to be matched as such materials. We then introduce a variational model combining a measure of dissimilarity based on weighted total variation and a regularizer based on the stored energy function of a Saint Venant-Kirchhoff material. Adding a weighted total variation criterion makes it possible to align the edges of the objects even if the modalities are different. We derive a relaxed problem associated with the initial one, for which we are able to provide an existence result for minimizers. We then describe and analyze a numerical solution method based on a decoupling principle, including a theoretical Gamma-convergence result. Applications on academic and biological images are provided.
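
For reference, the regularizer is the stored energy of a Saint Venant-Kirchhoff material, W(F) = lambda/2 * tr(E)^2 + mu * tr(E^2), with the Green-Lagrange strain E = (F^T F - I)/2. A small numpy sketch (Lame parameters illustrative):

    import numpy as np

    def svk_energy(F, lam=1.0, mu=1.0):
        """Stored energy of a Saint Venant-Kirchhoff material for a
        deformation gradient F (a (d, d) matrix)."""
        E = 0.5 * (F.T @ F - np.eye(F.shape[0]))
        return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)

    assert svk_energy(np.eye(2)) == 0.0          # rest state costs nothing
    print(svk_energy(np.diag([1.2, 1.0])))       # stretching is penalized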

Julien Mairal - A Universal Catalyst for First-Order Optimization

We introduce a generic scheme for accelerating first-order optimization methods in the sense of Nesterov. Our approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these approaches, we provide acceleration and explicit support for non-strongly convex objectives. In addition to theoretical speed-up, we also show that acceleration is useful in practice, especially for ill-conditioned problems where we measure significant improvements.
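
The scheme itself fits in a few lines: at each outer step, approximately minimize the auxiliary objective f(x) + kappa/2 * ||x - y_k||^2 with the inner method of your choice, then extrapolate. The sketch below uses the basic non-strongly-convex extrapolation sequence; the paper's tuned parameter schedule and inner stopping rules are omitted.

    import numpy as np

    def catalyst(inner_solve, x0, kappa=1.0, n_outer=50):
        """inner_solve(y, kappa, x_start): approximate minimizer of
        f(x) + kappa/2*||x - y||^2, warm-started at x_start."""
        x_prev, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(n_outer):
            x = inner_solve(y, kappa, x_prev)
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)
            x_prev, t = x, t_next
        return x_prev

    # Toy usage: f(x) = 0.5*||Ax - b||^2, inner solver = gradient steps.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 60)), rng.standard_normal(30)
    def inner(y, kappa, x):
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + kappa)
        for _ in range(20):
            x = x - step * (A.T @ (A @ x - b) + kappa * (x - y))
        return x
    x = catalyst(inner, np.zeros(60))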

Stéphane Mallat - Understanding (or not) Deep Neural Networks

Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems, often with spectacular results. We review their architecture, which scatters data with a cascade of linear filter weights and non-linearities. A mathematical framework is introduced to analyze their properties, with many open questions. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries, and sparse separations. Applications are shown for image and audio processing as well as quantum energy regression.

Pina Marziliano - Sampling sparse signals with finite rate of innovation with applications in diffusion MRI

Sparse representation of signals has gained much interest in the signal processing community. In this talk, the sampling theory of signals that contain a finite number of degrees of freedom, also known as signals with a finite rate of innovation, will first be presented, followed by an example in biomedical imaging. In particular, a new reconstruction algorithm used to extract the orientation of fibers in diffusion Magnetic Resonance Imaging (dMRI) will be presented.
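
The classical reconstruction behind finite-rate-of-innovation sampling can be demonstrated in a few lines for a stream of K Diracs: from 2K+1 Fourier coefficients, the annihilating-filter (Prony) method recovers the locations as roots of a filter found in the nullspace of a small Toeplitz system. A noiseless sketch with illustrative values:

    import numpy as np

    rng = np.random.default_rng(0)
    K = 3                                        # number of Diracs
    t_true = np.sort(rng.uniform(0, 1, K))      # locations in [0, 1)
    a_true = rng.uniform(1, 2, K)               # amplitudes

    # Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), |m| <= K.
    m = np.arange(-K, K + 1)
    X = np.exp(-2j * np.pi * np.outer(m, t_true)) @ a_true

    # Annihilating filter h: sum_l h[l]*X[m-l] = 0 for all valid m.
    A = np.array([[X[r - l + K] for l in range(K + 1)] for r in range(K)])
    h = np.linalg.svd(A)[2][-1].conj()           # nullspace vector
    u = np.roots(h)                              # its zeros encode locations
    t_rec = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))

    # Amplitudes: least squares on the Vandermonde system at t_rec.
    a_rec = np.linalg.lstsq(
        np.exp(-2j * np.pi * np.outer(m, t_rec)), X, rcond=None)[0].real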

Simon Masnou - Discrete Varifolds, Point Clouds, and Surface Approximation

There are many models for the discrete approximation of a surface: point clouds, meshes, digital shapes (pixels, voxels), etc. We claim that it is possible to study these various approximations in a common setting using the notion of varifolds. Varifolds are tools from geometric measure theory which were introduced by Almgren in the context of Plateau’s problem. They carry both spatial and tangential information, and they have nice properties in a variational context: compactness, continuity of mass, multiplicity information, control of regularity, and a generalized notion of mean curvature. The aforementioned approximations can be associated with "discrete varifolds". The talk will be devoted to approximation properties of such discrete varifolds, to a notion of approximate mean curvature for these objects, and to the convergence properties of this approximate curvature. Numerical evaluations on various 2D and 3D point clouds will illustrate these notions. This is joint work with Blanche Buet (Paris Sud) and Gian Paolo Leonardi (Modena).

Jan Modersitzki - Image Registration: Data Fusion and Motion Correction

Image registration is a fascinating, important and challenging problem in image processing and particularly in medical imaging. Given two or more images taken at different times, from different devices or perspectives, the goal is to automatically establish correspondences between objects within the images. To this end, geometrical transformations have to be computed such that the transformed images match. In this talk, we introduce the problem mathematically and present typical areas of medical application. In particular, we outline a state-of-the-art variational approach that provides the necessary flexibility for a huge range of applications. The backbone of this approach is a well-designed objective function that is based on problem-specific data-fitting terms and so-called regularizers. As the problem is inherently non-convex, the integration of constraints is used to incorporate additional information such as point-to-point correspondences, local rigidity of structures or volume preservation of the sought transformation. The talk particularly discusses a discontinuous sliding motion model as well as a tailored approach for dynamic contrast-enhanced image sequences.
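
To make the structure of such an objective concrete, here is a minimal numpy sketch with illustrative choices (SSD data-fitting plus a first-order, diffusion-type regularizer; the talk covers a much broader family of terms and constraints):

    import numpy as np

    def ssd(transformed, reference):
        """Sum-of-squared-differences data-fitting term."""
        return 0.5 * np.sum((transformed - reference) ** 2)

    def diffusion_reg(u):
        """First-order regularizer on a 2D displacement field u of
        shape (2, H, W): penalizes non-smooth deformations."""
        gx = np.diff(u, axis=1)
        gy = np.diff(u, axis=2)
        return 0.5 * (np.sum(gx ** 2) + np.sum(gy ** 2))

    def registration_objective(transformed, reference, u, alpha=0.1):
        """Data-fitting term plus alpha times the regularizer."""
        return ssd(transformed, reference) + alpha * diffusion_reg(u)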

Boaz Nadler - Edge Detection under computational constraints: a sublinear approach

Edge detection is an important task in image analysis. Various applications require real-time detection of long edges in large noisy images. Motivated by such settings, in this talk we will address the following question: how well can one detect long edges under severe computational constraints that allow only a fraction of all image pixels to be processed? We present fundamental lower bounds on edge detection in this setup, a sublinear algorithm for long edge detection, and a theoretical analysis of the inevitable tradeoff between its statistical detection performance and the allowed computational budget. The competitive performance of our algorithm will be illustrated on both simulated and real images. Joint work with Inbal Horev, Meirav Galun, Ronen Basri (Weizmann) and Ery Arias-Castro (UCSD).
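
A cartoon of the sublinear idea (purely illustrative, not the paper's algorithm or its budget allocation): read only a few narrow vertical strips of the image, average each strip across its width to suppress noise, and test for a step. This touches O(n_strips * width * H) pixels rather than all H*W.

    import numpy as np

    def detect_edges_sublinear(img, n_strips=8, width=5, thresh=4.0):
        """Look for horizontal step edges while reading only a few
        vertical strips of the (H, W) image."""
        H, W = img.shape
        cols = np.linspace(0, W - width, n_strips).astype(int)
        hits = []
        for c in cols:
            strip = img[:, c:c + width].mean(axis=1)  # noise ~ 1/sqrt(width)
            step = np.abs(np.diff(strip))
            r = int(np.argmax(step))
            if step[r] > thresh * np.median(step):    # crude detection test
                hits.append((r, c))
        return hits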

Lorenzo Rosasco - Less is more: optimal learning with subsampling regularization

In this talk, we discuss recent results on common techniques for scaling up nonparametric methods such as kernel methods and Gaussian processes. In particular, we focus on data-dependent and data-independent subsampling methods, namely Nyström and random features, and study their generalization properties within a statistical learning theory framework. On the one hand, we show that these methods can achieve optimal learning errors while being computationally efficient. On the other hand, we show that subsampling can be seen as a form of regularization, rather than only a way to speed up computations.
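
A compact sketch of the data-independent variant (plain Nyström with uniformly sampled centers): kernel ridge regression restricted to the span of m sampled columns, where m acts as a second regularization knob alongside lam. Names and parameters are illustrative.

    import numpy as np

    def gauss_kernel(A, B, s=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * s * s))

    def nystrom_krr(X, y, m=50, lam=1e-3, s=1.0, seed=0):
        """Kernel ridge regression in the span of m sampled centers;
        smaller m means more 'subsampling regularization'."""
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), m, replace=False)]
        Knm = gauss_kernel(X, C, s)                  # (n, m)
        Kmm = gauss_kernel(C, C, s)                  # (m, m)
        alpha = np.linalg.solve(
            Knm.T @ Knm + len(X) * lam * Kmm + 1e-10 * np.eye(m),
            Knm.T @ y)
        return lambda Xt: gauss_kernel(Xt, C, s) @ alpha

    # Toy usage: fit a noisy sine with only m = 20 centers.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
    f = nystrom_krr(X, y, m=20, lam=1e-4)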

Michael Unser - Sparsity and the optimality of splines for inverse problems: Deterministic vs. statistical justifications

In recent years, significant progress has been achieved in the resolution of ill-posed linear inverse problems by imposing l1/TV regularization constraints on the solution. Such sparsity-promoting schemes are supported by the theory of compressed sensing, which is finite-dimensional for the most part. In this talk, we take an infinite-dimensional point of view by considering signals that are defined in the continuous domain. We claim that non-uniform splines whose type is matched to the regularization operator are optimal candidate solutions. We show that such functions are global minimizers of a broad family of convex variational problems where the measurements are linear and the regularization is a generalized form of total variation associated with some operator L. We then discuss the link with sparse stochastic processes that are solutions of the same type of differential equations. The pleasing outcome is that the statistical formulation yields maximum a posteriori (MAP) signal estimators that involve the same type of sparsity-promoting regularization, albeit in a discretized form. The latter corresponds to the log-likelihood of the projection of the stochastic model onto a finite-dimensional reconstruction space.
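
For orientation, the family of variational problems in question can be written (in illustrative notation, not necessarily the talk's) as

    min_s  sum_{m=1}^{M} ( y_m - <h_m, s> )^2  +  lambda * || L s ||_M,

where the h_m are the linear measurement functionals and ||.||_M denotes the total-variation norm of a measure, the continuous-domain counterpart of the l1 norm. The optimality claim is then that non-uniform L-splines, i.e. signals s for which L s is a finite sum of weighted Dirac impulses placed at adaptive knots, arise as global minimizers.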

Tuomo Valkonen - What do regularisers do?

Which regulariser is the best? Is any of them any good? Do they introduce artefacts? What other qualitative properties do they have? These are some questions on which I want to shed some light of the early dawn. Specifically, I will first discuss recent work on natural conditions, based on an analytical study of bilevel optimisation, ensuring that regularisation does indeed improve an image. Second, I will discuss work on proving that the jump set of the solution is contained in that of the data for higher-order regularisers.

Pierre Weiss - On the decomposition of blurring operators in wavelet bases

In this talk, I will first present a few properties of blurring operators (convolutions or, more generally, regularizing linear integral operators) when expressed in wavelet bases or wavelet frames. These properties were one of the initial motivations of Y. Meyer for developing wavelet theory. Surprisingly, they have scarcely been used in image processing, despite the success of wavelets in that domain. We will then show that these theoretical results may have an important impact on the practical resolution of inverse problems, and of deblurring in particular. As an example, we will show that they yield speed-ups of one to two orders of magnitude in solving convex l1-l2 deblurring problems.
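
The property is easy to probe numerically: express a circulant Gaussian blur matrix in an orthonormal Haar basis and count the entries carrying significant energy; the overwhelming majority are negligible, which is what makes compressed wavelet-domain approximations of the operator, and hence fast deblurring, possible. A sketch with a hand-rolled Haar transform and illustrative thresholds:

    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar transform matrix; n must be a power of 2."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            top = np.kron(H, [1.0, 1.0])
            bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
            H = np.vstack([top, bot]) / np.sqrt(2.0)
        return H

    n = 256
    t = np.arange(n)
    ker = np.exp(-0.5 * (np.minimum(t, n - t) / 3.0) ** 2)
    ker /= ker.sum()
    Blur = np.array([np.roll(ker, i) for i in range(n)])   # circulant blur

    W = haar_matrix(n)
    A = W @ Blur @ W.T                     # the operator in the Haar basis
    frac = np.mean(np.abs(A) > 1e-4 * np.abs(A).max())
    print(f"significant entries: {100 * frac:.2f}%")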