Our society is in constant interaction with image content. Market data indicates that the number of mobile acquisition devices will increase considerably, and in particular that a spectacular growth in the number of users is expected in the years to come. This trend is creating new needs and services for companies, professionals and consumers.
To make better use of color image data, accurate processing methods are required. This implies control over the global statistics of images in order to (i) improve their quality, (ii) improve image indexing methods and (iii) facilitate image analysis. The statistics of images can be of very different types: color, geometrical information, tone or texture. Direct applications of the control of such features are color and tone homogenization of a set of images (HDR images, video sequences, etc), removal of shadowed or overexposed image areas, transfer of `style' attributes (textures), etc.
A larger set of image processing problems of major importance is also implicitly concerned. Indeed, defining generic methods for homogenizing image statistics or higher-level image descriptions (shapes, objects, etc) implies developing fast and robust metrics to compare these statistics. Image segmentation and image retrieval are consequently directly concerned.
Currently, no automatic method properly deals with the equalization of the global statistics between pairs of images at a reasonable computational cost. Short of tremendous manual user interaction, only approximate color equalization can currently be achieved between color images.
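As an illustration of the kind of approximate equalization available today, consider per-channel histogram matching, which simply assigns the k-th smallest source pixel the k-th smallest target value. The sketch below is a minimal toy example (hypothetical 2x2 single-channel images of equal size), not the project's method:

```python
import numpy as np

def match_channel(source, target):
    """Approximate color equalization of one channel by histogram
    matching: the k-th smallest source pixel receives the k-th
    smallest target value (a monotone rearrangement of intensities)."""
    src = source.ravel()
    order = np.argsort(src)           # pixel indices from darkest to brightest
    matched = np.empty_like(src)
    matched[order] = np.sort(target.ravel())
    return matched.reshape(source.shape)

# Toy single-channel "images" (hypothetical data, same size assumed).
src = np.array([[0.1, 0.9], [0.5, 0.3]])
tgt = np.array([[0.2, 0.8], [0.6, 0.4]])
out = match_channel(src, tgt)
# `out` now has exactly the value distribution of `tgt`, while keeping
# the spatial ordering (darker/brighter structure) of `src`.
```

Applied independently per channel, this ignores inter-channel correlations, which is one reason such equalization remains only approximate.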
Additionally, when describing images with high-level content descriptors, we face heterogeneous features (colors, textures, shapes, objects, \dots) of very high dimension. While a lot of research effort is dedicated to enhancing these descriptors (by adding information such as text or semantic context) in a big data perspective, the metrics currently used to compare descriptor densities do not properly deal with significant noise and data outliers.
A powerful framework for dealing with such data outliers is Optimal Transport (OT). In contrast to most distances from information theory (e.g. the Kullback-Leibler divergence), OT takes into account the spatial location of the density modes. It has been shown in the literature that OT produces state-of-the-art results for the comparison of statistical descriptors. One current limitation of this approach is its high computational cost. Nevertheless, it has already been used with great success in the context of color transfer and image retrieval. By using a simple modelling of the ground distance, near real-time computation was reached. However, this simple modelling is not sufficient to improve accuracy and to address a larger set of applications, and generalized OT distances should be considered. Reducing the computational cost associated with advanced OT models is a scientific challenge.
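The contrast between OT and information-theoretic divergences can be seen already on 1-D histograms, where the Wasserstein-1 distance reduces to the L1 norm between cumulative distributions. A minimal sketch (toy Gaussian-shaped histograms, hypothetical data):

```python
import numpy as np

bins = np.arange(256, dtype=float)

def peak(center, width=5.0):
    """Normalized Gaussian-shaped histogram centered at `center` (toy data)."""
    h = np.exp(-0.5 * ((bins - center) / width) ** 2)
    return h / h.sum()

def w1(p, q):
    """1-D Wasserstein-1 distance: L1 norm between the two CDFs."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence, floored by eps to avoid log(0)."""
    return np.sum(p * np.log((p + eps) / (q + eps)))

p = peak(50.0)
q_near, q_far = peak(60.0), peak(200.0)

# w1 grows with the distance between the modes (spatial location matters),
# whereas kl essentially saturates once the supports stop overlapping and
# barely distinguishes q_near-at-60 from q_far-at-200 beyond that point.
```

This mode-location sensitivity is what makes OT robust to shifted or noisy descriptor densities; the price, in higher dimensions and with richer ground distances, is the computational cost discussed above.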
A large-scale deployment of OT would lead to significant performance improvements for various signal and image processing tasks. In this context, the goal of the GOTMI project is to generalize, enhance and adapt OT concepts in order to develop new methodological and numerical tools dedicated to fast color image processing. These scientific and technological issues constitute fundamental upstream research for better dissemination and exploitation of massive image data.