Organizers: Luis Fredes and Camille Male
ReLU neural network parameterizations are well known to satisfy rescaling symmetries, which arise from the positive homogeneity of the ReLU activation function. Ignoring such symmetries can lead to inefficient algorithms, non-informative theoretical bounds, and irrelevant interpretations of parameters. Can such symmetries be harnessed, theoretically and practically? This is the goal of the path-lifting, a rescaling-invariant polynomial representation of the parameters of ReLU networks and their modern variants with max-pooling and skip connections.
Despite its combinatorial dimension, the path-lifting yields easily computable quantities that reveal useful properties of the corresponding functions, from Lipschitz regularity to convexity and statistical generalization bounds. Besides introducing the general concept of path-lifting through basic examples and highlighting its key mathematical and computational properties, the talk will briefly tour some of its applications, such as network pruning with guarantees.
Primarily based on joint work with A. Gonon, N. Brisebarre, E. Riccietti (https://hal.science/hal-04225201v5, https://hal.science/hal-04584311v3)
and with A. Gagneux, M. Massias, E. Soubies (https://hal.science/hal-04877619v1)
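The rescaling symmetry mentioned above can be made concrete on a toy network. The following is a minimal sketch on a hypothetical one-hidden-layer network (the shapes and data are invented for illustration, not taken from the talk): since ReLU is positively homogeneous, multiplying a hidden neuron's incoming weights and bias by c > 0 and its outgoing weights by 1/c leaves the network function unchanged, while products of weights along input-output paths, the raw material of the path-lifting, are invariant.

```python
import numpy as np

# Hypothetical one-hidden-layer ReLU network, for illustration only.
def relu(x):
    return np.maximum(x, 0.0)

def net(x, W1, b1, W2):
    return W2 @ relu(W1 @ x + b1)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden-layer weights
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))   # output weights

# Rescale every hidden neuron by c: incoming weights and bias times c,
# outgoing weights times 1/c. ReLU's positive homogeneity makes the two
# factors cancel, so the function is unchanged.
c = 3.7
W1r, b1r, W2r = c * W1, c * b1, W2 / c

x = rng.standard_normal(3)
print(np.allclose(net(x, W1, b1, W2), net(x, W1r, b1r, W2r)))   # same function
print(np.allclose(W2[:, :, None] * W1[None, :, :],              # same path products
                  W2r[:, :, None] * W1r[None, :, :]))
```

The second check shows why path products are a natural rescaling-invariant parameterization: the factor c introduced on each hidden neuron cancels along every input-output path.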
To be announced
To be announced
In this presentation, a response matrix (here, species abundances) is assumed to depend on explanatory variables (here, environmental variables) that are numerous and redundant, thus demanding dimension reduction. Supervised Component-based Generalized Linear Regression (SCGLR), a Partial Least Squares-type method, is designed to extract from the explanatory variables several components jointly supervised by the set of responses. However, this methodology still has some limitations that we aim to overcome in this work. The first limitation comes from the assumption that all the responses are predicted by the same explanatory space. The second limitation is that previous works involving SCGLR assume the responses to be independent conditionally on the explanatory variables. Again, this is not very likely in practice, especially in situations like those in ecology, where a non-negligible part of the explanatory variables could not be measured. To overcome the first limitation, we assume that the responses are partitioned into several unknown groups, and that the responses in each group are predictable from an appropriate number of specific orthogonal supervised components of the explanatory variables. The second work relaxes the conditional independence assumption: a small set of latent factors models the residual covariance matrix of the responses conditional on the components. The approaches presented in this work are tested on simulation schemes, and then applied to ecology datasets.
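The notion of a supervised component at the heart of the abstract above can be sketched in a few lines. This is emphatically not the SCGLR algorithm itself, only the underlying PLS-type idea on invented synthetic data: a direction w in the (numerous, redundant) explanatory variables is chosen to maximize covariance with the whole set of responses, here via the leading left singular vector of the cross-covariance matrix.

```python
import numpy as np

# Synthetic data, for illustration only: responses Y driven by predictors X.
rng = np.random.default_rng(1)
n, p, q = 200, 10, 3
X = rng.standard_normal((n, p))                    # explanatory variables
Y = X @ rng.standard_normal((p, q)) + 0.5 * rng.standard_normal((n, q))

# PLS-type supervised direction: center both blocks, then take the leading
# left singular vector of the cross-covariance X'Y. It maximizes the summed
# squared covariances between the component Xw and the responses.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
w = U[:, 0]                                        # first supervised direction (unit norm)
f = Xc @ w                                         # first supervised component

print(f.shape, np.isclose(np.linalg.norm(w), 1.0))
```

Further components would be extracted under orthogonality constraints, and SCGLR embeds this idea in a generalized linear model; both refinements are beyond this sketch.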
Joint seminar with OptimAI.
To be announced
To be announced
To be announced