Compressible priors

Surprising new connections between Bayesian estimation and sparse regression

A common Bayesian interpretation of sparse regression

L1 regularization, often used for signal denoising and inverse problems, is commonly interpreted as a Maximum A Posteriori (MAP) estimator under a Laplacian prior. We questioned the relevance of this interpretation through two main contributions.
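As a minimal illustration of this standard interpretation (a sketch, not code from the publications above): for scalar denoising with unit-variance Gaussian noise, the MAP estimator under a Laplacian prior p(x) ∝ exp(−λ|x|) is the well-known soft-thresholding operator.

```python
import numpy as np

def soft_threshold(y, lam):
    # MAP estimate under a Laplacian prior p(x) ∝ exp(-lam*|x|) with
    # unit-variance Gaussian noise: argmin_x 0.5*(y - x)**2 + lam*|x|.
    # Entries with |y| <= lam are set exactly to zero, hence sparsity.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

y = np.array([-3.0, -0.5, 0.2, 2.0])
print(soft_threshold(y, 1.0))  # small entries are zeroed out
```

This exact zeroing of small coefficients is why the L1 penalty is associated with sparse solutions.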

Compressible priors

We established the relationship between statistical models and the notion of sparsity in terms of reconstruction accuracy, showing that a number of distributions often described as compressible are in fact not. Notable examples of such distributions include the Laplacian distribution, generalized Gaussian distributions, and more generally all distributions with a finite fourth moment. Genuinely compressible distributions include the Cauchy distribution and certain generalized Pareto distributions.
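The contrast can be seen numerically (a hedged sketch, with parameters chosen for illustration only): draw i.i.d. samples from a Laplacian and from a Cauchy distribution, keep only the k largest-magnitude entries, and measure the relative l2 error of this best-k-term approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 1_000  # keep the k largest of n coefficients (illustrative sizes)

def rel_best_k_error(x, k):
    # Relative l2 error after keeping only the k largest-magnitude entries.
    mags = np.sort(np.abs(x))[::-1]
    return np.sqrt(np.sum(mags[k:] ** 2)) / np.sqrt(np.sum(mags ** 2))

laplace = rng.laplace(size=n)
cauchy = rng.standard_cauchy(size=n)

print(rel_best_k_error(laplace, k))  # stays close to 1: not compressible
print(rel_best_k_error(cauchy, k))   # small: heavy tails concentrate the energy
```

For the light-tailed Laplacian, discarding 99% of the entries loses almost all of the energy, whereas the heavy-tailed Cauchy samples concentrate their energy in a few large entries, matching the compressibility distinction above.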


In the context of additive white Gaussian noise removal, we showed that solving a penalized least squares regression problem with penalty φ(x) need not be interpreted as assuming a prior C · exp(−φ(x)) and using the MAP estimator. In particular, we showed that for any prior P(x), the minimum mean square error (MMSE) estimator is the solution of a penalized least squares problem with some penalty φ(x), which can in turn be interpreted as the MAP estimator with the prior C · exp(−φ(x)). Conversely, for certain penalties φ(x), the solution of the penalized least squares problem is indeed the MMSE estimator under a certain prior P(x). In general, however, dP(x) differs from C · exp(−φ(x))dx.
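A small numerical sketch (with an illustrative choice of prior and noise level, not taken from the publications) makes the MAP/MMSE gap concrete: under the same Laplacian prior and Gaussian noise, the MAP estimator is soft thresholding, while the MMSE estimator is the posterior mean, computed here by brute-force quadrature. The two do not coincide.

```python
import numpy as np

def mmse_estimate(y, sigma=1.0, lam=1.0):
    # Posterior mean E[x | y] under a Laplacian prior p(x) ∝ exp(-lam*|x|)
    # and Gaussian noise y = x + n, n ~ N(0, sigma^2), via numerical quadrature.
    grid = np.linspace(-20.0, 20.0, 200_001)
    w = np.exp(-lam * np.abs(grid) - (y - grid) ** 2 / (2.0 * sigma ** 2))
    return np.sum(grid * w) / np.sum(w)

def map_estimate(y, lam=1.0):
    # MAP estimator under the same prior: soft thresholding.
    return np.sign(y) * max(abs(y) - lam, 0.0)

y = 0.8
print(map_estimate(y))   # exactly 0: MAP zeroes out small observations
print(mmse_estimate(y))  # strictly between 0 and y: smooth shrinkage, never exactly 0
```

The MAP output is exactly zero for |y| ≤ λ while the posterior mean is a smooth, nowhere-zero shrinkage rule, so the estimator obtained by penalized least squares with the L1 penalty is not the Bayes-optimal (MMSE) estimator for the Laplacian prior.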

More details


Rémi Gribonval, coordinator
Equipe-Projet METISS
INRIA Rennes - Bretagne Atlantique
Campus de Beaulieu
F-35042 Rennes cedex, France.

Phone: (+33/0) 299 842 506
Fax: (+33/0) 299 847 171
E-MAIL: contact