NIPS 2013
Reconciling 'priors' & 'priors' without prejudice?


There are two major routes to address linear inverse problems. Whereas regularization-based approaches build estimators as solutions of penalized regression optimization problems, Bayesian estimators rely on the posterior distribution of the unknown, given some assumed family of priors. While these may seem radically different approaches, recent results have shown that, in the context of additive white Gaussian denoising, the Bayesian conditional mean estimator is always the solution of a penalized regression problem. The contribution of this paper is twofold. First, we extend the additive white Gaussian denoising results to general linear inverse problems with colored Gaussian noise. Second, we characterize conditions under which the penalty function associated with the conditional mean estimator can satisfy certain popular properties such as convexity, separability, and smoothness. This sheds light on a tradeoff between computational efficiency and estimation accuracy in sparse regularization, and draws connections between Bayesian estimation and proximal optimization.
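As a minimal sketch of the setting described in the abstract (the notation y = Ax + n, the penalty \phi, and the noise covariance \Sigma are ours, not taken from the paper; the weighted data-fidelity term is the standard choice for colored Gaussian noise and is shown here only as an assumption), the two estimators being reconciled are

\[
\hat{x}_{\mathrm{reg}}(y) \in \arg\min_{x}\; \tfrac{1}{2}\,\| y - A x \|_{\Sigma^{-1}}^{2} + \phi(x),
\qquad
\hat{x}_{\mathrm{MMSE}}(y) = \mathbb{E}[\, x \mid y \,].
\]

In the white-noise denoising case (A = I, \Sigma = \sigma^2 I), the prior result recalled in the abstract states that \hat{x}_{\mathrm{MMSE}} coincides with \hat{x}_{\mathrm{reg}} for some penalty \phi; the paper extends this correspondence to general A and colored noise, and characterizes when that penalty can be convex, separable, or smooth.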


Friday December 6, 2013 7:00pm - 11:59pm PST
Harrah's Special Events Center, 2nd Floor
  Posters
  • Poster # Fri34