NuHAG :: TALKS

Talks given at NuHAG events

The theory of iterative thresholding algorithms and their acceleration methods


  Massimo Fornasier

  given at  strobl07 (19.06.07 09:30)
  id:  647
  length:  25min
  status:  accepted
  type:  talk
  LINK-Presentation: 
  ABSTRACT:
Since the work of Donoho and Johnstone, soft and hard thresholding operators have been extensively studied for the denoising of digital signals, mainly in a statistical framework. Usually associated with wavelet or curvelet expansions, thresholding allows one to eliminate those coefficients which encode noise components. The assumption is that the original signals are sparse with respect to such expansions and that the effect of noise is essentially to perturb the sparsity structure by introducing nonzero coefficients of relatively small magnitude. While simple, direct thresholding suffices for the statistical estimation of the relevant components of an explicitly given signal, discarding those considered disturbance, computing the sparse representation of a signal given implicitly as the solution of an operator equation or of an inverse problem requires more sophistication. We refer, for instance, to deconvolution and superresolution problems, image recovery and enhancement, and problems arising in geophysics and brain imaging. In these cases, thresholding has been combined with classical Landweber iterations to compute the solutions.
In this talk we present a general theory of iterative thresholding algorithms which includes soft, hard, and the so-called firm thresholding operators. In particular, we develop a unified variational approach to such algorithms which allows for a complete characterization of their convergence properties. Despite their simplicity, which makes them very appealing to users, and their enormous impact in applications, iterative thresholding algorithms converge very slowly and may be impracticable in certain situations. By analyzing their typical convergence dynamics, we propose acceleration methods based (1) on projected gradient iterations and (2) on alternating subspace corrections (domain decompositions). For both of these latter families of algorithms as well, a variational approach is fundamental in order to correctly analyze the convergence.
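As a rough illustration of the three operators named in the abstract, the following sketch applies soft, hard, and firm thresholding componentwise to a coefficient vector; the parameter names (lam, mu) and the firm-shrinkage form (the Gao-Bruce interpolation between soft and hard) are illustrative assumptions, not taken from the talk itself.

```python
import numpy as np

def soft_threshold(x, lam):
    # shrink every coefficient toward zero by lam; kill those below lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # keep coefficients whose magnitude exceeds lam unchanged, zero the rest
    return np.where(np.abs(x) > lam, x, 0.0)

def firm_threshold(x, lam, mu):
    # firm shrinkage (assumed Gao-Bruce form, mu > lam): zero below lam,
    # linear interpolation on (lam, mu], identity above mu; recovers soft
    # thresholding as mu -> infinity and hard thresholding as mu -> lam
    out = np.zeros_like(x)
    mid = (np.abs(x) > lam) & (np.abs(x) <= mu)
    big = np.abs(x) > mu
    out[mid] = np.sign(x[mid]) * mu * (np.abs(x[mid]) - lam) / (mu - lam)
    out[big] = x[big]
    return out
```

For example, with lam = 1 the coefficient 3.0 is shrunk to 2.0 by soft thresholding, kept at 3.0 by hard thresholding, and mapped in between by firm thresholding (depending on mu).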
The talk partially summarizes recent joint results with Ingrid Daubechies, Ron DeVore, Sinan Gunturk, and Holger Rauhut.
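The combination of Landweber iterations with thresholding mentioned above can be sketched as the classical iterative soft-thresholding scheme for a linear operator equation Kx = y with a sparsity (l1) penalty; the operator K, the step size, and the iteration count below are illustrative assumptions, and the slow convergence this talk addresses is exactly that of such plain iterations.

```python
import numpy as np

def iterative_soft_thresholding(K, y, lam, step, n_iter=200):
    """Sketch of Landweber iteration combined with soft thresholding:
        x_{n+1} = S_{step*lam}( x_n + step * K^T (y - K x_n) ),
    minimizing ||K x - y||^2 + 2*lam*||x||_1 (step below 1/||K||^2
    is assumed for convergence)."""
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = x + step * K.T @ (y - K @ x)               # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
    return x
```

When K is the identity the scheme reduces, after one step, to plain soft thresholding of the data y; for ill-conditioned K the plain iteration converges slowly, which is what motivates the projected-gradient and domain-decomposition accelerations discussed in the talk.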

