The recent field of compressive sensing \cite{cata06,do06-2,ra10,fora11}

has seen enormous growth and research activity in recent years.

It is based on the concept of sparsity, and its success in many applications is due to the empirical observation that many

types of signals and images are well-approximated by sparse expansions in terms of a suitable basis or frame. This means

that only a small number of coefficients suffices to describe many signals well. Compressive sensing predicts that such signals can be recovered efficiently from a surprisingly small number of linear measurements. Several recovery algorithms apply, most notably $\ell_1$-minimization and greedy algorithms. Quite remarkably,

all (near-)optimal constructions of measurement matrices in this context known so far are based on randomness, so that

this field has strong interactions with nonasymptotic random matrix theory and Banach space geometry.

In mathematical terms, the compressive sensing problem consists in recovering

a vector $x \in \C^N$ from $y = A x \in \C^m$

where $A \in \C^{m \times N}$ is a suitable matrix (the so-called measurement matrix), and $m \ll N$. While

this is clearly impossible without further knowledge on $x$, compressive sensing makes the assumption that

$x$ is $s$-sparse, i.e., $\|x\|_0 := \# \{\ell : x_\ell \neq 0\} \leq s$. It turns out that such prior knowledge changes the situation

drastically. While the naive approach of using $\ell_0$-minimization, i.e., to recover $x$ via

\[

\min_{z \in \C^N} \|z\|_0 \quad \mbox{ subject to } A z = Ax,

\]

is NP-hard, its convex relaxation, $\ell_1$-minimization, i.e.,

\[

\min_{z \in \C^N} \|z\|_1 \quad \mbox{ subject to } Az = Ax

\]

can be computed with efficient convex optimization techniques. A by-now classical way of analyzing $\ell_1$-minimization, as well as other reconstruction algorithms, proceeds via the restricted isometry property. The restricted isometry constant

$\delta_s$ is defined as the smallest constant such that

\[

(1-\delta_s) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1+\delta_s) \|x\|_2^2 \quad \mbox{ for all } s\mbox{-sparse } x.

\]

If $\delta_{2s}$ is small enough then exact recovery

of all $s$-sparse vectors via $\ell_1$-minimization follows. The reconstruction is also stable under noise. Furthermore, other reconstruction methods, such as greedy and iterative algorithms, possess these features as well.
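As an illustration of the last point, the following minimal sketch (not part of the cited works; NumPy/SciPy and all parameters are our own assumptions) recasts real-valued $\ell_1$-minimization as a linear program by splitting $z = u - v$ with $u, v \geq 0$:

```python
# Sketch: basis pursuit (min ||z||_1 s.t. Az = y) as a linear program,
# real-valued case.  Split z = u - v with u, v >= 0, so that
# ||z||_1 = sum(u) + sum(v) and the constraint becomes [A, -A][u; v] = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N, s = 40, 64, 3                    # illustrative dimensions

A = rng.standard_normal((m, N))        # Gaussian measurement matrix
x = np.zeros(N)                        # s-sparse ground truth
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x                              # measurements

c = np.ones(2 * N)                     # objective: sum(u) + sum(v)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
z = res.x[:N] - res.x[N:]              # recovered vector
```

Since $x$ itself is feasible, the minimum can never exceed $\|x\|_1$; in the regime $m \geq C s \ln(N/s)$ discussed below, the minimizer coincides with $x$ with high probability.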

Suitable constructions of measurement matrices $A$

are usually based on randomness. Simple choices are either Bernoulli random matrices or

Gaussian random matrices $A$. In both types of random matrices, the entries $A_{j,k}$ are independent and

take the values $\pm 1$ with equal probability (Bernoulli matrices) or are standard normally distributed (Gaussian matrices). A typical result in compressive sensing states that an $s$-sparse vector $x \in \C^N$ can

be recovered exactly (and stably) from $y = Ax$ with high probability on the random draw of $A$ using $\ell_1$-minimization

provided

\[

m \geq C s \ln(N/s).

\]

This means that the number of required measurements is dictated by the sparsity $s$ rather than by the signal length $N$.

Such estimates are often established via the restricted isometry constants.
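As a rough numerical illustration (a sketch under assumed parameters, not from the cited works; note that computing $\delta_s$ exactly is intractable, so sampling random sparse directions only yields a lower bound), one can observe the concentration of $\|Ax\|_2^2$ around $\|x\|_2^2$ for a normalized Gaussian matrix:

```python
# Sketch: empirically probe the restricted isometry behavior of a
# normalized Gaussian matrix by sampling random s-sparse unit vectors.
# (This only yields a lower bound on delta_s; the true constant is a
# supremum over all s-sparse vectors.)
import numpy as np

rng = np.random.default_rng(1)
m, N, s, trials = 200, 1000, 5, 100    # illustrative dimensions

A = rng.standard_normal((m, N)) / np.sqrt(m)   # entries N(0, 1/m)

worst = 0.0
for _ in range(trials):
    x = np.zeros(N)
    idx = rng.choice(N, size=s, replace=False)
    x[idx] = rng.standard_normal(s)
    x /= np.linalg.norm(x)             # unit-norm s-sparse vector
    worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
```

The observed deviations stay well below $1$, consistent with the concentration that underlies restricted isometry estimates for Gaussian matrices.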

Lower estimates of Gelfand widths of $\ell_p$-balls \cite{gagl84,foparaul10}, $0 < p \leq 1$,

imply that the above estimate on $m$ is optimal up to the constant.

Despite the optimality of Bernoulli and Gaussian random matrices in this context, they are of limited

use for practical purposes since they do not possess any structure. However, structure is important in order

to capture the nature of a given application. Moreover, structure may speed up matrix-vector multiplication,

which is crucial for large scale recovery algorithms. Such considerations lead to various models of structured random

matrices \cite{ra10}.

In this talk we will consider a particular structured measurement matrix \cite{pfrata08,pfra10,pfra11}

that arises from time-frequency analysis.

Let $T$ denote the translation operator,

and $M$ the modulation operator on $\C^n$,

defined by


\[

(T^{k} g)_q=g_{q-k \mod n}\quad\mbox{and}\quad (M^\ell g)_q = e^{2\pi i \ell q/n} g_q.

\]

The operators $\pi(\lambda) = M^\ell T^k$, $\lambda = (k,\ell)$, are called time-frequency shifts.

For a non-zero vector $g$ the set

\begin{equation}\label{GaborSystem}

\{\pi(\lambda) g: \lambda \in \Z_n{\times} \Z_n\}

\end{equation}

is called a Gabor system \cite{gr01} and the matrix

$\Psi_g \in \C^{n{\times} n^2}$ whose columns are the vectors $\pi(\lambda) g$, $\lambda

\in \Z_n{\times} \Z_n$, of a Gabor system, is referred to as a Gabor synthesis matrix.

It is a natural question whether this type of structured matrix is suitable for compressive sensing, and which

vectors $g$ are good for this task. Here, we choose the vector $g$ at random by taking all entries to be independent

and uniformly distributed on the complex torus $\{z \in \C : |z| = 1\}$.
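A minimal NumPy sketch of this construction (illustrative only; the library and dimension are our own choices):

```python
# Sketch: the Gabor synthesis matrix Psi_g in C^{n x n^2}, with g drawn
# uniformly at random on the complex torus {|z| = 1}.
import numpy as np

rng = np.random.default_rng(2)
n = 16                                  # illustrative dimension

g = np.exp(2j * np.pi * rng.random(n))  # unimodular random window

q = np.arange(n)
cols = []
for k in range(n):                      # translations: (T^k g)_q = g_{q-k mod n}
    for ell in range(n):                # modulations: (M^ell g)_q = e^{2 pi i ell q / n} g_q
        cols.append(np.exp(2j * np.pi * ell * q / n) * np.roll(g, k))
Psi = np.column_stack(cols)             # shape (n, n^2)
```

A quick sanity check: for any nonzero $g$, the full Gabor system is a tight frame, i.e., $\Psi_g \Psi_g^* = n \|g\|_2^2 \, \mathrm{Id}$; for a unimodular window this equals $n^2 \, \mathrm{Id}$.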

We obtained essentially two types of recovery results so far \cite{pfra10,pfra11}.

The first considers nonuniform recovery \cite{pfra10}: A (fixed) $s$-sparse vector $x \in \C^{n^2}$ can be recovered from a random draw of $\Psi_g \in \C^{n \times n^2}$ (this means that $g$ is chosen at random), that is, from $y = \Psi_g x \in \C^n$, with probability at least $1-\varepsilon$

via $\ell_1$-minimization provided

\[

n \geq C s \log(n/\varepsilon).

\]

The second main result of the talk \cite{pfra11} shows that the restricted isometry constants of a random draw of $\Psi_g$

satisfy $\delta_s \leq \delta$ with high probability provided

\[

n \geq C_\delta (s \log n)^{3/2}.

\]

This implies uniform and stable $s$-sparse recovery using $\ell_1$-minimization and other recovery algorithms under the

stated bound relating $n$ and $s$.
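To make the recovery claim concrete, here is a hedged sketch of one such greedy method, orthogonal matching pursuit, applied to Gabor measurements (illustrative only; the algorithm, dimensions, and normalization are our own choices, not the precise setting analyzed in the cited works):

```python
# Sketch: orthogonal matching pursuit (a simple greedy algorithm) applied
# to measurements y = Psi_g x taken with a random Gabor synthesis matrix.
import numpy as np

def omp(A, y, s):
    """Greedily select s columns of A most correlated with the residual."""
    support, residual = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    z = np.zeros(A.shape[1], dtype=complex)
    z[support] = coef
    return z

rng = np.random.default_rng(3)
n, s = 32, 3
g = np.exp(2j * np.pi * rng.random(n))          # random unimodular window
q = np.arange(n)
# columns M^ell T^k g, normalized to unit norm
Psi = np.column_stack([np.exp(2j * np.pi * ell * q / n) * np.roll(g, k)
                       for k in range(n) for ell in range(n)]) / np.sqrt(n)

x = np.zeros(n * n, dtype=complex)               # s-sparse target
idx = rng.choice(n * n, size=s, replace=False)
x[idx] = rng.standard_normal(s) + 1j * rng.standard_normal(s)
y = Psi @ x

z = omp(Psi, y, s)
```

For small $s$ the greedy iteration typically identifies the correct time-frequency shifts; the rigorous guarantees are, of course, those established for the algorithms and regimes analyzed in the cited works.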

We expect that the exponent $3/2$ above can be improved to $1$, but this seems to require more sophisticated

methods for estimating suprema of so-called Rademacher chaos processes than presently known \cite{ta05-2}.

We remark that the analysis

is very similar to the one for estimating restricted isometry constants of partial random circulant matrices \cite{rarotr10},

which also falls short of the conjectured optimal exponent $1$.

Potential applications of compressive sensing with time-frequency structured measurement matrices include radar,

wireless communications and sonar.

\medskip

{\bf Relation to the work of Hans Feichtinger.} Time-frequency analysis is a classical topic in the work of Hans Feichtinger,

to which he has contributed a large variety of important results, both in theory and in applications. The modulation spaces he introduced in the 1980s \cite{fe83-4}

are a crucial tool in analyzing how well a function can be

approximated by a sparse expansion in terms of a Gabor system \cite{gr01}. He has also contributed significantly

to finite dimensional time-frequency analysis \cite{feluwe07},

and corresponding computational aspects. These subjects are clearly at the

core of the topic of this talk. The usefulness of time-frequency structured measurement matrices shows that the work

of Hans Feichtinger continues to be very much alive in modern computational harmonic analysis, and it is interesting

to discover connections also to probability theory and Banach space geometry \cite{leta91}.

I would like to express my sincere thanks to Hans Feichtinger for supporting me over many years, in particular for being the host

of my three PostDoc years at NuHAG, where I started to work on compressive sensing, and

on connections to time-frequency analysis.