Straightforward choices of the underlying basis, such as piecewise multilinear finite elements on uniform tensor product grids, lead to the well-known matrix ill-conditioning of discretized operators. We demonstrate that for low-rank representations, the use of tensor structure can additionally lead to representation ill-conditioning, a new effect specific to computations in tensor networks.

We address this problem in the context of FE discretizations by a suitable representation of a BPX preconditioner. Our construction will be discussed in further detail in the talk of V. Kazeev (Part II).

This is joint work with Vladimir Kazeev.

One of these schemes is directional interpolation: the oscillatory kernel function is split into a plane wave and a remainder that is smooth within a cone around this wave's direction. Interpolating the remainder yields a degenerate approximation that can be used to obtain a low-rank approximation.
Although this approach is fast and robust, it may lead to very high storage requirements. This problem can be solved by algebraic recompression: combining orthogonal projections and rank-revealing factorizations, the ranks can be significantly reduced. The resulting hybrid compression algorithm inherits the robustness and accuracy of the analytic approximation and the almost optimal storage requirements of algebraic compression methods.
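The algebraic recompression step can be sketched in a few lines. This is a generic QR-plus-SVD recompression of a given low-rank factorization, shown here only to illustrate the principle, not the authors' exact algorithm:

```python
import numpy as np

def recompress(U, V, tol=1e-10):
    """Recompress a factorization A ~ U @ V.T to (numerically) minimal rank
    by orthogonal projection (QR) plus a rank-revealing SVD of the core."""
    Qu, Ru = np.linalg.qr(U)              # orthogonalize the column factors
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)   # SVD of the small k x k core
    k = int(np.sum(s > tol * s[0]))       # numerical rank of the core
    return Qu @ (W[:, :k] * s[:k]), Qv @ Zt[:k].T

# Redundant rank-6 factorization of a rank-3 matrix:
rng = np.random.default_rng(0)
B, C = rng.standard_normal((200, 3)), rng.standard_normal((200, 3))
U = np.hstack([B, B])
V = np.hstack([0.5 * C, 0.5 * C])         # U @ V.T equals B @ C.T
U2, V2 = recompress(U, V)
print(U2.shape[1])                        # recompressed rank: 3
```

The reduction is exact up to the chosen tolerance, which is why the hybrid scheme keeps the accuracy of the analytic approximation.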


The scheme begins with a collocation discretization in time, which turns a linear ODE into a linear system of algebraic equations. A low rank approximation to the solution of this system is computed via an alternating iteration, without solving the original large problem.

This introduces a certain approximation error; however, we show that conservation laws defined by linear functions of the solution can be preserved in the low-rank scheme to machine precision.

One example of such functions is the probability normalization in the Fokker-Planck or chemical master equations. Moreover, the low-rank representation allows cheap error estimation, and hence adaptation of time intervals, interleaved with adaptation of tensor ranks. Numerical experiments demonstrate that this fully adaptive, normalization-preserving solution can be faster and more accurate than stochastic simulation algorithms.
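As a toy illustration of preserving a linear invariant under low-rank truncation (the talk's scheme enforces the invariant within the alternating iteration itself; here a simple post-truncation correction stands in for it):

```python
import numpy as np

rng = np.random.default_rng(1)
# A discrete probability distribution on a 2-D grid, stored as a matrix
P = rng.random((64, 64))
P /= P.sum()                              # normalization: entries sum to 1

# Plain truncated SVD: the normalization drifts by the truncation error
U, s, Vt = np.linalg.svd(P)
r = 5
P_lr = (U[:, :r] * s[:r]) @ Vt[:r]
print(abs(P_lr.sum() - 1.0))              # O(truncation error), not eps

# Enforcing the linear invariant sum(P) = 1 exactly on the truncated
# approximation restores the conservation law to machine precision
P_cons = P_lr / P_lr.sum()
print(abs(P_cons.sum() - 1.0))            # ~ machine precision
```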

We discuss the solution of the respective equations by the sparse grid combination technique, by low-rank approximation, and also by $\mathcal{H}$-matrix techniques which can be seen as an adaptive low-rank approximation. Numerical results are presented to verify and quantify the proposed approach.

After a brief introduction to IgA, we describe a method for accelerating the assembly of the resulting stiffness matrices, a significant bottleneck in IgA. The procedure exploits the facts that (a) the IgA stiffness matrices have block-banded structure, and (b) they often have low Kronecker rank. Combined, these two properties allow us to reorder the nonzero entries of the stiffness matrix into a relatively small, dense matrix or tensor of low rank. A suitable black-box low-rank approximation algorithm is then applied to this matrix or tensor, which allows its fast and accurate computation. Numerical experiments demonstrate speedups up to several orders of magnitude over full Gauss quadrature in multiple examples.
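Property (b) can be illustrated generically with the Van Loan-Pitsianis rearrangement: reordering the entries of a Kronecker product yields a matrix whose ordinary rank equals the Kronecker rank. This is a minimal numpy sketch of that structural fact, not the IgA-specific reordering from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = np.kron(B, C)                     # (n^2 x n^2), generically full rank

# Van Loan-Pitsianis rearrangement: reorder A's entries so that the
# rearranged matrix has rank equal to the Kronecker rank of A (here 1)
R = A.reshape(n, n, n, n).transpose(0, 2, 1, 3).reshape(n * n, n * n)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(R))
```

A black-box low-rank method (e.g. adaptive cross approximation) applied to `R` then touches only a few rows and columns instead of all `n**4` entries.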

Very recently, first efforts have been made towards accelerating the solution of the linear systems arising in IgA problems by tensor methods as well. We describe here a novel approach based on greedily constructing suitable tensor product subspaces in which the solution of the linear system is approximated by means of Galerkin projection. This yields approximations to the solution as tensors in Tucker format. In each iteration, the tensor product subspace is enriched by a rank-one tensor which minimizes the residual and is computed using a modified alternating least squares (ALS) method. We demonstrate the performance of this algorithm on some numerical examples and give an outlook on future work.
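The rank-one minimization at the heart of such a greedy enrichment step can be sketched in the matrix case. Here plain ALS on a Frobenius-norm objective stands in for the modified ALS applied to the Galerkin residual:

```python
import numpy as np

def rank_one_als(B, iters=500):
    """Alternating least squares for the best rank-one approximation of B:
    with v fixed, the optimal u solves a least-squares problem, and vice versa."""
    v = np.ones(B.shape[1])
    for _ in range(iters):
        u = B @ v / (v @ v)               # minimize ||B - u v^T||_F over u
        v = B.T @ u / (u @ u)             # minimize ||B - u v^T||_F over v
    return u, v

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 20))
u, v = rank_one_als(B)
s1 = np.linalg.svd(B, compute_uv=False)[0]
# ALS converges to the dominant singular pair, so ||u|| * ||v|| -> s1
print(np.linalg.norm(u) * np.linalg.norm(v) - s1)
```

Each alternating update is a cheap linear solve, which is what makes the enrichment step affordable inside an outer greedy loop.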

We show that the issues of matrix ill-conditioning and representation ill-conditioning can be circumvented simultaneously by carefully combining classical ideas of multilevel preconditioning and more recent techniques for the explicit low-rank representation of matrices. Specifically, we construct an explicit tensor-structured representation of a BPX preconditioner, with ranks independent of the number of discretization levels. Combined with a carefully constructed representation of differentiation and $L_2$ projection, it allows us to avoid both matrix and representation ill-conditioning. Numerical tests, including problems with highly oscillatory coefficients, show that our result paves the way to reliable and efficient solvers that remain stable for mesh sizes near machine precision (up to $2^{50}\approx 10^{15}$ nodes in each dimension).
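The matrix ill-conditioning in question is visible already in the simplest 1-D setting. A minimal numpy check, assuming the standard tridiagonal stiffness matrix of piecewise-linear elements on a uniform grid:

```python
import numpy as np

def fe_cond(L):
    """Condition number of the 1-D piecewise-linear FE stiffness matrix
    on a uniform grid with 2**L - 1 interior nodes."""
    n = 2**L - 1
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.linalg.cond(A)

# grows like h^{-2}, i.e. roughly quadruples with each refinement level,
# which is exactly what BPX-type multilevel preconditioning removes
for L in range(3, 8):
    print(L, fe_cond(L))
```

At $L = 50$ levels the unpreconditioned condition number would be of order $4^{50}$, far beyond double precision, which is why a rank-bounded representation of the preconditioner itself is essential.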

This is joint work with Markus Bachmayr.

We discuss how tensor numerical methods based on different tensor approximation techniques apply to the solution of complicated multi-dimensional PDEs arising in computational quantum chemistry. We sketch new tensor methods for the calculation of electrostatic potentials of large bio-molecules (the Poisson-Boltzmann equation), and for the solution of (post-)Hartree-Fock eigenvalue problems for the estimation of excitation energies and densities of states for optical spectra of molecules (the Bethe-Salpeter equation).

The efficiency of the tensor approach is illustrated by numerical examples.

http://personal-homepages.mis.mpg.de/bokh; bokh@mis.mpg.de

Many gauge field theories can be formulated variationally using a multisymplectic Lagrangian formulation, and we will present a characterization of the exact generating functionals that generate the multisymplectic relation. By discretizing these using group-equivariant spacetime finite element spaces, we obtain methods that exhibit a discrete multimomentum conservation law. We will then briefly describe an approach for constructing group-equivariant interpolation spaces that take values in the space of Lorentzian metrics that can be efficiently computed using a generalized polar decomposition. The goal is to eventually apply this to the construction of variational discretizations of general relativity, which is a second-order gauge field theory whose configuration manifold is the space of Lorentzian metrics.

The assumption of low rank may be ill-justified. There are many natural instances where the object in question has high rank with respect to the classical notions of rank: matrix rank, tensor rank, and multilinear rank, the latter two being the most straightforward generalizations of the former. To remedy this, we show that one may vastly expand these classical notions of rank: given any undirected graph $G$, there is a notion of $G$-rank associated with $G$, which provides us with as many different kinds of rank as there are undirected graphs. In particular, the popular tensor network states in physics (e.g., MPS, TTNS, PEPS, CTNS) may be regarded as functions of a specific $G$-rank for various choices of $G$. Among other things, we will see that a function, matrix, or tensor may have very high matrix, tensor, or multilinear rank and yet very low $G$-rank for some $G$. In fact, the difference is in the orders of magnitude, and the gaps between $G$-ranks and these classical ranks are arbitrarily large for some important objects in computer science, mathematics, and physics. Furthermore, we show that there is a $G$ such that almost every tensor has $G$-rank exponentially lower than its rank or the dimension of its ambient space.

Joint work with C. Haberstich and G. Perrin.
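For the simplest choice, $G$ a path graph, the $G$-rank is the TT/MPS rank. A small generic illustration of the gap between classical rank and $G$-rank: the tensor $X[i_1,\dots,i_d] = i_1 + \dots + i_d$ has TT-rank 2 for every $d$, so $2 \cdot 2 \cdot n \cdot d$ parameters describe $n^d$ entries:

```python
import numpy as np

d, n = 8, 4
grid = np.arange(n, dtype=float)

def core(k):
    """TT (MPS) core of X[i1,...,id] = i1 + ... + id: each slice is the
    2x2 'partial sum' transfer matrix [[1, i], [0, 1]], so TT-rank is 2."""
    G = np.zeros((2, n, 2))
    G[0, :, 0] = 1.0        # state: sum not yet closed off
    G[0, :, 1] = grid       # add the current index to the running sum
    G[1, :, 1] = 1.0        # state: carry the partial sum along
    return G

# contract the train into the full n**d tensor and verify
T = np.array([1.0, 0.0])                  # left boundary vector
for k in range(d):
    T = np.tensordot(T, core(k), axes=([-1], [0]))
X = T[..., 1]                             # pick the 'sum' state on the right

idx = np.indices((n,) * d)
print(np.allclose(X, idx.sum(axis=0)))    # 128 parameters vs 4**8 entries
```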


In this talk, we will give an overview of some approaches towards the numerical treatment of PDEs with neural networks and study the two aspects above. We will recall some classical and some novel approximation-theoretic results and tie these results to PDE discretisation. Afterwards, providing a counterpoint, we analyse the structure of network spaces and deduce considerable obstacles for black-box solvers. In particular, we will identify a number of structural properties of the set of neural networks that render optimisation over this set especially challenging and sometimes impossible.

The talk is based on joint work with Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Felix Voigtlaender, and Mones Raslan.

The hierarchical Tucker tensor format, introduced by Hackbusch and coworkers, and its particular case, the tensor train (TT) format (Tyrtyshnikov), have been developed for high-dimensional problems. This parametrization has been known in quantum physics as tensor network states. The multilinear parametrization can be applied to solve high-dimensional PDEs in a variational framework restricted to low-rank tensor classes. For nonlinear or more complex problems, this direct approach becomes more difficult due to the required tensor product approximations of the operators.

In Variational Monte Carlo we replace our objective functional by an empirical functional, in a similar way as for risk minimization or loss functions in statistical learning. For the optimization we need only computable gradients at sample points. This works also for deep neural networks using backpropagation. At a price, we can show only convergence in probability, i.e. error estimates hold with high probability. The analysis is carried out in the spirit of Cucker and Smale, and first estimates of covering numbers for hierarchical tensors are available.

This approach can be carried out immediately for parametric problems in uncertainty quantification. It can also be formally extended to approximate the metastable eigenfunctions of the corresponding backward Kolmogorov operator by numerical approximation of the transfer operator (also known as the Koopman operator), and, vice versa, of the Fokker-Planck operator. The latter is joint work with F. Nüske and F. Noé from FU Berlin.
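A minimal numerical illustration of the empirical-functional idea, assuming rank-one (separable) factors so that the exact functional is computable for comparison; this is a sketch of the principle, not the scheme from the talk:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 10, 6                             # 10**6 grid points, never formed densely
U = [rng.random(n) for _ in range(d)]    # rank-one model u
F = [rng.random(n) for _ in range(d)]    # rank-one target f

def point(factors, idx):
    """Evaluate a rank-one (separable) function at one grid point."""
    return np.prod([factors[k][idx[k]] for k in range(d)])

def sep_mean(Xf, Yf):
    """Grid mean of x*y for separable x, y: a product of 1-D means."""
    return np.prod([np.mean(Xf[k] * Yf[k]) for k in range(d)])

# exact objective E[(u - f)^2], computable here thanks to separability
exact = sep_mean(U, U) - 2 * sep_mean(U, F) + sep_mean(F, F)

# empirical objective: mean over N uniformly sampled grid points; only
# point evaluations are needed, as in Variational Monte Carlo
N = 20000
samples = rng.integers(n, size=(N, d))
emp = np.mean([(point(U, i) - point(F, i)) ** 2 for i in samples])
print(exact, emp)                        # agree up to O(1/sqrt(N)) w.h.p.
```

Gradients of the empirical objective with respect to the factors are equally cheap at sample points, which is all the optimization requires.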
