NOMADS Seminars

We organize seminars in Numerical ODEs, Matrix Analysis and Data Science.

The seminars focus on various topics in numerical analysis, numerical linear algebra, scientific computing and matrix theory with particular emphasis on their application in control theory, network science, data mining and machine learning.

Questions or comments about the seminars should be sent to Nicola Guglielmi or Francesco Tudisco.

Speaker Title, Abstract & Other Info Date
Paolo Denti
University of Cape Town, South Africa
Pharmacometrics: a quantitative tool for pharmacological research
July 1, 2024
Christian Lubich
University of Tübingen, Germany
Regularized dynamical parametric approximation
This talk is about the numerical approximation of evolution equations by nonlinear parametrizations $u(t)=\Phi(q(t))$ with time-dependent parameters $q(t)$, which are to be determined in the computation. The motivation comes from approximations in quantum dynamics by multiple Gaussians and approximations of various dynamical problems by tensor networks and neural networks. In all these cases, the parametrization is typically irregular: the derivative $\Phi'(q)$ can have arbitrarily small singular values and may have varying rank. We derive approximation results for a regularized approach in the time-continuous case as well as in time-discretized cases. With a suitable choice of the regularization parameter and the time stepsize, the approach can be successfully applied in irregular situations, even though it runs counter to the basic principle in numerical analysis of avoiding ill-posed subproblems when aiming for a stable algorithm. Numerical experiments with sums of Gaussians for approximating quantum dynamics and with neural networks for approximating the flow map of a system of ordinary differential equations illustrate and complement the theoretical results. The talk is based on joint work with Caroline Lasser, Jörg Nick and Michael Feischl.
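As an illustration of the regularized approach (in a toy setting that is not taken from the talk), the time-continuous parameter velocities can be obtained from a Tikhonov-regularized least-squares problem. The parametrization, its Jacobian, and all names below are hypothetical; a minimal NumPy sketch:

```python
import numpy as np

# Toy setting (an illustrative assumption): approximate the flow of u' = -u
# by a sum of two Gaussians u(x; q) = a1*exp(-(x-c1)^2) + a2*exp(-(x-c2)^2),
# with parameters q = (a1, c1, a2, c2).
x = np.linspace(-4.0, 4.0, 200)

def phi(q):
    a1, c1, a2, c2 = q
    return a1*np.exp(-(x - c1)**2) + a2*np.exp(-(x - c2)**2)

def jac(q):
    """Columns are the partial derivatives of phi with respect to q."""
    a1, c1, a2, c2 = q
    g1, g2 = np.exp(-(x - c1)**2), np.exp(-(x - c2)**2)
    return np.column_stack([g1, 2*a1*(x - c1)*g1, g2, 2*a2*(x - c2)*g2])

def qdot_regularized(q, f, eps):
    """Regularized least squares for the parameter velocities:
    minimize ||Phi'(q) qdot - f(Phi(q))||^2 + eps^2 ||qdot||^2,
    which stays solvable even when Phi'(q) has tiny singular values."""
    J, r = jac(q), f(phi(q))
    return np.linalg.solve(J.T @ J + eps**2*np.eye(J.shape[1]), J.T @ r)

# One explicit Euler step with the regularized velocity, for u' = -u.
q0 = np.array([1.0, -1.0, 0.5, 1.0])
dt, eps = 0.01, 1e-3
q1 = q0 + dt*qdot_regularized(q0, lambda u: -u, eps)
```

The regularization term keeps the normal-equations matrix invertible even when two Gaussian centers nearly coincide and the Jacobian becomes rank-deficient.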
June 6, 2024
Paolo Freguglia
University of L'Aquila, Italy
On the corporate organization. A proposal
Inspired by the classic conceptions of “economic equilibrium” related to the Arrow (1971) and Debreu (1959) models, we have constructed a mathematical paradigm. We start from a possible “economic game” which simulates the relationship between companies and customers, involving company sales, costs and value-assets. We then establish an equilibrium which can be seen as a weighted average of a random variable, and arrive at a basic iterable algebraic equation that relates sales to their estimated forecast. An analysis of the evolution of the simple deviations between sales and estimated forecasts is also proposed. Afterwards, we take into account the actual results of a manufacturing company (with 130 clerks and workers), limiting our analysis to the evolution over 40 months (the time unit is the month) of a single product relating to a single customer. In conclusion, we give some ideas about the notion of “sales trend”.
May 28, 2024
Santolo Leveque
Scuola Normale Superiore of Pisa, Italy
Parallel-in-Time Solver for the All-at-Once Runge-Kutta Discretization
Time-dependent PDEs arise quite often in many scientific areas, such as mechanics, biology, economics, or chemistry, just to name a few. Of late, researchers have devoted their efforts to devising parallel-in-time methods for the numerical solution of time-dependent PDEs, adding a new dimension of parallelism and allowing the solution process to be sped up on modern supercomputers. In this talk, we present a parallel-in-time preconditioner for the all-at-once linear system arising when a Runge--Kutta method is employed in time. The resulting system is solved iteratively for the numerical solution and for the stages of the method. Applying the proposed preconditioner involves a block-diagonal solve for the stages and a Schur complement whose application again requires solving systems for the stages. A range of numerical experiments validates the robustness of the preconditioner with respect to the discretization parameters and to the number of stages, showing the speed-up achieved on a parallel architecture. This is joint work with Luca Bergamaschi (University of Padua), Angeles Martinez (University of Trieste), and John W. Pearson (University of Edinburgh).
May 28, 2024
Desmond Higham
University of Edinburgh, United Kingdom
A Numerical Analysis Perspective on the Stability of AI Systems
Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning tools. These algorithms raise issues regarding safety, reliability and interpretability in artificial intelligence (AI); especially in high risk settings. Ideas from numerical analysis play key roles in this landscape. From a practical perspective, there has been a war of escalation between those developing attack and defence strategies. At a more theoretical level, researchers have also studied bigger picture questions concerning the existence and computability of successful attacks. I will present examples of attack algorithms in image classification and optical character recognition. I will also outline recent results on the overarching question of whether, under reasonable assumptions, it is inevitable that AI tools will be vulnerable to attack.
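A minimal sketch of one classical attack of this kind, the fast gradient sign method, applied to a toy logistic classifier (the model, names and parameters below are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def sigmoid(z):
    return 1.0/(1.0 + np.exp(-z))

def loss(w, x, y):
    """Numerically stable binary cross-entropy with logit z = w . x."""
    z = w @ x
    return max(z, 0.0) - z*y + np.log1p(np.exp(-abs(z)))

def fgsm(w, x, y, eps):
    """Fast gradient sign method: perturb x by eps in the sign of the
    input-gradient of the loss, which for this model is (p - y)*w."""
    g = (sigmoid(w @ x) - y)*w
    return x + eps*np.sign(g)

rng = np.random.default_rng(0)
w = rng.normal(size=20)      # a "trained" toy logistic classifier
x = rng.normal(size=20)      # a clean input
y = 1.0                      # its true label
x_adv = fgsm(w, x, y, eps=0.05)
```

For this one-layer model the loss increases monotonically along the attack direction, so even a small eps degrades the prediction; this is the instability mechanism the talk examines in far richer settings.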
May 8, 2024
Catherine Higham
University of Glasgow, United Kingdom
Quantum imaging with deep learning algorithms
Quantum imaging technology offers time-efficient, lightweight, less-invasive and low-cost solutions in many different contexts including autonomous driving, security and health. Enhancing this technology with deep learning addresses challenges arising in experimental optimisation, inverse and classification/regression tasks and improves overall performance. I will talk about the deep learning algorithms I have developed for several projects. These include real-time scene and depth reconstruction for single-photon sensor cameras and LiDAR systems, and high-speed imaging using a Micro LED-on-CMOS light projector.
May 8, 2024
Mattia Manucci
University of Stuttgart, Germany
Approximating the smallest eigenvalue of large Hermitian matrices that depend on parameters
April 23, 2024
Emre Mengi
Koc University, Turkey
Large-scale minimization of the pseudospectral abscissa
The minimization of the spectral abscissa of a matrix dependent on parameters has drawn interest in the last couple of decades. The problem is motivated especially by stability considerations for the associated linear control system. The major difficulty in the minimization of the spectral abscissa is that its dependence on the parameters is non-Lipschitz. A remedy is, for a prescribed epsilon > 0, to minimize the epsilon-pseudospectral abscissa instead: the real part of the rightmost point in the set consisting of the eigenvalues of all matrices at distance at most epsilon in the spectral norm. We present a subspace framework to minimize the epsilon-pseudospectral abscissa of a large matrix that depends analytically on parameters. At every subspace iteration, a one-sided subspace restriction of the parameter-dependent matrix yields a small rectangular pseudospectral abscissa minimization problem. We expand the restriction subspace based on the minimizer of this small problem. We prove that, assuming the global minimizers of the small problems are retrieved, the proposed subspace framework converges to the global minimizer of the original large-scale pseudospectral abscissa minimization problem in the infinite-dimensional setting. Our theoretical findings are illustrated on real large-scale examples concerning the stabilization by static output feedback of benchmark linear control systems.
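For intuition only, the quantity being minimized can be evaluated directly from its definition: a point z belongs to the epsilon-pseudospectrum exactly when the smallest singular value of zI - A is at most epsilon. A brute-force grid sketch (far less efficient than the subspace framework of the talk; all names and the test matrix are illustrative):

```python
import numpy as np

def pseudospectral_abscissa_grid(A, eps, re, im):
    """Brute-force estimate of the eps-pseudospectral abscissa: the
    largest Re(z) over grid points z with sigma_min(z*I - A) <= eps.
    For intuition only; practical methods avoid any such grid."""
    n = A.shape[0]
    best = -np.inf
    for a in re:
        for b in im:
            z = a + 1j*b
            smin = np.linalg.svd(z*np.eye(n) - A, compute_uv=False)[-1]
            if smin <= eps:
                best = max(best, a)
    return best

# Normal test matrix: sigma_min(z*I - A) is the distance of z to the
# spectrum, so the exact answer here is -1 + eps = -0.9.
A = np.diag([-1.0, -2.0])
re = np.linspace(-1.2, -0.7, 101)   # grid spacing 0.005
im = np.linspace(-0.2, 0.2, 41)
alpha = pseudospectral_abscissa_grid(A, eps=0.1, re=re, im=im)
```

The grid estimate is accurate up to the grid spacing; the non-Lipschitz behaviour of the plain spectral abscissa is smoothed out by taking eps > 0.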
April 23, 2024
Ernst Hairer
Université de Genève, Switzerland
Leapfrog methods for relativistic charged-particle dynamics
A basic leapfrog integrator and its variational variant are proposed and studied for the numerical integration of the equations of motion of relativistic charged particles in an electromagnetic field. The methods are based on a four-dimensional formulation of the equations of motion. Structure-preserving properties of the numerical methods are analysed, in particular conservation and long-time near-conservation of energy and mass shell as well as preservation of volume in phase space. In the non-relativistic limit, the considered methods reduce to the Boris algorithm for non-relativistic charged-particle dynamics and its variational variant. This talk is based on joint work with Christian Lubich.
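For reference, the classical (non-relativistic) Boris algorithm that these methods reduce to can be sketched in a few lines; the example below, with a uniform magnetic field and no electric field, is an illustrative assumption, not taken from the talk:

```python
import numpy as np

def boris_push(x, v, E, B, dt, qm=1.0):
    """One step of the classical (non-relativistic) Boris scheme:
    half electric kick, norm-preserving magnetic rotation, half
    electric kick, then a position drift."""
    v_minus = v + 0.5*dt*qm*E
    t = 0.5*dt*qm*B                      # rotation vector
    s = 2.0*t/(1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5*dt*qm*E
    return x + dt*v_new, v_new

# Uniform magnetic field along z, no electric field: pure gyration,
# and the Boris rotation preserves the speed exactly.
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
speeds = []
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=0.1)
    speeds.append(np.linalg.norm(v))
```

With E = 0 the magnetic update is an exact rotation, so the speed is conserved to machine precision over arbitrarily many steps, one of the structure-preserving properties the talk generalizes to the relativistic setting.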
April 18, 2024
Matthias Voigt
FernUni, Switzerland
Structure-Preserving Model Order Reduction of Dynamical Systems
Model order reduction is a field of mathematics that deals with the complexity reduction of mathematical models in order to reduce the computational costs of tasks such as numerical simulation, optimization, or control. In this talk, first an overview of the field will be given and two of the classical systems theoretic reduction methods (balanced truncation and IRKA) will be reviewed. The second part of the presentation will focus on structure-preserving reduction methods for structured models. This includes a variant of balanced truncation for symmetric second-order systems and a rational fitting method for the class of port-Hamiltonian systems.
January 17, 2024
Alessandro Valerio
University of Trondheim, Norway
Neuroscience - AI
In this seminar, Alessandro Valerio, a graduate in neuroscience, explores the field of neuroscience and its potential intersections with artificial intelligence. Alessandro has an educational background in biology and neuroscience from the University of L’Aquila and Trieste, complemented by research experiences at SISSA in Trieste and at the Kavli Institute for System Neuroscience in Trondheim. The seminar provides an overview of the brain's structure, covering aspects from the macroscopic to the molecular level. A significant focus is given to the role of electrophysiological methods in neuroscience. Alessandro details both in vitro and in vivo techniques, which are crucial in understanding the intricacies of neural dynamics. A part of the seminar is dedicated to Alessandro's research journey. His work at SISSA centered on cellular neurobiology and in vitro electrophysiology, particularly in studying rhythmogenesis mechanisms. His journey continued in Trondheim, where he engaged in in vivo electrophysiology and multisensory integration studies at the Kavli Institute for System Neuroscience. The seminar concludes with Alessandro presenting his research goals. He aims to leverage AI advancements in neuroscience research and reciprocally apply neuroscience insights to advance AI development. This includes a focus on utilizing deep learning for EEG signal classification and exploring the potential of bio-inspired neural networks.
January 10, 2024
Lauri Nyman
Aalto University, Finland
Nearest singular pencil via Riemannian optimization
The problem of finding the nearest singular pencil to a given regular, complex or real, n × n matrix pencil A + λB is a long-standing problem in Numerical Linear Algebra. The problem has turned out to be very difficult and, so far, only a few numerical algorithms are available in the literature for its solution, and they may be very expensive from a computational point of view. In this talk, we introduce a new algorithm for solving this problem based on Riemannian optimization, which looks for the closest singular pencil via the minimization of an objective function over the Cartesian product of the unitary group with itself. Moreover, we present a collection of numerical experiments showing that the new algorithm can deal effectively with pencils of larger sizes than those considered by previous algorithms, and finds minimizers of at least the same quality as those found by previous algorithms.
January 10, 2024
Mattia Manucci
University of Stuttgart, Germany
Model Order Reduction for switched Differential Algebraic Equations
In this presentation, we will discuss a projection-based Model Order Reduction (MOR) for large-scale systems of switched Differential Algebraic Equations (sDAEs). The main idea relies on exploiting the Quasi-Weierstrass form of each mode of the sDAEs system. Then, we can show that, under certain reasonable assumptions, the output of the sDAEs system is equivalent to the output of a switched system of ordinary differential equations (ODEs) with state jumps between the modes. We present how to construct a reduced system by computing the controllability and observability Gramians associated with the solution of generalized Lyapunov equations for bilinear systems. Finally, we discuss how to efficiently compute an approximation of the solution of such generalized Lyapunov equations, with error control, when the associated matrices are sparse and large. Numerical results are presented to showcase the efficiency and effectiveness of the developed MOR method.
December 20, 2023

Christian Lubich
Universität Tübingen, Germany
Time integration of tree tensor networks
First I report on recent numerical experiments with time-dependent tree tensor network algorithms for the approximation of quantum spin systems. I will then describe the basics in the design of time integration methods that are robust to the usual presence of small singular values, that have good structure-preserving properties (norm, energy conservation or dissipation), and that allow for rank (= bond dimension) adaptivity and also have some parallelism. This discussion of basic concepts will be done for the smallest possible type of tensor network differential equations, namely low-rank matrix differential equations. Once this simplest case is understood, there is a systematic path to the extension of the integrators and their favourable properties to general tree tensor networks.
June 8, 2023
Simone Brugiapaglia
Concordia University, Canada
The mathematical foundations of deep learning: from rating impossibility to practical existence theorems
Deep learning is having a profound impact on industry and scientific research. Yet, while this paradigm continues to show impressive performance in a wide variety of applications, its mathematical foundations are far from being well established. In this talk, I will present recent developments in this area by illustrating two case studies.
First, motivated by applications in cognitive science, I will present 'rating impossibility' theorems. They identify frameworks where deep learning is provably unable to generalize outside the training set for the seemingly simple task of learning identity effects, i.e. classifying whether pairs of objects are identical or not.
Second, motivated by applications in scientific computing, I will illustrate 'practical existence' theorems. They combine universal approximation results for deep neural networks with compressed sensing and high-dimensional polynomial approximation theory. As a result, they yield sufficient conditions on the network architecture, the training strategy, and the number of samples able to guarantee accurate approximation of smooth functions of many variables.
Time permitting, I will also discuss work in progress and open questions.
June 1, 2023
Ernst Hairer
Université de Genève, Switzerland
Large-stepsize integrators for charged particles in a strong magnetic field
This talk considers the numerical treatment of the differential equation that describes the motion of electric particles in a strong magnetic field. A standard integrator is the Boris algorithm which, for small stepsizes, can be analysed by classical techniques. For a strong magnetic field the solution is highly oscillatory and the numerical integration is more challenging. New modifications of the Boris algorithm are discussed, and their accuracy and long-time behaviour are studied by means of modulated Fourier expansions. Emphasis is put on the situation where the stepsize is proportional to (or larger than) the wavelength of the oscillations.
The presented results have been obtained in collaboration with Christian Lubich.
April 20, 2023
Mauro Piccioni
Sapienza Università di Roma, Italy
Estimating the interaction graph for a class of Galves-Locherbach models (Part 2)
We explain how to estimate the presence and the nature of the interactions (either excitatory or inhibitory) in a multivariate point process modeling a network of spiking neurons (Galves and Locherbach, JSP 2013). In the first talk a Markovian model is analyzed whereas in the second delayed inhibitory interactions have to be detected.
April 19, 2023
Emilio De Santis
Sapienza Università di Roma, Italy
Estimating the interaction graph for a class of Galves-Locherbach models (Part 1)
We explain how to estimate the presence and the nature of the interactions (either excitatory or inhibitory) in a multivariate point process modeling a network of spiking neurons (Galves and Locherbach, JSP 2013). In the first talk a Markovian model is analyzed whereas in the second delayed inhibitory interactions have to be detected.
April 19, 2023
Kai Bergermann
TU Chemnitz, Germany
Adaptive rational Krylov methods for exponential Runge--Kutta integrators
We consider the solution of large stiff systems of ordinary differential equations with explicit exponential Runge--Kutta integrators. These problems arise from semi-discretized semi-linear parabolic partial differential equations on continuous domains or on inherently discrete graph domains. A series of results reduces the requirement of computing linear combinations of $\varphi$-functions in exponential integrators to the approximation of the action of a smaller number of matrix exponentials on certain vectors. State-of-the-art computational methods use polynomial Krylov subspaces of adaptive size for this task. They have the drawback that the required Krylov subspace iteration numbers to obtain a desired tolerance increase drastically with the spectral radius of the discrete linear differential operator and, hence, with the problem size. We present an approach that leverages rational Krylov subspace methods promising superior approximation qualities. We prove a novel a-posteriori error estimate of rational Krylov approximations to the action of the matrix exponential on vectors for single time points, which allows for an adaptive approach similar to existing polynomial Krylov techniques. We discuss pole selection and the efficient solution of the arising sequences of shifted linear systems by direct and preconditioned iterative solvers. Numerical experiments show that our method outperforms the state of the art for sufficiently large spectral radii of the discrete linear differential operators. The key to this is the approximately constant number of rational Krylov iterations, which enables a near-linear scaling of the runtime with respect to the problem size.
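The polynomial Krylov baseline mentioned above can be sketched as follows: project A onto the Arnoldi subspace K_m(A, v) and exponentiate the small Hessenberg matrix. This is a minimal illustration only (the adaptive rational Krylov method of the talk is not reproduced here); the helper names and the test problem are assumptions:

```python
import numpy as np

def small_expm(M):
    """Dense exponential of a small matrix via eigendecomposition
    (assumes M is diagonalizable; use scipy.linalg.expm in real codes)."""
    d, S = np.linalg.eig(M)
    return (S @ np.diag(np.exp(d)) @ np.linalg.inv(S)).real

def expm_krylov(A, v, m):
    """Polynomial Krylov (Arnoldi) approximation of exp(A) @ v:
    exp(A) v  ~  ||v|| * V_m exp(H_m) e_1."""
    n = v.size
    V = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v/beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j]*V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if j + 1 < m:
            if H[j + 1, j] < 1e-12:       # happy breakdown
                m = j + 1
                break
            V[:, j + 1] = w/H[j + 1, j]
    return beta * V[:, :m] @ small_expm(H[:m, :m])[:, 0]

# Stiff test problem: 1D Laplacian (a semi-discretized heat equation).
n = 100
L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(0)
v = rng.normal(size=n)
approx = expm_krylov(-L, v, m=30)
```

For spectra that grow with the problem size, the number of Arnoldi steps m needed for a fixed tolerance grows as well; this is precisely the drawback the rational Krylov approach of the talk addresses.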
April 18, 2023
Gianluca Ceruti
EPFL, Switzerland
A rank-adaptive robust BUG for dynamical low-rank approximation
In the present contribution, a rank-adaptive integrator for the dynamical low-rank approximation of matrix differential equations is introduced. First, the dynamical low-rank approximation together with the projector-splitting numerical integrator is summarised. Then, recent developments in the field are presented. The so-called fixed-rank unconventional BUG integrator is introduced and analysed. Next, a simple but extremely effective modification allowing for an adaptive choice of the rank, using subspaces generated by the integrator itself, is illustrated. It is shown that the novel BUG adaptive low-rank integrator retains the exactness, robustness and symmetry-preserving properties of the previously proposed fixed-rank integrators. Beyond that, up to the truncation tolerance, the rank-adaptive integrator preserves the norm when the differential equation does, it preserves the energy for Schroedinger equations and Hamiltonian systems, and it preserves the monotonic decrease in gradient flows. This seminar will focus on the numerical error analysis of the adaptive low-rank integrator. Furthermore, the first theoretically solid application to low-rank training in machine learning will be discussed.
The present contribution is based upon joint works with Ch. Lubich, J. Kusch, and F. Tudisco. The latest developments in the field of machine learning have been possible only thanks to the great contributions of PhD candidates E. Zangrando (GSSI) and S. Schotthoefer (KIT).
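The rank-adaptivity rests on truncating an augmented representation back to a tolerance via the SVD. A minimal sketch of just this truncation step (the integrator itself is not reproduced; all names and the test matrix are illustrative):

```python
import numpy as np

def truncate_to_tolerance(Y, tol):
    """Rank-adaptive truncation: keep the smallest rank r such that the
    discarded singular values satisfy sqrt(sum_{i>r} sigma_i^2) <= tol,
    i.e. the truncation error in the Frobenius norm is below tol."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    tail = np.sqrt(np.cumsum(s[::-1]**2))[::-1]   # tail[i] = ||(s_i, s_{i+1}, ...)||_2
    keep = np.nonzero(tail <= tol)[0]
    r = int(keep[0]) if keep.size else s.size
    return U[:, :r], s[:r], Vt[:r], r

# Rank-3 matrix plus tiny noise: the tolerance decides the rank.
rng = np.random.default_rng(1)
Y = np.zeros((10, 10))
Y[0, 0], Y[1, 1], Y[2, 2] = 1.0, 0.5, 0.1
Y = Y + 1e-10*rng.normal(size=(10, 10))
U, s, Vt, r = truncate_to_tolerance(Y, tol=1e-6)
```

In the adaptive BUG integrator a step of this kind, applied to bases augmented by the integrator itself, is what lets the rank grow and shrink while keeping the truncation error below the prescribed tolerance.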
March 16, 2023
Alfio Quarteroni
Politecnico di Milano, Italy, and EPFL, Switzerland
A Mathematical Heart
In this presentation I will report on the most recent achievements of the iHEART project, aimed at developing a comprehensive accurate mathematical and numerical model for the simulation of the complete cardiac function.
December 15, 2022

Emre Mengi
Koc University, Turkey
Large-Scale Estimation of the Dominant Poles of a Transfer Function
The dominant poles of the transfer function of a descriptor system provide important insight into the behavior of the system. They indicate the parts of the imaginary axis where the transfer function exhibits large norm. Moreover, the dominant poles and corresponding eigenvectors can be put in use to form a reduced-order approximation to the system. In the talk, I will describe a subspace framework to compute a prescribed number of dominant poles of a large-scale descriptor system. The framework applies Petrov-Galerkin projections to the original system, then computes the dominant poles of the projected small-scale system, for instance by the QZ algorithm, and expands the subspaces so that the projected system after the subspace expansion interpolates the original system at these dominant poles. I will explain why the subspace framework converges at a quadratic rate, and report numerical results illustrating the rapid convergence, and accuracy of the approach.
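For a small dense state-space system (E = I, a simplifying assumption), the idea of dominance can be illustrated directly from the partial-fraction expansion of the transfer function; the ranking |residue| / |Re(pole)| below is one common choice, and all names are hypothetical:

```python
import numpy as np

def dominant_poles(A, b, c, k=1):
    """Rank the poles of H(s) = c^T (s*I - A)^{-1} b by the dominance
    measure |residue| / |Re(pole)| (one common choice). Dense and
    diagonalizable A only; large-scale descriptor systems need the
    kind of subspace framework described in the talk."""
    lam, V = np.linalg.eig(A)
    # residues of the partial-fraction expansion H(s) = sum_i r_i/(s - lam_i)
    r = (c @ V) * np.linalg.solve(V, b)
    order = np.argsort(np.abs(r)/np.abs(lam.real))[::-1]
    return lam[order[:k]], r[order[:k]]

# The pole closest to the imaginary axis with a non-negligible residue
# dominates the frequency response.
A = np.diag([-0.01, -1.0, -5.0])
b = np.ones(3)
c = np.ones(3)
poles, residues = dominant_poles(A, b, c, k=1)
```

Keeping only the terms of the expansion belonging to the most dominant poles is exactly the modal reduced-order approximation mentioned in the abstract.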
September 19, 2022
Stefano Massei
University of Pisa, Italy
Improved parallel-in-time integration via low-rank updates and interpolation
This work is concerned with linear matrix equations that arise from the space-time discretization of time-dependent linear partial differential equations (PDEs). Such matrix equations have been considered, for example, in the context of parallel-in-time integration leading to a class of algorithms called ParaDiag. We develop and analyze two novel approaches for the numerical solution of such equations. Our first approach is based on the observation that the modification of these equations performed by ParaDiag in order to solve them in parallel has low rank. Building upon previous work on low-rank updates of matrix equations, this allows us to make use of tensorized Krylov subspace methods to account for the modification. Our second approach is based on interpolating the solution of the matrix equation from the solutions of several modifications. Both approaches avoid the use of iterative refinement needed by ParaDiag and related space-time approaches in order to attain good accuracy. In turn, our new approaches have the potential to outperform, sometimes significantly, existing approaches. This potential is demonstrated for several different types of PDEs.
September 14, 2022
Massimiliano Fasi
Durham University, UK
CPFloat: A C Library for Emulating Low-Precision Arithmetic
Low-precision floating-point arithmetic can be simulated via software by executing each arithmetic operation in hardware and rounding the result to the desired number of significant bits. For IEEE-compliant formats, rounding requires only standard mathematical library functions, but handling subnormals, underflow, and overflow demands special attention, and numerical errors can cause mathematically correct formulae to behave incorrectly in finite arithmetic. Moreover, the ensuing algorithms are not necessarily efficient, as the library functions these techniques build upon are typically designed to handle a broad range of cases and may not be optimized for the specific needs of floating-point rounding algorithms. CPFloat is a C library that offers efficient routines for rounding arrays of binary32 and binary64 numbers to lower precision. The software exploits the bit level representation of the underlying formats and performs only low-level bit manipulation and integer arithmetic, without relying on costly library calls. In numerical experiments the new techniques bring a considerable speedup (typically one order of magnitude or more) over existing alternatives in C, C++, and MATLAB. To the best of our knowledge, CPFloat is currently the most efficient and complete library for experimenting with custom low-precision floating-point arithmetic available in any language.
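A sketch of the basic idea in pure Python/NumPy (an illustrative analogue, not CPFloat's implementation; it ignores the exponent-range effects the library handles):

```python
import numpy as np

def round_significand(x, p):
    """Round binary64 values to p significant bits (round-to-nearest,
    ties-to-even). Unlike CPFloat, this toy version ignores
    exponent-range effects such as overflow, underflow, and subnormals."""
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)                    # x = m * 2**e with 0.5 <= |m| < 1
    return np.ldexp(np.round(m * 2.0**p) / 2.0**p, e)

# binary16 carries 11 significant bits; inside its normal range this
# matches an actual conversion to half precision.
vals = np.array([1/3, 2/3, 0.1])
fp16_like = round_significand(vals, p=11)
```

CPFloat performs the analogous operation by direct bit manipulation of the binary64 representation, which is why it avoids the cost of the mathematical-library calls this sketch relies on.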
June 24, 2022
Stefano Serra-Capizzano
University of Insubria, Italy
The GLT class as a Generalized Fourier Analysis and applications
Recently, the class of Generalized Locally Toeplitz (GLT) sequences has been introduced as a generalization both of classical Toeplitz sequences and of variable-coefficient differential operators and, for every sequence of the class, it has been demonstrated that it is possible to give a rigorous description of the asymptotic spectrum in terms of a function (the symbol) that can be easily identified. This generalizes the notion of a symbol for differential operators, also of fractional type (discrete and continuous), and for Toeplitz sequences, for which the symbol is identified through the Fourier coefficients and is related to classical Fourier Analysis. The GLT class has nice algebraic properties: it has been proven to be stable under linear combinations, products, and inversion when the sequence which is inverted has a sparsely vanishing symbol (i.e., a symbol which vanishes at most on a set of zero Lebesgue measure). Furthermore, the GLT class virtually includes any approximation by local methods (Finite Differences, Finite Elements, Isogeometric Analysis) of partial differential equations (PDEs) and, based on this, we demonstrate that our results on GLT sequences can be used in a PDE setting in various directions.
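A minimal illustration of the symbol idea in the simplest Toeplitz case (chosen for exposition, not taken from the talk): the eigenvalues of the discretized 1D Laplacian behave as samples of its generating symbol.

```python
import numpy as np

# The symbol f(theta) = 2 - 2*cos(theta) generates the Toeplitz sequence
# T_n(f) = tridiag(-1, 2, -1), i.e. the discretized 1D Laplacian.
def toeplitz_laplacian(n):
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 200
eigs = np.sort(np.linalg.eigvalsh(toeplitz_laplacian(n)))
# The symbol describes the asymptotic spectrum: the eigenvalues are
# (in this special case even exactly) samples of f on a uniform grid
# in (0, pi).
theta = np.arange(1, n + 1)*np.pi/(n + 1)
samples = 2 - 2*np.cos(theta)
```

The GLT theory extends this spectral description far beyond pure Toeplitz matrices, to variable-coefficient and locally Toeplitz structures arising from PDE discretizations.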
May 26, 2022
Paolo Cifani
Gran Sasso Science Institute, Italy
Geometric integration of Lie-Poisson flows on the sphere
In this seminar I will touch upon recent developments in structure-preserving (geometric) integration of Euler’s equations for two-dimensional incompressible flows. It has been known for half a century that the dynamics of incompressible ideal fluids in two dimensions can be understood as an evolution equation on the cotangent bundle of the infinite-dimensional Lie group of symplectic diffeomorphisms. In particular, the vorticity equation constitutes a Lie-Poisson system characterized by an infinite number of first integrals, i.e. the integrated powers of vorticity. This set of constraints, absent in three dimensions, has profound effects on the energy transfer mechanisms across scales of motion. Yet, the construction of a numerical system which preserves this rich Poisson structure has been elusive. Most attempts either fail to fully preserve the geometric structure or have a high computational complexity. Here, I will show that, thanks to our recent advances, it is possible to design a geometric integrator which embeds this fundamental principle of the continuum into the discrete system at a modest computational cost. The construction of such a scheme, the main numerical algorithms and their parallelisation on modern supercomputing facilities will be discussed. Finally, an application to the spectrum of homogeneous two-dimensional turbulence will be illustrated.
May 24, 2022
Simone Camarri
University of Pisa, Italy
Adjoint-based passive control of hydrodynamic instabilities
A large number of research papers in the literature have been dedicated to the use of adjoint-based sensitivity and global stability analyses for both characterising and controlling instabilities in fluid mechanics. Such controls, which are mainly passive, are designed for stabilising linearly unstable configurations. The design strategy based on global stability analysis can be rigorously applied to relatively simple flows in laminar regime. More complex configurations can also be rigorously treated, as for instance cases in which the flow to be controlled is time periodic. Moreover, a large interest exists in the application of the same methods for the control of coherent large-scale flow structures in turbulent flows as, for instance, the quasi-periodic shedding of vortices in turbulent wakes. This is possible by postulating the marginal stability of mean flows, which is shown to apply for several cases of interest. In this seminar a review of the methods based on global stability and sensitivity analyses for the design and/or analysis of passive controls will be presented. Moreover, configurations of increasing complexity will be considered, ranging from laminar steady flows to turbulent flows.
May 18, 2022
Lev Lokutsievskiy
Steklov Institute, Moscow, Russia
Asymptotic control theory for affine switching systems of oscillators
The talk will be devoted to continuous-time affine control systems and their reachable sets. I will focus on the case when all eigenvalues of the linear part of the system have zero real part. In this case, the reachable sets usually have a non-exponential, typically polynomial, growth rate as T→∞. The simplest non-trivial example is the problem of stabilisation (or, conversely, destabilisation) of two pendulums by the same common control. An exact description of reachable sets is most often impossible here, but their asymptotic behaviour as T→∞ can be found with high accuracy. In the talk, I will present the asymptotic behaviour of reachable sets in the problem of controlling a system of N independent oscillators, and in the problem of controlling the wave equation for a closed string. In particular, in these problems the corresponding analogue of the Lyapunov function can be found explicitly, and, consequently, the optimal behaviour at high energies can be determined very accurately.
May 11, 2022
Ernst Hairer
University of Geneva, Switzerland
High order PDE-convergence of ADI-type integrators for parabolic problems
This work considers space-discretised parabolic problems on a rectangular domain subject to Dirichlet boundary conditions. For the time integration, s-stage AMF-W-methods, which are ADI-type (Alternating Direction Implicit) integrators, are considered. They are particularly efficient when the space dimension of the problem is large. The classical algebraic conditions for order p (with p<=3) are shown to be sufficient for PDE-convergence of order p (independently of the spatial resolution) in the case of time-independent Dirichlet boundary conditions. Under additional conditions, PDE-convergence of order p=3.25-eps for every eps>0 can be obtained. In the case of time-dependent boundary conditions there is an order reduction. This is joint work with Severiano Gonzalez-Pinto and Domingo Hernandez-Abreu. Related publications can be downloaded from http://www.unige.ch/~hairer/preprints.html
April 26, 2022
Marino Zennaro
University of Trieste, Italy
Computing antinorms on multicones
March 31, 2022
Giorgio Fusco
University of L'Aquila, Italy
TBA
Feb 24, 2022
Erkki Somersalo
Case Western Reserve University, USA
Bayes meets data science to identify changes in brain activity during meditation from MEG measurements
Meditation as a potential alternative to pharmaceutical intervention for mitigating conditions such as chronic pain or clinical depression continues to attract significant attention. One of the problems is that the positive effects of meditation that have been reported are often anecdotal or based on self-reporting. To quantify the effects of meditation, it is therefore important to develop methods based on medical imaging to identify brain regions that are involved in the meditation practice. In this talk, we review some recent results about this topic, addressed by using magnetoencephalography (MEG) to map brain activity during meditation. One of the difficulties here is that the data are less sensitive to activity taking place in the deep brain regions, including the limbic system that is believed to play an important role in meditation. The MEG inverse problem is addressed by using novel Bayesian methods combined with advanced numerical techniques, applied on data from professional Buddhist meditators. The reconstructed activity is then analyzed using data science techniques to distill the information about the activation changes during meditation.
Recording of the talk (available to GSSI only)
December 17, 2021
Matthew Colbrook
University of Cambridge, UK
Computing semigroups and time-fractional PDEs with error control
We develop an algorithm that computes strongly continuous semigroups on infinite-dimensional Hilbert spaces with explicit error control. Given a generator $A$, a time $t > 0$, an arbitrary initial vector $u_0$ and an error tolerance $\epsilon > 0$, the algorithm computes $\exp(tA)u_0$ with error bounded by $\epsilon$. The (parallelisable) algorithm is based on a combination of a regularized functional calculus, suitable contour quadrature rules and the adaptive computation of resolvents in infinite dimensions. As a particular case, we deal with semigroups on $L^2(R^d)$ that are generated by partial differential operators with polynomially bounded coefficients of locally bounded total variation. For analytic semigroups, we provide a quadrature rule whose error decreases like $\exp(-cN/\log(N))$ for $N$ quadrature points, that remains stable as $N \to \infty$, and which is also suitable for infinite-dimensional operators. Finally, we extend the method to time-fractional PDEs (where it avoids singularities as $t \to 0$ and large memory consumption). Numerical examples are given, including: Schrödinger and wave equations on the aperiodic Ammann–Beenker tiling and fractional beam equations arising in the modelling of small-amplitude vibration of viscoelastic materials. The spectral analysis (which is always needed for contour methods) is considerably simplified due to an infinite-dimensional “solve-then-discretise” approach.
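For readers who want to experiment with the underlying idea, here is a small finite-dimensional sketch of resolvent-based contour quadrature, using the parabolic contour of Weideman and Trefethen; the contour parameters are heuristic choices for this toy, and none of the talk's infinite-dimensional machinery or rigorous error control is reproduced.

```python
import numpy as np

def expm_action(A, u0, t, N=32):
    """Approximate exp(t*A) @ u0 via contour quadrature of the resolvent,
    exp(tA) u0 = (1 / 2*pi*i) * integral of exp(zt) (zI - A)^{-1} u0 dz,
    on the Weideman-Trefethen parabolic contour (spectrum of A assumed on
    or near the negative real axis)."""
    n = A.shape[0]
    theta = -np.pi + (np.arange(N) + 0.5) * 2 * np.pi / N      # midpoints
    z = N / t * (0.1309 - 0.1194 * theta**2 + 0.2500j * theta)  # contour
    dz = N / t * (-2 * 0.1194 * theta + 0.2500j)                # z'(theta)
    I = np.eye(n)
    u = np.zeros(n, dtype=complex)
    for zk, dzk in zip(z, dz):
        u += np.exp(zk * t) * dzk * np.linalg.solve(zk * I - A, u0)
    h = 2 * np.pi / N
    return (h / (2j * np.pi) * u).real   # imaginary parts cancel by symmetry
```

Each quadrature node costs one resolvent solve, and the solves are independent, which is what makes the approach attractive to parallelise when the resolvent is the dominant cost.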
December 1, 2021

Lars Ruthotto
Emory University, USA
Numerical Methods for Deep Learning motivated by Partial Differential Equations
Understanding the world through data and computation has always formed the core of scientific discovery. Amid many different approaches, two common paradigms have emerged. On the one hand, primarily data-driven approaches—such as deep neural networks—have proven extremely successful in recent years. Their success is based mainly on their ability to approximate complicated functions with generic models when trained using vast amounts of data and enormous computational resources. But despite many recent triumphs, deep neural networks are difficult to analyze and thus remain mysterious. Most importantly, they lack the robustness, explainability, interpretability, efficiency, and fairness needed for high-stakes decision-making. On the other hand, increasingly realistic model-based approaches—typically derived from first principles and formulated as partial differential equations (PDEs)—are now available for various tasks. One can often calibrate these models—which enable detailed theoretical studies, analysis, and interpretation—with relatively few measurements, thus facilitating their accurate predictions of phenomena. In this talk, I will highlight recent advances and ongoing work to understand and improve deep learning by using techniques from partial differential equations. I will demonstrate how PDE techniques can yield better insight into deep learning algorithms, more robust networks, and more efficient numerical algorithms. I will also expose some of the remaining computational and numerical challenges in this area.
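One concrete instance of the ODE/PDE viewpoint (a standard observation in this literature, not code from the talk): a residual network $x_{k+1} = x_k + h\,\sigma(W_k x_k + b_k)$ is the forward Euler discretization of the ODE $\dot x = \sigma(W(t)x + b(t))$, so the stability and depth-limit behaviour of the network can be studied through the underlying ODE.

```python
import numpy as np

def resnet_forward(x, weights, biases, h=0.1):
    """Forward pass of a toy residual network, read as forward Euler
    applied to dx/dt = tanh(W(t) x + b(t)); one residual block = one
    Euler step of size h."""
    for W, b in zip(weights, biases):
        x = x + h * np.tanh(W @ x + b)
    return x
```

Under this reading, doubling the number of blocks while halving h approximates the same continuous flow, which is the kind of structure the PDE-based analyses in the talk exploit.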
Slides of the talk
Recording of the talk (available to GSSI only)
June 17, 2021
Ivan Markovsky
Vrije Universiteit Brussel, Belgium
Data-driven dynamic interpolation and approximation
Behavioral system theory gives a theoretical foundation for nonparametric representations of linear time-invariant systems based on Hankel matrices constructed from data. These data-driven representations have in turn led to new system identification, signal processing, and control methods. In particular, data-driven simulation and linear quadratic tracking control problems were solved using the new approach [1,2]. This talk shows how the approach can be used further for solving data-driven interpolation and approximation problems (missing data estimation) and how it can be generalized to some classes of nonlinear systems. The theory leads to algorithms that are both general (they can deal simultaneously with missing, exact, and noisy data of multivariable systems) and simple (they require only existing numerical linear algebra methods). This opens a practical computational way of doing system theory, signal processing, and model-based design directly from data, without identification of a transfer function or a state-space representation.
References:
[1] I. Markovsky and P. Rapisarda. “Data-driven simulation and control”. Int. J. Control 81.12 (2008), pp. 1946--1959.
[2] I. Markovsky. A missing data approach to data-driven filtering and control. IEEE Trans. Automat. Contr., 62:1972--1978, April 2017.
[3] I. Markovsky and F. Dörfler. Data-driven dynamic interpolation and approximation. Technical report, Vrije Universiteit Brussel, 2021. Available from Ivan's webpage
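As a small illustration of the Hankel-matrix idea behind [1], the sketch below performs data-driven simulation of a single-input single-output system from one recorded trajectory. The function names and the toy setup are invented for this example, and the code assumes exact, persistently exciting data; handling noisy and missing data is precisely what the talk's framework adds.

```python
import numpy as np

def hankel(w, L):
    """Hankel matrix with L rows built from the scalar signal w."""
    T = len(w)
    return np.array([w[i:i + T - L + 1] for i in range(L)])

def dd_sim(ud, yd, u_f, n):
    """Data-driven simulation: the response y_f to input u_f from a zero
    initial condition, computed directly from a recorded trajectory
    (ud, yd) of a system of order n -- no transfer function or state-space
    model is identified. Any length-L trajectory of the system is a linear
    combination of columns of the data Hankel matrices, so we solve for a
    combination matching the known samples."""
    Tf = len(u_f)
    L = n + Tf                    # an n-sample zero prefix pins the state
    Hu, Hy = hankel(ud, L), hankel(yd, L)
    M = np.vstack([Hu, Hy[:n]])   # known: all of u, first n samples of y
    rhs = np.concatenate([np.zeros(n), u_f, np.zeros(n)])
    g, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return Hy[n:] @ g             # remaining y samples are the answer
```

The only numerical tool used is a least-squares solve, matching the abstract's point that the algorithms require existing numerical linear algebra methods only.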
Recording of the talk (available to GSSI only)
March 30, 2021
Eugene E. Tyrtyshnikov
Institute for Numerical Mathematics, Russian Academy of Sciences
Tikhonov's solution to a class of linear systems equivalent within perturbations
A standard approach to ill-posed problems suggests that the problem of interest be reformulated using some additional a-priori information. This can be done by several well-known regularization techniques, and many practical problems are successfully solved in this way. What is still not completely satisfactory is that the reformulated problem tends to appear only implicitly, in the very process of its solution.
In 1980, A. N. Tikhonov proposed a reformulation [1] that is stated explicitly, before any discussion of solution methods. He introduced the notion of a normal solution to a family of linear algebraic systems described by a given individual system and its vicinity of perturbed systems, under the assumption that the family contains compatible systems regardless of whether the given individual system is compatible. Tikhonov proved that the normal solution exists and is unique. However, a natural question about the correctness (well-posedness) of the reformulated problem was not answered. In this talk we address this question of correctness, which seems to have been missed in all previous considerations. The main result is a proof of correctness for Tikhonov's normal solution. Possible generalizations and difficulties will also be discussed.
[1] A. N. Tikhonov, Approximate systems of linear algebraic equations, USSR Computational Mathematics and Mathematical Physics, vol. 20, issue 6 (1980)
March 09, 2021
Michael Schaub
RWTH Aachen University
Learning from signals on graphs with unobserved edges
In many applications we are confronted with the following system identification scenario: we observe a dynamical process that describes the state of a system at particular times. Based on these observations we want to infer the (dynamical) interactions between the entities we observe. In the context of a distributed system, this typically corresponds to a "network identification" task: find the (weighted) edges of the graph of interconnections. However, often the number of samples we can obtain from such a process is far too small to identify the edges of the network exactly. Can we still reliably infer some aspects of the underlying system?
Motivated by this question we consider the following identification problem: instead of trying to infer the exact network, we aim to recover a (low-dimensional) statistical model of the network based on the observed signals on the nodes. More concretely, here we focus on observations that consist of snapshots of a diffusive process that evolves over the unknown network. We model the (unobserved) network as an independent draw from a latent stochastic blockmodel (SBM), and our goal is to infer both the partition of the nodes into blocks and the parameters of this SBM. We present simple spectral algorithms that provably solve the partition and parameter inference problems with high accuracy.
We further discuss some possible variations and extensions of this problem setup.
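As a simplified, self-contained illustration of the spectral idea (clustering from the adjacency matrix itself, rather than from diffusion snapshots as in the talk), one can recover a two-block planted partition from the eigenvector of the second-largest adjacency eigenvalue:

```python
import numpy as np

def sbm_adjacency(n, p_in, p_out, rng):
    """Sample a symmetric adjacency matrix from a 2-block planted-partition
    SBM; the first n//2 nodes form block 0, the rest block 1."""
    labels = np.repeat([0, 1], n // 2)
    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = np.triu(rng.random((n, n)) < P, k=1).astype(float)
    return A + A.T, labels

def spectral_partition(A):
    """Assign blocks by the sign pattern of the eigenvector belonging to
    the second-largest eigenvalue of A (the 'community' direction; the
    leading eigenvector is roughly constant)."""
    _, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    return (V[:, -2] > 0).astype(int)
```

In the talk's setting the adjacency matrix is never observed; the point above is only that block structure survives in a single spectral direction, which is also what makes recovery from diffusion snapshots possible.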
Recording of the talk
February 17, 2021
Lothar Reichel
Kent State University
Large-scale regression with non-convex loss and penalty
We discuss non-convex optimization with applications to image restoration and to regression problems for which a sparse solution is desired.
February 4, 2021
Anders Hansen
University of Cambridge, UK
On the foundations of computational mathematics, Smale's 18th problem and the potential limits of AI
There is a profound optimism on the impact of deep learning (DL) and AI in the sciences, with Geoffrey Hinton concluding that 'They should stop training radiologists now'. However, DL has an Achilles heel: it is universally unstable, so that small changes in the initial data can lead to large errors in the final result. This has been documented in a wide variety of applications. Paradoxically, the existence of stable neural networks for these applications is guaranteed by the celebrated Universal Approximation Theorem; however, the stable neural networks are never computed by the current training approaches. We will address this problem and the potential limitations of AI from a foundations point of view. Indeed, the current situation in AI is comparable to the situation in mathematics in the early 20th century, when David Hilbert’s optimism (typically reflected in his 10th problem) suggested no limitations to what mathematics could prove and no restrictions on what computers could compute. Hilbert’s optimism was turned upside down by Gödel and Turing, who established limitations on what mathematics can prove and which problems computers can solve (without, however, limiting the impact of mathematics and computer science). We predict a similar outcome for modern AI and DL, where the limitations of AI (the main topic of Smale’s 18th problem) will be established through the foundations of computational mathematics. We sketch the beginning of such a program by demonstrating that there exist neural networks approximating classical mappings in scientific computing which, nevertheless, no algorithm (even randomised) can compute to even 1-digit accuracy (with probability better than 1/2). We will also show how instability is inherent in the methodology of DL, demonstrating that there is no easy remedy given the current methodology.
Finally, we will demonstrate basic examples in inverse problems where there exist (untrained) neural networks that can easily compute a solution to the problem, whereas the current DL techniques would need 10^80 data points in the training set to reach even a 1% success rate.
Recording of the talk
January 28, 2021
Gianluca Ceruti
University of Tuebingen
Numerical integrators for dynamical low-rank approximation
Discretization of time-dependent high-dimensional PDEs suffers from an undesired effect known as the curse of dimensionality: the amount of data to be stored and processed grows exponentially with the dimension and quickly exceeds the capacity of standard computational devices.
In this setting, time-dependent model order reduction techniques are desirable.
In this seminar we present a recently developed approach, dynamical low-rank approximation, together with efficient numerical integrators.
Dynamical low-rank approximation for matrices will be presented first, and a numerical integrator with two remarkable properties will be introduced: the matrix projector-splitting integrator.
Based upon this integrator, we will construct two equivalent extensions for tensors (multi-dimensional arrays) in Tucker format, a higher-order generalization of the matrix SVD. These extensions are proven to preserve the excellent qualities of the matrix integrator.
To conclude, via a novel compact formulation of the Tucker integrator, we will further extend the matrix and Tucker projector-splitting integrators to the general class of Tree Tensor Networks. Important examples in this class that are of interest for applications include, but are not limited to, Tensor Trains.
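A minimal sketch of one step of the matrix projector-splitting (KSL) integrator of Lubich and Oseledets, for $\dot Y = F(Y)$ restricted to the rank-$r$ manifold, may help fix ideas; for simplicity the three substeps below are advanced by a single explicit Euler update rather than by an accurate inner integrator, as one would do in practice.

```python
import numpy as np

def ksl_step(U, S, V, F, h):
    """One first-order projector-splitting (KSL) step for dY/dt = F(Y)
    with Y = U S V^T of rank r: K-step and L-step forward in time,
    S-step backward in time (note the minus sign)."""
    # K-step: K = U S evolves by K' = F(K V^T) V
    K = U @ S
    K = K + h * F(K @ V.T) @ V
    U1, S1 = np.linalg.qr(K)
    # S-step, integrated backward in time: S' = -U1^T F(U1 S V^T) V
    S1 = S1 - h * U1.T @ F(U1 @ S1 @ V.T) @ V
    # L-step: L = V S1^T evolves by L' = F(U1 L^T)^T U1
    L = V @ S1.T
    L = L + h * F(U1 @ L.T).T @ U1
    V1, S1T = np.linalg.qr(L)
    return U1, S1T.T, V1
```

The robustness property mentioned above is visible here: no substep inverts S, so small or vanishing singular values cause no blow-up.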
This seminar is based upon a joint work with Ch. Lubich and H. Walach.
January 13, 2021
Alexander Viguerie
GSSI
Efficient, stable, and reliable solvers for the Steady Incompressible Navier-Stokes equations: application to Computational Hemodynamics.
Over the past several years, computational fluid dynamics (CFD) simulations have become increasingly popular as a clinical tool for cardiologists at the patient-specific level. The use of CFD in this area poses several challenges. The clinical setting places heavy restrictions on both computational time and power. Simulation results are usually desired within minutes and are usually run on standard computers. For these reasons, steady-state Navier-Stokes simulations are usually preferred, as they can be completed in a fraction of the time required to run an unsteady computation. However, in many respects the steady problem is more difficult than the unsteady one, particularly with regard to solving the associated linear and nonlinear systems. Additionally, boundary data for patient-specific problems is often missing, incomplete, or unreliable. This makes the determination of a useful model challenging, as it requires the generation of reliable boundary data without introducing heavy computational costs. This seminar will address these challenges, among others, and introduce new techniques to work around them. Results from patient-specific cases will be presented and discussed.
Recording of the talk (available to GSSI only)
December 16, 2020
Martin Stoll
TU-Chemnitz
From PDEs to data science: an adventure with the graph Laplacian
In this talk we briefly review some basic PDE models that are used to describe phase separation in materials science. They have since become important tools in image processing, and over the last years semi-supervised learning strategies have been developed with these PDEs at their core. The main ingredient is the graph Laplacian that stems from a graph representation of the data. This matrix is large and typically dense. We illustrate some of its crucial features and show how to work with it efficiently. In particular, we need some of its eigenvectors, and for this the Lanczos process has to be implemented efficiently. Here, we suggest using the NFFT method to evaluate the matrix-vector products without ever fully constructing the matrix. We illustrate the performance on several examples.
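A small dense toy illustrating the pipeline (in the talk the weight matrix is never formed explicitly; its action is applied via NFFT inside a Lanczos iteration, which this sketch does not reproduce): build a Gaussian-similarity graph Laplacian and use it for harmonic-function semi-supervised labeling. Function names and parameters are invented for the example.

```python
import numpy as np

def gaussian_laplacian(X, sigma=1.0):
    """Dense graph Laplacian L = D - W with Gaussian weights
    w_ij = exp(-|x_i - x_j|^2 / (2 sigma^2)); O(n^2) storage, toy only."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def harmonic_ssl(L, labeled, values):
    """Semi-supervised labels by harmonic extension: fix the labeled nodes
    and solve L_uu f_u = -L_ul f_l for the unlabeled ones."""
    n = L.shape[0]
    f = np.zeros(n)
    f[labeled] = values
    free = np.setdiff1d(np.arange(n), labeled)
    f[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, labeled)] @ values)
    return f
```

This harmonic-extension classifier is a classical stand-in for the phase-field (Allen-Cahn-type) classifiers discussed in the talk; both are driven by the same graph Laplacian.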
Recording of the talk (available to GSSI only)
December 2, 2020
Patricia Diaz De Alba
GSSI
Numerical treatment for inverse electromagnetic problems
Electromagnetic induction surveys are among the most popular techniques for the non-destructive investigation of soil properties, aimed at detecting the presence of both ground inhomogeneities and particular substances. Frequency-domain electromagnetic instruments allow the collection of data in different configurations, that is, varying the intercoil spacing, the frequency, and the height above the ground.
Based on a non-linear forward model describing the interaction between an electromagnetic field and the soil, the aim is to reconstruct the distribution of either the electrical conductivity or the magnetic permeability with respect to depth. To this end, the inversion of both the real (in-phase) and the imaginary (quadrature) components of the signal is studied by a regularized damped Gauss-Newton method. The regularization part of the algorithm is based on a low-rank approximation of the Jacobian of the non-linear model. Furthermore, in many situations a regularization scheme retrieving smooth solutions is blindly applied, without taking the available prior knowledge into account. We therefore also present a regularization method that promotes sparsity of the reconstructed electrical conductivity or magnetic permeability distribution, incorporating a minimum gradient support stabilizer into a truncated generalized singular value decomposition scheme. The whole inversion algorithm has been enclosed in a MATLAB package, called FDEMtools, allowing the user to experiment with synthetic and experimental data sets and with different regularization strategies, in order to compare them and draw conclusions.
The numerical effectiveness of the inversion procedure is demonstrated on synthetic and real datasets using the FDEMtools package.
November 20, 2020
Christian Lubich
University of Tuebingen
Dynamical low-rank approximation
This talk reviews differential equations and their numerical solution on manifolds of low-rank matrices or of tensors with a rank structure such as tensor trains or general tree tensor networks. These low-rank differential equations serve to approximate, in a data-compressed format, large time-dependent matrices and tensors or multivariate functions that are either given explicitly via their increments or are unknown solutions to high-dimensional evolutionary differential equations, with multi-particle time-dependent Schrödinger equations and kinetic equations such as Vlasov equations as noteworthy examples of applications.
Recently developed numerical time integrators are based on splitting the projection onto the tangent space of the low-rank manifold at the current approximation. In contrast to all standard integrators, these projector-splitting methods are robust to the unavoidable presence of small singular values in the low-rank approximation. This robustness relies on exploiting geometric properties of the manifold of low-rank matrices or tensors: in each substep of the projector-splitting algorithm, the approximation moves along a flat subspace of the low-rank manifold. In this way, high curvature due to small singular values does no harm.
This talk is based on work done intermittently over the last decade with Othmar Koch, Bart Vandereycken, Ivan Oseledets, Emil Kieri, Hanna Walach and Gianluca Ceruti.
November 18, 2020
Raffaele D'Ambrosio
University of L'Aquila
Structure-preserving numerics for stochastic differential equations
Modern numerical analysis is devoted not only to accurately approximating the solutions of various problems through efficient and robust schemes, but also to retaining qualitative properties of the continuous problem over long times. Sometimes such conservation properties naturally characterize the numerical schemes, while in more complex situations preservation issues have to be conveyed into the numerical approximations. The talk focuses on recent advances in structure preservation for selected stochastic differential equations satisfying characteristic invariance laws. The behaviour of stochastic multistep methods in the preservation of mean-square contractivity will be analyzed; we show that, in this case, conservation properties are in fact encoded in the conditional stability properties of the numerical schemes. The analysis will then be extended to the discretization of stochastic Hamiltonian problems and the numerical preservation of the behaviour of the expected Hamiltonian. The theoretical analysis will be supported by numerical evidence.
November 4, 2020
Giacomo Baggio
University of Padua
From Model-Based to Data-Driven Control of Network Dynamics
The control of complex dynamical networks has attracted increasing interest over the past few years, with reference, in particular, to problems of controllability, optimal input design, and minimal actuator placement. In this talk, I will address the problem of controlling linear dynamical networks from a control-energy or "practical" perspective. I will first focus on the model-based scenario and review the fundamental metrics, theoretical limitations, and challenges in controlling networks using a limited number of control nodes. In particular, I will emphasize the impact of the "degree" of non-normality of the network's adjacency matrix on the control performance. Then, I will switch to a data-driven scenario, and show how some network control problems can be efficiently solved by relying on experimental data only.
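The control-energy viewpoint can be made concrete with the classical minimum-energy input formula for a discrete-time linear network $x_{k+1} = A x_k + B u_k$, where the T-step controllability Gramian governs the cost. This is a standard textbook computation (not code from the talk), sketched below:

```python
import numpy as np

def min_energy_input(A, B, xT, T):
    """Minimum-energy open-loop input steering x(0) = 0 to x(T) = xT in
    T steps for x_{k+1} = A x_k + B u_k: u = R^T W^{-1} xT, where R is the
    T-step reachability matrix and W = R R^T the controllability Gramian."""
    n, m = B.shape
    # R = [A^{T-1} B, ..., A B, B]; column block k multiplies u_k
    R = np.hstack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])
    W = R @ R.T                        # T-step controllability Gramian
    U = R.T @ np.linalg.solve(W, xT)   # stacked inputs [u_0; ...; u_{T-1}]
    return U.reshape(T, m)
```

The required energy is $x_T^\top W^{-1} x_T$, so an ill-conditioned Gramian (as arises for highly non-normal adjacency matrices with few control nodes) means some target states are practically unreachable; this is the phenomenon the talk quantifies.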
Slides of the seminar
October 14, 2020
Federico Poloni
University of Pisa
Inverses of quasidefinite matrices in block-factored form, with an application to control theory
(joint work with P. Benner)
We describe an algorithm to compute the explicit inverse of a dense quasi-definite matrix, i.e., a symmetric matrix of the form [-BB^T, A; A^T, C^TC], with the (1,1) block negative semidefinite and the (2,2) block positive semidefinite. The algorithm is a variant of Gauss-Jordan elimination that works on the low-rank factors $B$ and $C$ directly without ever forming those blocks. The individual elimination steps amount to a transformation called principal pivot transform; it was shown in [Poloni, Strabic 2016] how to perform it by working only on $A, B, C$, and we rely on that procedure here. We discuss the stability of the resulting method, and show how the algorithm (and in particular the produced low-rank factors) can be of use in control theory, in the context of the matrix sign iteration, which is a method used to solve algebraic Riccati equations.
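The application mentioned at the end, the matrix sign iteration, is simple to state: the Newton iteration $X_{k+1} = (X_k + X_k^{-1})/2$ converges quadratically to $\mathrm{sign}(X)$ for any $X$ with no purely imaginary eigenvalues. Below is the standard dense form of this method, without the factored representation that the talk's algorithm makes possible.

```python
import numpy as np

def matrix_sign(X, tol=1e-12, maxit=100):
    """Newton iteration X <- (X + X^{-1}) / 2 for the matrix sign
    function; stops when successive iterates agree to relative tol."""
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X, 1) <= tol * np.linalg.norm(Xn, 1):
            return Xn
        X = Xn
    return X
```

Each iteration requires a full inverse, which is exactly where an inverse kept in block-factored low-rank form, as in the talk, pays off.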
September 30, 2020