Spring 2024

M Feb 5 | Maxim Olshanskii (U. of Houston) | Unfitted finite element methods for PDEs posed on surfaces

Partial differential equations posed on surfaces arise in mathematical models for many natural phenomena: diffusion along grain boundaries, lipid interactions in biomembranes, pattern formation, and transport of surfactants on fluidic interfaces, to mention a few. Numerical methods for solving PDEs posed on manifolds have recently received considerable attention. In this talk, we discuss finite element methods to solve PDEs on both stationary surfaces and surfaces with prescribed evolution. The focus of the talk is on geometrically unfitted methods, i.e. methods that avoid parametrization and triangulation of surfaces in the common sense. We explain how unfitted discretizations facilitate the development of a fully Eulerian numerical framework and enable easy handling of time-dependent surfaces, including the case of topological transitions.
M Feb 12 | Neriman Tokcan (UMass Boston, on Zoom) | Tensor Methods for Multi-Modal Genomics Data

Genomics datasets often involve multiple dimensions, incorporating factors such as genes, samples, and experimental conditions. Moreover, multi-omics integrates diverse omics data and explores molecular events occurring at distinct levels, encompassing DNA variations, epigenetic modifications, transcriptional activities, metabolite profiles, and clinical phenotypes. Such intricate data find effective representation with tensors, and tensor methods emerge as powerful tools in genomics analysis, uniquely equipped to unravel the complex and multi-dimensional nature of genomics data.
In this presentation, I will delve into tensor-based methods across various genomics applications, specifically focusing on tumor-microenvironment modeling. The talk will also touch on the limitations of tensor methods and highlight potential areas for future development, fostering a comprehensive understanding of their potential in advancing genomics research.
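As a concrete illustration of the tensor viewpoint (a minimal sketch, not from the talk; the axis names and sizes are hypothetical), a genes × samples × conditions array can be matricized along each mode, which is the basic operation behind decompositions such as CP and Tucker:

```python
import numpy as np

# Hypothetical genes x samples x conditions tensor (toy sizes, for illustration)
rng = np.random.default_rng(0)
T = rng.random((4, 3, 2))  # 4 genes, 3 samples, 2 conditions

def unfold(tensor, mode):
    """Mode-n matricization: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Each unfolding exposes one axis of the data to ordinary matrix methods
print(unfold(T, 0).shape)  # (4, 6)
print(unfold(T, 1).shape)  # (3, 8)
print(unfold(T, 2).shape)  # (2, 12)
```

Tensor decompositions then factor these unfoldings jointly rather than one at a time, which is what lets them capture interactions across all axes at once.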
M Feb 19 | Presidents’ Day (University Holiday) | No seminar
M Feb 26 | Anna Seigal (Harvard) | Identifiability of overcomplete independent component analysis

Independent component analysis (ICA) is a classical data analysis method to study mixtures of independent sources. An ICA model is said to be identifiable if the mixing can be recovered uniquely. Identifiability is known to hold if and only if at most one of the sources is Gaussian, provided the number of sources is at most the number of observations. In this talk, I will discuss our work to generalize the identifiability of ICA to the overcomplete setting, where the number of sources can exceed the number of observations. I will also describe how the results connect to tensor decomposition. Based on joint work with Ada Wang (https://arxiv.org/abs/2401.14709).
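To see numerically why non-Gaussianity is the crux of identifiability (an illustrative sketch, not from the paper): excess kurtosis vanishes for Gaussian data, so a rotated mixture of Gaussian sources is statistically indistinguishable from the original, whereas non-Gaussian sources leave a detectable signature.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

S = rng.uniform(-1, 1, size=(2, n))      # two independent non-Gaussian sources
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # unknown mixing matrix
X = A @ S                                # observed mixtures

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return (z**4).mean() - 3.0

g = rng.standard_normal(n)
print(round(excess_kurtosis(S[0]), 2))   # clearly negative for uniform sources
print(round(excess_kurtosis(g), 2))      # near zero for Gaussian data
```

Contrast functions like kurtosis are what classical ICA algorithms maximize to undo the mixing; when every source is Gaussian the contrast is flat and nothing pins down the rotation.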
M Mar 4 | Christian Kümmerle (University of North Carolina at Charlotte) | Low-Rank Optimization with Iteratively Reweighted Least Squares: Distance Geometry Problems & Beyond

Abstract: Non-convex functions such as Schatten-p quasinorms or the positive log-determinant have been used successfully as surrogates for a rank objective, providing attractive reformulations of low-rank constraints, which in turn are ubiquitous in machine learning, computer vision, high-dimensional statistics, and control. While the optimization of these functions is itself still challenging, Iteratively Reweighted Least Squares (IRLS) provides a suitable algorithmic framework for their optimization that allows for scalable, data-efficient algorithms with a rigorous local convergence analysis. We present recent results on how this framework can find data embeddings from pairwise distance measurements with nearly-optimal sample complexity, and on the recovery of data that adheres to multiple, heterogeneous parsimonious structures (e.g., row-sparsity and low-rankness).
M Mar 11 | Elisenda Grigsby (Boston College) | Functional dimension of ReLU Networks
The parameter space for any fixed architecture of neural networks serves as a proxy during training for the associated class of functions – but how faithful is this representation? For any fixed feedforward ReLU network architecture with at least one hidden layer, it is well-known that many different parameter settings can determine the same function. It is less well-known that the degree of this redundancy is inhomogeneous across parameter space. This inhomogeneity should impact the dynamics of training via gradient descent, especially when compared with recent work suggesting that gradient descent favors flat minima of the loss landscape. In this talk, I will carefully define the notion of the local functional dimension of a feedforward ReLU network function, discuss the relationship between local functional dimension of a parameter and the geometry of the underlying decomposition of the domain into linear regions, and present some experimental results on the probability distribution underlying functional dimension at network initialization. Time permitting, I will say a few words about recent efforts to connect the notion of functional dimension with some classical notions of complexity from statistical learning theory. Some of this work is joint with Kathryn Lindsey, Rob Meyerhoff, and Chenxi Wu, and some is joint with Kathryn Lindsey and David Rolnick.
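A back-of-the-envelope version of the central quantity (my own illustration; the network, sizes, and tolerance are not the speakers'): estimate the local functional dimension as the numerical rank of the Jacobian of the network outputs with respect to the parameters over a batch of inputs. For a one-hidden-layer ReLU net, the per-unit positive rescaling symmetries already force this rank below the raw parameter count:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, width = 2, 3
n_params = width * d_in + width + width + 1   # W1, b1, W2, b2 -> 13

def net(theta, X):
    """One-hidden-layer ReLU network with flattened parameter vector theta."""
    W1 = theta[:width * d_in].reshape(width, d_in)
    b1 = theta[width * d_in:width * d_in + width]
    W2 = theta[width * d_in + width:width * d_in + 2 * width]
    b2 = theta[-1]
    return np.maximum(X @ W1.T + b1, 0.0) @ W2 + b2

theta = rng.standard_normal(n_params)
X = rng.standard_normal((40, d_in))           # batch of sample inputs

# Central-difference Jacobian of the batch outputs w.r.t. the parameters
h = 1e-5
J = np.zeros((40, n_params))
for j in range(n_params):
    e = np.zeros(n_params)
    e[j] = h
    J[:, j] = (net(theta + e, X) - net(theta - e, X)) / (2 * h)

# Scaling (W1_i, b1_i) by c > 0 and W2_i by 1/c leaves the function unchanged,
# so the rank can be at most n_params - width = 10 at any parameter
rank = np.linalg.matrix_rank(J, tol=1e-4)
print(n_params, rank)
```

The inhomogeneity discussed in the talk is visible here as well: at non-generic parameters (e.g., a dead hidden unit) the rank drops further below this bound.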
M Mar 18 | Spring Break | No seminar
M Mar 25 | David McCandlish (Cold Spring Harbor Lab) | Exploring the structure of high-dimensional biological fitness landscapes
Abstract: The fitness landscape is a classical concept in evolutionary genetics that has remained influential from its initial proposal in 1932 until the present day. The fitness landscape takes the form of a function defined on a Hamming graph, where the nodes in the graph represent possible combinations of mutations, edges connect nodes that differ by a single mutation, and the value of the function represents the fitness (i.e. the expected rate of reproduction) of an organism carrying that combination of mutations. While the fitness landscape has long served an important conceptual and motivational role in evolutionary theory, recent high-throughput laboratory experiments measuring fitnesses for thousands to millions of combinations of mutations are now allowing us a first view of the structure of large empirical fitness landscapes. Here I will discuss several techniques developed in my group for analyzing this new type of data, employing ideas from spectral graph theory, Gaussian processes, and random walk based methods for nonlinear dimensionality reduction. I will use these techniques to explore a number of empirical fitness landscapes of both proteins and nucleic acids.
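The combinatorial object in question is easy to build explicitly. A minimal sketch (my illustration, not the speaker's code) of the Boolean Hamming graph on L = 3 sites and its graph Laplacian, the operator underlying the spectral analyses mentioned above:

```python
from itertools import product

import numpy as np

L = 3
genotypes = list(product([0, 1], repeat=L))   # 2^L = 8 possible genotypes
n = len(genotypes)

A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Edge whenever two genotypes differ by exactly one mutation
        if sum(a != b for a, b in zip(genotypes[i], genotypes[j])) == 1:
            A[i, j] = 1.0

Lap = np.diag(A.sum(axis=1)) - A              # graph Laplacian
print(A.sum(axis=1))                          # every node has L = 3 neighbours
print(np.round(np.linalg.eigvalsh(Lap), 6))   # eigenvalues 2k for k = 0..L
```

A fitness landscape is then just a vector indexed by these nodes, and expanding it in the Laplacian eigenbasis separates additive effects from higher-order epistasis.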
M Apr 1 | Ricardo Nochetto (U. of Maryland) | Liquid Crystal Variational Problems
We discuss modeling, numerical analysis and computation of liquid crystal networks (LCNs). These materials couple a nematic liquid crystal with a rubbery material. When actuated with heat or light, the interaction of the liquid crystal with the rubber creates complex shapes. Thin bodies of LCNs are natural candidates for soft robotics applications. We start from the classical 3D trace energy formula and derive a reduced 2D membrane energy as the formal asymptotic limit of vanishing thickness and characterize the zero energy deformations. We design a sound numerical method and prove its Gamma convergence despite the strong nonlinearity and lack of convexity properties of the membrane energy. We present computations showing the geometric effects that arise from liquid crystal defects as well as computations of nonisometric origami within and beyond the theory. This work is joint with L. Bouck and S. Yang.
M Apr 8 | Wei Zhu (UMass Amherst) | Symmetry-Preserving Machine Learning: Theory and Applications
Abstract: Symmetry is prevalent in a variety of machine learning (ML) and scientific computing tasks, including computer vision and computational modeling of physical and engineering systems. Empirical studies have demonstrated that ML models designed to integrate the intrinsic symmetry of their tasks often exhibit substantially improved performance. Despite extensive theoretical and engineering advancements in the domain of “symmetry-preserving ML”, several critical questions remain unaddressed, presenting unique challenges and opportunities for applied mathematicians.
Firstly, real-world symmetries rarely manifest perfectly and are typically subject to various deformations. Therefore, a pivotal question arises: Can we effectively quantify and enhance the robustness of models to maintain an “approximate” symmetry, even under imperfect symmetry transformations? Secondly, although empirical evidence suggests that symmetry-preserving ML models typically require fewer training data to achieve equivalent accuracy, there is a need for more precise and rigorous quantification of this reduction in sample complexity attributable to symmetry preservation. Lastly, considering the non-convex nature of optimization in modern ML, can we ascertain whether algorithms like gradient descent can guide symmetry-preserving models to indeed converge to objectively better solutions compared to their generic counterparts, and if so, to what degree?
In this talk, I will present several of my research projects addressing these intriguing questions. Surprisingly, the answers are not as straightforward as one might assume and, in some cases, are counterintuitive. If time permits, I will also discuss our recent efforts on extending these results to ML-assisted structure-preserving computational models for complex physical systems.
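For readers unfamiliar with the basic construction (a minimal sketch of my own, not from the talk): group averaging is the simplest way to make a model exactly symmetry-preserving. Here an arbitrary score function is symmetrized over the four 90-degree rotations of an image.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal((5, 5))

def f(img):
    return float(np.sum(w * img))   # generic, non-invariant score

def f_sym(img):
    # Average over the cyclic group C4 acting by rotation
    return float(np.mean([f(np.rot90(img, k)) for k in range(4)]))

img = rng.standard_normal((5, 5))
# Rotating the input permutes the four terms of the average, so f_sym is
# exactly invariant while f generally is not
print(abs(f_sym(np.rot90(img)) - f_sym(img)) < 1e-12)   # True
```

The questions in the abstract start exactly where this toy ends: real symmetries are deformed rather than exact, and averaging over large or continuous groups is expensive, which is what makes approximate and learned equivariance interesting.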
F Apr 19 | Gaël Rigaud (U. of Stuttgart) | A data-driven approach enhanced by neural networks to address model inexactness and motion in imaging (Joint Cormack Applied Mathematics Colloquium)

The development of new technologies leads to new applications, new challenges, and new issues in the field of imaging. Two of the main challenges, namely model inexactness and motion, are tackled in this talk.
Dynamic inverse problems have been studied extensively in the last decade, with the initial aim of reducing the artefacts observed in a CT scan due to the movements of the patient, and have since been extended to broader setups and applications. Motion leads in general to model inexactness. For instance, in computerized tomography (CT), the movement of the patient alters the nature of the integration curves and therefore, intrinsically, the model itself. Since the motion is in general unknown, it implies that the model is not exactly known.
However, the model inexactness can be more specific in other applications. A good example is Compton scattering imaging. Modelling the Compton scattering effect leads to many challenges, such as the non-linearity of the forward model, multiple scattering, and high levels of noise for moving targets. While the non-linearity is addressed by a necessary linear approximation of the first-order scattering with respect to the sought-for electron density, the multiple-order scattering makes up a substantial and unavoidable part of the spectral data which is difficult to handle due to highly complex forward models. Last but not least, the stochastic nature of the Compton effect may involve large measurement noise, in particular when the object under study is subject to motion, so time and motion must be taken into account.
To tackle these different issues, we study in this talk two data-driven techniques, namely the regularized sequential subspace optimization and a Bayesian method based on the generalized Golub-Kahan bidiagonalization. We then explore the possibilities to mimic and improve the stochastic approach with deep neural networks. The results are illustrated by simulations.
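For reference, a generic textbook sketch (not the speaker's implementation) of the Golub-Kahan bidiagonalization underlying such Bayesian solvers: k steps produce orthonormal bases U, V satisfying the relation A V_k = U_{k+1} B_k with a small bidiagonal matrix B_k.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k + 1)
    betas[0] = np.linalg.norm(b)
    U[:, 0] = b / betas[0]
    for i in range(k):
        r = A.T @ U[:, i] - (betas[i] * V[:, i - 1] if i > 0 else 0.0)
        alphas[i] = np.linalg.norm(r)
        V[:, i] = r / alphas[i]
        p = A @ V[:, i] - alphas[i] * U[:, i]
        betas[i + 1] = np.linalg.norm(p)
        U[:, i + 1] = p / betas[i + 1]
    return U, V, alphas, betas

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
U, V, alphas, betas = golub_kahan(A, b, 5)

# Verify the bidiagonal relation A V_k = U_{k+1} B_k
B = np.zeros((6, 5))
for i in range(5):
    B[i, i] = alphas[i]
    B[i + 1, i] = betas[i + 1]
print(np.allclose(A @ V, U @ B))   # True
```

Projecting the inverse problem onto these small subspaces is what makes the Bayesian machinery (priors, uncertainty estimates) tractable at imaging scale.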
M Apr 22 | Ferdia Sherry (U. of Cambridge) | Structure-preserving deep learning and its applications
Abstract: The emerging field of structure-preserving deep learning draws inspiration from structure-preserving numerical methods to better understand existing neural networks and design new neural network architectures that incorporate desirable structural constraints. Examples of such desirable properties include certain notions of stability, symmetries, and conserved quantities.

I will give an introduction to this field, starting from a paper that my collaborators and I wrote on the topic: Celledoni, E., Ehrhardt, M. J., Etmann, C., McLachlan, R. I., Owren, B., Schönlieb, C.-B., & Sherry, F. (2021). Structure-preserving deep learning. European Journal of Applied Mathematics, 32(5), 888–936. doi:10.1017/S0956792521000139
Following this, I will delve deeper into specific topics within this field that we have investigated in more detail, including applications to adversarial robustness of image classifiers, and deep-learning based regularisation of inverse problems in imaging.
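The numerical-methods inspiration behind this field is easy to demonstrate (my own classical example, not from the cited papers): for the harmonic oscillator, explicit Euler inflates the energy exponentially, while the structure-preserving (symplectic) Euler keeps it bounded for all time.

```python
def energy(q, p):
    return 0.5 * (q**2 + p**2)

dt, steps = 0.1, 1000
q_e, p_e = 1.0, 0.0     # explicit Euler state
q_s, p_s = 1.0, 0.0     # symplectic Euler state
for _ in range(steps):
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e   # explicit Euler
    p_s = p_s - dt * q_s                        # symplectic Euler:
    q_s = q_s + dt * p_s                        #   update p first, then q

print(round(energy(q_e, p_e)))      # energy has blown up
print(round(energy(q_s, p_s), 2))   # stays near the initial value 0.5
```

Structure-preserving architectures transfer the same principle to networks: building stability, symmetries, or conserved quantities into the layers, rather than hoping training discovers them.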
M Apr 29 | Hannah Wayment-Steele (Brandeis) | Predicting and discovering protein dynamics
Abstract: The functions of biomolecules are often rooted in their ability to convert between multiple conformations. Recent advances in deep learning for predicting and designing single structures of proteins mean that the next frontier lies in how well we can characterize, model, and predict protein dynamics. I will talk about two projects from my postdoctoral work in this direction. First, I will discuss a method that enables AlphaFold2 to sample multiple conformations of metamorphic proteins by clustering the input sequence alignment. This work enabled us to design a minimal set of 3 mutations to flip the populations of the fold-switching protein KaiB, as well as to screen for novel putative alternate states. Beyond predicting multiple conformations, we would also like to be able to predict the actual kinetics associated with transitions. Second, I will describe the development of large-scale benchmarks of dynamics across multiple types of NMR experiments, and initial insights into whether protein language models can predict these hallmarks of dynamics.