|M Feb 5
|Maxim Olshanskii (U. of Houston)
|Unfitted finite element methods for PDEs posed on surfaces
Partial differential equations posed on surfaces arise in mathematical models of many natural phenomena: diffusion along grain boundaries, lipid interactions in biomembranes, pattern formation, and the transport of surfactants on fluidic interfaces, to mention a few. Numerical methods for solving PDEs posed on manifolds have recently received considerable attention. In this talk, we discuss finite element methods for solving PDEs on both stationary surfaces and surfaces with prescribed evolution. The focus of the talk is on geometrically unfitted methods, i.e., methods that avoid parametrization and triangulation of surfaces in the usual sense. We explain how unfitted discretizations facilitate the development of a fully Eulerian numerical framework and enable easy handling of time-dependent surfaces, including the case of topological transitions.
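As a minimal sketch of the "geometrically unfitted" idea (purely illustrative, not the speaker's method): the surface is described implicitly by a level-set function on a fixed background grid, and only the background cells cut by the surface are selected; no surface triangulation is ever constructed.

```python
import numpy as np

# Hypothetical sketch: represent the unit sphere implicitly by a level-set
# function on a fixed background grid, instead of triangulating the surface.
n = 20
x = np.linspace(-1.5, 1.5, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2 + Z**2) - 1.0  # signed distance to the sphere

# A background cell is "cut" by the surface if phi changes sign among its
# eight vertices; in an unfitted (cut/trace) finite element method, only
# these cells carry surface degrees of freedom.
corners = np.stack([phi[i:n - 1 + i, j:n - 1 + j, k:n - 1 + k]
                    for i in (0, 1) for j in (0, 1) for k in (0, 1)])
cut = (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)
print("cut cells:", int(cut.sum()), "of", (n - 1) ** 3)
```

Because the grid never conforms to the surface, an evolving surface only changes which cells are flagged, which is what makes the fully Eulerian treatment of moving geometries natural.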
|M Feb 12
|Neriman Tokcan (U. Mass Boston)
|Tensor Methods for Multi-Modal Genomics Data
Genomics datasets often involve multiple dimensions, incorporating factors such as genes, samples, and experimental conditions. Moreover, multi-omics integrates diverse omics data and explores molecular events occurring at distinct levels, encompassing DNA variations, epigenetic modifications, transcriptional activities, metabolite profiles, and clinical phenotypes. Such intricate data are naturally represented by tensors, and tensor methods have emerged as powerful tools in genomics analysis, uniquely equipped to unravel the complex, multi-dimensional nature of genomics data.
In this presentation, I will delve into tensor-based methods across various genomics applications, specifically focusing on tumor-microenvironment modeling. The talk will also touch on the limitations of tensor methods and highlight potential areas for future development, fostering a comprehensive understanding of their potential in advancing genomics research.
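The multi-dimensional representation described above can be sketched in plain numpy (all names, sizes, and the planted structure below are hypothetical, not from the talk): a genes × samples × conditions tensor with low-rank signal, probed via a mode-1 unfolding and an SVD, the basic building block of Tucker/HOSVD-style tensor methods.

```python
import numpy as np

# Illustrative sketch: a 3-way tensor indexed by genes x samples x conditions,
# with a synthetic rank-1 signal plus noise (all dimensions hypothetical).
rng = np.random.default_rng(0)
genes, samples, conditions = 50, 12, 4
g = rng.standard_normal((genes, 1))
s = rng.standard_normal((samples, 1))
c = rng.standard_normal((conditions, 1))
T = (np.einsum("gr,sr,cr->gsc", g, s, c)
     + 0.01 * rng.standard_normal((genes, samples, conditions)))

# Mode-1 unfolding: genes x (samples * conditions), then SVD.
T1 = T.reshape(genes, samples * conditions)
U, sv, Vt = np.linalg.svd(T1, full_matrices=False)
# A strongly dominant first singular value reveals the planted rank-1 factor.
print("top two singular values:", sv[0], sv[1])
```

Unfolding along the other modes and repeating the SVD yields sample and condition factors as well, which is the essence of higher-order SVD.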
|M Feb 19
|Presidents’ Day (University Holiday)
|M Feb 26
|Anna Seigal (Harvard)
|Identifiability of overcomplete independent component analysis
Independent component analysis (ICA) is a classical data analysis method to study mixtures of independent sources. An ICA model is said to be identifiable if the mixing can be recovered uniquely. Identifiability is known to hold if and only if at most one of the sources is Gaussian, provided the number of sources is at most the number of observations. In this talk, I will discuss our work to generalize the identifiability of ICA to the overcomplete setting, where the number of sources can exceed the number of observations. I will also describe how the results connect to tensor decomposition. Based on joint work with Ada Wang (https://arxiv.org/abs/2401.14709).
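As a hedged illustration of the classical, fully determined setting that the talk generalizes (not the overcomplete case, and not the paper's method): the sketch below mixes two independent non-Gaussian sources with a square matrix and recovers them, up to permutation and sign, with a minimal kurtosis-based FastICA in plain numpy. Every parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
S = rng.uniform(-1, 1, size=(2, n))       # two independent non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown square mixing matrix
X = A @ S                                 # observations (same number as sources)

# Whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA with a cubic nonlinearity.
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    W = (W @ Z) ** 3 @ Z.T / n - 3 * W    # fixed-point update, E[(w.z)^2] = 1
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                            # re-orthogonalize (decorrelation)
S_hat = W @ Z

# Recovered sources match the truth up to permutation and sign.
C = np.abs(np.corrcoef(S, S_hat)[:2, 2:])
print(C.round(2))
```

Recovery succeeds here precisely because the uniform sources are non-Gaussian; with two Gaussian sources any rotation of the mixture would fit equally well, which is the identifiability obstruction the abstract refers to.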
|M Mar 4
|Christian Kümmerle (University of North Carolina at Charlotte)
|M Mar 11
|Elisenda Grigsby (Boston College)
|M Mar 18
|M Mar 25
|David McCandlish (Cold Spring Harbor Lab)
|M Apr 1
|Ricardo Nochetto (U. of Maryland)
|M Apr 8
|Wei Zhu (UMass Amherst)
|F Apr 19
|Gaël Rigaud (U of Stuttgart)
|Joint Cormack Applied Mathematics Colloquium
|A data-driven approach enhanced by neural networks to address model inexactness and motion in imaging
The development of technologies leads to new applications, new challenges, and new issues in the field of imaging. This talk tackles two of the main challenges: model inexactness and motion.
Dynamic inverse problems have been studied extensively over the last decade, initially with the aim of reducing the artefacts observed in CT scans due to patient movement, and have since been extended to broader setups and applications. Motion generally leads to model inexactness. For instance, in computerized tomography (CT), the movement of the patient alters the integration curves and therefore, intrinsically, the model itself. Since the motion is in general unknown, the model is not exactly known either.
In other applications, however, the model inexactness can take more specific forms. A good example is Compton scattering imaging. Modelling the Compton scattering effect raises many challenges, such as the non-linearity of the forward model, multiple scattering, and high noise levels for moving targets. While the non-linearity is addressed by a necessary linear approximation of the first-order scattering with respect to the sought-for electron density, multiple-order scattering constitutes a substantial and unavoidable part of the spectral data that is difficult to handle due to highly complex forward models. Last but not least, the stochastic nature of the Compton effect may involve large measurement noise, in particular when the object under study is in motion, so time and motion must be taken into account.
To tackle these issues, we study in this talk two data-driven techniques: regularized sequential subspace optimization and a Bayesian method based on the generalized Golub-Kahan bidiagonalization. We then explore possibilities for mimicking and improving the stochastic approach with deep neural networks. The results are illustrated by simulations.
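The talk's actual methods (regularized sequential subspace optimization, generalized Golub-Kahan-based Bayesian inversion) do not fit in a few lines, but the underlying setting they address, a noisy linear model y = Ax + ε with an imperfectly known forward operator, can be sketched with plain Landweber iteration, where the iteration count acts as a regularization parameter. All sizes and names below are illustrative.

```python
import numpy as np

# Hedged sketch: iterative regularized reconstruction for y = A x + noise,
# with a random stand-in for the forward model (not a real CT/Compton operator).
rng = np.random.default_rng(2)
m, n = 80, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[10:20] = 1.0                       # piecewise-constant "object"
y = A @ x_true + 0.01 * rng.standard_normal(m)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # ensures convergence of the iteration
x = np.zeros(n)
for _ in range(1000):
    x = x + step * A.T @ (y - A @ x)      # Landweber (gradient) update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)
```

Stopping the iteration early trades data fidelity against noise amplification; the subspace and Bayesian methods in the talk refine this basic gradient scheme with richer search directions and uncertainty quantification.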
|M Apr 22
|Ferdia Sherry (U. of Cambridge)
|M Apr 29
|Hannah Wayment-Steele (Brandeis)