Fall 2025

Time: Weekly on Thursdays, 3-4pm

Contact: Kathryn Beck
| Date | Speaker | Topic |
|---|---|---|
| September 11 | Organizational Meeting | |
| September 18 | Shuang Guan, Tufts University | Phaseless Sampling on \(\rho\)-th Root Lattices |
| September 25 | No speaker | |
| October 2 | Kecheng Li, Tufts University | Unique Equilibrium States for Non-Uniformly Expanding Maps with Small Potentials |
| October 9 | Lior Alon, MIT | Fourier Quasicrystals and Lee–Yang Varieties. For the next seventy years it was believed that only periodic atomic arrangements (true lattices) could exhibit long-range order, meaning a pure-point diffraction measure. This view was overturned in 1984 with Dan Shechtman’s discovery of quasicrystals—non-periodic solids displaying sharp diffraction peaks—a breakthrough recognized by the 2011 Nobel Prize in Chemistry. Mathematicians such as Bombieri, Taylor, and Meyer developed the theory explaining these aperiodic structures. Unlike periodic lattices, whose diffraction is supported exactly on a lattice, a quasicrystal’s diffraction lives on a dense set of peaks that becomes effectively discrete only after an intensity cut-off. Fourier Quasicrystals (FQs) sharpen this concept: they are sets whose diffraction (Fourier transform of the counting measure) is pure point with genuinely discrete support. Trivial examples include lattices and finite unions or translates of lattices. For two decades it was widely believed that no non-trivial FQs exist, a belief encapsulated in Lagarias’s conjecture and proved by Lev and Olevskii in 2016. This changed in 2020 when Kurasov and Sarnak constructed the first truly non-periodic one-dimensional FQ and introduced a general method based on Lee–Yang polynomials. In this talk I will present our result showing that all one-dimensional FQs arise from the Kurasov–Sarnak construction. I will then describe our recent work extending the theory to all dimensions via a new class of high-codimension algebraic varieties that we call Lee–Yang varieties. This talk is based on joint work with Alex Cohen, Cynthia Vinzant, Mario Kummer, and Pavel Kurasov, and is intended for a broad mathematical audience—no prior expertise required. |
| October 16 | Bernard Akwei, University of Connecticut | Laplacian Eigenmaps on Manifolds and Fractals |
| October 23 | Fulton Gonzalez, Tufts University | The Modified Wave Equation on the Sphere. The second part of the talk concerns the generalized Euler-Poisson-Darboux equation on \(S^n\), given by \[ \Delta u=\left(\frac{\partial^2}{\partial t^2}+(n-1+2\alpha)\,(\cot t)\,\frac{\partial}{\partial t}+\alpha(n-1+\alpha)\right)u, \] where \(\alpha\) is an arbitrary complex parameter. If \(\alpha=0\) or \(\alpha=1\), this equation characterizes the range of the mean value operator over balls and spheres, respectively, of radius \(t\). We will discuss the solution of this equation in terms of a fractional convolution operator on \(S^n\). |
| October 30 | Dongwei Chen, Colorado State University | The Integration Trick: from Signal Processing to Machine Learning. In the first part, we generalize the classical least-squares problem to a probabilistic function approximation problem over a reproducing kernel Hilbert space (RKHS). This formulation extends the kernel method widely used in machine learning and statistics. We establish the existence and uniqueness of the optimal solution under mild assumptions and derive a probabilistic representer theorem, showing that the optimizer admits an integral representation. In particular, when the measure is finitely supported or the Hilbert space is finite-dimensional, the problem reduces to a measure quantization problem, revealing a natural connection between probabilistic approximation and sampling theory. In the second part, we turn to finite frames and their generalization to probabilistic frames, obtained by interpreting finite frames as discrete probability measures. This probabilistic viewpoint enables the use of tools from optimal transport and the Wasserstein distance to study frame properties and related minimization problems. We will introduce the fundamental concepts and discuss frame perturbations and energy minimization problems. If time permits, we will conclude with some applications and open questions arising from this integration framework. |
| November 6 | Debarghya Mukherjee, Boston University | Statistical Theory of Deep Neural Networks for Learning Structured Problems. In the modern landscape of machine learning and artificial intelligence, deep neural networks (DNNs) have demonstrated remarkable success across various domains. However, despite their widespread empirical effectiveness, our theoretical understanding of DNNs remains in its early stages. In this talk, I will present results on the estimation of non-parametric functions using deep neural networks, emphasizing how these models adapt to underlying low-dimensional compositional structures in data. Specifically, we show that when the true data-generating function possesses such a structure, DNNs can exploit it to achieve faster convergence rates. Additionally, I will discuss the minimax optimality of DNNs, with respect to both the sample size and the covariate dimensionality. Finally, I will highlight our recent findings on the analysis of Physics-Informed Neural Networks (PINNs), providing theoretical justification for their effectiveness in learning physical processes. These results contribute to a deeper understanding of why and when DNNs excel in complex function estimation tasks. |
| November 13 | Chandler Smith, Tufts University | TBA |
| November 20 | Kathryn Beck, Tufts University | TBA |
| November 27 | No speaker | Enjoy your Thanksgiving break! |
| December 4 | Kate Wall, Tufts University | TBA |
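
As background for the October 30 abstract, the classical (deterministic, finitely supported) special case of the RKHS least-squares problem is ordinary kernel ridge regression, where the representer theorem reduces the infinite-dimensional problem to a finite linear system. The sketch below is standard textbook material, not the speaker's probabilistic generalization; the Gaussian kernel, the regularization parameter `lam`, and the synthetic sine data are illustrative choices.

```python
import numpy as np

# Representer theorem, classical case: the minimizer of
#   min_f  sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2
# over an RKHS H with kernel k has the finite form
#   f(x) = sum_i alpha_i * k(x_i, x),
# where alpha solves the linear system (K + lam*I) alpha = y.

def gaussian_kernel(X, Z, sigma=0.5):
    # k(x, z) = exp(-|x - z|^2 / (2 sigma^2)) for 1-D inputs
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma**2))

def fit(X, y, lam=1e-3, sigma=0.5):
    # Solve (K + lam*I) alpha = y for the expansion coefficients.
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, sigma=0.5):
    # Evaluate f(x) = sum_i alpha_i k(x_i, x) at new points.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Noisy samples of a smooth function on [0, 1].
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(40)

alpha = fit(X, y)
yhat = predict(X, alpha, X)
print(np.max(np.abs(yhat - y)))  # small training residual
```

The "integration trick" of the abstract replaces the finite sum over samples by an integral against a general probability measure; the finite system above is exactly the measure-quantization endpoint the abstract mentions.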