Calendar

Tuesday, April 7, 2026

Posted March 17, 2026
Last modified March 30, 2026

Algebra and Number Theory Seminar

2:00 pm – 3:00 pm Lockett 233 or Zoom

Shahriyar Roshan-Zamir, Tulane University
Interpolation in Weighted Projective Spaces

Over an algebraically closed field, the double point interpolation problem asks for the dimension of the vector space of degree d forms singular at a given set of points. After the problem had been open for 90 years, a series of papers by J. Alexander and A. Hirschowitz in 1992–1995 settled it in what is now called the Alexander-Hirschowitz theorem. In this talk, we primarily use commutative algebra to prove analogous statements in weighted projective space, a natural generalization of projective space. For example, we introduce an inductive procedure for weighted projective space, similar to one originally due to A. Terracini in 1915, to exhibit a weighted projective plane where the analogue of the Alexander-Hirschowitz theorem holds without exceptions, and we prove that our example is the only such plane. Furthermore, we adapt Terracini's lemma on secant varieties to give an interpolation bound for an infinite family of weighted projective planes. No prerequisites are needed for this talk beyond some elementary knowledge of algebra.
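
For orientation, the classical statement being generalized (a standard formulation of the Alexander-Hirschowitz theorem in ordinary projective space, not taken from the talk itself) can be written as follows, where p_1, ..., p_r are general points of P^n over an algebraically closed field k:

% Each double point imposes n+1 linear conditions on degree d forms
% (Euler's relation makes the vanishing of F itself redundant), so the
% naive expected dimension is the count below; the Alexander-Hirschowitz
% theorem says this count is correct except for an explicitly known
% finite list of exceptional triples (n, d, r).
\[
  \dim_k \{ F \in k[x_0, \dots, x_n]_d : \operatorname{mult}_{p_i} F \ge 2 \ \text{for all } i \}
  = \max\!\left( \binom{n+d}{n} - r(n+1),\ 0 \right)
\]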

Event contact: Gene Kopp

Wednesday, April 8, 2026

Posted March 27, 2026

Informal Analysis Seminar

12:30 pm – 1:30 pm Lockett 233

Yixing Miao, Louisiana State University
TBD

TBD


Posted January 15, 2026

Informal Geometry and Topology Seminar

3:30 pm – 4:30 pm Lockett Hall 233

Nilangshu Bhattacharyya, Louisiana State University
TBD

TBD

Thursday, April 9, 2026

Posted March 20, 2026

Applied Analysis Seminar

3:30 pm – 4:30 pm Louisiana Digital Media Center

Tan Bui-Thanh, The University of Texas at Austin, Professor and Endowed William J. Murray, Jr. Fellow in Engineering
Rigorous Model-Constrained Scientific Machine Learning for Digital Twins: A Computational Mathematics Perspective

Digital twins (DTs) are high-fidelity virtual representations of physical systems and processes. At their foundation lie mathematical and physical models that describe system behavior across multiple spatial and temporal scales. A central purpose of DTs is to enable "what-if" analyses through hypothetical simulations, supporting lifecycle monitoring, parameter calibration against observational data, and systematic uncertainty quantification (UQ). For DTs to serve as a reliable basis for real-time forecasting, optimization, and decision-making, they must reconcile two traditionally competing requirements: mathematical rigor and physical fidelity on the one hand, and computational efficiency at scale on the other. This has motivated a new generation of approaches that combine classical tools from numerical analysis, partial differential equations, inverse problems, and optimization with the expressive power of Scientific Machine Learning (SciML). In this talk, I will outline a principled pathway from traditional computational mathematics to rigorously grounded SciML. I will then present recent Scientific Deep Learning (SciDL) methods for forward modeling, inverse and calibration problems, and uncertainty quantification, emphasizing mathematical structure, stability, and generalization. Both theoretical results and numerical demonstrations will be shown for representative problems governed by the transport, heat, Burgers, Euler (including transonic and hypersonic regimes), and Navier-Stokes equations.
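
As a deliberately simple illustration of the general idea of constraining a learned model by PDE physics (a generic physics-informed sketch, not the speaker's SciDL methodology), the following PyTorch fragment penalizes the residual of the 1D heat equation u_t = u_xx, one of the model problems the abstract mentions; the network, sampling, and data are placeholder assumptions.

# Illustrative sketch only, not the speaker's method: a generic
# physics-informed loss for the 1D heat equation u_t = u_xx.
# The network, sampling, and data below are placeholder assumptions.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt):
    # Residual u_t - u_xx at collocation points xt = (x, t), via autograd.
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    return u_t - u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt_obs = torch.rand(64, 2)       # hypothetical observation points (x, t)
u_obs = torch.zeros(64, 1)       # hypothetical observed values
xt_col = torch.rand(256, 2)      # collocation points where physics is enforced
for step in range(1000):
    opt.zero_grad()
    data_fit = ((net(xt_obs) - u_obs) ** 2).mean()   # calibration to data
    physics = (pde_residual(xt_col) ** 2).mean()     # model constraint
    (data_fit + physics).backward()
    opt.step()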

Event contact: Robert Lipton

Friday, April 10, 2026

Posted February 5, 2026
Last modified February 6, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Wonjun Lee, Ohio State University
Linear Separability in Contrastive Learning via Neural Training Dynamics

The SimCLR method for contrastive learning of invariant visual representations has become extensively used in supervised, semi-supervised, and unsupervised settings, due to its ability to uncover patterns and structures in image data that are not directly present in the pixel representations. However, this success is still not well understood; neither the loss function nor invariance alone explains it. In this talk, I present a mathematical analysis that clarifies how the geometry of the learned latent distribution arises from SimCLR. Despite the nonconvex SimCLR loss and the presence of many undesirable local minimizers, I show that the training dynamics driven by gradient flow tend toward favorable representations. In particular, early training induces clustering in feature space. Under a structural assumption on the neural network, our main theorem proves that the learned features become linearly separable with respect to the ground-truth labels. To support the theoretical insights, I present numerical results that align with the theoretical predictions.
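
For readers unfamiliar with SimCLR, the following is a minimal sketch of the NT-Xent contrastive loss it trains with, written in PyTorch for concreteness; the temperature value and batch construction are illustrative assumptions, and nothing below reproduces the talk's analysis.

# Minimal sketch of the SimCLR (NT-Xent) contrastive loss, for concreteness.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, d) embeddings of two augmented views of the same N images.
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N unit vectors
    sim = z @ z.T / temperature                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # never match a view with itself
    # The positive for view i is view i + n (and vice versa); every other
    # embedding in the batch acts as a negative.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Example: random embeddings for a batch of 8 images.
loss = nt_xent_loss(torch.randn(8, 64), torch.randn(8, 64))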


Posted March 27, 2026

Geometry and Topology Seminar

1:30 pm Lockett 233

Chris Manon, University of Kentucky
TBA