Weekly Seminars
Launched in Spring 2025, the AMSC program’s weekly seminar series is an initiative designed to foster collaboration between faculty and students, showcase research, and encourage student recruitment. Held every Monday at 4:15 PM in MATH 3206 (or virtually via Zoom), the series offers both synchronous and asynchronous presentation options. For past seminars, please view our archive.
Seminar This Week
The first seminar of the Spring 2026 semester will take place in February, with weekly sessions continuing throughout the term. Recordings of the seminars can be accessed by the UMD community at Spring 2026 AMSC Seminar Recordings. Stay tuned for updates!
Spring Seminar Schedule
- February 2nd: Lin Cheng (ME)
- February 9th: Paul Patrone (NIST)
- February 16th: No seminar
- February 23rd: Lizhen Lin (MATH/STAT)
- March 2nd: Alexander Xu (BioEng)
- March 9th: Dionisios Margetis (MATH/IPST)
- March 16th: No seminar (UMD break)
- March 23rd: Nan Xu (BioEng)
- March 30th: Vadim Karatayev (Bio)
- April 6th: Ming C Lin (CS)
- April 13th: Jonathan Poterjoy (AMSC)
- April 20th: Brian Hunt (MATH/IPST)
- May 4th: John Baras (ECE/ISR)
- May 11th: No seminar
Spring 2026 Seminar Details
Lin Cheng (ME)
Date: Monday, February 2, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Lin Cheng (ME)
Abstract: Advanced manufacturing enables rapid fabrication of complex materials and structures, allowing exploration of large design spaces in materials composition, microstructure, and processing conditions. However, the high dimensionality of these spaces, together with manufacturing uncertainties and defects, poses major challenges for traditional computational science and engineering methods. Addressing these challenges requires new approaches that integrate multi-modal data with physics-based modeling and artificial intelligence to accelerate scientific discovery and materials innovation. This seminar introduces Scientific Artificial Intelligence (Sci-AI) approaches for discovering materials constitutive laws, modeling complex materials behavior, and designing new materials with targeted properties. The central theme is the tight coupling of physical principles, data-driven learning, and interpretability, enabling AI models to move beyond black-box prediction toward reliable scientific inference. The talk is organized around three interconnected themes. First, a hierarchical symbolic AI framework is presented for the automated discovery of physically interpretable constitutive laws directly from data, allowing the model to identify governing mechanisms, enforce dimensional consistency and physical constraints, and balance accuracy with model complexity. Second, a physics-informed, image-based encoder–decoder architecture is introduced to accelerate materials behavior modeling by learning compact latent representations of complex microstructural and field data, while seamlessly fusing high-fidelity simulations, in-situ imaging, and external sensing information across multiple length and time scales. Third, a generative AI framework is developed for inverse materials design, enabling the synthesis of physically plausible microstructures conditioned on nonlinear material properties, processing constraints, and performance targets, thereby closing the loop from data and modeling to design and manufacturing. Overall, this seminar highlights recent advances in Sci-AI for computational science and engineering and demonstrates how physics-guided machine learning can bridge data and models to enable advanced materials modeling and design.
Paul Patrone (NIST)
Date: Monday, February 9, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Paul Patrone (NIST)
Title: A Unified Theory of Machine Learning through Probabilistic Consistency
Abstract: With the growing adoption of machine-learning (ML) tools, there is an ever-increasing need to develop rigorous methods for assessing the quality of their predictions and outputs. Despite this, fundamental questions about the connection between ML and probability remain unresolved. For example, do arbitrary ML models always have probabilistic interpretations? What does it mean for an ML model to be consistent with probability? And how could one extract probabilities from “hard” classifiers such as support vector machines?
In this talk, I will address these questions by deriving a level-set theory of classification that establishes an equivalence between certain types of self-consistent ML models and class-conditional probability distributions. I begin by considering the properties of binary Bayes classifiers, recognizing that the boundary sets separating classes can be re-interpreted as level-sets of density ratios, which quantify the relative probability that a sample point belongs to a given class. I then demonstrate how these level sets can be ordered in terms of an affine parameter related to the prevalence (fraction of elements in a class). This analysis subsequently implies that all Bayes classifiers have monotonicity and self-consistency properties, the latter being equivalent to the law of total probability. By reversing the analysis, I then discuss how for any classifier, the monotonicity and self-consistency properties (along with a normalization condition) imply the existence of probability distributions for which the classifier is in fact Bayes optimal. This allows one to determine when classifiers can be equipped with probabilistic interpretations, and it yields the density ratios via the level-set theory. Throughout, I illustrate these ideas in the context of real-world examples from diagnostics and image analysis.
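As a toy illustration of the density-ratio level-set picture described above (the Gaussian class-conditional densities, function names, and threshold convention below are illustrative assumptions, not the speaker's construction), one can check that thresholding the density ratio at an affine function of the prevalence reproduces the binary Bayes rule, and that varying the prevalence slides the decision boundary along the ordered level sets:

```python
import math

# Hedged sketch: two 1-D Gaussian class-conditional densities. The Bayes
# classifier thresholds the density ratio r(x) = p1(x) / p0(x); sweeping the
# prevalence-dependent threshold traces out the ordered level sets.

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def density_ratio(x, mu0=0.0, mu1=2.0, sigma=1.0):
    """Relative probability that x belongs to class 1 versus class 0."""
    return gauss_pdf(x, mu1, sigma) / gauss_pdf(x, mu0, sigma)

def bayes_classify(x, prevalence=0.5):
    # Assign class 1 when prevalence * p1(x) > (1 - prevalence) * p0(x),
    # i.e. when the density ratio exceeds the affine threshold below.
    threshold = (1 - prevalence) / prevalence
    return 1 if density_ratio(x) > threshold else 0

# Monotonicity: for these Gaussians the log density ratio is 2x - 2, an
# increasing function, so each level set is a single boundary point that
# shifts as the prevalence changes.
```

Raising the prevalence of class 1 lowers the threshold, so points like x = 0.5 flip from class 0 to class 1, which is exactly the level-set ordering by the affine prevalence parameter mentioned in the abstract.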
Lizhen Lin (MATH/STAT)
Date: Monday, February 23rd, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Lizhen Lin (MATH/STAT)
Title: Statistical Foundations of Deep Learning
Abstract: Deep learning has achieved groundbreaking performance in various application domains. Alongside its practical success, there has been a growing effort to explore the theoretical foundations of deep learning models. This talk will focus on the statistical foundations underlying deep neural network (DNN) models. From a statistical perspective, deep learning models can largely be viewed as a nonparametric function or distribution estimation problem, where the underlying function or distribution is parameterized by a DNN. In supervised settings, deep neural networks, including feedforward DNNs, are used for regression and classification tasks. For distribution estimation, deep generative models, where the generators or scores are modeled using DNNs, are the state-of-the-art deep learning models. Statistical theory provides insights into understanding why deep neural networks often outperform classical nonparametric models, and why and how these models perform exceptionally well in practice. Key insights include their ability to adapt to various intrinsic structures of high-dimensional data, such as a lower-dimensional manifold structure, thereby circumventing the curse of dimensionality.
Alexander Xu (BioEng)
Date: Monday, March 2nd, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Alexander Xu (BioEng)
Title: Topological Combinatorial Constructs (?) to Spatial Multicellular Tumor Architecture
Abstract: Cancer causes the cells of the body to shatter their well-defined roles, proliferate, and invade other tissues, leading to premature death. As a lapsed mathematician turned bioengineer, I lead a research group that studies cancer to propose novel therapies based on the spatial structure of tumor tissue. While there is no such thing as a "topological combinatorial construct" as far as I know, there is great significance in how different cells in our body are positioned in space and relative to each other. Our modern understanding of cancer proposes that a complex network of biological signals, partitioned into various cell types, is the fabric that frays and eventually dissolves in cancer. The functions woven into this fabric include immune cell control of diseased cells, secreted signals that attract and repel cells, and even a physical meshwork of collagenous and fibrotic material that impedes tumor and immune migration. Currently, my lab uses spatial molecular tools that can measure dozens of proteins and thousands of RNA biomolecules directly within intact tissue, allowing us to reconstruct the physical cellular architecture of tumors. We can use this information to characterize tumor tissue in depth and identify structures with predictive significance, based on the spatial cellular organization. However, the tools that we use to describe tumor structures are still simplistic, and our vocabulary is still limited when describing interacting fields of objects with hundreds to thousands of signals and properties. My goals for this seminar are to first present the structure and language of spatial biology data and its current applications, and then to recruit your minds to capture the underlying structures, patterns, and projections that will allow us to translate spatial data into actionable hypotheses to improve the treatment of cancer.
Dionisios Margetis (MATH/IPST)
Date: Monday, March 9th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Dionisios Margetis (MATH/IPST)
Title: On the quantum mechanics of charge excitations in confined geometries: Binding and dispersion near a plane
Abstract: In recent years, there have been intensive efforts to control quantum systems. In particular, electron systems in atomically thin materials, surfaces and interfaces are technologically appealing, with numerous applications in optoelectronics. Theoretical and experimental studies in this direction have focused on semiconductor heterojunctions and semiconductor-insulator interfaces, as well as monolayer graphene and various related heterostructures. Despite the tremendous progress made in these contexts, some fundamental questions remain unresolved. In this talk, I will formally discuss the dispersion of waves arising from charge density oscillations near a fixed plane in three spatial dimensions (3D) at zero temperature from Partial-Differential-Equation (PDE) and linear-spectral-analysis perspectives. The goal is to describe the interplay of microscopic scales that include a binding length in the emergence of the surface plasmon (SP), a collective low-energy charge excitation in the vicinity of the plane. The model is a time-dependent one-particle Hartree-type PDE in 3D that aims to provide a mean-field description of a confined interacting many-body quantum system. The linearization of this equation around the ground state yields a homogeneous integral equation for the wave function in the coordinate of the vertical direction. The existence of nontrivial solutions to this equation implies an SP dispersion relation, which non-linearly connects the temporal frequency and the wave number of charge oscillations near the plane. This relation is obtained exactly in closed form by a transform technique. In the strong binding limit, the classical SP dispersion law is recovered from the above result, in agreement with a hydrodynamic model based on a projected Euler-Poisson system.
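For context on the classical limit mentioned at the end of the abstract: the standard dispersion law for charge oscillations strictly confined to a plane is the two-dimensional plasmon result (the abstract does not state the exact form recovered in the talk, so this is background, not the speaker's result):

```latex
% Classical 2D plasmon dispersion for an electron sheet of areal density n_s,
% in Gaussian units; note the square-root behavior \omega \propto \sqrt{q}
% as q -> 0, in contrast to the gapped bulk (3D) plasmon frequency.
\omega^2(q) = \frac{2\pi n_s e^2}{m}\, q , \qquad q \to 0 .
```

The gapless square-root dispersion is the signature that distinguishes a surface-bound collective mode from its bulk counterpart.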
Nan Xu (BioEng)
Date: Monday, March 23rd, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Nan Xu (BioEng)
Title: Mathematical Modeling and Inference in Biomedical Imaging: Disentangling System Dynamics Across Brain Function and Structural Virology
Abstract: My research develops mathematically grounded methods for inference in high-dimensional biomedical imaging data, spanning functional brain dynamics (functional neuroimaging) and viral heterogeneity (cryo-EM). In neuroimaging, I model time-varying interactions among brain regions, moving beyond static and correlational connectivity to infer directed, spatiotemporally evolving network organization. These approaches support mechanistic interpretation and yield innovative biomarkers relevant to conditions such as post-concussive vestibular syndrome (PCVD). In structural virology, I study 3D reconstruction of virus particles from cryo-EM images. I develop symmetry-aware methods that preserve particle-specific asymmetry while enforcing global symmetry constraints across the population, improving reconstruction of virus(-like) particles such as bacteriophage HK97. The unifying theme is to exploit dynamics, constraints, and invariances for reliable inference under noise and heterogeneity.
Vadim Karatayev (Bio)
Date: Monday, March 30th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Vadim Karatayev (Bio)
Title: Nonlinear and network dynamics to understand and save ecosystems
Abstract: As systems undergo bifurcations, transient dynamics can provide deep insights into the clockwork of nature and illuminate novel control pathways. I will show how transients involving saddle-node bifurcations reveal the mechanisms of climate change impacts on giant kelp forests, windows of opportunity in restoring ecosystems, and effective solutions to global warming. I will also highlight my lab's new directions to understand network dynamics and point out open mathematical problems associated with each question.
Ming C Lin (CS)
Date: Monday, April 6th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Ming C Lin (CS)
Title: Dynamics-Aware Learning: from Simulated Reality to Physical World
Abstract: In this talk, we present an overview of some of our recent works on the differentiable programming paradigm for learning, control, and inverse modeling. These include using dynamics-inspired, learning-based algorithms for detailed garment recovery from video and 3D human body reconstruction from single- and multi-view images, to differentiable physics for robotics, quantum computing and VR applications. Our approaches adopt statistical, geometric, and physical priors and a combination of parameter estimation, shape recovery, physics-based simulation, neural network models, and differentiable physics, with applications to virtual try-on and robotics. We conclude by discussing possible future directions and open challenges.
Jonathan Poterjoy (AMSC)
Date: Monday, April 13th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Jonathan Poterjoy (AMSC)
Title: Kernel and Generative Strategies for Handling Complex Observation Processes in Geophysical Data Assimilation
Abstract: Data assimilation in high-dimensional systems, such as numerical weather prediction, presents a formidable computational challenge. Operational centers routinely infer the probabilistic evolution of state vectors comprising more than a billion variables using physics-based models, noisy observations, and classic Bayesian filtering techniques. While many of these approaches rely on heavy approximations, recent advances make it feasible to move beyond rigid Gaussian assumptions for the prior. These non-Gaussian approaches are becoming increasingly attractive, as inexpensive surrogate models prove more effective at rapidly generating large Monte Carlo estimates of this density. Nevertheless, traditional likelihood estimation still relies on a well-defined measurement operator, or forward model, to link model states to observations, and considers uncertainty only in the form of an observation error covariance. In reality, this measurement process can be highly nonlinear, rely on incomplete physics, or remain fundamentally unknown.
To address this challenge, we present a suite of operator-free strategies that directly estimate likelihood functions from training data. These methods range from leveraging kernel mean embeddings to dynamically learn conditional distributions within a Reproducing Kernel Hilbert Space (RKHS) to employing probabilistic generative models such as conditional variational autoencoders (cVAEs). To explore the scalability of these techniques, we integrate them with contemporary filtering algorithms and assess their performance in a low-dimensional application that serves as an analog for weather forecasting and climate reconstruction. By weighing the trade-offs in accuracy and computational cost, this work describes a path toward implementation in next-generation Earth System models.
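A minimal sketch of the operator-free idea, assuming nothing about the speaker's actual estimators: rather than evaluating a known forward model, one can estimate a likelihood p(y | x) directly from simulated (state, observation) pairs with a simple kernel-weighted density estimate (the toy measurement process, kernel choice, and bandwidths below are illustrative assumptions, far simpler than the RKHS and cVAE machinery in the talk):

```python
import math
import random

def rbf(u, bandwidth):
    """Unnormalized Gaussian kernel weight."""
    return math.exp(-0.5 * (u / bandwidth) ** 2)

def kernel_likelihood(y, x, pairs, bw_x=0.3, bw_y=0.3):
    """Estimate p(y | x) from training pairs (x_i, y_i) by weighting each
    observation y_i according to how close its state x_i lies to x."""
    weights = [rbf(x - xi, bw_x) for xi, _ in pairs]
    total = sum(weights)
    # Weighted kernel density of y, normalized so it integrates to one in y.
    dens = sum(w * rbf(y - yi, bw_y) / (bw_y * math.sqrt(2 * math.pi))
               for w, (_, yi) in zip(weights, pairs))
    return dens / total

# Training pairs from an "unknown" measurement process y = x**2 + noise;
# the estimator never sees the forward model itself, only these samples.
random.seed(0)
pairs = [(x, x * x + random.gauss(0, 0.1))
         for x in [i / 100 for i in range(-200, 201)]]

# The estimated likelihood peaks near y = x**2, recovering the hidden
# measurement relationship purely from data.
```

In a filtering context, such a data-driven likelihood would replace the term that normally requires an explicit measurement operator when weighting ensemble members against incoming observations.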
Brian Hunt (MATH/IPST)
Date: Monday, April 20th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: Brian Hunt (MATH/IPST)
Title: Using Machine Learning to Improve Modeling of Complex Dynamical Systems
Abstract: Recent advances in machine learning have been successful at forecasting complex systems, such as the weather, with purely data-driven models. Here I will describe our group's research into hybrid modeling, combining machine learning with a physics-based model. Our goal is to use data to improve the model's skill both at short-term forecasting and at long-term "climate" simulation. I will include some results from applying the hybrid approach to model the earth's weather and climate.
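The hybrid idea can be sketched with a deliberately tiny stand-in (the logistic-map "truth", the biased physics model, and the linear correction below are illustrative assumptions, not the group's actual system): an imperfect physics model is augmented by a correction fitted to data, and the combined model forecasts better than physics alone.

```python
# Toy hybrid modeling: true dynamics vs. a physics model with a biased
# parameter, plus a data-driven linear correction fitted by least squares.

def truth_step(x):
    return 3.7 * x * (1 - x)        # "true" dynamics (logistic map)

def physics_step(x):
    return 3.5 * x * (1 - x)        # imperfect physics model (wrong parameter)

# Generate training data: pairs of (physics forecast, actual next state).
xs, x = [], 0.4
for _ in range(500):
    xs.append(x)
    x = truth_step(x)
forecasts = [physics_step(v) for v in xs[:-1]]
targets = xs[1:]

# Fit the correction target ~= a * forecast + b by ordinary least squares.
n = len(forecasts)
mf = sum(forecasts) / n
mt = sum(targets) / n
a = sum((f - mf) * (t - mt) for f, t in zip(forecasts, targets)) / \
    sum((f - mf) ** 2 for f in forecasts)
b = mt - a * mf

def hybrid_step(x):
    """Physics forecast post-processed by the learned correction."""
    return a * physics_step(x) + b
```

Here the model error happens to be a pure parameter bias, so a linear correction removes it almost exactly; in realistic weather and climate models the correction must be far richer (e.g. a learned state-dependent term), but the division of labor between the physics core and the data-driven component is the same.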
John Baras (ECE/ISR)
Date: Monday, May 4th, 2026
Time: 4:15 PM
Place: MATH 3206 (Colloquium Room)
Speaker: John Baras (ECE/ISR)
Title: Robust Machine Learning, Reinforcement Learning and Autonomy: A Unifying Theory via Performance and Risk Tradeoff
Abstract: Robustness is a fundamental concept in systems science and engineering. It is a critical consideration in all inference and decision-making problems. It has recently surfaced again in the context of machine learning (ML), reinforcement learning (RL) and artificial intelligence (AI). We describe a novel and unifying theory of robustness for ML/RL/AI emanating from our much earlier fundamental results on robust output feedback control for general systems (including nonlinear, HMM and set-valued). We briefly summarize this theory and the universal solution it provides, consisting of two coupled HJB equations. These earlier results rigorously established the equivalence of three seemingly unrelated problems: the robust output feedback control problem, a partially observed differential game, and a partially observed risk-sensitive stochastic control problem. We first show that the “four block” view of the above results leads naturally to a similar formulation of the robust ML problem, and to a rigorous path to analyze robustness and attack resiliency in ML. Then we describe a recent risk-sensitive approach, using an exponential criterion in deep learning, that explains the convergence of stochastic gradients despite over-parametrization. Finally, we describe our most recent results on robust and risk-sensitive RL for control, using exponential rewards, that emerge from our earlier theory, with the important new extension that the models are now unknown. We show how all forms of regularized RL can be derived from our theory, including KL and entropy regularization, connections to probabilistic graphical models, and distributional robustness. The deeper reason for this unification emerges: it is the fundamental tradeoff between performance and risk measures in decision making, via rigorous duality. We close with open problems and future research directions.