
Colloquium

  • STAT 5924
  • Thursdays
  • 3:30 pm to 4:30 pm
  • 1860 Litton-Reaves Hall

Colloquium Schedule Fall 2025

Ricky Rambharat

Ricky Rambharat (Sufficient Statistics LLC)

Bio: Ricky Rambharat is an applied statistician who spent the past 18 years at the Office of the Comptroller of the Currency (OCC), the national bank regulator for the U.S. He earned his PhD in Statistics from Carnegie Mellon University in 2005 and, following a two-year stint at Duke University, joined the OCC in 2007 at the onset of the Global Financial Crisis. He thus gained immediate exposure to challenging empirical issues confronting the nation's largest banks, issues that called on his statistical expertise to establish quantitative guardrails in support of the OCC's "Safety & Soundness" mission. Ricky's tenure at the OCC exposed him to banking supervisory matters in market risk, compliance risk, sampling, and regulatory policy analysis. He has presented at reputable conference venues, published in leading journals, and collaborated with both U.S. and international regulators and academics. The next phase of Ricky's career will take him to the defense industry.

Presentation Title: Volatility Imparities in S&P 100 Index Options

Abstract: The Standard & Poor's 100 Index (S&P 100 Index) trades both American- and European-style options, which is atypical, as most underlying assets trade options with only one of these exercise styles. The options on the S&P 100 Index can therefore be empirically assessed for adherence to prevailing theoretical results. One particular result, due to R. Myneni (1992, AOAP), establishes that the price of an American put option can be decomposed into its early-exercise premium (EEP) and the price of the corresponding European put option. This result suggests that option-implied measures of volatility should adhere to the same decomposition. The present study investigates the extent of parity in the option-implied volatility signals between American and equivalently structured (same strike and tenor) European put options on the S&P 100 Index. Leveraging statistical process control, we document the extent of agreement in the implied volatilities of S&P 100 American and European put options over a time frame that spans before, during, and after the Global Financial Crisis (GFC, approximately 2007-2009). We further augment our study to investigate the statistical agreement between the market price of volatility risk of American and equally specified European put options on the S&P 100 Index. Our results indicate that parity between these volatility signals generally holds, adhering to theory, but notable statistical departures are evident, particularly during periods of extreme financial distress, potentially affording profitable arbitrage opportunities.
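
For reference, the Myneni decomposition can be written (in our notation, which may differ from the speaker's) as

    \[
    P^{A}(S_t, t) \;=\; P^{E}(S_t, t) + e(S_t, t), \qquad e(S_t, t) \ge 0,
    \]

where P^A and P^E are the prices of American and European puts with the same strike and tenor, and e is the early-exercise premium (EEP). Parity of the volatility signals then amounts to the implied volatility recovered from P^A, net of the EEP, agreeing with that recovered from the matched European put.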

Huanmei Wu

Bio: Dr. Wu is a Professor and Chair of the Department of Health Services Administration and Policy, as well as the Assistant Dean of Global Engagement, at Temple University College of Public Health. She is a multidisciplinary researcher who applies data management, AI/ML, NLP/LLM, and digital twins across life science, medicine, public health, and social work, with applications including cancer radiotherapy, diabetes, cardiovascular disease, infectious diseases, aging, Alzheimer's disease, and other neurodegenerative conditions. She collaborates with academia, community health centers, research institutes, industrial partners, and local communities. Her research has secured funding from agencies including NSF, NIH, USAID, PCORI, JDRF, and RWJF.

Website: https://cph.temple.edu/directory/huanmei-wu-tuo12759

Presentation Title: From Concept to Practice: Real-World Applications of Digital Twins for Health

Abstract:  Digital Twins for Health (DT4H) are virtual representations of individuals or health systems that are continuously updated with real-world data. Once largely theoretical, DT4H technologies are now moving into practice and transforming fields including medicine, public health, dentistry, and social work. This presentation will introduce the concept of DT4H and explain how digital twins are constructed through the integration of data, computational algorithms, AI/ML models, and feedback loops. Real-world examples will be highlighted, including applications in precision medicine, chronic disease management, drug development, and health resource allocation. Particular attention will be given to the essential role of data science and statistics in ensuring data quality, developing robust models, and validating outcomes. The talk will also examine challenges, opportunities, and ethical considerations in the adoption of digital twins. By the end, the audience will gain a clear understanding of how DT4H is being applied today and the pathways it opens for advancing patient care and public health.
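
As a toy illustration of the feedback loop described above (our sketch, not a method from the talk; the class, parameters, and numbers are hypothetical), a digital twin can be viewed as a model state that is simulated forward and repeatedly corrected by streaming real-world measurements:

    class DigitalTwin:
        """Minimal digital twin: a virtual state simulated forward by a model
        and corrected by real-world observations (the feedback loop)."""
        def __init__(self, state, gain=0.3):
            self.state = state   # virtual representation (e.g., a patient biomarker)
            self.gain = gain     # how strongly observations correct the model

        def step(self, model_update, observation=None):
            self.state += model_update                       # model simulation step
            if observation is not None:                      # assimilate new data
                self.state += self.gain * (observation - self.state)
            return self.state

    twin = DigitalTwin(state=7.0)                # hypothetical baseline reading
    for obs in [7.1, None, 6.8, 6.9]:            # None = no measurement that cycle
        twin.step(model_update=0.05, observation=obs)

Real DT4H systems replace this scalar update with the data integration, AI/ML models, and validation loops described in the talk, but the continuous cycle of simulate, observe, and correct is the defining structure.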

Join us for snacks, drinks, and conversation as we get to know each other better. We've reserved the bar area, so we'll be comfortably inside if it's too warm or wet outside.

We’ll go around and introduce ourselves. Graduate students and faculty, please be ready to share:

  • Your 5-year goal
  • Your current research interests
  • A hobby you enjoy

The Maroon Door

418 N Main St, Blacksburg, VA 24060

Lily Wang

Professor Lily Wang, George Mason University

Bio: Lily Wang is a professor of Statistics at George Mason University. She received her PhD in Statistics from Michigan State University in 2007. Prior to joining Mason in 2021, she was on the faculty of Iowa State University (2014-2021) and the University of Georgia (2007-2014).

Wang is highly regarded internationally for her work on non/semi-parametric regression methods. She has broad interests in statistical learning for data objects with complex features, and in methodologies for functional data, spatiotemporal data, imaging data, and survey sampling. Working at the interface of statistics, mathematics, and computer science, she is also interested in developing cutting-edge statistical methods for data science and big data analytics. The methods she has developed find wide application in economics, engineering, neuroimaging, epidemiology, environmental studies, and biomedical science.

She is a fellow of both the Institute of Mathematical Statistics (2020) and the American Statistical Association (2021) and an Elected Member of the International Statistical Institute (2008). She is the recipient of multiple NSF awards, an SEC Research Fellowship (2019-2020), an ASA/NSF/BLS Senior Research Fellowship (2020-2021), the Mid-Career Achievement in Research Award (2021) and the COVID-19 Exceptional Effort Research Impact Award (2021) from Iowa State University, and the M. G. Michael Research Award (2012) from the University of Georgia.

Wang serves on the editorial boards of the Journal of the Royal Statistical Society, Series B; the Journal of Nonparametric Statistics; and Statistical Analysis and Data Mining.

Website: https://www.gmu.edu/profiles/lwang41

Presentation Title: From Synthesis to Trust: Advanced Statistical Methods for Trustworthy Generative AI in Biomedical Imaging Studies


Abstract: Generative AI has rapidly transformed the biomedical imaging field by enabling image synthesis, helping address challenges of limited data availability, privacy, and diversity in biomedical research. Yet, the adoption of AI-generated images in biomedical studies requires rigorous methods to ensure their reliability for downstream analysis. In this talk, I will introduce novel and rigorous nonparametric approaches that strengthen the trustworthiness and statistical validity of synthetic biomedical imaging data. We develop simultaneous confidence regions to rigorously quantify uncertainty and detect meaningful differences between synthetic and original imaging data. To further enhance fidelity and utility, we propose a transformation that aligns the mean and covariance structures of synthetic images with those of the originals. I will also discuss methods for imputing missing imaging phenotypes using generative models and demonstrate how joint analysis of observed and imputed traits enhances inference while accounting for imputation error. Extensive simulations and applications to brain imaging data validate the proposed framework, demonstrating how these methods empower rigorous statistical inference and promote trustworthy advances in biomedical imaging.
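
To make the moment-alignment step concrete, here is a generic sketch (ours, not the authors' estimator) that affinely maps flattened synthetic images so their sample mean and covariance match those of the originals:

    import numpy as np

    def align_mean_cov(synthetic, original, eps=1e-8):
        """Map synthetic samples (rows = images flattened to vectors) so their
        mean and covariance match those of the original samples."""
        mu_s, mu_o = synthetic.mean(axis=0), original.mean(axis=0)
        d = synthetic.shape[1]
        C_s = np.cov(synthetic, rowvar=False) + eps * np.eye(d)
        C_o = np.cov(original, rowvar=False) + eps * np.eye(d)

        def sqrtm(C, inverse=False):
            # symmetric (inverse) square root via eigendecomposition
            w, V = np.linalg.eigh(C)
            w = np.clip(w, eps, None)
            s = 1.0 / np.sqrt(w) if inverse else np.sqrt(w)
            return (V * s) @ V.T

        A = sqrtm(C_o) @ sqrtm(C_s, inverse=True)   # whiten, then recolor
        return (synthetic - mu_s) @ A.T + mu_o

Matching the first two moments is, of course, far weaker than distributional equality; the simultaneous confidence regions described in the abstract are what supply rigorous uncertainty statements about remaining differences.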

Time: 11 am-12 pm, Friday, September 26, 2025

Location: Multipurpose Room, the Graduate Life Center

Dr. Matt Heiner

Dr. Matthew Heiner, Brigham Young University

About

Education

  • Ph.D., Statistical Science, UC Santa Cruz, 2019.
  • M.S., Statistics, Brigham Young University, 2014.
  • B.S., Statistics: Actuarial Science Emphasis, Brigham Young University, 2014.

Experience

  • Assistant Professor, Dept. of Statistics, BYU, 2019 - Present
  • Summer Graduate Pedagogy Mentor, UC Santa Cruz, 2018.
  • Online Course Developer, UC Santa Cruz, 2016-2017.
  • Summer Student Intern, Lawrence Livermore National Laboratory, 2015 and 2016.
  • Statistician Intern, Savvysherpa Inc., 2014.
  • Research Assistant, Brigham Young University, 2012-2014.
  • R Programmer/Intern, Acxiom Corporation, 2013.
  • Actuarial Intern, Aon Hewitt, 2011.

Website: https://heiner.byu.edu/

Presentation Title: Quantile Slice Sampling

Abstract: We propose and demonstrate a novel, effective approach to simple slice sampling. Using the probability integral transform, we first generalize Neal's shrinkage algorithm, standardizing the procedure to an automatic and universal starting point: the unit interval. This enables the introduction of approximate (pseudo-) targets through a factorization used in importance sampling, a technique that has popularized elliptical slice sampling. Reasonably accurate pseudo-targets can boost sampler efficiency by requiring fewer rejections and by reducing target skewness. This strategy is effective when a natural, possibly crude approximation to the target exists. Alternatively, obtaining a marginal pseudo-target from initial samples provides an intuitive and automatic tuning procedure. We consider pseudo-target specification and interpretable diagnostics. We examine performance of the proposed sampler relative to other popular, easily implemented MCMC samplers on standard targets in isolation, and as steps within a Gibbs sampler in a Bayesian modeling context. We extend to multivariate slice samplers and demonstrate with a constrained state-space model. R package qslice is available on CRAN.
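
A minimal sketch of the core idea (ours; the qslice package on CRAN is the reference implementation, and the target and pseudo-target below are hypothetical): transform the state to the unit interval through the pseudo-target's CDF, slice-sample the transformed density with shrinkage starting from (0, 1), and map back:

    import numpy as np
    from scipy import stats

    def qslice_step(x, log_pi, pseudo, rng):
        """One quantile slice sampling update. `pseudo` is a frozen
        scipy.stats distribution (the pseudo-target); `log_pi` is the
        log density of the target."""
        def log_h(u):
            # transformed target on (0,1): pi(G^{-1}(u)) / g(G^{-1}(u))
            y = pseudo.ppf(u)
            return log_pi(y) - pseudo.logpdf(y)
        u0 = pseudo.cdf(x)                                # current point in (0,1)
        log_slice = log_h(u0) + np.log(rng.uniform())     # slice height
        lo, hi = 0.0, 1.0                                 # universal starting interval
        while True:
            u1 = rng.uniform(lo, hi)
            if log_h(u1) > log_slice:
                return pseudo.ppf(u1)                     # map back to x-space
            if u1 < u0:                                   # Neal's shrinkage
                lo = u1
            else:
                hi = u1

    rng = np.random.default_rng(1)
    target = stats.gamma(a=2.0)                 # hypothetical target
    pseudo = stats.norm(loc=2.0, scale=1.5)     # crude pseudo-target
    x, draws = 1.0, []
    for _ in range(5000):
        x = qslice_step(x, target.logpdf, pseudo, rng)
        draws.append(x)

The better the pseudo-target tracks the target, the flatter the transformed density on (0, 1) and the fewer shrinkage rejections per update, which is the efficiency gain the abstract describes.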

Jennifer Hill

Bio: Jennifer Hill develops and evaluates methods to help answer the types of causal questions that are vital to policy research and scientific development. In particular, she focuses on situations in which it is difficult or impossible to perform traditional randomized experiments, or when even seemingly pristine study designs are complicated by missing data or hierarchically structured data. Most recently, Hill has been pursuing three intersecting strands of research. The first focuses on Bayesian nonparametric methods that allow for flexible estimation of causal models and are less time-consuming and more precise than competing methods (e.g., propensity score approaches). The second line of work pursues strategies for exploring the impact of violations of typical causal inference assumptions, such as ignorability (all confounders measured) and common support (overlap). The third investigates how statistical methods are used and interpreted in practice.  Hill has published in a variety of leading journals, including Journal of the American Statistical Association, Statistical Science, American Political Science Review, American Journal of Public Health, and Developmental Psychology. Hill earned her PhD in Statistics at Harvard University in 2000 and completed a postdoctoral fellowship in Child and Family Policy at Columbia University's School of Social Work in 2002.

Hill is currently the Co-Chair of the Department of Applied Statistics, Social Science, and Humanities (ASH) as well as the Co-Director of the Center for Practice and Research at the Intersection of Information, Society, and Methodology (PRIISM). She was the co-founder of the Master of Science program in Applied Statistics for Social Science Research (A3SR). The A3SR program has a concentration in Data Science for Social Impact; as far as we know, this is the first degree-granting program in Statistics or Data Science for Social Impact or Social Good in the world. The A3SR program also has a dual-degree option with the MPA program at the Wagner School that allows students to earn both degrees in two years.

In 2021, Jennifer Hill was awarded the New York University Distinguished Teaching Award.

Website: https://steinhardt.nyu.edu/people/jennifer-hill

Presentation Title: Democratizing Methods

Abstract: The past few decades have seen an explosion in the development of freely available software implementing statistical methods and algorithms to help explore and analyze data. However, researchers tend to assume that releasing software packages implementing specific methods is sufficient to ensure that the tools are adopted and used correctly. Typically, very little attention is paid to the user experience. This in turn means that the tools do not get used, are used incorrectly, or have their results misinterpreted. This talk presents a case study of how software development could be different, describing a causal analysis tool that scaffolds the user experience. I will discuss lessons learned through user studies and experimental evidence, and conclude with calls to action for those who develop methods and software.

Yao Xie

Bio: Yao Xie is the Coca-Cola Foundation Chair and Professor at the Georgia Institute of Technology in the H. Milton Stewart School of Industrial and Systems Engineering, and Associate Director of the Machine Learning Center. From September 2017 until May 2023, she was the Harold R. and Mary Anne Nash Early Career Professor. She received her Ph.D. in Electrical Engineering (minor in Mathematics) from Stanford University in 2012 and was a Research Scientist at Duke University. Her research lies at the intersection of statistics, machine learning, and optimization, providing theoretical guarantees and developing computationally efficient and statistically powerful methods for problems motivated by real-world applications. She received the National Science Foundation (NSF) CAREER Award in 2017, was an INFORMS Wagner Prize Finalist in 2021, and received the INFORMS Gaver Early Career Award for Excellence in Operations Research in 2022 and the CWS Woodroofe Award in 2024. She is currently an Associate Editor for IEEE Transactions on Information Theory, the Journal of the American Statistical Association (Theory and Methods), The American Statistician, Operations Research, Annals of Applied Statistics, Sequential Analysis: Design Methods and Applications, and the INFORMS Journal on Data Science, an Area Chair of NeurIPS, ICML, and ICLR, and a Senior Program Committee member of AAAI.

Website: https://www.isye.gatech.edu/users/yao-xie

Presentation Title: Conformal Prediction for Time Series and Spatial Data

Abstract: Conformal prediction (CP) provides a powerful, distribution-free framework for constructing prediction intervals with finite-sample coverage guarantees. However, many CP methods rely on the assumption of data exchangeability, which is typically violated in time series due to temporal dependence. This has motivated a growing body of work aimed at extending CP to the time-series setting. Another computational challenge comes from handling multi-dimensional time series. In this talk, I will review recent advances in conformal prediction for time series, including our own contributions to a general framework for constructing distribution-free prediction intervals for time series that wrap around a given black-box algorithm. The new approach relaxes the exchangeability assumption and accounts for the temporal dependence of the data. Theoretically, we establish marginal and conditional coverage guarantees based on temporal mixing, along with certain width optimality in some cases. Methodologically, we propose computationally efficient procedures based on ensemble predictors that are closely related to standard CP, yet tailored for time series. Finally, I will discuss a recent extension to spatial data that accounts for spatial mixing.
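
As a schematic of how such wrappers operate (a simplified sketch in the spirit of rolling-residual/ensemble methods such as EnbPI, not the exact algorithms from the talk; the forecaster and data are hypothetical):

    import numpy as np

    def rolling_conformal_intervals(y, forecast, burn_in=100, window=50, alpha=0.1):
        """One-step prediction intervals around a black-box point forecaster,
        calibrated from the trailing window of absolute residuals."""
        residuals, intervals = [], []
        for t in range(burn_in, len(y)):
            yhat = forecast(y[:t])                         # black-box point forecast
            if len(residuals) >= window:
                q = np.quantile(residuals[-window:], 1 - alpha)
                intervals.append((t, yhat - q, yhat + q))  # interval at time t
            residuals.append(abs(y[t] - yhat))             # update calibration set
        return intervals

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(size=500))                    # hypothetical series
    intervals = rolling_conformal_intervals(y, forecast=lambda h: h[-1])

Sliding the calibration window is the simplest way to relax exchangeability; the methods discussed in the talk add ensembling and supply the coverage theory under temporal mixing.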

Brian Reich

Bio: Brian is the Gertrude M. Cox Distinguished Professor of Statistics at North Carolina State University. His research interests include Bayesian methods, spatial statistics, extreme value analysis, variable selection, dimension reduction, and machine learning. In addition to these methodological interests, Brian applies these methods to environmental areas such as ecology, epidemiology, meteorology, and climate.

Website: https://bjreich.wordpress.ncsu.edu/

Presentation Title: Leveraging deep learning for spatiotemporal interpolation


Abstract: Gaussian Process (GP) models are the workhorse of spatial statistics. They provide provably optimal prediction and a robust framework for statistical inference. However, fitting GPs usually requires unrealistic assumptions such as stationarity, i.e., that the process behaves similarly across the spatial domain, and even under this overly simplistic assumption the computation is not scalable. We consider a nonstationary model based on a latent dimension expansion and show that this model permits an arbitrarily precise spectral approximation. For computation, we place the model in a deep learning architecture and use variational Bayesian deep learning to fit the model to millions of observations in minutes. While the variational Bayesian approach is fast to fit, to ensure valid uncertainty quantification we develop a local conformal inference step. We study the theoretical properties of this approach, empirically compare its performance against benchmarks, and apply the method to spatiotemporal interpolation of global daily aerosol optical depth.
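
A rough gloss of the latent dimension expansion (our notation, not necessarily the speaker's exact model): each site s is augmented with learned latent coordinates z(s), and a stationary kernel k on the expanded domain induces nonstationarity in the original one,

    \[
    \mathrm{Cov}\big(Y(s),\, Y(s')\big) \;=\; k\big([s, z(s)],\; [s', z(s')]\big),
    \]

where z(·) is estimated jointly with the kernel parameters, here by variational Bayesian deep learning as described above.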

Anuj Srivastava

Bio: Anuj Srivastava is a Professor in the Department of Statistics and a Distinguished Research Professor at Florida State University. He obtained his Ph.D. from Washington University in St. Louis and, after a one-year postdoc at Brown University, joined the faculty of FSU in 1997. His research interests include statistical analysis on nonlinear manifolds, statistical computer vision, functional data analysis, and shape analysis. He has held several visiting positions at European institutions, including INRIA, France; the University of Lille, France; and Durham University, UK. He has graduated more than 40 PhD students, with placements in academia, industry, and government labs. He has co-authored more than 350 papers in peer-reviewed journals and top-tier conferences, as well as several books, including the 2016 Springer textbook "Functional and Shape Data Analysis." He is a fellow of several professional societies in statistics (IMS and ASA), electrical engineering (IEEE), and computer science (AAAS and IAPR).

Website: https://anujsrivastava.com/

Presentation Title: Statistical Shape Analysis of Complex Natural Structures


Abstract: Statistical modeling and analysis of structured data is a fast-growing field in Statistics and Data Science. Rapid advances in imaging techniques have led to tremendous amounts of data for analyzing imaged objects across several scientific disciplines. Examples include shapes of cancer cells, botanical trees, human biometrics, the 3D genome, brain anatomical structures, crowd videos, nano-manufacturing, and so on. Shapes are relevant even in non-imaging contexts, e.g., the shapes of COVID rate curves or the shapes of activity cycles in lifestyle data. Imposing statistical models and inferences on shapes seems daunting because shape is an abstract notion, and precise mathematical representations are required to quantify it. This talk has two parts. In the first part, I will present some recent developments in "elastic representations" of structures such as functions, curves, surfaces, and graphs. In the second part, I will focus on statistical analyses: computing shape summaries, estimation under shape constraints, hypothesis testing, time-series models, and regression models involving shapes.
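
One standard elastic representation from this literature (see, e.g., the textbook cited in the bio above) maps a curve f to its square-root velocity function,

    \[
    q(t) \;=\; \frac{\dot{f}(t)}{\sqrt{\lVert \dot{f}(t) \rVert}},
    \]

under which the elastic distance between two curves is \( \min_{\gamma} \lVert q_1 - \sqrt{\dot{\gamma}}\,(q_2 \circ \gamma) \rVert \), minimized over reparameterizations \gamma, so that shape comparisons are invariant to how the curves are parameterized.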

Dr. Amy Braverman

Amy Braverman (Jet Propulsion Laboratory, California Institute of Technology)

Bio: Dr. Amy Braverman is a statistician specializing in statistical methods for analysis and uncertainty quantification in remote sensing. After graduating from Swarthmore College in 1982 with a B.A. in Economics, she worked for nearly a decade in litigation support consulting. She returned to graduate school at UCLA in the early 1990s, where she earned an M.A. in Mathematics and a Ph.D. in Statistics. In 1999 she began a postdoc at JPL and has been with the Lab ever since. Dr. Braverman's early work was in the use of data compression methods for the analysis of massive data sets. Over the course of her career she has worked in spatial and spatio-temporal statistics, data fusion, statistical methods for the evaluation of climate models, and, most recently, uncertainty quantification (UQ). She has been at the forefront of JPL's efforts to bring rigorous UQ to the derivation of geophysical information from remote sensing observations collected by NASA and JPL instruments. Dr. Braverman finds special satisfaction in mentoring postdocs and young researchers to build capability in Statistics and UQ at JPL, and in collaborating with academic colleagues to connect their research to JPL and NASA problems.

Website: https://www.jpl.nasa.gov/site/research/ajb/

Presentation Title: Simulation-based Uncertainty Quantification for Remote Sensing Inverse Problems

Abstract: Remote sensing data provide a vast trove of information about Earth systems. These data are inferred from electromagnetic spectra observed by Earth-orbiting instruments using computational algorithms that approximately solve the following inverse problem: given a spectrum, find the best estimate of the underlying true geophysical state that gave rise to it. The algorithms typically start with an initial guess for the true state and iteratively minimize cost functions based on differences between spectra predicted by a deterministic forward model and spectra actually observed. This setup incorporates many important aspects of uncertainty quantification for complex physical-statistical systems, including intractable likelihoods, unknown physics, parameter calibration, and computational demands. The remote sensing problem is also inherently spatial, which is both a curse, in that it usually involves massive amounts of data, and a blessing, in that it makes dimension reduction possible if we can learn and quantify spatial structure. In this talk I will discuss recent work demonstrating how we leverage simulation-based inference (Cranmer, Brehmer, and Louppe, 2020), spatial statistical modeling with Gaussian processes (Cressie and Wikle, 2012; Paciorek and Schervish, 2006; Sainsbury-Dale, Zammit-Mangion, and Huser, 2024), and conditional spatial simulation to obtain robust uncertainty estimates for the output of a remote sensing data processing pipeline. If time permits, I will also show results of some preliminary work to quantify and incorporate model discrepancy.
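
For concreteness, the cost function described above typically takes the following regularized least-squares form (notation ours, in the spirit of the optimal-estimation retrieval literature):

    \[
    \hat{x} \;=\; \arg\min_{x}\;
    \big(y - F(x)\big)^{\top} S_{\epsilon}^{-1} \big(y - F(x)\big)
    \;+\; \big(x - x_a\big)^{\top} S_{a}^{-1} \big(x - x_a\big),
    \]

where y is the observed spectrum, F the deterministic forward model, x_a the initial (prior) guess for the state, and S_\epsilon and S_a the observation-noise and prior covariances. Simulation-based inference enters when the likelihood implied by F is intractable or the forward model is too expensive to evaluate inside such an optimization.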

Download Schedules from Colloquia Gone by

Contact Information

Department of Statistics (MC0439)
Hutcheson Hall, RM 406-A, Virginia Tech
250 Drillfield Drive
Blacksburg, VA 24061

Phone: 540-231-5657

Department Head:
Robert B. Gramacy