Welcome to the homepage of the
Variational Analysis and Statistical Theory Reading Group at CMU

Variational analysis, an extension of classical real and convex analysis, deals with non-smooth optimization problems and perturbation analysis (i.e., how solutions change when the objective or estimating function is perturbed slightly).
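
For example, a standard one-line perturbation bound: if two objectives are uniformly close, their optimal values are equally close,
\[
\sup_x |f(x) - g(x)| \le \epsilon \quad \Longrightarrow \quad \Bigl| \inf_x f(x) - \inf_x g(x) \Bigr| \le \epsilon,
\]
even though the argmin sets of f and g can remain far apart. Quantifying the latter requires finer tools, such as the epi-distance and the truncated Hausdorff distance discussed in the meetings below.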

The idea of using variational analysis in statistical theory is not new, but it has been largely forgotten. Starting with Chernoff (1954, Annals of Mathematical Statistics), several statisticians have used these tools to obtain asymptotic distribution results. Through this reading group, we aim to learn these tools and to derive reliable inference methods for irregular problems (including high-dimensional methods such as the lasso and the Dantzig selector).


Meeting Information

Unless otherwise notified, our regular weekly meeting for Fall 2025 is:
Mondays, 11:00 am–12:00 pm in BH 229A

Contact

To join our mailing list, please email Woonyoung Chang or Kenta Takatsu at {woonyouc, ktakatsu}@andrew.cmu.edu.

Resources

Paper Suggestions & Presentation Sign-Up

Past Meetings

Meeting & Date Presenter Contents Covered
#1: August 25, 2025 Arun Kuchibhotla Hjort and Pollard (2011), Kuchibhotla (2018, deterministic inequalities)
Notes (PDF)
#2: September 1, 2025 No Meeting
#3: September 8, 2025 Woonyoung Chang Basic concepts and results on epi-convergence of objective functions (with constraints) and set-convergence of argmin sets, along with their statistical applications
Notes (PDF)
#4: September 15, 2025 Woonyoung Chang Basic concepts in variational analysis, particularly quantitative notions, including the epi-distance between functions and the truncated Hausdorff distance between sets. We will further discuss how to derive quantitative bounds on the distance between argmin sets, with some applications in statistics.
Notes (PDF)
#5: September 22, 2025 Kenta Takatsu In this session, I will prove quantitative bounds for inf f, ε-argmin f, argmin f, and level sets. I will then introduce the Kenmochi condition and prove bounds on the argmin set for constrained α-Hölder optimization problems, along with further examples such as constraint softening.
Notes (PDF)
#6: September 29, 2025 Kenta Takatsu We will host our seminar speaker, Professor Royset, for a Q&A session during the first thirty minutes. For the remaining thirty minutes, I will present a proof of the asymptotic normality of M-estimation under constraints, followed by a discussion of constraint qualifications. I will also show how these qualifications can fail in the simple case of mean estimation on the boundary with a non-negativity constraint (see the short simulation sketch after the table).
Notes (PDF)
#7: October 6, 2025 Liwei Jiang Asymptotic normality and optimality in non-smooth optimization
Slides (PDF), Notes (PDF)
#8: October 20, 2025 Konrad Urban We will introduce some results on the differentiability of optimal solution maps for constrained optimization problems with respect to underlying stochastic components. The framework requires assumptions on the differentiability of the objective function and on the geometry of the constraint set. Under these assumptions, the derivative can be expressed as the solution to a quadratic program based on the Lagrangian and the limiting geometry of the constraint set at the optimal solution. Such a derivative is useful, for instance, for deriving asymptotic distributions of estimators via the (directional) delta method.
#9: October 27, 2025 Konrad Urban We will continue our discussion of derivatives of optimal solutions of optimization problems. We will outline the proof of the theorem introduced at the end of the last session, showing that the derivative can typically be expressed as the solution to a constrained quadratic program. We will discuss some of the theorem's assumptions, including Robinson's constraint qualification and the extended polyhedricity condition, as well as the second-order tangent set, which plays a crucial role in the first-order expansion of the optimal solution. If time permits, we will also briefly discuss other results on derivatives of optimal solutions in different settings.
Notes (PDF)
#10: October 27, 2025 Arun Kuchibhotla I will discuss the asymptotic distributions of M-estimators and Z-estimators over a broad range of settings, including smooth and non-smooth problems. Some of the discussion will focus on cases where the limiting distribution is Gaussian and on non-asymptotic versions of these results. The main sources are Pflug (1996) and Dupačová and Wets.
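
A minimal simulation sketch of the boundary example from meeting #6 (our illustration, not taken from the meeting notes; variable names are ours): when estimating a mean theta >= 0 whose true value is 0, the sqrt(n)-rescaled constrained estimator converges to max(0, Z) with Z ~ N(0, 1), a non-Gaussian limit with an atom at zero.

# Illustration of the boundary example from meeting #6 (our sketch, not from
# the notes): mean estimation under the constraint theta >= 0 when the truth
# is 0. The constrained estimator is max(0, sample mean), and sqrt(n) times it
# converges in distribution to max(0, Z) with Z ~ N(0, 1): an atom at 0 of
# mass 1/2 plus a half-normal part, so the usual Gaussian limit theory fails.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 5000

# reps independent datasets of n i.i.d. N(0, 1) draws; the true mean 0 sits
# on the boundary of the constraint set [0, infinity).
xbar = rng.standard_normal((reps, n)).mean(axis=1)
theta_hat = np.maximum(0.0, xbar)   # constrained M-estimator
scaled = np.sqrt(n) * theta_hat     # sqrt(n)-rescaled estimator

# Roughly half the mass should sit exactly at 0, and the mean should match
# E[max(0, Z)] = 1/sqrt(2*pi) ~= 0.3989.
print("P(scaled == 0) ~", (scaled == 0.0).mean())
print("E[scaled]      ~", scaled.mean())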