Welcome to the Stochastic Control Theory Reading Seminar, organized by Georgy Gaitsgori and Richard Groenewald.
This Spring we will continue studying Stochastic Control Theory by reading Professor Daniel Ocone's lecture notes on Stochastic Control. Our talks will be held in person at Columbia University (Math 528) on Mondays from 5p.m. to 6p.m. Eastern Time.
This seminar is the logical continuation of the seminar held in Spring 2022.
If you would like to attend or to be added to the mailing list, please email email@example.com.
|Date and time||Speaker||Title and abstract|
|Monday, January 30, 5:00p.m. EST||Georgy Gaitsgori||Organizational meeting and reviewing stochastic calculus
We will discuss some organizational details and our plans for the semester. Afterward, we will cover the first pages of the lecture notes. We will recall the definitions of random fields, the stochastic integral, and SDEs. We will also state the main definitions and theorems about SDEs that will be used later.
|Monday, February 6, 5:00p.m. EST||Georgy Gaitsgori||Reviewing stochastic differential equations
We will try to cover the first 16 pages of Ocone's lecture notes. We will recall the definitions of Brownian motion, random fields, stochastic integrals, and stochastic differential equations (SDEs). Then we will state the main results about SDEs, such as existence and uniqueness of solutions, the comparison principle, Markovianity, etc. We will prove some of these theorems and, if time permits, discuss SDEs with random coefficients.
|Monday, February 13, 5:00p.m. EST||Richard Groenewald||Introduction to Controlled SDEs
We will cover Theorem 4 and all of Sections 5, 6.1, and 6.2 of Ocone's notes. We will look at existence and uniqueness results for SDEs with a control parameter (a more general setting than in the previous lecture) and explore connections with PDEs of a particular form via a Feynman–Kac-type approach.
|Monday, February 20, 5:00p.m. EST||Georgy Gaitsgori||Abstract Dynamic Programming Equation, Markov semigroups and their generators
We will cover Sections 6.2–6.4 of Ocone's notes. We will introduce the notion of an abstract two-parameter family of operators and its generator. We will consider the special case of Markov processes and their generators and prove an abstract dynamic programming equation. After that, we will derive the generator of a diffusion process (SDE) and show how it is connected to the value function of the control problem discussed last time.
|Monday, February 27, 5:00p.m. EST||Richard Groenewald||More on Elliptic PDEs, and the notion of a "derivative" of a stochastic process
We will cover Sections 6.5 and 6.6 of Ocone's notes. In particular, we will discuss an existence and uniqueness result for classical solutions to elliptic PDEs of the form considered in the last few weeks. We will also introduce the concept of a derivative of a stochastic process and use it to further analyze these PDEs.
|Monday, March 6, 5:00p.m. EST||Georgy Gaitsgori||Introducing notions of Stochastic Control Problems
We will recall the formal notion of a controlled SDE and the results on existence and uniqueness of its solutions. We will then start discussing finite-horizon stochastic control problems. In particular, we will introduce the notions of admissible controls and value functions and prove some properties (boundedness and growth rate) of the latter.
|Monday, March 13, 5:00p.m. EDT||No seminar (Spring break)||
|Monday, March 20, 5:00p.m. EDT||Richard Groenewald||Heuristics on the Dynamic Programming Principle and HJB Equation
We will discuss the dynamic programming principle for our control problem, along with the associated Hamilton–Jacobi–Bellman (HJB) equation. We will also formulate the definition of viscosity solutions for general PDEs. This session will follow pages 17 to 31 of Part 2 of Ocone's notes.
|Monday, March 27, 5:00p.m. EDT||Georgy Gaitsgori||Recalling HJB equation and solving two particular stochastic control problems
We will first recall the HJB equation from last week and then see how to use it to solve stochastic control problems. We will consider two problems from the celebrated paper of Beneš, Shepp, and Witsenhausen: the bounded-velocity follower problem and the monotone follower problem with a finite horizon. We will derive the corresponding HJB equations, solve them, and try to deduce the actual value functions of the problems and the optimal controls from the obtained solutions.
|Monday, April 3, 5:00p.m. EDT||Georgy Gaitsgori||Solving Monotone follower problem and discussing DPP rigorously
We will continue our discussion from the last seminar. We will consider another stochastic control problem, namely the monotone follower problem, and use the HJB equation to find its solution. Then we will return to the dynamic programming principle and discuss how to prove it rigorously.
|Monday, April 10, 5:00p.m. EDT||Richard Groenewald||More on Viscosity Solutions and the Dynamic Programming Principle
We will review the concept of viscosity solutions and prove several useful properties. If time permits, we will discuss the ideas behind a rigorous proof of the dynamic programming principle in the setting of Section 2 of Ocone's notes.
|Monday, April 17, 5:00p.m. EDT||Ioannis Karatzas||Sequential estimation with control and discretionary stopping
We show that "full-bang" control is optimal in a problem that combines features of (i) sequential least-squares estimation with Bayesian updating, for a random quantity observed in white noise; (ii) bounded control of the rate at which observations are received, with a superquadratic cost per unit time; and (iii) "fast" discretionary stopping. We describe also the optimal filtering and stopping rules in this context. [Joint work with Erik Ekstrom and Juozas Vaicenavicius, Uppsala.]
|Monday, April 24, 5:00p.m. EDT||No seminar||
|Monday, May 1, 5:00p.m. EDT||Georgy Gaitsgori||On Kulldorff's Goal Problem
We will discuss an explicitly solvable stochastic control problem due to Martin Kulldorff. This is a goal-type problem: one considers a controlled diffusion and tries to maximize the probability of reaching an endpoint of an interval within a finite time. If time permits, we will mention some other goal problems and related results.