Optimal Stopping and Control Reading Seminar

Spring 2022


Welcome to the Optimal Stopping and Control Reading Seminar, run by the students of Columbia University.

This Spring we will continue studying Optimal Stopping and Stochastic Control Theory. Our talks will be held in hybrid form, over Zoom and in person at Columbia University, on Wednesdays from 6 p.m. to 7 p.m. EDT.

This seminar is the logical continuation of the seminar held in the Fall: OS Seminar, Fall 2021.

If you would like to attend or be added to the mailing list, please email ggaitsgori@math.columbia.edu.

Past Seminars

Date and time Speaker Title and abstract
Wednesday, January 26, 6:00p.m. EDT Georgy Gaitsgori On Chow-Robbins Game, Part 3: Continuous time problem and final remarks

We will finish our discussion of the Chow-Robbins game by exploring the continuous-time problem, the latest paper on the topic, and some proofs we omitted in previous talks.
Wednesday, February 2, 6:00p.m. EDT Georgy Gaitsgori Introduction to control - Deterministic optimal control

We will start reading the book "Controlled Markov Processes and Viscosity Solutions" by Fleming and Soner, going over the first part of the first chapter. We will introduce the optimal control problem, discuss some examples, and see how different problems can be formulated. We will also introduce the dynamic programming principle and, if time permits, the dynamic programming equation.
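For orientation, here is a standard finite-horizon formulation of the deterministic problem and its dynamic programming equation, sketched in generic notation (f is the dynamics, L the running cost, ψ the terminal cost; the book's notation may differ):

```latex
% Value function of the finite-horizon problem:
V(t,x) = \inf_{u(\cdot)} \Big\{ \int_t^T L(s, x(s), u(s))\,ds + \psi(x(T)) \Big\},
\quad \text{subject to } \dot{x}(s) = f(s, x(s), u(s)),\ x(t) = x.

% Dynamic programming principle over a short horizon h > 0:
V(t,x) = \inf_{u(\cdot)} \Big\{ \int_t^{t+h} L(s, x(s), u(s))\,ds + V\big(t+h, x(t+h)\big) \Big\}.

% Dynamic programming (Hamilton-Jacobi-Bellman) equation, where V is differentiable:
\partial_t V(t,x) + \inf_{u} \big\{ f(t,x,u) \cdot D_x V(t,x) + L(t,x,u) \big\} = 0,
\qquad V(T,x) = \psi(x).
```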
Wednesday, February 9, 6:00p.m. EDT Georgy Gaitsgori Verification theorems, Dynamic programming and Pontryagin's principle

We will continue reading the first chapter of Fleming and Soner's book. After reviewing what we discussed last time, we will discuss and prove two verification theorems concerning the dynamic programming equation. We will then discuss Pontryagin's principle, which provides a general set of necessary conditions for an extremum in an optimal control problem.
Wednesday, February 16, 6:00p.m. EDT Shalin Parekh Fleming and Soner, Sections 1.8 and 1.9

Calculus of variations and Hamilton-Jacobi equations.
Wednesday, February 23, 6:00p.m. EDT Hindy Drillick Fleming and Soner, Sections 1.8 and 1.9, part 2

We will continue reading Section 1.8 of Fleming and Soner, covering convex duality and Hamilton-Jacobi equations.
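One concrete instance of this duality is the classical Hopf-Lax formula (stated here as a standard fact, not necessarily in the book's notation): for a convex, superlinear Hamiltonian H with Legendre transform L = H*, the solution of the Cauchy problem has a variational representation.

```latex
% Cauchy problem: u_t + H(D_x u) = 0 \text{ in } (0,\infty) \times \mathbb{R}^n, \quad u(0,\cdot) = g.
% With L(v) = \sup_{p} \{ p \cdot v - H(p) \} the convex dual of H:
u(t,x) = \min_{y \in \mathbb{R}^n} \Big\{ t\, L\Big(\frac{x-y}{t}\Big) + g(y) \Big\}.
```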
Wednesday, March 2, 6:00p.m. EDT Shalin Parekh Intro to viscosity solutions

We will start reading the next chapter of Fleming and Soner; namely, we will go over Sections 2.3-2.5.
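As a reference point, the first-order definition reads as follows (a generic statement for an equation F(x, u, Du) = 0; the book also treats second-order equations):

```latex
% u \in C(\Omega) is a viscosity subsolution of F(x, u, Du) = 0 if, for every
% \varphi \in C^1(\Omega) such that u - \varphi has a local maximum at x_0,
F\big(x_0, u(x_0), D\varphi(x_0)\big) \le 0,
% and a viscosity supersolution if, for every \varphi \in C^1(\Omega) such that
% u - \varphi has a local minimum at x_0,
F\big(x_0, u(x_0), D\varphi(x_0)\big) \ge 0.
% A viscosity solution is both a subsolution and a supersolution.
```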
Wednesday, March 9, 6:00p.m. EDT Sid Mane How Stochastic Optimality is Deterministic

In this talk, we peel back the history of dynamic programming in mathematical finance, tracing it to a 2008 paper of Alexander Schied, Torsten Schöneborn, and Michael Tehranchi. We will state the optimal liquidation problem and then show that the maximal expected utility for such a problem can be attained by a purely deterministic strategy.
Wednesday, March 16, 6:00p.m. EDT No seminar (Spring break)

Wednesday, March 23, 6:00p.m. EDT Sid Mane How Stochastic Optimality is Deterministic, part 2

As we continue our consideration of optimal basket liquidation, we will observe the power of the Radon-Nikodym density, proving both that an optimal liquidation strategy is deterministic and that it exists. Time-permitting, we will also discuss some of the examples in the paper.
Wednesday, March 30, 6:00p.m. EDT Georgy Gaitsgori Introduction to controlled Markov processes

We will start reading Chapter 3 of Fleming and Soner. We will recall some basic notions about Markov processes and then give some examples of control problems for such processes. After that, we will formally derive the dynamic programming equation and prove a verification theorem.
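For a controlled diffusion, the dynamic programming equation takes the following standard form (a sketch in generic notation, with drift b, diffusion coefficient σ, running cost L, and terminal cost ψ; the book's notation may differ):

```latex
% Controlled dynamics and value function:
dX_s = b(s, X_s, u_s)\,ds + \sigma(s, X_s, u_s)\,dW_s,
\qquad
V(t,x) = \inf_{u} \mathbb{E}\Big[\int_t^T L(s, X_s, u_s)\,ds + \psi(X_T)\,\Big|\, X_t = x\Big].

% Dynamic programming (Hamilton-Jacobi-Bellman) equation:
\partial_t V + \inf_{u}\Big\{ b(t,x,u)\cdot D_x V
  + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\,D_x^2 V\big) + L(t,x,u) \Big\} = 0,
\qquad V(T,x) = \psi(x).
```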
Wednesday, April 6, 6:00p.m. EDT Aditya Makkar Introduction to Markov decision processes

Markov decision processes (MDPs) are a class of discrete-time stochastic control processes with a very special structure that makes them amenable to simple analysis. Despite this special structure, they have many applications, for example in reinforcement learning. We will start by carefully defining the Markov decision process and then prove two intuitive results, one of them due to Blackwell (1964). Time permitting, we will end with a brief discussion of connections with reinforcement learning.
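To make the objects concrete, here is a minimal sketch of value iteration on a toy two-state, two-action MDP (an illustration only, not code from the talk; the transition probabilities and rewards are made up):

```python
import numpy as np

# Toy MDP: P[a][s, t] = probability of moving from state s to state t under action a,
# R[s, a] = expected one-step reward, gamma = discount factor.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(10_000):
    # Bellman optimality update: Q(s,a) = R(s,a) + gamma * sum_t P(t|s,a) V(t)
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

print("optimal values:", V, "greedy policy:", Q.argmax(axis=1))
```

The update is a sup-norm contraction for gamma < 1, so it converges to the optimal value function, and the greedy policy with respect to that fixed point is optimal.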
Wednesday, April 13, 6:00p.m. EDT Aditya Makkar Basic results in Markov decision processes

We will continue our discussion from last time, where we defined Markov decision processes. We will start by defining notions of optimality and then discuss results on when a policy is optimal.
Wednesday, April 20, 6:00p.m. EDT Sid Mane Sannikov: Convexity, Monotonicity, Existence, and Optimality

The paper's main course: we show that a solution to our contract problem exists and that it must satisfy certain regularity properties. We satiate our hunger for optimality by establishing suitable conditions and showing that they hold. We conclude with some economic food for thought.
Wednesday, April 27, 6:00p.m. EDT Hindy Drillick A History of Ulam's Problem

In this talk we will discuss the history of Ulam's problem, which asks for the distribution of the length of the longest increasing subsequence of a random permutation. We will use methods from interacting particle systems to prove a law of large numbers for this problem.
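For the curious, the quantity in question is easy to simulate (a minimal sketch, not code from the talk: patience sorting computes the LIS length in O(n log n), and a Monte Carlo average illustrates the law-of-large-numbers scaling L_n / sqrt(n) -> 2):

```python
import bisect
import random

def lis_length(perm):
    """Length of the longest increasing subsequence, via patience sorting."""
    piles = []
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)   # start a new pile
        else:
            piles[i] = x      # place x on the leftmost feasible pile
    return len(piles)

n, trials = 10_000, 50
avg = sum(lis_length(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg / n ** 0.5)  # approaches 2 as n grows (Vershik-Kerov, Logan-Shepp)
```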

Continuations of this seminar: Stochastic Control Theory Reading Seminar, Spring 2023, and Optimal Stopping Theory Seminar, Spring 2024.