parsiad.ca - Optimal stopping II: a dynamic programming equation









Search Preview

Optimal stopping II: a dynamic programming equation - Parsiad Azimzadeh

parsiad.ca


SEO audit: Content analysis

Language: Error! No language localisation is found.
Title: Optimal stopping II: a dynamic programming equation - Parsiad Azimzadeh
Text / HTML ratio: 61 %
Frame: Excellent! The website does not use iFrame solutions.
Flash: Excellent! The website does not have any flash contents.
Keywords cloud: stopping, \\, programming, viscosity, Optimal, dynamic, inequality, large, post, equation, principle, solution, previous, bounded, \endalign, \beginalign, \min\left\, GNU, \eqrefeqpde, established
Keywords consistency

Keyword (content occurrences): stopping (9), \\ (8), programming (6), viscosity (6), Optimal (6), dynamic (6)

Headings: H1: 1, H2: 6, H3: 3, H4: 0, H5: 0, H6: 0

Images: We found 2 images on this web page.

SEO Keywords (Single)

Keyword Occurrence Density
stopping 9 0.45 %
\\ 8 0.40 %
programming 6 0.30 %
viscosity 6 0.30 %
Optimal 6 0.30 %
dynamic 6 0.30 %
inequality 5 0.25 %
large 5 0.25 %
post 5 0.25 %
equation 5 0.25 %
principle 4 0.20 %
solution 4 0.20 %
previous 4 0.20 %
bounded 4 0.20 %
\endalign 4 0.20 %
\beginalign 4 0.20 %
\min\left\ 4 0.20 %
GNU 4 0.20 %
\eqrefeqpde 4 0.20 %
established 3 0.15 %

SEO Keywords (Two Word)

Keyword Occurrence Density
in the 8 0.40 %
is a 7 0.35 %
dynamic programming 6 0.30 %
such that 6 0.30 %
Optimal stopping 6 0.30 %
of the 5 0.25 %
previous post 4 0.20 %
of \eqrefeqpde 4 0.20 %
a dynamic 4 0.20 %
programming principle 3 0.15 %
to be 3 0.15 %
be a 3 0.15 %
Parsiad Azimzadeh 3 0.15 %
large enough 3 0.15 %
tx\varphi\in\mathcalO\times C12\mathcalO 3 0.15 %
a viscosity 3 0.15 %
GNU Octave 3 0.15 %
viscosity solution 3 0.15 %
the previous 3 0.15 %
u is 3 0.15 %

SEO Keywords (Three Word)

Keyword Occurrence Density Possible Spam
a dynamic programming 4 0.20 % No
dynamic programming principle 3 0.15 % No
u is a 3 0.15 % No
is a viscosity 3 0.15 % No
dynamic programming equation 3 0.15 % No
the previous post 3 0.15 % No
of \eqrefeqpde if 2 0.10 % No
in the dynamic 2 0.10 % No
inequality in the 2 0.10 % No
n sufficiently large 2 0.10 % No
solution of \eqrefeqpde 2 0.10 % No
viscosity solution of 2 0.10 % No
GNU Octave financial 2 0.10 % No
a viscosity solution 2 0.10 % No
in the viscosity 2 0.10 % No
Let t_nx_n be 2 0.10 % No
t_nx_n be a 2 0.10 % No
be a sequence 2 0.10 % No
a sequence in 2 0.10 % No
sequence in \mathcalO 2 0.10 % No

SEO Keywords (Four Word)

Keyword Occurrence Density Possible Spam
a dynamic programming equation 3 0.15 % No
n is large enough 2 0.10 % No
\mathcalO such that t_nx_n\rightarrowtx\text 2 0.10 % No
to t\mapstotX_tt_nx_n being bounded 2 0.10 % No
t\mapstotX_tt_nx_n being bounded on 2 0.10 % No
being bounded on the 2 0.10 % No
bounded on the interval 2 0.10 % No
inequality in the dynamic 2 0.10 % No
in the dynamic programming 2 0.10 % No
that t_nx_n\rightarrowtx\text and ut_nx_n\rightarrow 2 0.10 % No
such that t_nx_n\rightarrowtx\text and 2 0.10 % No
in \mathcalO such that 2 0.10 % No
the dynamic programming principle 2 0.10 % No
sequence in \mathcalO such 2 0.10 % No
a sequence in \mathcalO 2 0.10 % No
be a sequence in 2 0.10 % No
t_nx_n be a sequence 2 0.10 % No
Let t_nx_n be a 2 0.10 % No
in the previous post 2 0.10 % No
established in the previous 2 0.10 % No

Internal links in - parsiad.ca

Selected publications
Selected publications - Parsiad Azimzadeh
Blog
Parsiad Azimzadeh
read about the latest release here
GNU Octave financial 0.5.0 released - Parsiad Azimzadeh
Monte Carlo simulation framework
Monte Carlo simulations in GNU Octave financial package - Parsiad Azimzadeh
An introduction to regular Markov chains
An introduction to regular Markov chains - Parsiad Azimzadeh
mlinterp: Fast arbitrary dimension linear interpolation in C++
mlinterp: Fast arbitrary dimension linear interpolation in C++ - Parsiad Azimzadeh
Optimal stopping III: a comparison principle
Optimal stopping III: a comparison principle - Parsiad Azimzadeh
Optimal stopping II: a dynamic programming equation
Optimal stopping II: a dynamic programming equation - Parsiad Azimzadeh
Optimal stopping I: a dynamic programming principle
Optimal stopping I: a dynamic programming principle - Parsiad Azimzadeh
Introductory group theory
Introductory group theory - Parsiad Azimzadeh
Closed-form expressions for perpetual and finite-maturity American binary options
Closed-form expressions for perpetual and finite-maturity American binary options - Parsiad Azimzadeh
Fast Fourier Transform with examples in GNU Octave/MATLAB
Fast Fourier Transform with examples in GNU Octave/MATLAB - Parsiad Azimzadeh
Welcome
Welcome - Parsiad Azimzadeh
Markov chains (1)
Parsiad Azimzadeh
Optimal stopping (3)
Parsiad Azimzadeh
GNU Octave (2)
Parsiad Azimzadeh
Notes (2)
Parsiad Azimzadeh
Mathematical finance (1)
Parsiad Azimzadeh
RSS
Parsiad Azimzadeh

Parsiad.ca page content


Optimal stopping II: a dynamic programming equation
May 21, 2016 | Parsiad Azimzadeh

The following is a continuation of a previous post on optimal stopping. In this post, we derive a dynamic programming equation (which turns out to be a partial differential equation (PDE) to be interpreted in the viscosity sense) for the optimal stopping problem.

As before, we consider a filtered probability space (with filtration $(\mathcal{F}_{t})_{t\geq0}$) satisfying the usual conditions, on which a standard Brownian motion $W_{t}$ is defined. Let $X_{s}^{t,x}$ denote the strong solution of the stochastic differential equation (SDE)
$$ dX_{s}=b(s,X_{s})ds+\sigma(s,X_{s})dW_{s}\text{ for }s>t\text{ and }X_{t}=x. $$
Let $T<\infty$ and let $\mathscr{T}_{[t,T]}$ be the set of $[t,T]$-valued stopping times independent of $\mathcal{F}_{t}$. Consider the problem
$$ u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\text{ where }J(t,x;\tau)=\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right] $$
and $g$ is some given function. All assumptions of the previous post hold.

The PDE we will derive (in the viscosity sense) is
\begin{equation}
\min\left\{ -\left(\partial_{t}+\mathcal{A}\right)u,u-g\right\} =0\text{ on }[0,T)\times\mathbb{R}^{d},\label{eq:pde}\tag{1}
\end{equation}
where $\mathcal{A}$ is the infinitesimal generator of the SDE above.
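For concreteness, $\mathcal{A}$ takes its usual form here: for smooth $\varphi$,
$$ \mathcal{A}\varphi(t,x)=b(t,x)\cdot\nabla_{x}\varphi(t,x)+\frac{1}{2}\operatorname{tr}\left(\sigma(t,x)\sigma(t,x)^{\intercal}D_{x}^{2}\varphi(t,x)\right), $$
where $D_{x}^{2}\varphi$ denotes the Hessian of $\varphi$ in the space variable.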
Let us now pin down the notion of viscosity solution for this specific problem.

Let $\mathcal{O}=[0,T)\times\mathbb{R}^{d}$. A locally bounded function $v\colon\mathcal{O}\rightarrow\mathbb{R}$ is a viscosity subsolution (resp. supersolution) of \eqref{eq:pde} if
\begin{align*}
& \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v^{*}-g)(t,x)\right\} \leq0\\
\text{(resp. } & \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v_{*}-g)(t,x)\right\} \geq0\text{)}
\end{align*}
for all $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ such that $(v^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(v^{*}-\varphi)=0$ (resp. $(v_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(v_{*}-\varphi)=0$) and the maximum (resp. minimum) is strict. We say $v$ is a viscosity solution of \eqref{eq:pde} if it is both a subsolution and a supersolution of \eqref{eq:pde}.

Suppose $u\colon\mathcal{O}\rightarrow\mathbb{R}$ is locally bounded. Then, $u$ is a viscosity solution of \eqref{eq:pde}.

We first prove that $u$ is a subsolution. Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that
$$ (u^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(u^{*}-\varphi)=0 $$
where the maximum is strict. Assume, in order to arrive at a contradiction, that
$$ \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(u^{*}-g)(t,x)\right\} >0. $$
Equivalently, this can be expressed as
$$ (\varphi-g)(t,x)=(u^{*}-g)(t,x)>0\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)>0. $$
By continuity, we can find $h>0$ (with $t+h<T$) and $\delta>0$ such that
$$ \varphi-g\geq\delta\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi\geq0\text{ on }\mathcal{N}_{h}=\left( (t-h,t+h)\times B_{h}(x) \right) \cap \mathcal{O} $$
where $B_{h}(x)$ is the ball of radius $h$ centred at $x$. Since $(t,x)$ is a strict maximizer,
$$ -\gamma=\max_{\partial\mathcal{N}_{h}}\left(u^{*}-\varphi\right)<0. $$
Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that
$$ (t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u^{*}(t,x). $$
Let
$$ \theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin\mathcal{N}_{h}\right\} . $$
Note that for $n$ large enough, $(t_{n},X_{t_{n}}^{t_{n},x_{n}})=(t_{n},x_{n})\in\mathcal{N}_{h}$ (we will always assume $n$ is large enough for this to occur). Let
$$ \eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n}). $$
Let $\tau_n\in\mathscr{T}_{[t_n,T]}$ be arbitrary. By Ito's lemma,
\begin{align*}
u(t_{n},x_{n}) & =\eta_{n}+\varphi(t_{n},x_{n})\\
 & =\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\
 & \qquad+\mathbb{E}\left[\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\nabla_{x}\varphi(s,X_{s}^{t_{n},x_{n}})\cdot\sigma(s,X_{s}^{t_{n},x_{n}})dW_{s}\right]\\
 & =\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right].
\end{align*}
The Ito integral vanishes due to $t\mapsto(t,X_{t}^{t_{n},x_{n}})$ being bounded on the interval $[t_{n},\tau_{n}\wedge\theta_{n}]$. Since $-\left(\partial_{t}+\mathcal{A}\right)\varphi\geq0$ on $\mathcal{N}_{h}$, it follows that
$$ u(t_{n},x_{n})\geq\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})\right]. $$
Due to the inequalities established on $\mathcal{N}_{h}$,
\begin{align*}
u(t_n,x_n) & \geq\eta_n+\mathbb{E}\left[\varphi(\tau_n\wedge\theta_n,X_{\tau_n\wedge\theta_n}^{t_n,x_n})\right]\\
 & =\eta_n+\mathbb{E}\left[\varphi(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n<\theta_n\right\} }+\varphi(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n\geq\theta_n\right\} }\right]\\
 & \geq\eta_n+\mathbb{E}\left[\left(g(\tau_n,X_{\tau_n}^{t_n,x_n})+\delta\right)\mathbf{1}_{\left\{ \tau_n<\theta_n\right\} }+\left(u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})+\gamma\right)\mathbf{1}_{\left\{ \tau_n\geq\theta_n\right\} }\right]\\
 & \geq\eta_n+\gamma\wedge\delta+\mathbb{E}\left[g(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n<\theta_n\right\} }+u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n\geq\theta_n\right\} }\right].
\end{align*}
Since $\tau_n\in\mathscr{T}_{[t_n,T]}$ is arbitrary and $\eta_n+\gamma\wedge\delta>0$ for $n$ sufficiently large, this contradicts the $\leq$ inequality in the dynamic programming principle established in the previous post.
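For reference, the (weak) dynamic programming principle invoked above, and again below, was established in the previous post in a form which, up to notation and minor details, reads roughly as follows: for every $\theta\in\mathscr{T}_{[t,T]}$,
$$ u(t,x)\leq\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\mathbf{1}_{\left\{ \tau<\theta\right\} }+u^{*}(\theta,X_{\theta}^{t,x})\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\right] $$
and, for every $\theta,\tau\in\mathscr{T}_{[t,T]}$,
$$ u(t,x)\geq\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\mathbf{1}_{\left\{ \tau<\theta\right\} }+u_{*}(\theta,X_{\theta}^{t,x})\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\right]. $$
The subsolution argument contradicts the first ($\leq$) inequality; the supersolution argument below uses the second ($\geq$) inequality with $\tau$ chosen so that $\tau\geq\theta$.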
We now prove that $u$ is a supersolution. The inequality $u-g\geq0$ follows directly from the definition of the value function since
$$ u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\geq J(t,x;t)=\mathbb{E}[g(t,X_{t}^{t,x})]=g(t,x). $$
Taking the lower semicontinuous envelope on both sides of the inequality, we get $u_{*}-g\geq0$ (recall that $g$ is assumed to be continuous). Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that
$$ (u_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(u_{*}-\varphi)=0. $$
Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that
$$ (t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u_{*}(t,x). $$
Let
$$ \eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n}) $$
and
$$ h_{n}=\sqrt{\eta_{n}}\mathbf{1}_{\left\{ \eta_{n}\neq0\right\} }+\mathbf{1}_{\left\{ \eta_{n}=0\right\} }/n. $$
Also let
$$ \theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin[t_{n},t_{n}+h_{n})\times B_{1}(x)\right\} $$
where we always assume $n$ is large enough that $t_n+h_n<T$ and $x_n\in B_1(x)$.

Calling upon the $\geq$ inequality in the dynamic programming principle established in the previous post (with $\theta=\theta_{n}$), we have
$$ \eta_{n}+\varphi(t_{n},x_{n})=u(t_{n},x_{n})\geq\mathbb{E}\left[u_{*}(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right]\geq\mathbb{E}\left[\varphi(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right]. $$
Applying Ito's lemma and dividing by $h_{n}$ yields
$$ \frac{\eta_{n}}{h_{n}}\geq\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]. $$
As usual, the Ito integral has vanished due to $t\mapsto(t,X_{t}^{t_{n},x_{n}})$ being bounded on the interval $[t_{n},\theta_{n}]$. For any fixed sample $\omega$ in the sample space and $n$ sufficiently large, note that $\theta_{n}(\omega)=t_{n}+h_{n}$ (since $h_{n}\rightarrow0$). By the mean value theorem for integrals, the random variable inside the expectation converges almost surely to $\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)$. Applying the dominated convergence theorem, and noting that $\eta_{n}/h_{n}=\sqrt{\eta_{n}}\mathbf{1}_{\left\{ \eta_{n}\neq0\right\} }\rightarrow0$ since $\eta_{n}\rightarrow0$, we get
\begin{align*}
0=\lim_{n\rightarrow\infty}\frac{\eta_{n}}{h_{n}} & \geq\lim_{n\rightarrow\infty}\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\
 & =\mathbb{E}\left[\lim_{n\rightarrow\infty}\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\
 & =\mathbb{E}\left[\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)\right]\\
 & =\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x).
\end{align*}
Multiplying both sides of the inequality by $-1$ yields the desired result.
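To see how the dynamic programming equation is used in practice, here is a minimal numerical sketch, not from the post itself: an explicit finite-difference scheme for \eqref{eq:pde} that marches backward from the terminal condition $u(T,\cdot)=g(T,\cdot)$ and enforces the obstacle $u\geq g$ at every step. The example coefficients (geometric Brownian motion $b(t,x)=rx$, $\sigma(t,x)=\mathrm{vol}\cdot x$ and discounted put payoff $g(t,x)=e^{-rt}(K-x)^{+}$, so that $u(0,x)$ approximates an American put price), the grid, and the Python/NumPy implementation are all illustrative assumptions.

import numpy as np

# Minimal sketch (illustrative assumptions, not from the post): explicit finite
# differences for min{-(d/dt + A)u, u - g} = 0 with terminal data u(T, .) = g(T, .).
# Coefficients: b(t,x) = r*x, sigma(t,x) = vol*x, g(t,x) = exp(-r*t)*max(K - x, 0).
T, K, r, vol = 1.0, 100.0, 0.05, 0.2
x = np.linspace(0.0, 300.0, 301)               # spatial grid
dx = x[1] - x[0]
dt = 0.9 * dx**2 / (vol * x[-1])**2            # conservative explicit-stability step
n_steps = int(np.ceil(T / dt))
dt = T / n_steps

def g(t, s):
    return np.exp(-r * t) * np.maximum(K - s, 0.0)

u = g(T, x)                                    # terminal condition u(T, .) = g(T, .)
for n in range(n_steps, 0, -1):
    t = n * dt
    u_x = np.gradient(u, dx)                   # first spatial derivative
    u_xx = np.gradient(u_x, dx)                # crude second spatial derivative
    Au = r * x * u_x + 0.5 * (vol * x) ** 2 * u_xx   # generator applied to u
    u = np.maximum(u + dt * Au, g(t - dt, x))  # step back in time, enforce u >= g

print("u(0, 100) ~", np.interp(100.0, x, u))

In practice an implicit or penalty scheme would be preferable; the explicit projected step is used here only because it mirrors the structure of \eqref{eq:pde} most directly.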