Hitting Time and Stopping Time


1. Hitting time

Let $X_0, X_1, \ldots \sim \text{Markov}(\lambda, P)$ be a sequence of random variables, and suppose $X_n: (\Sigma, \mathcal F, \mathbb P) \to (E, \mathcal E)$ for each $n \in \{0, 1, \ldots \}$. The hitting time of $X_n$ on a set $A \subseteq E$ is defined by:

$$H^A ≜ \inf\{n ⩾ 0 : X_n ∈ A\}. $$

Also, $H_i^A$ for some $i \in E$ denotes hitting time on $A$ given $X_0 = i$. For the rest of this note, the subscript denotes the initial or given condition unless otherwise noted.
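Operationally, $H^A$ is simply the first index at which a sample path enters $A$. The sketch below simulates this directly; the helper `hitting_time` and the reflecting random-walk example are illustrative, not from the source.

```python
import random

def hitting_time(step, x0, A, max_steps=10_000):
    """First index n >= 0 with X_n in A along one simulated path.

    Returns None if A is not entered within max_steps, a finite
    proxy for the event {H^A = infinity}.
    """
    x = x0
    for n in range(max_steps):
        if x in A:
            return n          # inf{n >= 0 : X_n in A}
        x = step(x)           # draw the next state X_{n+1}
    return None

# Illustrative chain: random walk on {0, ..., 5}, reflecting at 5.
random.seed(0)
walk = lambda x: x - 1 if x == 5 or random.random() < 0.5 else x + 1
print(hitting_time(walk, 3, {0}))  # finite: the walk hits 0 almost surely
```

Note the convention $\inf \emptyset = \infty$ is modeled by returning `None`; if $X_0 \in A$ already, the hitting time is $0$.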

1.1. Probability of Hitting

Consider the probability that $H^A$ is finite when the chain starts from $i$, $h_i^A = \mathbb P_i(H^A < ∞)$. The following result holds.

$\textbf{Theorem 6.1.}\quad$ The hitting probabilities $(h_i^A)_{i\in I}$ form the minimal non-negative solution to the system $$ h_{i}^{A}=\left\{\begin{array}{ll}1 & \text { if } i \in A \\ \sum_{j \in I} p_{i j} h_{j}^{A} & \text { if } i \notin A \end{array}\right. \tag{$*$} $$ Minimality means that any non-negative solution $(x_i)_{i\in I}$ of $(*)$ satisfies $x_i \geqslant h_i^A$ for all $i$.

Intuitively, $h_i^A$ is the probability that the chain started at $i$ ever hits $A$.

$Proof.$ We first prove the first statement, $i.e.$, that the finite hitting time probabilities satisfy the given equations. If $i \in A$, the equations are trivially satisfied, since $i \in A$ implies

$$H_i^A = 0 \implies h_i^A = 1. $$

Consider $i \notin A$. Then at least one step is required to reach $A$, so we can condition on the value of $X_1$ to get

$$\begin{aligned} h_{i}^{A} & =\mathbb{P}\left(H^{A}<\infty \mid X_{0}=i\right) \\ & =\sum_{j} \mathbb{P}\left(H^{A}<\infty \mid X_{1}=j, X_{0}=i\right) \mathbb{P}\left(X_{1}=j \mid X_{0}=i\right) \\ & =\sum_{j} \mathbb{P}\left(H^{A}<\infty \mid X_{1}=j, X_{0}=i\right) p_{i j} \\ & =\sum_{j} p_{i j} \mathbb{P}\left(H^{A}<\infty \mid X_{1}=j\right)=\sum_{j} p_{i j} h_{j}^{A} &\text{(Markov Property)} \end{aligned} $$

For a given state $i$, the chain started at $i$ is certain to hit $A$ only if it is certain to hit $A$ from every successor state $j$.
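As a concrete (hypothetical, not from the source) instance of the first-step equations, consider a four-state chain on $I = \{0,1,2,3\}$ with target $A = \{2\}$ and an absorbing trap at $3$: from $0$ the chain stays or moves to $1$ with probability $\tfrac12$ each, and from $1$ it moves to $0$, $2$, or $3$ with probability $\tfrac13$ each. Note the trap equation $h_3 = p_{33} h_3$ is satisfied by any value; the minimality proved below forces $h_3 = 0$, matching the fact that the trap never reaches $A$.

```latex
% Hypothetical chain: I = {0,1,2,3}, A = {2}, state 3 an absorbing trap.
% p_{00} = p_{01} = 1/2,  p_{10} = p_{12} = p_{13} = 1/3,  p_{22} = p_{33} = 1.
\begin{aligned}
h_2 &= 1, \qquad h_3 = 0 \quad (\text{minimal choice}), \\
h_0 &= \tfrac{1}{2} h_0 + \tfrac{1}{2} h_1
     \;\Longrightarrow\; h_0 = h_1, \\
h_1 &= \tfrac{1}{3} h_0 + \tfrac{1}{3} h_2 + \tfrac{1}{3} h_3
     = \tfrac{1}{3} h_1 + \tfrac{1}{3}
     \;\Longrightarrow\; h_0 = h_1 = \tfrac{1}{2}.
\end{aligned}
```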

Next, we prove the second statement, $i.e.$, that $(h_i^A)_{i∈I}$ is the smallest non-negative solution to the equation above. Let $(x_i)_{i \in I}$ be any non-negative solution. If $i \in A$, then $x_i = 1 = h_i^A$, so every solution agrees on $A$.

Consider $i \notin A$. Then,

$$\begin{aligned} x_{i} & =\sum_{j \in I} p_{i j} x_{j}=\sum_{j \in A} p_{i j} \cdot 1+\sum_{j \notin A} p_{i j} \sum_{k \in I} p_{j k} x_{k} \\ & =\sum_{j \in A} p_{i j} \cdot 1+\sum_{j \notin A} p_{i j}\left(\sum_{k \in A} p_{j k} \cdot 1+\sum_{k \notin A} p_{j k} \sum_{l \in I} p_{k l} x_{l}\right) \\ & =\mathbb{P}_{i}\left(X_{1} \in A\right)+\mathbb{P}_{i}\left(X_{2} \in A, X_{1} \notin A\right)+\cdots+\mathbb{P}_{i}\left(X_{n} \in A, X_{n-1} \notin A, \ldots, X_{0} \notin A\right) \\ & \qquad+\sum_{t \in I} \sum_{j, k, \ldots, w \notin A} p_{i j} p_{j k} \cdots p_{w t} x_{t} \\ & \geqslant \mathbb{P}_{i}\left(X_{1} \in A\right)+\mathbb{P}_{i}\left(X_{2} \in A, X_{1} \notin A\right)+\cdots+\mathbb{P}_{i}\left(X_{n} \in A, X_{n-1} \notin A, \ldots, X_{0} \notin A\right) && \left(\text{since } p_{i j} \cdots p_{w t} x_{t} \geqslant 0\right) \\ & =\mathbb{P}_{i}\left(H^{A} \leqslant n\right) \end{aligned} $$

We have shown that any non-negative solution $(x_i)_{i \in I}$ satisfies $x_i ⩾ \mathbb P_i(H^A ⩽ n)$ for all $n$. Letting $n \to \infty$,

$$x_i ⩾ \mathbb P_i(H^A < ∞) = h_i^A. $$

Note that $x_i ≡ 1$ is always a solution to $(∗)$, though it need not be the minimal one, since $h_i^A$ is a probability and thus $h_i^A ⩽ 1$; in particular, a non-negative solution always exists. The system of equations $(∗)$ can also be expressed using a projection operator $\Pi_A$, defined as element-wise multiplication by the indicator function of $A$.

$$\left\{\begin{array}{l}\Pi_{A^{c}}(I-P) h=0 \\ \Pi_{A} h=\mathbf {1}_{A}\end{array}\right. $$

This formulation is related to discrete harmonic analysis: the system can be viewed as a discrete version of Laplace's equation with a boundary condition prescribed on $A$.
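Numerically, the minimal non-negative solution can be obtained by iterating the map $h \leftarrow \mathbf 1$ on $A$, $h \leftarrow Ph$ on $A^c$, starting from $h \equiv 0$: the iterates increase monotonically to $(h_i^A)$. A minimal Python sketch on a hypothetical four-state chain follows; the function name `min_hitting_prob` and the matrix `P` are illustrative.

```python
def min_hitting_prob(P, A, tol=1e-12, max_iter=100_000):
    """Minimal non-negative solution of: h = 1 on A, h = P h off A.

    Starting from h = 0 and iterating gives a monotone increasing
    sequence converging to the hitting probabilities (h_i^A).
    """
    n = len(P)
    h = [0.0] * n
    for _ in range(max_iter):
        new = [1.0 if i in A else sum(P[i][j] * h[j] for j in range(n))
               for i in range(n)]
        if max(abs(new[i] - h[i]) for i in range(n)) < tol:
            return new
        h = new
    return h

# Hypothetical 4-state chain: target A = {2}, state 3 an absorbing trap.
P = [[0.5, 0.5, 0.0, 0.0],
     [1/3, 0.0, 1/3, 1/3],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
print(min_hitting_prob(P, {2}))   # ≈ [0.5, 0.5, 1.0, 0.0]
```

Starting from zero is essential: starting from $h \equiv 1$ the iteration would return the (non-minimal) all-ones solution, since the trap state's equation is satisfied by any value.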

1.2. Gambler’s Ruin Problem

A gambler plays a game. He starts with $k$ dollars, and in every round he wins or loses 1 dollar. He plays until he either reaches $N$ dollars, at which point he is satisfied, or loses all his money. We are interested in whether he will go bankrupt. Let $X_n$ be his budget after $n$ rounds, and suppose the chance of winning each round is a fixed constant $p \in (0, 1)$. Then $(X_n)$ is a Markov chain.

$X_n$ takes values in $I = \{0, 1, \ldots , N\}$, and the initial state is $X_0 = k$. The transition matrix is:

$$p_{i j}=\left\{\begin{array}{ll}p & \text { for } j=i+1 \\ 1-p & \text { for } j=i-1, \quad i \in\{1,2, \cdots, N-1\} \\ 0 & \text { otherwise }\end{array}\right. $$

$0$ and $N$ are both absorbing states, $i.e.$, $p_{00} = p_{NN} = 1$. The event that the gambler loses all his money corresponds to hitting the set $A = \{0\}$, so the probability of going bankrupt is

$$h_{k}=\mathbb{P}\left(H^{\{0\}}<\infty \mid X_{0}=k\right) $$

Since $0 ∈ A$, $h_0 = 1$. Since $N$ is an absorbing state, the chain can never reach $0$ once started from $N$; therefore $h_N = 0$. For $i \in \{1, 2, \ldots, N − 1\}$,

$$\begin{aligned} & h_{i}=p h_{i+1}+(1-p) h_{i-1} \\ \Rightarrow & p\left(h_{i+1}-h_{i}\right)=(1-p)\left(h_{i}-h_{i-1}\right) \\ \Rightarrow & h_{i+1}-h_{i}=\frac{1-p}{p}\left(h_{i}-h_{i-1}\right)=\eta^{i}\left(h_{1}-h_{0}\right), \end{aligned} $$

where $\eta ≜ \frac{1-p}{p}$ and the last equality follows by iterating the recurrence $i$ times.
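Telescoping the differences $h_{i+1} - h_i = \eta^i (h_1 - h_0)$ with $\eta = \frac{1-p}{p}$ and imposing $h_0 = 1$, $h_N = 0$ yields the closed form $h_i = \frac{\eta^N - \eta^i}{\eta^N - 1}$ for $p \neq \tfrac12$, and $h_i = 1 - i/N$ when $p = \tfrac12$. The sketch below (function names are illustrative) checks this closed form against a direct fixed-point iteration of the hitting equations.

```python
def ruin_prob(i, N, p):
    """Closed-form bankruptcy probability h_i, with eta = (1-p)/p."""
    if p == 0.5:
        return 1 - i / N              # symmetric walk: h_i is linear in i
    eta = (1 - p) / p
    return (eta**N - eta**i) / (eta**N - 1)

def ruin_prob_iter(i, N, p, iters=50_000):
    """Same quantity via Jacobi iteration of h_i = p h_{i+1} + (1-p) h_{i-1},
    with boundary values h_0 = 1 and h_N = 0 pinned at every step."""
    h = [0.0] * (N + 1)
    for _ in range(iters):
        h = [1.0] + [p * h[k + 1] + (1 - p) * h[k - 1]
                     for k in range(1, N)] + [0.0]
    return h[i]

# With p < 1/2 the game is unfavourable: starting from k = 5 out of N = 10,
# bankruptcy is much more likely than not.
print(ruin_prob(5, 10, 0.4))
```

Note that as $N \to \infty$ with $p > \tfrac12$ (so $\eta < 1$), the closed form gives $h_i \to \eta^i < 1$: even a favourable game carries a positive ruin probability.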

1.3. Mean Time before Hitting

2. Stopping times