
Stochastic Processes

2019-11-08

Probability

Definition of Probability

Three Axioms

  • $0\leq P(E)\leq 1$

  • $P(S)=1$

  • For any sequence of mutually exclusive events $E_1,E_2,\cdots$,

    $P(\bigcup_{i=1}^\infty E_i)=\sum_{i=1}^{\infty}P(E_i)$

Sequence of Events

  • increasing sequence of events: $\{E_n,n\geq 1\},\ E_n\subset E_{n+1}$

    $\lim_{n\rightarrow\infty}E_n=\bigcup_{i=1}^\infty E_i$

  • decreasing sequence of events: $\{E_n,n\geq 1\},\ E_n\supset E_{n+1}$

    $\lim_{n\rightarrow\infty}E_n=\bigcap_{i=1}^\infty E_i$

  • for an increasing or decreasing sequence of events,

    $\lim_{n\rightarrow\infty}P(E_n)=P(\lim_{n\rightarrow\infty}E_n)$

  • Borel-Cantelli Lemma (both directions are simulated in the sketch after this list)

    if $\sum_{n=1}^{\infty}P(E_n)<\infty$, then

    $P(\text{an infinite number of the }E_i\text{ occur})=P(\limsup_{n\rightarrow\infty}E_n)=P(\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty E_i)=0$

  • Converse

    if the $E_i$ are independent events and $\sum_{n=1}^\infty P(E_n)=\infty$, then

    $P(\text{an infinite number of the }E_i\text{ occur})=1$
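
A minimal Monte Carlo sketch (my own, not part of the original notes) of both directions over a finite horizon standing in for infinity: with $P(E_n)=1/n^2$ the sum converges and only the first few events occur, while with independent events of probability $1/n$ the sum diverges and occurrences keep appearing arbitrarily late.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                      # finite horizon standing in for "n -> infinity"
n = np.arange(1, N + 1)

# Convergent case: P(E_n) = 1/n^2, so sum P(E_n) < inf and only finitely many E_n occur.
occurs = rng.random(N) < 1.0 / n**2
print("sum P(E_n) =", (1.0 / n**2).sum(), "| occurrences:", occurs.sum(),
      "| last index:", np.flatnonzero(occurs).max() + 1)

# Divergent independent case: P(E_n) = 1/n, so occurrences never die out.
occurs = rng.random(N) < 1.0 / n
print("sum P(E_n) =", (1.0 / n).sum(), "| occurrences:", occurs.sum(),
      "| last index:", np.flatnonzero(occurs).max() + 1)
```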

Random Variable

  • Random variable $X$: a function that assigns a real value to each outcome in $S$

    $P(X\in A)=P(X^{-1}(A))$

  • Distribution function $F$:

    $F(x)=P(X\leq x)=P(X\in(-\infty,x])$, $\overline{F}(x)=P(X>x)$

Expected Value

  • $E[X]=\int_{-\infty}^{\infty}x\,dF(x)=\begin{cases}\int_{-\infty}^{\infty}xf(x)\,dx \\ \sum_x xP(X=x)\end{cases}$
  • $Var[X]=E[X^2]-E^2[X]$
  • $Var[\sum X_i]=\sum Var(X_i)+2{\sum\sum}_{i<j}Cov(X_i,X_j)$
  • $Cov(X,Y)=E[XY]-E[X]E[Y]$
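
A quick numerical sanity check (my own sketch, not from the notes) of the variance-of-a-sum identity on three correlated Gaussian coordinates: the direct estimate of $Var[\sum X_i]$ should match $\sum Var(X_i)+2\sum_{i<j}Cov(X_i,X_j)$ computed from the sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal((3, 500_000))
# Three correlated coordinates built from shared Gaussian factors.
x = np.vstack([z[0], 0.5 * z[0] + z[1], 0.3 * z[1] + z[2]])

cov = np.cov(x)                                          # 3x3 sample covariance matrix
direct = np.var(x.sum(axis=0))                           # Var[X_1 + X_2 + X_3] directly
identity = np.trace(cov) + 2 * np.triu(cov, k=1).sum()   # sum Var + 2 * sum_{i<j} Cov
print(direct, identity)                                  # the two estimates nearly agree
```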

Probability Identities

Let $A_1,A_2,\cdots,A_n$ be events; $I_i=[[A_i\text{ occurs}]]$; $N=\sum_{i=1}^nI_i$; $I=[[N>0]]$.

  • Inclusion-exclusion principle (checked numerically in the sketch after this list)

    $$\begin{aligned}
    1-I &= (1-1)^N = \sum_{i=0}^n{N\choose i}(-1)^i \\
    E[I] &= E[N] - E[{N\choose 2}] + \cdots + (-1)^{n+1}E[{N\choose n}] \\
    P(\bigcup_{1}^{n}A_i) &= \sum_{i=1}^n P(A_i) - {\sum\sum}_{i<j}P(A_iA_j) + \cdots + (-1)^{n+1}P(A_1A_2\cdots A_n)
    \end{aligned}$$
  • exactly $r$ of the events $A_1,\cdots,A_n$ occur

    $$\begin{aligned}
    I_r &= [[N=r]] = {N\choose r}(1-1)^{N-r} = {N\choose r}\sum_{i=0}^{N-r}{N-r\choose i}(-1)^i = \sum_{i=0}^{n-r}{N\choose r+i}{r+i\choose r}(-1)^i \\
    P(N=r) = E[I_r] &= \sum_{i=0}^{n-r}(-1)^i{r+i\choose r}E[{N\choose r+i}] \\
    &= \sum_{i=0}^{n-r}(-1)^i{r+i\choose r}{\sum\cdots\sum}_{j_1<j_2<\cdots<j_{r+i}}P(A_{j_1}\cdots A_{j_{r+i}})
    \end{aligned}$$
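
A brute-force check of both identities on a tiny uniform sample space; the events, the space size, and the choice of $r$ below are arbitrary illustrations of mine, not from the notes.

```python
from itertools import combinations
from math import comb
import random

random.seed(0)
space = list(range(20))                          # uniform sample space with 20 outcomes
events = [set(random.sample(space, random.randint(5, 12))) for _ in range(4)]
P = lambda A: len(A) / len(space)                # probability under the uniform measure

# Inclusion-exclusion for P(union of the A_i) vs. the exact answer.
union_ie = sum((-1) ** (k + 1) *
               sum(P(set.intersection(*c)) for c in combinations(events, k))
               for k in range(1, len(events) + 1))
print(union_ie, P(set.union(*events)))

# P(exactly r of the A_i occur): direct counting vs. the identity above.
r = 2
direct = sum(1 for w in space if sum(w in A for A in events) == r) / len(space)
identity = sum((-1) ** i * comb(r + i, r) *
               sum(P(set.intersection(*c)) for c in combinations(events, r + i))
               for i in range(len(events) - r + 1))
print(direct, identity)
```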

Moment Generating

  • moment generating function of $X$

    $\psi(t)=E[e^{tX}]=\int e^{tx}\,dF(x)$

    $E[X^n]=\psi^{(n)}(0),\ n\geq 1$ (see the symbolic check after this list)

  • joint moment generating function of random variables $X_1,\cdots,X_n$

    $\psi(t_1,\cdots,t_n)=E[e^{\sum_{j=1}^nt_jX_j}]$

    The MGF determines the probability distribution, but it does not always exist.

  • characteristic function of $X$

    $\phi(t)=E[e^{itX}]$

    The CF determines the probability distribution and always exists.

  • Laplace transform of a distribution $F$ (for nonnegative random variables):

    $\widetilde{F}(s)=\int_{0}^{\infty}e^{-sx}\,dF(x)$
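
A symbolic sanity check (assuming `sympy` is available; not part of the original notes) that differentiating the MGF of an $\text{Exp}(\lambda)$ variable, $\psi(t)=\lambda/(\lambda-t)$, at $t=0$ reproduces the moments $E[X^n]=n!/\lambda^n$.

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
psi = lam / (lam - t)                        # MGF of an Exp(lambda) random variable

for n in range(1, 4):
    moment = sp.diff(psi, t, n).subs(t, 0)   # E[X^n] = psi^{(n)}(0)
    print(n, sp.simplify(moment))            # expect n! / lambda^n
```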

Conditional Expectation

  • $E[X|Y=y]=\int_{-\infty}^{\infty}x\,dF(x|y)$
  • $E[X]=E[E[X|Y]]=\int E[X|Y=y]\,dF_Y(y)$
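
A small simulation (my own sketch) of the tower property $E[X]=E[E[X|Y]]$ for a hypothetical two-stage experiment: $Y$ uniform on $\{1,\dots,5\}$ and $X\,|\,Y=y\sim\text{Poisson}(y)$.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(1, 6, size=1_000_000)    # Y uniform on {1, ..., 5}
x = rng.poisson(y)                        # X | Y = y  ~  Poisson(y)

inner = np.array([x[y == k].mean() for k in range(1, 6)])  # E[X | Y = k] for each k
print(x.mean())       # E[X] estimated directly              (about 3)
print(inner.mean())   # E[E[X|Y]], averaging over uniform Y  (about 3)
```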

Ballot Problem

The probability that candidate $A$, who receives $n$ votes, is always ahead in the count of candidate $B$, who receives $m$ votes ($n>m$), is $\frac{n-m}{n+m}$.
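
A Monte Carlo sketch of the ballot problem for one arbitrary choice of $(n,m)$ of mine; the estimate should land near $(n-m)/(n+m)$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, trials = 6, 4, 100_000           # votes for A, votes for B, simulated counts

always_ahead = 0
for _ in range(trials):
    order = rng.permutation([1] * n + [-1] * m)    # +1 for A, -1 for B, random count order
    always_ahead += bool(np.all(np.cumsum(order) > 0))  # A strictly ahead throughout
print(always_ahead / trials, (n - m) / (n + m))    # Monte Carlo vs. (n-m)/(n+m) = 0.2
```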

Exponential Distribution

  • $E[e^{tX}]=\int_0^{\infty}e^{tx}\lambda e^{-\lambda x}\,dx=\frac{\lambda}{\lambda-t}$ for $t<\lambda$

  • Gamma with parameters $(n,\lambda)$: $S=\sum_{i=1}^n X_i$ for i.i.d. exponential $X_i$

    • $E[e^{tS}]=(\frac{\lambda}{\lambda-t})^n$
    • $f(x)=\frac{\lambda e^{-\lambda x}(\lambda x)^{n-1}}{\Gamma(n)}$
  • memoryless: $P(X>s+t|X>t)=P(X>s)$ (simulated in the sketch after this list)

  • failure rate function: the probability intensity that a $t$-year-old item will fail

    $\lambda(t)=\frac{f(t)}{\overline{F}(t)}$
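
A simulation sketch (not from the notes; parameter values are arbitrary) of two facts above: memorylessness of the exponential, and that the sum of $n$ i.i.d. $\text{Exp}(\lambda)$ variables has the Gamma$(n,\lambda)$ mean $n/\lambda$ and variance $n/\lambda^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, s, t = 0.5, 1.0, 2.0
x = rng.exponential(scale=1 / lam, size=1_000_000)

# Memorylessness: P(X > s+t | X > t) should equal P(X > s) = exp(-lam * s).
print(np.mean(x[x > t] > s + t), np.mean(x > s), np.exp(-lam * s))

# Sum of n i.i.d. Exp(lam) variables is Gamma(n, lam): check its mean and variance.
n = 3
s_sum = rng.exponential(scale=1 / lam, size=(n, 1_000_000)).sum(axis=0)
print(s_sum.mean(), n / lam)        # E[S]   = n / lambda   = 6
print(s_sum.var(), n / lam ** 2)    # Var[S] = n / lambda^2 = 12
```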

Inequality

  • Jensen's inequality: $E[f(X)]\geq f(E[X])$ for convex $f$
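
A one-line numerical check (my own example) of Jensen's inequality with the convex function $f(x)=x^2$ and an exponential sample: $E[X^2]\geq(E[X])^2$.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=1_000_000)

print((x ** 2).mean(), x.mean() ** 2)   # E[f(X)] ~ 8 versus f(E[X]) ~ 4 for f(x) = x^2
```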

Stochastic Process

  • stochastic process $\underline{X}=\{X(t),t\in T\}$, where $T$ is an index set
  • discrete-time stochastic process: $T$ is a countable set
  • sample path: a realization of $\underline{X}$
  • independent increments (continuous-time): the increments $X(t_1)-X(t_0),\cdots,X(t_n)-X(t_{n-1})$ are independent for all $t_0<t_1<\cdots<t_n$
  • stationary increments: $X(t+s)-X(t)$ has the same distribution for all $t$
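
A minimal sketch (assumptions mine) of a discrete-time sample path whose increments over equal, disjoint steps are i.i.d. Gaussian, so the path has stationary and independent increments; this is a random-walk approximation, not a construction taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, steps = 0.01, 1_000

# i.i.d. N(0, dt) increments over disjoint steps => stationary, independent increments.
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=steps)
path = np.concatenate([[0.0], np.cumsum(increments)])   # one sample path X(k * dt)
print(path[:5], path[-1])
```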