Based on the derivation below, you need to finish the following:
1. Simulate A(t):
• Generate $A(t)$ numerically with $n_{\mathrm{sim}} \ge 50{,}000$ replications (a minimal simulation sketch follows this list).
• Summarize the simulated mean and variance.
2. Compute the theoretical $E[A(t)]$:
• Calculate $E[A(t)]$ in R using the same parameters as the simulation.
3. Compute the theoretical $E[A(t)^2]$:
• Compute $E[A(t)^2]$ with consistent parameter values.
4. Compare the results:
• Calculate the theoretical variance as
$$\operatorname{Var}(A(t)) = E[A(t)^2] - (E[A(t)])^2.$$
• Compare the theoretical mean and variance with the simulation results.
• Try multiple combinations of parameters when comparing the theoretical mean and variance with the simulation results.

To fully show your work, please submit HTML or PDF output together with the R Markdown file.
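To get started on step 1, here is a minimal R sketch of the simulation. The parameter values ($\lambda = 2$, $C = 1$, $\alpha = 0.5$, $\gamma = 1$, $t = 5$) are illustrative placeholders, not values prescribed by the problem:

```r
# Simulate A(t) = sum_{i=1}^{N(t)} C/(1 + alpha*i) * exp(-gamma*(t - T_(i)))
# for a homogeneous Poisson process with rate lambda.
simulate_A <- function(nsim = 50000, lambda = 2, C = 1,
                       alpha = 0.5, gamma = 1, t = 5) {
  replicate(nsim, {
    n <- rpois(1, lambda * t)          # N(t) ~ Poisson(lambda * t)
    if (n == 0) {
      0                                # empty sum when no events occur
    } else {
      ti <- sort(runif(n, 0, t))       # ordered event times on [0, t]
      i  <- seq_len(n)                 # rank of each ordered event time
      sum(C / (1 + alpha * i) * exp(-gamma * (t - ti)))
    }
  })
}

set.seed(1)
A <- simulate_A()
c(mean = mean(A), var = var(A))        # simulated mean and variance
```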
Problem description
For a homogeneous Poisson process with rate $\lambda$, we aim to derive the variance of the random variable $A(t)$, which is defined as
$$A(t) = \sum_{i=1}^{N(t)} A'(t, t_i), \qquad t \in [0, T],$$
where $A'(t, t_i) = \dfrac{C}{1+\alpha i}\, \exp(-\gamma (t - t_i))$.
The derivation proceeds step by step from basic definitions to the final result. Key assumptions are:
• N(t) denotes the number of events occurring in [0,t], such that N(t)∼Poisson(λt).
• Conditional on $N(T) = n$, the event times $T_1, T_2, \ldots, T_n$ (with observed values $t_1, t_2, \ldots, t_n$) are distributed as the order statistics of $n$ i.i.d. $\mathrm{Uniform}(0, T)$ random variables; in particular, $T_k / T$ has marginal distribution $\mathrm{Beta}(k, n+1-k)$.
• Parameters $C > 0$, $\alpha > 0$, $\gamma > 0$ are given.
Derivation for unconditional expectation and variance
Conditional event time distribution
By Appendix A, the order statistic $U_{(k)}$ has marginal distribution $\mathrm{Beta}(k, n-k+1)$. The event time $T_k$ in the homogeneous Poisson process is defined as $T_k = T \cdot U_{(k)}$. Using this transformation, we find the probability density function (PDF) of $T_k$ by a change of variables:
$$T_k = T \cdot u \implies u = \frac{T_k}{T}, \qquad \frac{du}{dT_k} = \frac{1}{T}.$$
Substituting into the Beta PDF:
$$f_{T_k}(t) = f_{U_{(k)}}\!\left(\frac{t}{T}\right) \cdot \frac{1}{T}.$$
This becomes
$$f_{T_{(k)}}(x) = \frac{\left(\frac{x}{T}\right)^{k-1}\left(1-\frac{x}{T}\right)^{n-k}}{B(k, n+1-k)} \cdot \frac{1}{T}
= \frac{x^{k-1}(T-x)^{n-k}}{T^n\, B(k, n+1-k)}
= \frac{n!}{(k-1)!\,(n-k)!} \cdot \frac{x^{k-1}(T-x)^{n-k}}{T^n}, \qquad 0 \le x \le T,$$
where $B(k, n+1-k) = \int_0^1 u^{k-1}(1-u)^{n-k}\,du = \frac{(k-1)!\,(n-k)!}{n!}$.
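As a quick sanity check on this density (not required by the assignment), one can overlay the derived PDF on a histogram of simulated $k$-th ordered event times; the values $n = 10$, $k = 3$, and $T = 5$ below are illustrative choices:

```r
# Compare the k-th of n ordered Uniform(0, Tmax) draws with the derived
# density x^(k-1) * (Tmax - x)^(n-k) / (Tmax^n * B(k, n-k+1)).
n <- 10; k <- 3; Tmax <- 5
tk <- replicate(20000, sort(runif(n, 0, Tmax))[k])
hist(tk, breaks = 50, freq = FALSE,
     main = "k-th ordered event time vs derived PDF", xlab = "x")
curve(x^(k - 1) * (Tmax - x)^(n - k) / (Tmax^n * beta(k, n - k + 1)),
      from = 0, to = Tmax, add = TRUE, col = "red", lwd = 2)
```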
Similarly, the joint density of $T_{(k)}$ and $T_{(l)}$, $k < l$, is
$$f_{T_{(k)}, T_{(l)} \mid N(T)=n}(x, y) = \frac{n!}{T^n\,(k-1)!\,(l-k-1)!\,(n-l)!}\; x^{k-1} (y-x)^{l-k-1} (T-y)^{n-l}, \qquad 0 \le x \le y \le T.$$
Conditional Expectation E[A(t)|N(t) = n]
First we derive the conditional expectation $E(A(t) \mid N(t) = n)$:
$$E(A(t) \mid N(t) = n) = E\!\left(\sum_{i=1}^{n} \frac{C}{1+\alpha i}\, e^{-\gamma(t - T_{(i)})} \,\Big|\, N(t) = n\right)
= \sum_{i=1}^{n} \frac{C}{1+\alpha i}\, E\!\left[e^{-\gamma(t - T_{(i)})} \,\big|\, N(t) = n\right]
= C e^{-\gamma t} \sum_{i=1}^{n} \frac{1}{1+\alpha i}\, E\!\left(e^{\gamma T_{(i)}} \,\big|\, N(t) = n\right).$$
For $E\!\left(e^{\gamma T_{(i)}} \mid N(t) = n\right)$, we use the conditional density
$$f_{T_{(i)} \mid N(t)=n}(x) = \frac{n!}{(i-1)!\,(n-i)!} \cdot \frac{x^{i-1}(t-x)^{n-i}}{t^n}, \qquad 0 \le x \le t.$$
Then
$$E\!\left(e^{\gamma T_{(i)}} \mid N(t) = n\right) = \int_0^t e^{\gamma x}\, f_{T_{(i)} \mid N(t)=n}(x)\,dx
= \frac{n!}{(i-1)!\,(n-i)!} \int_0^t e^{\gamma x}\, \frac{x^{i-1}(t-x)^{n-i}}{t^n}\,dx.$$
Setting $x = t \cdot u$, $dx = t\,du$, $0 \le u \le 1$, and substituting $x^{i-1} = (tu)^{i-1} = t^{i-1} u^{i-1}$ and $(t-x)^{n-i} = (t - tu)^{n-i} = t^{n-i} (1-u)^{n-i}$ into the integral:
$$E\!\left(e^{\gamma T_{(i)}} \mid N(t) = n\right) = \frac{n!}{(i-1)!\,(n-i)!} \int_0^1 e^{\gamma t u}\, \frac{t^{i-1} u^{i-1}\, t^{n-i} (1-u)^{n-i}}{t^n}\, t\,du
= \frac{n!}{(i-1)!\,(n-i)!} \int_0^1 e^{\gamma t u}\, u^{i-1} (1-u)^{n-i}\,du
= {}_1F_1(i;\, n+1;\, \gamma t),$$
where ${}_1F_1$ is the confluent hypergeometric function of the first kind, defined as
$${}_1F_1(a;\, c;\, z) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 e^{zu}\, u^{a-1} (1-u)^{c-a-1}\,du, \qquad \Gamma(x) = \int_0^\infty z^{x-1} e^{-z}\,dz.$$
Similarly, $E\!\left[e^{m\gamma T_{(i)}} \mid N(t) = n\right] = {}_1F_1(i;\, n+1;\, m\gamma t)$.

Thus we obtain
$$E[A(t) \mid N(t) = n] = \sum_{i=1}^{n} \frac{C e^{-\gamma t}}{1+\alpha i}\, {}_1F_1(i;\, n+1;\, \gamma t).$$
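In R, the ${}_1F_1$ factor can be computed directly from the integral representation above, which keeps the implementation self-contained. A minimal sketch with a Monte Carlo cross-check follows; the values of `n`, `i`, `gam`, and `t` are illustrative:

```r
# 1F1(a; c; z) via its integral representation (valid here since c > a > 0).
f11 <- function(a, c, z) {
  int <- integrate(function(u) exp(z * u) * u^(a - 1) * (1 - u)^(c - a - 1),
                   lower = 0, upper = 1)$value
  gamma(c) / (gamma(a) * gamma(c - a)) * int
}

# Monte Carlo check: E[exp(gam * T_(i)) | N(t) = n] vs 1F1(i; n+1; gam*t).
n <- 8; i <- 3; gam <- 0.4; t <- 5
mc <- mean(replicate(20000, exp(gam * sort(runif(n, 0, t))[i])))
c(monte_carlo = mc, hypergeometric = f11(i, n + 1, gam * t))
```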
Unconditional Expectation E[A(t)]
Using the law of total expectation:
$$E[A(t)] = \sum_{n=1}^{\infty} E[A(t) \mid N(t) = n]\, P(N(t) = n).$$
Substituting $P(N(t) = n) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}$:
$$E[A(t)] = \sum_{n=1}^{\infty} \frac{e^{-\lambda t} (\lambda t)^n}{n!}\, E(A(t) \mid N(t) = n)
= \sum_{n=1}^{\infty} \frac{e^{-\lambda t} (\lambda t)^n}{n!} \left[ C e^{-\gamma t} \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, \gamma t)}{1+\alpha i} \right].$$
(The $n = 0$ term vanishes, since $A(t) = 0$ when no events occur.)
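A sketch of step 2 using this series: the infinite sum over $n$ is truncated once the Poisson weights become negligible. The truncation point `nmax` is a numerical tuning choice, and `f11` is the helper defined in the sketch above:

```r
# Theoretical E[A(t)]: truncate the series over n where dpois(n, lambda*t)
# is negligible; relies on f11() from the previous sketch.
EA <- function(lambda, C, alpha, gam, t,
               nmax = ceiling(lambda * t + 10 * sqrt(lambda * t) + 10)) {
  terms <- sapply(1:nmax, function(n) {
    inner <- sum(sapply(1:n, function(i)
      f11(i, n + 1, gam * t) / (1 + alpha * i)))
    dpois(n, lambda * t) * inner
  })
  C * exp(-gam * t) * sum(terms)
}

EA(lambda = 2, C = 1, alpha = 0.5, gam = 1, t = 5)  # compare with mean(A)
```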
Unconditional second moment E[A(t)^2]
First, let us derive the conditional second moment. Define $Q_{(i)} = A'(t, T_{(i)}) = \frac{C}{1+\alpha i}\, e^{-\gamma t} e^{\gamma T_{(i)}}$. Then
$$A(t)^2 = \left(\sum_{i=1}^{n} Q_{(i)}\right)^2 = \sum_{i=1}^{n} Q_{(i)}^2 + 2 \sum_{1 \le i < j \le n} Q_{(i)} Q_{(j)},$$
thus,
$$E(A(t)^2 \mid N(t) = n) = \sum_{i=1}^{n} E(Q_{(i)}^2 \mid N(t) = n) + 2 \sum_{1 \le i < j \le n} E(Q_{(i)} Q_{(j)} \mid N(t) = n)$$
$$= \sum_{i=1}^{n} \left(\frac{C}{1+\alpha i}\right)^2 e^{-2\gamma t}\, E\!\left[e^{2\gamma T_{(i)}} \mid N(t) = n\right] + 2 \sum_{1 \le i < j \le n} \frac{C^2 e^{-2\gamma t}}{(1+\alpha i)(1+\alpha j)}\, E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]$$
$$= C^2 e^{-2\gamma t} \left[ \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, 2\gamma t)}{(1+\alpha i)^2} + 2 \sum_{1 \le i < j \le n} \frac{E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]}{(1+\alpha i)(1+\alpha j)} \right],$$
where
$$E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right] = \int_0^t \int_x^t e^{\gamma(x+y)}\, f_{T_{(i)}, T_{(j)} \mid N(t)=n}(x, y)\,dy\,dx
= \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!\, t^n} \int_0^t \int_x^t e^{\gamma(x+y)}\, x^{i-1} (y-x)^{j-i-1} (t-y)^{n-j}\,dy\,dx.$$
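This double integral has no simple closed form, but it can be evaluated numerically with nested calls to `integrate` (a self-contained sketch; a dedicated cubature routine would work equally well):

```r
# E[exp(gam*(T_(i) + T_(j))) | N(t) = n] by iterated numerical integration:
# inner integral over y in (x, t), outer integral over x in (0, t).
Eexp_pair <- function(i, j, n, gam, t) {
  const <- factorial(n) /
    (factorial(i - 1) * factorial(j - i - 1) * factorial(n - j) * t^n)
  inner <- Vectorize(function(x) {
    integrate(function(y) exp(gam * (x + y)) * x^(i - 1) *
                (y - x)^(j - i - 1) * (t - y)^(n - j),
              lower = x, upper = t)$value
  })
  const * integrate(inner, lower = 0, upper = t)$value
}

Eexp_pair(i = 2, j = 5, n = 8, gam = 0.4, t = 5)
```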
By the law of total expectation, substituting the conditional second moment derived above:
$$E[A(t)^2] = \sum_{n=1}^{\infty} C^2 e^{-2\gamma t} \left[ \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, 2\gamma t)}{(1+\alpha i)^2} + 2 \sum_{1 \le i < j \le n} \frac{E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]}{(1+\alpha i)(1+\alpha j)} \right] \frac{e^{-\lambda t} (\lambda t)^n}{n!}.$$
Combining terms:
$$E[A(t)^2] = C^2 e^{-(\lambda + 2\gamma)t} \sum_{n=1}^{\infty} \frac{(\lambda t)^n}{n!} \left[ \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, 2\gamma t)}{(1+\alpha i)^2} + 2 \sum_{1 \le i < j \le n} \frac{E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]}{(1+\alpha i)(1+\alpha j)} \right],$$
where ${}_1F_1(i;\, n+1;\, 2\gamma t)$ and $E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]$ are defined above.
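A sketch of step 3, assembling `f11` and `Eexp_pair` from the sketches above. The $O(n_{\max} \cdot n^2)$ numerical integrations make this slow, so a modest `nmax` (or memoization) is advisable:

```r
# Theoretical E[A(t)^2]: truncated outer series over n; reuses f11() and
# Eexp_pair() from the earlier sketches. Slow for large nmax.
EA2 <- function(lambda, C, alpha, gam, t, nmax = 25) {
  total <- 0
  for (n in 1:nmax) {
    diag_part <- sum(sapply(1:n, function(i)
      f11(i, n + 1, 2 * gam * t) / (1 + alpha * i)^2))
    cross_part <- 0
    if (n >= 2) {
      for (i in 1:(n - 1)) for (j in (i + 1):n) {
        cross_part <- cross_part +
          Eexp_pair(i, j, n, gam, t) / ((1 + alpha * i) * (1 + alpha * j))
      }
    }
    total <- total + dpois(n, lambda * t) * (diag_part + 2 * cross_part)
  }
  C^2 * exp(-2 * gam * t) * total
}
```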
Variance Var(A(t))
By the definition of variance we have:
$$\operatorname{Var}(A(t)) = E[A(t)^2] - (E[A(t)])^2,$$
where $E[A(t)]$ and $E[A(t)^2]$ are defined previously.
In summary,
$$E[A(t) \mid N(t) = n] = C e^{-\gamma t} \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, \gamma t)}{1+\alpha i},$$
$$E[A(t)] = C e^{-(\lambda + \gamma)t} \sum_{n=1}^{\infty} \frac{(\lambda t)^n}{n!} \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, \gamma t)}{1+\alpha i},$$
$$E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right] = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!\, t^n} \int_0^t \int_x^t e^{\gamma(x+y)}\, x^{i-1} (y-x)^{j-i-1} (t-y)^{n-j}\,dy\,dx,$$
$$E[A(t)^2 \mid N(t) = n] = C^2 e^{-2\gamma t} \left[ \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, 2\gamma t)}{(1+\alpha i)^2} + 2 \sum_{1 \le i < j \le n} \frac{E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]}{(1+\alpha i)(1+\alpha j)} \right],$$
$$E[A(t)^2] = C^2 e^{-(\lambda + 2\gamma)t} \sum_{n=1}^{\infty} \frac{(\lambda t)^n}{n!} \left[ \sum_{i=1}^{n} \frac{{}_1F_1(i;\, n+1;\, 2\gamma t)}{(1+\alpha i)^2} + 2 \sum_{1 \le i < j \le n} \frac{E\!\left[e^{\gamma(T_{(i)} + T_{(j)})} \mid N(t) = n\right]}{(1+\alpha i)(1+\alpha j)} \right],$$
$$\operatorname{Var}(A(t)) = E[A(t)^2] - (E[A(t)])^2.$$
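Putting the pieces together for step 4, a sketch comparing theory and simulation over several illustrative parameter combinations ($C = 1$ and $t = 5$ are held fixed here purely for brevity):

```r
# Step 4: theoretical vs simulated mean and variance across parameter grids.
params <- expand.grid(lambda = c(1, 2), alpha = c(0.5, 1), gam = c(0.5, 1))
set.seed(42)
results <- do.call(rbind, lapply(seq_len(nrow(params)), function(r) {
  p <- params[r, ]; C <- 1; t <- 5
  A <- simulate_A(nsim = 50000, lambda = p$lambda, C = C,
                  alpha = p$alpha, gamma = p$gam, t = t)
  m <- EA(p$lambda, C, p$alpha, p$gam, t)
  v <- EA2(p$lambda, C, p$alpha, p$gam, t) - m^2
  data.frame(p, sim_mean = mean(A), th_mean = m,
             sim_var = var(A), th_var = v)
}))
results
```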
Appendix A: Order Statistics: Marginal and Joint Distributions
Let $X_1, X_2, \ldots, X_n \overset{\text{i.i.d.}}{\sim} F$, where $F$ is the cumulative distribution function (CDF) of the sample with probability density function $p(x)$. Denote $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$ the order statistics of the sample.
We derive the marginal and joint probability density functions of the order statistics $X_{(k)}$ and $X_{(l)}$ ($k < l$), and then specialize to the uniform distribution $U(0, 1)$.
Marginal Distribution of X_(k)
To derive the density of $X_{(k)}$ in a small interval $[x, x+\Delta x]$, the following must occur: $k-1$ observations must lie strictly below $x$, $n-k$ observations must lie strictly above $x+\Delta x$, and exactly one observation must fall in $[x, x+\Delta x]$.
The number of ways to arrange the $n$ observations into this configuration is the multinomial coefficient
$$\frac{n!}{(k-1)!\,(n-k)!}.$$
The probabilities of the specific events are: $[F(x)]^{k-1}$ for the $k-1$ observations falling below $x$, $[1 - F(x+\Delta x)]^{n-k}$ for the $n-k$ observations exceeding $x+\Delta x$, and $F(x+\Delta x) - F(x) \approx p(x)\Delta x$ for the single observation lying in $[x, x+\Delta x]$.
The total probability is:
$$P(X_{(k)} \in [x, x+\Delta x]) \approx \frac{n!}{(k-1)!\,(n-k)!}\, [F(x)]^{k-1} [1 - F(x+\Delta x)]^{n-k}\, p(x)\Delta x.$$
Taking the limit as $\Delta x \to 0$, the marginal density is:
$$p_k(x) = \frac{n!}{(k-1)!\,(n-k)!}\, [F(x)]^{k-1} [1 - F(x)]^{n-k}\, p(x).$$
Joint Distribution of X_(k) and X_(l), k < l
To derive the joint density of $X_{(k)}$ and $X_{(l)}$, consider
$$P(X_{(k)} \in [x, x+\Delta x],\; X_{(l)} \in [y, y+\Delta y]), \qquad x < y.$$
The following conditions must hold: $k-1$ observations lie strictly below $x$, $l-k-1$ observations lie strictly in $(x, y)$, $n-l$ observations lie strictly above $y+\Delta y$, and exactly one observation lies in $[x, x+\Delta x]$ and one in $[y, y+\Delta y]$.
The total number of ways to arrange the $n$ observations is:
$$\frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}.$$
The probabilities of the events are: $[F(x)]^{k-1}$ for the $k-1$ observations below $x$, $[F(y) - F(x)]^{l-k-1}$ for the $l-k-1$ observations in $(x, y)$, $[1 - F(y+\Delta y)]^{n-l}$ for the $n-l$ observations above $y+\Delta y$, and $p(x)\Delta x$ and $p(y)\Delta y$ for the contributions from the densities at $x$ and $y$.
Combining these components, the joint probability is:
$$P(X_{(k)} \in [x, x+\Delta x],\; X_{(l)} \in [y, y+\Delta y]) \approx \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\, [F(x)]^{k-1} [F(y)-F(x)]^{l-k-1} [1-F(y)]^{n-l}\, p(x)p(y)\,\Delta x \Delta y.$$
Taking the limit as $\Delta x, \Delta y \to 0$, the joint density is:
$$p_{k,l}(x, y) = \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\, [F(x)]^{k-1} [F(y)-F(x)]^{l-k-1} [1-F(y)]^{n-l}\, p(x)p(y).$$
Special Case: Uniform Distribution U(0,1)
For $U(0, 1)$, the CDF and PDF are:
$$F(x) = x, \qquad p(x) = 1, \qquad 0 \le x \le 1.$$
Substituting $F(u) = u$ and $p(u) = 1$ into the marginal density of $X_{(k)}$:
$$p_{U_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, x^{k-1} (1-x)^{n-k}, \qquad 0 \le x \le 1.$$
This is the Beta distribution with parameters $k$ and $n-k+1$:
$$U_{(k)} \sim \mathrm{Beta}(k, n-k+1).$$
Substituting $F(u) = u$ and $p(u) = 1$ into the joint density formula gives the joint density of $U_{(k)}$ and $U_{(l)}$:
$$p_{U_{(k)}, U_{(l)}}(x, y) = \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\, x^{k-1} (y-x)^{l-k-1} (1-y)^{n-l}, \qquad 0 \le x \le y \le 1.$$
In summary,
• The marginal density of $X_{(k)}$ is:
$$p_k(x) = \frac{n!}{(k-1)!\,(n-k)!}\, [F(x)]^{k-1} [1-F(x)]^{n-k}\, p(x).$$
For $U(0, 1)$:
$$p_{U_{(k)}}(x) = \frac{x^{k-1}(1-x)^{n-k}}{B(k, n-k+1)}, \qquad U_{(k)} \sim \mathrm{Beta}(k, n-k+1).$$
• The joint density of $X_{(k)}$ and $X_{(l)}$ ($k < l$) is:
$$p_{k,l}(x, y) = \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\, [F(x)]^{k-1} [F(y)-F(x)]^{l-k-1} [1-F(y)]^{n-l}\, p(x)p(y).$$
For $U(0, 1)$:
$$p_{U_{(k)}, U_{(l)}}(x, y) = \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\, x^{k-1} (y-x)^{l-k-1} (1-y)^{n-l}, \qquad 0 \le x \le y \le 1.$$
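As an optional empirical check of the Beta claim, a QQ-plot of simulated $U_{(k)}$ values against $\mathrm{Beta}(k, n-k+1)$ quantiles ($n = 12$, $k = 4$ are illustrative):

```r
# Check U_(k) ~ Beta(k, n-k+1) for order statistics of n Uniform(0,1) draws.
n <- 12; k <- 4
u_k <- replicate(20000, sort(runif(n))[k])
qqplot(qbeta(ppoints(500), k, n - k + 1), u_k,
       xlab = "Beta(k, n-k+1) quantiles", ylab = "Simulated U_(k) quantiles")
abline(0, 1, col = "red")  # points near this line support the claim
```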