STA 365: Assignment
This is the first assignment. It is to be treated like a take-home exam. Accordingly, unlike
the homework, this assignment must be completed independently (on your own), without
help or consultation from your colleagues or from anyone else except the course instructor
and TAs. The TAs will be instructed to flag answers that look sufficiently similar.
You are allowed to use the course lecture notes, the recommended textbooks, and external
sources. However, you must cite the sources you rely on, and you should be using these
sources to inform/help you in developing your own answer, not copying them.
You are encouraged to type out your answers in LaTeX or a word processor. If you need to
handwrite your responses, make sure that they are clear and legible, and that you scan a
high-quality image. Anything that cannot be read will be marked as incomplete.
Where a problem requires the use of R, you must produce your associated R code. While
you may use other software, solutions will be provided only in R.
Problem 1. (25 points)
Suppose that y ∼ Exp(θ) with density
π(y) = θ exp(−θy).
Suppose that the prior for θ is Gamma(α, β) with density
π(θ) ∝ θ^(α−1) exp(−βθ).
(a) We observe only that y ≥ 10 without observing the exact value of y.
(i) What is the posterior distribution π(θ | y ≥ 10), as a function of α and β?
(ii) What is the posterior mean of θ?
(iii) What is the posterior variance of θ?
(10 points)
(b) We are now told that y is exactly 15.
(i) What is the posterior distribution π(θ | y = 15)?
(ii) What is the posterior mean of θ?
(iii) What is the posterior variance of θ?
(10 points)
(c) We assign weakly informative priors to θ in both parts (a) and (b) by setting α = 0.001
and β = 0.001. Which posterior variance of θ is higher, the one from part (a) or the one
from part (b)?
(5 points)
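One way to sanity-check a posterior mean you derive for the censored case in part (a) is self-normalized importance sampling against the prior: draw θ from the prior and weight each draw by the censored likelihood P(y ≥ 10 | θ) = exp(−10θ). The sketch below is in Python purely as a generic numerical check (the course solutions use R), and the prior values α = 2, β = 1 are illustrative stand-ins, not the weakly informative values from part (c).

```python
import math
import random

random.seed(365)

# Illustrative prior values only (the assignment's part (c) uses alpha = beta = 0.001).
alpha, beta = 2.0, 1.0

# Self-normalized importance sampling: draw theta from the Gamma(alpha, beta) prior.
# Note random.gammavariate takes shape and SCALE, so scale = 1/beta for rate beta.
# Each draw is weighted by the censored likelihood P(y >= 10 | theta) = exp(-10 theta).
n = 200_000
num = den = 0.0
for _ in range(n):
    theta = random.gammavariate(alpha, 1.0 / beta)
    w = math.exp(-10.0 * theta)
    num += w * theta
    den += w

post_mean = num / den  # Monte Carlo estimate of E[theta | y >= 10]
print(post_mean)
```

This only estimates the posterior mean numerically; it does not presume any particular closed form, so it can serve as an independent check on whatever expression you derive.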
Problem 2. (25 points)
Suppose y ∼ Galenshore(a, θ), given by
π(y) = (2/Γ(a)) θ^(2a) y^(2a−1) exp(−θ² y²)
for y > 0, θ > 0, a > 0. Assume that a is known. It may be helpful to know that for this
density,
E[y] = Γ(a + 1/2) / (θ Γ(a)), and E[y²] = a/θ².
You can also assume that y1, ..., yn, y(n+1) ∼ Galenshore(a, θ) for any n > 1.
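If you want to convince yourself of the stated moments before relying on them, they can be verified by numerical integration. The sketch below is in Python rather than R, purely as a self-check; the values a = 2 and θ = 1.5 are arbitrary illustrative choices, not part of the assignment.

```python
import math

def galenshore_pdf(y, a, theta):
    # Density from the problem statement:
    # (2 / Gamma(a)) * theta^(2a) * y^(2a-1) * exp(-theta^2 y^2)
    return 2.0 / math.gamma(a) * theta ** (2 * a) * y ** (2 * a - 1) * math.exp(-(theta * y) ** 2)

def moment(k, a, theta, upper=10.0, n=200_000):
    # Riemann-sum approximation of E[y^k] on (0, upper); the integrand is
    # negligible beyond `upper` for these parameter values.
    h = upper / n
    return sum((i * h) ** k * galenshore_pdf(i * h, a, theta) for i in range(1, n)) * h

a, theta = 2.0, 1.5
print(moment(0, a, theta))  # should be close to 1 (normalization check)
print(moment(1, a, theta), math.gamma(a + 0.5) / (theta * math.gamma(a)))  # E[y] vs. stated formula
print(moment(2, a, theta), a / theta ** 2)  # E[y^2] vs. stated formula
```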
a) Identify the natural conjugate prior for θ.
b) Find π(θ | y1, ..., yn).
c) Find E[θ | y1, ..., yn].
d) Find the posterior predictive density π(ỹ | y1, ..., yn) for a new observation ỹ.
e) Identify the Jeffreys’ prior for θ.
(5 points each)
Problem 3. (25 points)
(a) Suppose that we are interested in flight times between Toronto and Washington DC. We
will assume that the times are normally distributed and wish to make inferences on µ. We
know from previous observations that the average flight time is about 1.5 hours and so we
set µ0 to 1.5. Suppose we set τ0 to 1. Hence, the prior for µ is Normal(µ0, τ0²).
Now suppose we observe y1, ..., yn, which can be summarized by ȳ = 1.6 and s² = 0.01. Find
π(µ | y1, ..., yn, σ² = s²). (5 points)
(b) It is a common problem for measurements to be observed in rounded form. For a
simple example, suppose we weigh an object five times and measure weights, rounded to
the nearest pound, of 10,10,12,11,9. Assume that unrounded measurements are normally
distributed with a noninformative prior distribution on the mean µ and variance σ², given by
π(µ, σ²) ∝ (σ²)^(−1).
(i) Give the posterior distribution for (µ, σ²) obtained by pretending that the observations
are exact, unrounded measurements. (10 points)
(ii) Give the posterior distribution for (µ, σ²) treating the measurements as rounded. (10
points)
Problem 4. (25 points)
(a) Find Jeffreys’ priors for the unknown parameters in the following models:
(i) Y ∼ N(µ, σ²) with µ known and the precision σ^(−2) as the parameter for which we need
to specify the prior. (5 points)
(ii) Y ∼ N(µ, σ²) and we need a prior for (µ, σ^(−2)). (10 points)
(b) Let
L(θ, a) =
  k0(θ − a), if θ ≥ a,
  k1(a − θ), if θ < a.
Find the Bayes estimator under this loss function. Recall that the Bayes estimator a is the
quantity that minimizes the posterior expected loss, given by
E[L(θ, a) | y] = ∫_Ω L(θ, a) π(θ | y) dθ.
(10 points)
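A candidate Bayes estimator for part (b) can be checked numerically by minimizing a Monte Carlo estimate of the posterior expected loss over a grid of actions. The sketch below is in Python as a generic check (the course uses R); the Gamma posterior draws and the constants k0 = 3, k1 = 1 are illustrative stand-ins and do not come from the assignment.

```python
import random

random.seed(365)

# Stand-in posterior: Gamma(shape = 3, rate = 2) draws; random.gammavariate
# takes shape and SCALE, so scale = 1/2. k0, k1 are illustrative constants.
k0, k1 = 3.0, 1.0
draws = [random.gammavariate(3.0, 0.5) for _ in range(2_000)]

def expected_loss(a, thetas):
    # Monte Carlo estimate of E[L(theta, a) | y] for the piecewise-linear loss
    return sum(k0 * (t - a) if t >= a else k1 * (a - t) for t in thetas) / len(thetas)

# Minimize the estimated posterior expected loss over a grid of actions in (0, 5)
grid = [i / 200 for i in range(1, 1000)]
a_hat = min(grid, key=lambda a: expected_loss(a, draws))
print(a_hat)
```

Because this minimizes the loss directly, it makes no assumption about what the analytic minimizer looks like, so it can confirm (or contradict) the form you derive.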