Statistical Methods for Data Science DATA7202



Assignment 3 (Weight: 25%)
Please answer the questions below. For theoretical questions, present rigorous proofs and appropriate explanations. Your report should be visually appealing, and the questions should be answered in the order in which they appear. For programming questions, present your analysis of the data using Python, Matlab, or R as a short report, clearly addressing the objectives, justifying the modeling (and hence statistical analysis) choices you make, and discussing your conclusions. Do not include excessive amounts of output in your report. All code should be copied into the appendix, and the sources should be packaged separately and submitted on Blackboard in a zipped folder named:
"student_last_name.student_first_name.student_id.zip".
For example, suppose the student's name is John Smith and the student ID is 123456789.
Then the zipped file name will be Smith.John.123456789.zip.
1. [30 Marks] Weight data for young laboratory rats are given in Table 1.
Rat ID Week 8 Week 15 Week 22 Week 29 Week 36
1 151 199 246 283 320
2 145 199 249 293 354
3 147 214 263 312 328
4 155 200 237 272 297
5 135 188 230 280 323
6 159 210 252 298 331
7 141 189 231 275 305
8 159 201 248 297 338
9 177 236 285 340 376
10 134 182 220 260 296
11 160 208 261 313 352
12 143 188 220 273 314
13 154 200 244 289 325
14 171 221 270 326 358
15 163 216 242 281 312
16 160 207 248 288 324
17 142 187 234 280 316
18 156 203 243 283 317
19 157 212 259 307 336
20 152 203 246 286 321
21 154 205 253 298 334
22 139 190 225 267 302
23 146 191 229 272 302
24 157 211 250 285 323
25 132 185 237 286 331
26 160 207 257 303 345
27 169 216 261 295 333
28 157 205 248 289 316
29 137 180 219 258 291
30 153 200 244 286 324
Table 1: Rat measurements.
Consider the model:
y_{i,j} ~ N(alpha + beta * x_{i,j}, sigma^2),  1 <= i <= 30,  1 <= j <= 5,
x_{i,1} = 8, x_{i,2} = 15, x_{i,3} = 22, x_{i,4} = 29, x_{i,5} = 36,
alpha ~ N(0, 1000),
beta ~ N(0, 1000),
sigma ~ Gamma(1000, 1000).
(a) [15 Marks] Perform MCMC estimation (write the sampler yourself, or use JAGS or similar software).
(b) Create 3 independent chains.
(c) [5 Marks] Consider the first chain. Show trace and autocorrelation plots for all parameters. Discuss the convergence. Then, present the summary table and density plots for all parameters.
(d) [5 Marks] Use all chains to present the Gelman-Rubin diagnostic plot. Discuss the convergence.
(e) [5 Marks] Present a summary table, trace plots (with all traces for each parameter on one graph), and density plots for all parameters, using all three chains.
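As a starting point for part (a), a self-written sampler might look like the following minimal random-walk Metropolis sketch over (alpha, beta, log sigma). The data array, starting values, proposal scales, and chain length below are illustrative assumptions only; replace the placeholder data with the 30 x 5 weight matrix from Table 1 when doing the assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data with the same layout as Table 1 (30 rats x 5 weeks);
# substitute the real weight matrix here.
x = np.array([8, 15, 22, 29, 36], dtype=float)
y = 100 + 7 * x + rng.normal(0, 15, size=(30, 5))

def log_post(alpha, beta, log_sigma):
    """Unnormalised log posterior under the priors in the model statement."""
    sigma = np.exp(log_sigma)
    mu = alpha + beta * x                      # broadcasts over the 5 weeks
    loglik = np.sum(-0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma))
    # N(0, 1000) priors on alpha and beta (variance 1000)
    logprior = -alpha**2 / 2000 - beta**2 / 2000
    # Gamma(1000, 1000) prior on sigma; the extra +log_sigma is the
    # Jacobian of the log transform
    logprior += 999 * log_sigma - 1000 * sigma + log_sigma
    return loglik + logprior

def rw_metropolis(n_iter=5000, scales=(1.0, 0.05, 0.02)):
    theta = np.array([100.0, 5.0, np.log(15.0)])   # rough starting values
    lp = log_post(*theta)
    chain = np.empty((n_iter, 3))
    for t in range(n_iter):
        prop = theta + rng.normal(0, scales)       # random-walk proposal
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain

chain = rw_metropolis()
print(chain[2500:, :2].mean(axis=0))   # posterior means of alpha, beta
```

Running three such samplers from different starting values gives the three independent chains needed for parts (b)-(e); in practice the proposal scales would need tuning, and a package such as PyMC or JAGS handles this automatically.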
2. [10 Marks] Show that any training set with unique x_i values, tau = {(x_i, y_i), i = 1, . . . , n}, can be fitted by a tree with zero training loss.
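The idea behind Question 2 can be sanity-checked numerically: when the x_i are unique, splits can continue until every leaf holds a single training point, so each leaf prediction can equal its y_i exactly. The toy regression tree below is an illustrative sketch of that construction (the data values are arbitrary), not the required proof.

```python
import numpy as np

def grow_tree(x, y):
    """Grow a binary regression tree until every leaf holds one point.

    Requires unique x values: two points with equal x could never be
    separated by a threshold split.
    """
    if len(x) == 1:
        return {"leaf": y[0]}
    order = np.argsort(x)
    x, y = x[order], y[order]
    mid = len(x) // 2
    split = (x[mid - 1] + x[mid]) / 2      # threshold between two distinct x's
    return {"split": split,
            "left": grow_tree(x[:mid], y[:mid]),
            "right": grow_tree(x[mid:], y[mid:])}

def predict(tree, xi):
    while "leaf" not in tree:
        tree = tree["left"] if xi <= tree["split"] else tree["right"]
    return tree["leaf"]

x = np.array([3.0, 1.0, 4.0, 1.5, 9.0])
y = np.array([2.0, 7.0, 1.0, 8.0, 2.0])
tree = grow_tree(x, y)
train_loss = np.mean([(predict(tree, xi) - yi) ** 2 for xi, yi in zip(x, y)])
print(train_loss)   # → 0.0
```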
3. [10 Marks] Suppose that during the construction of a decision tree we wish to specify a constant regional prediction function h_w on the region R_w, based on the training data in R_w, say {(x_1, y_1), . . . , (x_k, y_k)}. Show that h_w(x) := (1/k) * sum_{i=1}^{k} y_i minimizes the squared-error loss.
