# Do I get the right answers? If not, can someone please explain?

**(a)** 2 points possible (graded, results hidden)

Consider a Gaussian linear model $Y = aX + \varepsilon$ in a Bayesian view. Consider the prior $\pi(a) = 1$ for all $a \in \mathbb{R}$. Determine whether each of the following statements is true or false.

1. $\pi(a)$ is a uniform prior. (True / False)
2. $\pi(a)$ is a Jeffreys prior when we consider the likelihood $L(Y = y \mid A = a, X = x)$ (where we assume $x$ is known). (True / False)
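For statement 2, the Jeffreys prior can be checked directly from the Fisher information. A sketch of the computation, assuming (as is standard for this model) that $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ with $\sigma^2$ known:

```latex
\ell(a) = \log L(Y = y \mid A = a, X = x)
        = -\frac{(y - ax)^2}{2\sigma^2} + \text{const},
\qquad
I(a) = -\mathbb{E}\!\left[\frac{\partial^2 \ell}{\partial a^2}\right]
     = \frac{x^2}{\sigma^2}.
```

Since $I(a)$ does not depend on $a$, the Jeffreys prior $\pi_J(a) \propto \sqrt{I(a)}$ is constant on $\mathbb{R}$, i.e. it agrees with $\pi(a) = 1$ up to a constant factor (both are improper priors).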
**(b)** 3 points possible (graded, results hidden)

Consider a linear regression model $Y = X\beta + \sigma\varepsilon$ where:

- $\varepsilon \in \mathbb{R}^n$ is a random vector with $\mathbb{E}[\varepsilon] = 0$, $\mathbb{E}[\varepsilon\varepsilon^\top] = I_n$, and no further assumptions are made about $\varepsilon$;
- $X$ is an $n \times p$ deterministic matrix, and $X^\top X$ is invertible;
- $\sigma$ is an unknown constant.

Let $\hat{\beta}$ denote the least squares estimator of $\beta$ in this context. Determine whether each of the following statements is true or false.

1. $\hat{\beta}$ is the maximum likelihood estimator for $\beta$. (True / False)
2. With the model written as $Y = X\beta + \sigma\varepsilon$, $\hat{\beta}$ has dimension $1 \times p$ (i.e. it is a row vector of length $p$). (True / False)
3. $\hat{\beta}$ has a Gaussian distribution (even for small $n$). (True / False)
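To build intuition for statements 2 and 3, here is a minimal NumPy sketch of the least squares estimator $\hat{\beta} = (X^\top X)^{-1} X^\top Y$ under the stated assumptions. The design matrix, true coefficients, and noise distribution below are all hypothetical choices for illustration; note the noise is deliberately non-Gaussian (scaled uniform) while still satisfying $\mathbb{E}[\varepsilon] = 0$ and $\mathbb{E}[\varepsilon\varepsilon^\top] = I_n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n observations, p coefficients.
n, p = 50, 3
X = rng.standard_normal((n, p))        # n x p design (deterministic in the model; simulated here)
beta = np.array([1.0, -2.0, 0.5])      # true coefficients (assumed for illustration)
sigma = 0.3

# Non-Gaussian noise with mean 0 and variance 1 per coordinate:
# Uniform(-sqrt(3), sqrt(3)) has variance (2*sqrt(3))**2 / 12 = 1.
eps = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)

Y = X @ beta + sigma * eps

# Least squares estimator: solve the normal equations (X^T X) beta_hat = X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

print(beta_hat.shape)  # (3,) -- a vector of length p, not a 1 x p row vector
```

This illustrates why statement 2 is suspect: $\hat{\beta}$ lives in $\mathbb{R}^p$ as a column vector (length-$p$ array above), not a $1 \times p$ row vector. For statement 3, since the model makes no Gaussianity assumption on $\varepsilon$, the exact (small-$n$) distribution of $\hat{\beta}$ is whatever the noise induces; Gaussianity of $\hat{\beta}$ for fixed small $n$ is not guaranteed, only an asymptotic approximation via the central limit theorem.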