
180 Multiple-Choice Questions in Econometrics – Part 6

Chapter 17: The Theory of Linear Regression with One Regressor

KTL_001_C17_1: When the errors are heteroskedastic, then

● WLS is efficient in large samples if the functional form of the heteroskedasticity is known.
○ OLS is biased.
○ OLS is still efficient as long as there is no serial correlation in the error terms.
○ weighted least squares is efficient.
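
To make the point concrete, here is a minimal simulation sketch (the data-generating process, parameter values, and variable names are illustrative assumptions, not part of the question bank). When \(\mathrm{var}(u_i \mid X_i) = \lambda X_i^2\) is known, WLS with weights \(1/X_i^2\) shows visibly smaller sampling spread than OLS:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta0, beta1 = 200, 1.0, 2.0
ols_b1, wls_b1 = [], []
for _ in range(2000):
    x = rng.uniform(1.0, 5.0, n)
    u = rng.normal(0.0, np.sqrt(0.5) * x)   # var(u | x) = 0.5 * x**2, form known
    y = beta0 + beta1 * x + u
    X = sm.add_constant(x)
    ols_b1.append(sm.OLS(y, X).fit().params[1])
    # Weights proportional to 1 / var(u | x) make WLS efficient here
    wls_b1.append(sm.WLS(y, X, weights=1.0 / x**2).fit().params[1])

print("sd of OLS slope:", np.std(ols_b1))   # larger sampling spread
print("sd of WLS slope:", np.std(wls_b1))   # smaller: WLS is efficient
```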

KTL_001_C17_2: Asymptotic distribution theory is

○ not practically relevant, because we never have an infinite number of observations.
○ only of theoretical interest.
○ of interest because it tells you what the distribution approximately looks like in small samples.
● the distribution of statistics when the sample size is very large.
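
A short simulation sketch of what the marked answer means (the skewed error distribution and all names here are assumptions for illustration): the distribution of the centered and scaled OLS slope is noticeably non-normal in small samples but approaches its normal limit as \(n\) grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def slope_draws(n, reps=4000):
    """Centered, scaled OLS slopes from y = 1 + 2x + u with skewed errors."""
    b1 = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0.0, 1.0, n)
        u = rng.exponential(1.0, n) - 1.0       # mean-zero but skewed errors
        y = 1.0 + 2.0 * x + u
        b1[r] = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    return np.sqrt(n) * (b1 - 2.0)

# Skewness shrinks toward 0 (the normal value) as n grows
for n in (10, 50, 500):
    print(n, stats.skew(slope_draws(n)))
```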

KTL_001_C17_3: Under the five extended least squares assumptions, the homoskedasticity-only t-statistic in this chapter

● has a Student t distribution with n-2 degrees of freedom.
○ has a normal distribution.
○ converges in distribution to a \(\chi_{n-2}^2\) distribution.
○ has a Student t distribution with n degrees of freedom.
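
A quick simulation check of the marked answer (sample size, seed, and names are assumptions): with a fixed regressor and i.i.d. normal, homoskedastic errors, the homoskedasticity-only t-statistic reproduces the quantiles of the Student t distribution with \(n - 2\) degrees of freedom:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, reps = 10, 5000
x = rng.uniform(0.0, 1.0, n)        # regressor held fixed across replications
X = sm.add_constant(x)
tvals = np.empty(reps)
for r in range(reps):
    y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)    # normal, homoskedastic u
    fit = sm.OLS(y, X).fit()
    tvals[r] = (fit.params[1] - 2.0) / fit.bse[1]  # homoskedasticity-only t

print(np.quantile(tvals, [0.025, 0.975]))     # simulated critical values...
print(stats.t.ppf([0.025, 0.975], df=n - 2))  # ...match t with n-2 df
```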

KTL_001_C17_4: If the errors are heteroskedastic, then

○ the OLS estimator is still BLUE as long as the regressors are nonrandom.
○ the usual formula cannot be used for the OLS estimator.
○ your model becomes overidentified.
● the OLS estimator is not BLUE.

KTL_001_C17_5: The advantage of using heteroskedasticity-robust standard errors is that

○ they are easier to compute than the homoskedasticity-only standard errors.
● they produce asymptotically valid inferences even if you do not know the form of the conditional variance function.
○ it makes the OLS estimator BLUE, even in the presence of heteroskedasticity.
○ they do not unnecessarily complicate matters, since in real-world applications, the functional form of the conditional variance can easily be found.
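
In practice this is one estimator option away. A minimal sketch in statsmodels (the simulated data are an assumption for illustration; `cov_type="HC1"` requests Eicker-Huber-White robust standard errors):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 5.0, 500)
y = 1.0 + 2.0 * x + rng.normal(0.0, x)      # error variance grows with x
X = sm.add_constant(x)

classic = sm.OLS(y, X).fit()                # homoskedasticity-only SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")   # heteroskedasticity-robust SEs

print(classic.bse)   # invalid under heteroskedasticity
print(robust.bse)    # asymptotically valid without modeling var(u | x)
```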

KTL_001_C17_6: In order to use the t-statistic for hypothesis testing and constructing a 95% confidence interval as \(\pm 1.96\) standard errors, the following three assumptions have to hold:

● the conditional mean of \(u_i\) given \(X_i\) is zero; \((X_i, Y_i)\), \(i = 1, 2, \ldots, n\), are i.i.d. draws from their joint distribution; \(X_i\) and \(u_i\) have four moments
○ the conditional mean of \(u_i\) given \(X_i\) is zero; \((X_i, Y_i)\), \(i = 1, 2, \ldots, n\), are i.i.d. draws from their joint distribution; homoskedasticity
○ the conditional mean of \(u_i\) given \(X_i\) is zero; \((X_i, Y_i)\), \(i = 1, 2, \ldots, n\), are i.i.d. draws from their joint distribution; the conditional distribution of \(u_i\) given \(X_i\) is normal
○ none of the above

KTL_001_C17_7: If the functional form of the conditional variance function is incorrect, then

● the standard errors computed by WLS regression routines are invalid
○ the OLS estimator is biased
○ instrumental variable techniques have to be used
○ the regression \({R^2}\) can no longer be computed

KTL_001_C17_8: Suppose that the conditional variance is \(\mathrm{var}(u_i \mid X_i) = \lambda h(X_i)\), where \(\lambda\) is a constant and \(h\) is a known function. The WLS estimator is

○ the same as the OLS estimator since the function is known
○ one that can only be calculated if you have at least 100 observations
● the estimator obtained by first dividing the dependent variable and regressor by the square root of \(h\) and then regressing this modified dependent variable on the modified regressor using OLS
○ the estimator obtained by first dividing the dependent variable and regressor by \(h\) and then regressing this modified dependent variable on the modified regressor using OLS
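
A sketch of the transformation in the correct option, with an assumed \(h(X_i) = X_i^2\) for illustration: dividing the dependent variable and every regressor (including the constant) by \(\sqrt{h}\) and running OLS gives the same coefficients as WLS with weights \(1/h\):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1.0, 5.0, 300)
y = 1.0 + 2.0 * x + rng.normal(0.0, x)   # var(u | x) = lambda * h(x), h(x) = x**2
h = x**2
X = sm.add_constant(x)

# OLS on variables divided by sqrt(h); the constant column is divided too
ols_transformed = sm.OLS(y / np.sqrt(h), X / np.sqrt(h)[:, None]).fit()

# Equivalent WLS with weights 1 / h
wls = sm.WLS(y, X, weights=1.0 / h).fit()

print(ols_transformed.params)   # identical point estimates...
print(wls.params)               # ...from the two routes
```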

KTL_001_C17_9: The large-sample distribution of \(\hat{\beta}_1\) is

● \(\sqrt{n}\left( \hat{\beta}_1 - \beta_1 \right) \to N\left[ 0, \frac{\mathrm{var}(\nu_i)}{\left[ \mathrm{var}(X_i) \right]^2} \right]\), where \(\nu_i = (X_i - \mu_X) u_i\)
○ \(\sqrt{n}\left( \hat{\beta}_1 - \beta_1 \right) \to N\left[ 0, \frac{\mathrm{var}(\nu_i)}{\left[ \mathrm{var}(X_i) \right]^2} \right]\), where \(\nu_i = u_i\)
○ \(\sqrt{n}\left( \hat{\beta}_1 - \beta_1 \right) \to N\left[ 0, \frac{\mathrm{var}(\nu_i)}{\left[ \mathrm{var}(X_i) \right]^2} \right]\), where \(\nu_i = X_i u_i\)
○ \(\sqrt{n}\left( \hat{\beta}_1 - \beta_1 \right) \to N\left[ 0, \frac{\sigma_u^2}{\left[ \mathrm{var}(X_i) \right]^2} \right]\)
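
For reference, the standard large-sample step behind the correct option (added here as a reminder; it is not part of the original question):

\[
\hat{\beta}_1 - \beta_1 = \frac{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})\, u_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} \approx \frac{\frac{1}{n}\sum_{i=1}^{n} \nu_i}{\mathrm{var}(X_i)}, \qquad \nu_i = (X_i - \mu_X)\, u_i,
\]

so the central limit theorem applied to \(\bar{\nu}\) delivers the limiting variance \(\mathrm{var}(\nu_i)/\left[\mathrm{var}(X_i)\right]^2\).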

KTL_001_C17_10: Assume that \(\mathrm{var}(u_i \mid X_i) = \phi_0 + \phi_1 X_i^2\). One way to estimate \(\phi_0\) and \(\phi_1\) consistently is to regress

○ \({{\hat u}_i}\) on \(X_i^2\) using OLS
● \(\hat u_i^2\) on \(X_i^2\) using OLS
○ \(\hat u_i^2\) on \({\sqrt {{X_i}} }\) using OLS
○ \(\hat u_i^2\) on \(X_i^2\) using OLS, but suppressing the constant (“restricted least squares”)
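
A sketch of this two-step idea (the simulated data-generating process and names are assumptions): fit OLS, save the residuals, then regress the squared residuals on \(X_i^2\) with a constant to recover \(\phi_0\) and \(\phi_1\):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 5.0, 2000)
u = rng.normal(0.0, np.sqrt(1.0 + 0.5 * x**2))   # var(u | x) = 1.0 + 0.5 * x**2
y = 1.0 + 2.0 * x + u

# Step 1: OLS residuals
resid = sm.OLS(y, sm.add_constant(x)).fit().resid

# Step 2: regress squared residuals on x**2 (with constant) for (phi0, phi1)
aux = sm.OLS(resid**2, sm.add_constant(x**2)).fit()
print(aux.params)   # consistent estimates, close to (1.0, 0.5)
```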
