Serial Correlation Exercises

Exercise 9.2

The estimated model is based on cross-section data. Therefore, the serial correlation test is meaningless. Cross-section data can always be rearranged to yield different values for the DW statistic.
 

Exercise 9.3

According to Table A.5 of the textbook, dL = 1.24 for n = 27 and k' = 2. Since d = 0.65 < dL, we can reject the null hypothesis of no first-order serial correlation against the alternative of positive first-order serial correlation. Since the independent variables are exogenous, the parameter estimates and forecasts are unbiased. However, standard errors and t-values are incorrect, and the goodness of fit is generally exaggerated (because the estimate of the variance of the disturbances is biased downward). Hypothesis tests are invalid as a result. An alternative method is the Cochrane-Orcutt procedure for FGLS estimation, as outlined on p. 446 of the textbook. For the correct value of ρ, GLS estimation results in best linear unbiased estimates (BLUE) of the equation parameters. When a consistent estimator of ρ is used instead of its true value, the estimates are asymptotically efficient, but not efficient in small samples relative to an estimator based on the true value of ρ.
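The mechanics of the DW statistic can be illustrated directly. The following numpy sketch computes d = Σ(êt - êt-1)² / Σêt², which is approximately 2(1 - ρ̂); the simulated residual series and the seed are purely illustrative, not from the exercise's data.

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson d: sum of squared successive differences of the
    residuals, divided by the residual sum of squares."""
    e = np.asarray(e, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

# Illustrative simulated residuals: an AR(1) series with rho = 0.9
# versus an independent series.  Since d is roughly 2(1 - rho), strong
# positive serial correlation drives d toward 0.
rng = np.random.default_rng(0)
eps = rng.standard_normal(200)
u = np.zeros(200)
for t in range(1, 200):
    u[t] = 0.9 * u[t - 1] + eps[t]

d_pos = durbin_watson(u)    # well below 2: positive serial correlation
d_ind = durbin_watson(eps)  # near 2: no serial correlation
```

The exercise's d = 0.65 falls below dL = 1.24 for the same reason the simulated AR(1) residuals produce a small d.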
 

Exercise 9.5

Let the assumption on the error terms be ut = ρut-1 + εt, where εt is independently normally distributed. The null hypothesis is ρ = 0, and the alternative hypothesis is ρ > 0. According to Table A.5 of the textbook, dL = 1.37 for n = 32 and k' = 1. Since d = 0.207 < dL, we can reject the null hypothesis of no first-order serial correlation against the alternative of positive first-order serial correlation. Therefore, we are not justified in concluding that the fit is excellent and the regression coefficients extremely significant. Serial correlation renders hypothesis tests invalid and generally results in goodness-of-fit statistics which exaggerate the true values.

Exercise 9.6

a. ut = ρut-1 + εt.  H0: ρ = 0.
b. Since k' = 4 and n = 41, dL is in the range (1.285, 1.336) and dU is in the range (1.720, 1.721).
c. d = 0.97 < dL . Therefore we reject H0 and conclude that there is significant serial correlation.
d. OLS estimates are unbiased and consistent but not efficient (that is, not BLUE). Hypothesis tests are invalid.
e. The reasons include:
  1. DW test can (and often does) lead to an inconclusive test.
  2. The test is not applicable to higher-order serial correlation.
  3. The test is not valid if the model contains lagged dependent variables.
  4. If the number of variables is large, the DW table may not have the critical values.
  5. The DW test gives critical values only for a limited set of significance levels, whereas the LM test's p-value can be used at any level. However, some programs, such as SHAZAM, do report p-values for the DW d statistic, in which case the DW test can be used as well.
f. First transform the model as follows:
ln(Qt) - ρ ln(Qt-1) = β1(1-ρ) + β2[ln(Pt) - ρ ln(Pt-1)] + β3[ln(Yt) - ρ ln(Yt-1)] + β4[ln(ACCIDt) - ρ ln(ACCIDt-1)] + β5[ln(FATALt) - ρ ln(FATALt-1)] + εt
Next fix ρ at some value ρ1 between -1 and +1. Generate the variables Qt* = ln(Qt) - ρ1 ln(Qt-1), Xt2* = ln(Pt) - ρ1 ln(Pt-1), and so on to Xt5*. Then regress Qt* against a constant, Xt2*, and so on to Xt5*, and compute the error sum of squares ESS. Vary ρ1 between -1 and +1 and choose the value ρ̂ at which ESS is minimum. Then use this final ρ̂ to transform and obtain new Qt*, etc. Finally, regress Qt* against a constant, Xt2*, and so on to Xt5*, to obtain the estimates and related statistics.
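This grid-search procedure can be sketched in a few lines of numpy. The data below are simulated for illustration (a single regressor stands in for the logs of P, Y, ACCID, and FATAL); because the transformed constant column equals (1 - ρ), the reported intercept is β1 itself.

```python
import numpy as np

def hildreth_lu(y, X, rhos=np.arange(-0.99, 1.0, 0.01)):
    """Quasi-difference y and X for each candidate rho, run OLS on the
    transformed data, and keep the rho with the smallest ESS."""
    best_ess, best_rho, best_beta = np.inf, 0.0, None
    for rho in rhos:
        ys = y[1:] - rho * y[:-1]
        Xs = X[1:] - rho * X[:-1]     # constant column becomes (1 - rho)
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        ess = float(np.sum((ys - Xs @ beta) ** 2))
        if ess < best_ess:
            best_ess, best_rho, best_beta = ess, float(rho), beta
    return best_ess, best_rho, best_beta

# Simulated model y = 1 + 2x + u with AR(1) errors, rho = 0.6
rng = np.random.default_rng(1)
n = 300
x = rng.standard_normal(n)
e = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])

ess, rho_hat, beta_hat = hildreth_lu(y, X)
```

With simulated data the grid minimum lands close to the true ρ, and the coefficients on the transformed variables recover the structural parameters.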

Exercise 9.7

a. ut1ut-12ut-23ut-34ut-4t .
b. H0 :  1234 = 0.  H1 :  At lesat one of the  's is not zero.
c. Regress LHt against a constant, LYt, and Lrt. Compute the residuals
ût = LHt - α̂ - β̂LYt - γ̂Lrt.
Then regress ût against a constant, LYt, Lrt, ût-1, ût-2, ût-3, and ût-4, either using observations 5 through n or using zero for the missing observations on the lagged ût.
d. Compute LM = nR2 = 40 R2 (or 36 R2 if you use observations 5 through n) where R2 is the unadjusted R-square for the third step above.
e. Under the null hypothesis, LM has the chi-square distribution with 4 d.f.
f. Compute the p-value = area to the right of LM in the χ²(4) distribution. Reject H0 if the p-value < 0.10.
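The LM computation in parts c through f can be carried out with numpy alone. The sketch below uses the zero-fill variant for the missing pre-sample lags, so LM = nR²; the data are simulated for illustration, not taken from the exercise.

```python
import numpy as np

def bg_lm(y, X, p=4):
    """Breusch-Godfrey LM statistic: regress the OLS residuals on the
    original regressors plus p lags of the residuals (zeros filling the
    missing pre-sample lags) and return n * R^2, which is chi2(p) under
    the null of no serial correlation."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b
    n = len(u)
    lags = np.column_stack([np.r_[np.zeros(i), u[:-i]] for i in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    g, *_ = np.linalg.lstsq(Z, u, rcond=None)
    r2 = 1.0 - np.sum((u - Z @ g) ** 2) / np.sum((u - u.mean()) ** 2)
    return float(n * r2)

rng = np.random.default_rng(2)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y_clean = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)  # no serial corr.
e = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + e[t]
y_ar = X @ np.array([1.0, 2.0]) + u                          # AR(1) errors

lm_ar = bg_lm(y_ar, X)        # should far exceed the 10% chi2(4) point, 7.78
lm_clean = bg_lm(y_clean, X)  # should be much smaller
```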
g. Forecasts are unbiased, consistent, but not efficient.
h. Generalized CORC procedure.
Step 1: Regress LHt against a constant, LYt, and Lrt.
Step 2: Compute ût = LHt - α̂ - β̂LYt - γ̂Lrt.
Step 3: Regress ût against a constant, LYt, Lrt, ût-1, ût-2, ût-3, and ût-4, using observations 5 through n, and obtain ρ̂i, i = 1, ..., 4.
Step 4: Generate LH*t = LHt - ρ̂1LHt-1 - ρ̂2LHt-2 - ρ̂3LHt-3 - ρ̂4LHt-4, and similarly for LY and Lr.
Step 5: Regress LH*t against a constant, LY*t, and Lr*t to get new estimates of α, β, and γ.
Step 6: Go back to Step 2 and iterate until the error sum of squares from Step 5 does not change by more than some specified percent (say one percent).

Exercise 9.8

a. ut = ρut-1 + εt.
b. H0: ρ = 0.  H1: ρ > 0.
c. We have k' = 5 and n = 40. dL = 1.230 and dU = 1.786. Because dL < d < dU, the test is inconclusive.
d. ut = ρ1ut-1 + ρ2ut-2 + εt.  H0: ρ1 = ρ2 = 0.
e. n-2 = 38 and R2 = 0.687. Hence LM = 26.106. Under the null, LM has the chi-square distribution with 2 d.f.
f. LM = 26.106 > LM* = 13.816. Therefore we reject H0 and conclude that there is significant second-order serial correlation.
g. Generalized CORC procedure for AR(2).
Step 1: Regress ln(Qt) against a constant, ln(Kt), ln(Lt), ln(At), ln(Ft), and ln(St).
Step 2: Compute ût = ln(Qt) - β̂1 - β̂2 ln(Kt) - ....
Step 3: Regress ût against ût-1 and ût-2 with no constant.
Step 4: Generate Y*t = ln(Qt) - ρ̂1 ln(Qt-1) - ρ̂2 ln(Qt-2), X*t2 = ln(Kt) - ρ̂1 ln(Kt-1) - ρ̂2 ln(Kt-2), and similarly for the other explanatory variables.
Step 5: Regress Y*t against a constant and the X*t variables and obtain new estimates of the β's.
Step 6: Go back to Step 2 and iterate until the error sum of squares from Step 5 does not change by more than some specified percent, say 0.01 percent.

Exercise 9.9

a. ut = ρ1ut-1 + ρ2ut-2 + ρ3ut-3 + εt.
b. H0: ρ1 = ρ2 = ρ3 = 0.
c. LM = (n-3) R2 = 6.291.
d. Under H0, LM has a chi-square distribution with 3 d.f.
e. For a 10 percent test, the critical LM* = 6.25139.
f. Since LM > LM*, we reject the null and hence conclude that there is significant serial correlation.
g. Since serial correlation exists, OLS estimators are unbiased and consistent, but not BLUE (that is, not efficient), and all tests are invalid.

Exercise 9.10

a. SRt = RFRt + αMRt - αRFRt + vt = RFRt(1-α) + αMRt + vt. Therefore, β1 = 0, β2 = α, and β3 = 1-α. The relevant restrictions are β1 = 0 and β2 + β3 = 1.
b. First regress SRt against a constant, MRt, and RFRt, and save the error sum of squares as ESSA. Next generate Yt = SRt - RFRt and Xt = MRt - RFRt. Then regress Yt against Xt without a constant term and save the error sum of squares as ESSB.
c. Compute Fc = [(ESSB - ESSA)/2] / [ESSA/(n-3)].
d. Under the null hypothesis, Fc has the F-distribution with 2 and n-3 d.f.
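Parts b through d can be sketched as follows. The returns are simulated, the value of α used to generate them is illustrative, and the restrictions hold by construction, so the null should not be rejected.

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 120, 0.8
MR = rng.standard_normal(n)
RFR = 0.3 * rng.standard_normal(n)
# Restrictions imposed in the data: SR = (1-alpha)RFR + alpha*MR + v
SR = (1 - alpha) * RFR + alpha * MR + 0.5 * rng.standard_normal(n)

# Unrestricted model (A): SR on a constant, MR, and RFR
XA = np.column_stack([np.ones(n), MR, RFR])
bA, *_ = np.linalg.lstsq(XA, SR, rcond=None)
ESSA = float(np.sum((SR - XA @ bA) ** 2))

# Restricted model (B): (SR - RFR) on (MR - RFR), no constant
yB, xB = SR - RFR, (MR - RFR)[:, None]
bB, *_ = np.linalg.lstsq(xB, yB, rcond=None)
ESSB = float(np.sum((yB - xB @ bB) ** 2))

# F statistic for the two restrictions, F(2, n-3) under the null
Fc = ((ESSB - ESSA) / 2) / (ESSA / (n - 3))
```

Because the restricted model is nested in the unrestricted one, ESSB can never fall below ESSA, and with the restrictions true Fc should be modest.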
e. The transformed model is:
(C)   SRt - ρSRt-1 = β1(1-ρ) + β2(MRt - ρMRt-1) + β3(RFRt - ρRFRt-1) + εt.
Step 1: Fix ρ at some value. Then generate SRt* = SRt - ρSRt-1, MRt* = MRt - ρMRt-1, and RFRt* = RFRt - ρRFRt-1.
Step 2: Regress SRt* against a constant, MRt* , and RFRt* , and get ESSC.
Step 3: Vary ρ in broad steps from -0.99 through +0.99, say in steps of length 0.1. Choose the ρ̂ that minimizes ESSC as the starting point of a CORC iteration.
Step 4: Repeat Step 2 using this ρ̂ in Step 1, and compute the residuals
ût = SRt - β̂1 - β̂2MRt - β̂3RFRt.
Step 5: Get the new estimate
ρ̂ = Σ ûtût-1 / Σ ût-1².
Step 6: Repeat Steps 1, 2, 4, and 5 using the new ρ̂ values, and iterate until the ρ̂'s from two successive iterations do not change by more than, say, 0.001.
Step 7: Using this final ρ̂, estimate Model C.

Exercise 9.11

a. ut = ρ1ut-1 + ρ2ut-2 + ρ3ut-3 + ρ4ut-4 + εt.
    H0: ρ1 = ρ2 = ρ3 = ρ4 = 0.
b. Breusch-Godfrey test.
  1. Regress LQ against a constant, LP, and LY, and get the residuals ût.
  2. Generate ût-1, ût-2, ût-3, and ût-4.
  3. Regress ût against ût-1, ût-2, ût-3, ût-4, a constant, LPt, and LYt, using only observations 5 through n. Alternatively, place zeroes where the lagged values are unknown.
  4. Compute (n-4)R2 (nR2 if zeroes are inserted for the unknown lagged values), where n is the number of observations and R2 is the unadjusted R2 from (b.3). Under H0, (n-4)R2 ~ χ²(4).
  5. Reject H0 if P[χ²(4) > (n-4)R2] < the level of significance, or if (n-4)R2 > χ²*(4), the point on the χ²(4) distribution such that the area to the right of it equals the level of significance.
c.  Generalized CORC procedure.
  1. Regress LQ against a constant, LP, and LY.
  2. Get the residuals ût = LQt - β̂1 - β̂2LPt - β̂3LYt.
  3. Generate ût-1, ût-2, ût-3, and ût-4.
  4. Regress ût against ût-1, ût-2, ût-3, and ût-4 [Note: no constant term here, and no LPt or LYt], and get ρ̂1, ρ̂2, ρ̂3, and ρ̂4.
  5. Generate LQt* = LQt - ρ̂1LQt-1 - ... - ρ̂4LQt-4, LPt* = LPt - ρ̂1LPt-1 - ... - ρ̂4LPt-4, and LYt* = LYt - ρ̂1LYt-1 - ... - ρ̂4LYt-4.
  6. Regress LQt* against 1 - ρ̂1 - ... - ρ̂4, LPt*, and LYt*, and get the next round of estimates of β1, β2, and β3.
  7. Go back to Step 2 and iterate until the ESS from Step 6 does not change by more than some pre-specified number or percentage.
  7. Go back to Step 2 and iterate until ESS for Step 6 doesn't change by more than some pre-specified number or percentage.
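The steps above can be sketched in numpy as follows. The data and the AR coefficients are simulated for illustration (AR(2) errors inside an AR(4) fit); step numbers in the comments refer to the list above.

```python
import numpy as np

def corc_ar_p(y, X, p=4, tol=1e-6, maxiter=50):
    """Iterated Cochrane-Orcutt for AR(p) errors: alternate between a
    no-constant regression of the residuals on their own p lags and OLS
    on the quasi-differenced data, until the ESS settles down."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)        # step 1: OLS start
    n, ess_old = len(y), np.inf
    for _ in range(maxiter):
        u = y - X @ b                                # steps 2-3: residuals, lags
        U = np.column_stack([u[p - i:n - i] for i in range(1, p + 1)])
        rho, *_ = np.linalg.lstsq(U, u[p:], rcond=None)   # step 4: rho's
        ys = y[p:] - sum(rho[i - 1] * y[p - i:n - i] for i in range(1, p + 1))
        Xs = X[p:] - sum(rho[i - 1] * X[p - i:n - i] for i in range(1, p + 1))
        b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)       # steps 5-6
        ess = float(np.sum((ys - Xs @ b) ** 2))
        if np.isfinite(ess_old) and abs(ess_old - ess) <= tol * ess_old:
            break                                    # step 7: converged
        ess_old = ess
    return b, rho

# Simulated y = 1 + 2*x1 - x2 + u with AR(2) errors
rng = np.random.default_rng(6)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
e = rng.standard_normal(n)
u = np.zeros(n)
for t in range(2, n):
    u[t] = 0.4 * u[t - 1] + 0.2 * u[t - 2] + e[t]
y = 1.0 + 2.0 * x1 - 1.0 * x2 + u
X = np.column_stack([np.ones(n), x1, x2])

b_hat, rho_hat = corc_ar_p(y, X, p=4)
```

Since the transformed constant column equals 1 - ρ̂1 - ... - ρ̂4, the first element of the coefficient vector is the structural intercept itself.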
d. Estimates are biased, but consistent, and asymptotically efficient.
e. The error variance specification is σt² = α0 + α1ut-1² + ... + α4ut-4².
  1. Regress LQ against a constant, LP, and LY.
  2. Compute the residuals ût.
  3. Generate ût-i² for i = 0, 1, ..., 4.
  4. Regress ût² against a constant, ût-1², ..., ût-4², and compute the unadjusted R2.
  5. Compute LM = nR2. Reject H0 if LM > 9.48773, which is the point on the χ²(4) distribution to the right of which the area is 0.05.
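These steps amount to the following numpy sketch. The ARCH(1) errors and their variance parameters are simulated for illustration; here LM uses the number of usable observations after lagging rather than the full n, an asymptotically equivalent variant.

```python
import numpy as np

def arch_lm(u, p=4):
    """ARCH LM statistic: regress u_t^2 on a constant and p of its own
    lags; LM = (usable observations) * R^2, chi2(p) under the null of
    no ARCH effects."""
    u2 = np.asarray(u, dtype=float) ** 2
    y = u2[p:]
    Z = np.column_stack([np.ones(len(y))] +
                        [u2[p - i:len(u2) - i] for i in range(1, p + 1)])
    g, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1.0 - np.sum((y - Z @ g) ** 2) / np.sum((y - y.mean()) ** 2)
    return float(len(y) * r2)

# ARCH(1) errors: sigma_t^2 = 0.2 + 0.5 * u_{t-1}^2 (illustrative values)
rng = np.random.default_rng(4)
n = 500
u = np.zeros(n)
for t in range(1, n):
    u[t] = rng.standard_normal() * np.sqrt(0.2 + 0.5 * u[t - 1] ** 2)

lm_arch = arch_lm(u)                      # should exceed 9.48773
lm_iid = arch_lm(rng.standard_normal(n))  # should be far smaller
```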
f.  WLS estimation of ARCH model.
  1. Use the estimated α̂i from Step 4 above to obtain the fitted variances σ̂t² = α̂0 + α̂1ût-1² + ... + α̂4ût-4².
  2. Compute the weights wt = 1/σ̂t.
  3. Generate LQt* = wtLQt, LPt* = wtLPt, and LYt* = wtLYt.
  4. Regress LQt* against wt, LPt*, and LYt*, with no constant term.
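A compact numpy illustration of the weighting idea follows. For simplicity the skedastic function here depends on a regressor rather than on lagged residuals, and the true variances stand in for the fitted σ̂t² of Step 1; everything else mirrors Steps 2 through 4.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.standard_normal(n)
sigma2 = 0.3 + 1.5 * x ** 2          # illustrative variance function
y = 1.0 + 2.0 * x + rng.standard_normal(n) * np.sqrt(sigma2)

# Weight every variable, the constant included, by w_t = 1/sigma_t,
# then run OLS with no additional intercept (Steps 2-4 above).
w = 1.0 / np.sqrt(sigma2)
Xw = np.column_stack([w, w * x])     # the w_t column replaces the constant
yw = w * y
b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
# b[0] and b[1] are the WLS estimates of the intercept and slope
```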



Updated March 2, 2009

noelroy AT mun.ca