Wooldridge Econometrics (English Edition): Chapter-by-Chapter Summaries

CHAPTER 1
TEACHING NOTES

You have substantial latitude about what to emphasize in Chapter 1. I find it useful to talk about the economics of crime example (Example 1.1) and the wage example (Example 1.2) so that students see, at the outset, that econometrics is linked to economic reasoning, even if the economics is not complicated theory.

I like to familiarize students with the important data structures that empirical economists use, focusing primarily on cross-sectional and time series data sets, as these are what I cover in a first-semester course. It is probably a good idea to mention the growing importance of data sets that have both a cross-sectional and time dimension.

I spend almost an entire lecture talking about the problems inherent in drawing causal inferences in the social sciences. I do this mostly through the agricultural yield, return to education, and crime examples. These examples also contrast experimental and nonexperimental (observational) data. Students studying business and finance tend to find the term structure of interest rates example more relevant, although the issue there is testing the implication of a simple theory, as opposed to inferring causality. I have found that spending time talking about these examples, in place of a formal review of probability and statistics, is more successful (and more enjoyable for the students and me).

CHAPTER 2
TEACHING NOTES

This is the chapter where I expect students to follow most, if not all, of the algebraic derivations. In class I like to derive at least the unbiasedness of the OLS slope coefficient, and usually I derive the variance. At a minimum, I talk about the factors affecting the variance. To simplify the notation, after I emphasize the assumptions in the population model, and assume random sampling, I just condition on the values of the explanatory variables in the sample. Technically, this is justified by random sampling because, for example, E(ui|x1, x2, ..., xn) = E(ui|xi) by independent sampling. I find that students are able to focus on the key assumption SLR.4 and subsequently take my word about how conditioning on the independent variables in the sample is harmless. (If you prefer, the appendix to Chapter 3 does the conditioning argument carefully.) Because statistical inference is no more difficult in multiple regression than in simple regression, I postpone inference until Chapter 4. (This reduces redundancy and allows you to focus on the interpretive differences between simple and multiple regression.)

You might notice how, compared with most other texts, I use relatively few assumptions to derive the unbiasedness of the OLS slope estimator, followed by the formula for its variance. This is because I do not introduce redundant or unnecessary assumptions. For example, once SLR.4 is assumed, nothing further about the relationship between u and x is needed to obtain the unbiasedness of OLS under random sampling.
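For instructors who do put the algebra on the board, a minimal sketch of the unbiasedness argument in the simple regression case (standard SLR notation; a summary, not a quotation from the text) is

    \hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x}) y_i}{\sum_i (x_i - \bar{x})^2}
                  = \beta_1 + \frac{\sum_i (x_i - \bar{x}) u_i}{\sum_i (x_i - \bar{x})^2},

so, conditioning on the sample values of the explanatory variable and using SLR.4 together with random sampling,

    E(\hat{\beta}_1 \mid x_1, \ldots, x_n)
        = \beta_1 + \frac{\sum_i (x_i - \bar{x}) \, E(u_i \mid x_i)}{\sum_i (x_i - \bar{x})^2}
        = \beta_1.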

CHAPTER 3
TEACHING NOTES

For undergraduates, I do not work through most of the derivations in this chapter, at least not in detail. Rather, I focus on interpreting the assumptions, which mostly concern the population. Other than random sampling, the only assumption that involves more than population considerations is the assumption about no perfect collinearity, where the possibility of perfect collinearity in the sample (even if it does not occur in the population) should be touched on. The more important issue is perfect collinearity in the population, but this is fairly easy to dispense with via examples. These come from my experiences with the kinds of model specification issues that beginners have trouble with.

The comparison of simple and multiple regression estimates, based on the particular sample at hand as opposed to their statistical properties, usually makes a strong impression. Sometimes I do not bother with the "partialling out" interpretation of multiple regression.

As far as statistical properties, notice how I treat the problem of including an irrelevant variable: no separate derivation is needed, as the result follows from Theorem 3.1.

I do like to derive the omitted variable bias in the simple case. This is not much more difficult than showing unbiasedness of OLS in the simple regression case under the first four Gauss-Markov assumptions. It is important to get the students thinking about this problem early on, and before too many additional (unnecessary) assumptions have been introduced.
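A compact version of that omitted variable bias calculation (standard notation, summarized rather than quoted): if the population model is

    y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u,

but y is regressed on x_1 alone, and \tilde{\delta}_1 denotes the slope from regressing x_2 on x_1, then the simple regression slope \tilde{\beta}_1 satisfies

    E(\tilde{\beta}_1) = \beta_1 + \beta_2 \tilde{\delta}_1,

so the bias vanishes only if \beta_2 = 0 or x_1 and x_2 are uncorrelated in the sample.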

I have intentionally kept the discussion of multicollinearity to a minimum. This partly indicates my bias, but it also reflects reality. It is, of course, very important for students to understand the potential consequences of having highly correlated independent variables. But this is often beyond our control, except that we can ask less of our multiple regression analysis. If two or more explanatory variables are highly correlated in the sample, we should not expect to precisely estimate their ceteris paribus effects in the population.

I find extensive treatments of multicollinearity, where one "tests" or somehow "solves" the multicollinearity problem, to be misleading, at best. Even the organization of some texts gives the impression that imperfect multicollinearity is somehow a violation of the Gauss-Markov assumptions: they include multicollinearity in a chapter or part of the book devoted to "violation of the basic assumptions," or something like that. I have noticed that masters students who have had some undergraduate econometrics are often confused on the multicollinearity issue. It is very important that students not confuse multicollinearity among the included explanatory variables in a regression model with the bias caused by omitting an important variable.
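One way to make the "ask less of the analysis" point concrete is the sampling-variance formula under the Gauss-Markov assumptions (standard notation):

    Var(\hat{\beta}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)},

where SST_j is the total sample variation in x_j and R_j^2 is the R-squared from regressing x_j on the other explanatory variables. Strong correlation among the regressors pushes R_j^2 toward one and inflates the variance, but no assumption is violated; more data or more variation in x_j is the only real remedy.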

I do not prove the Gauss-Markov theorem. Instead, I emphasize its implications. Sometimes, and certainly for advanced beginners, I put a special case of Problem 3.12 on a midterm exam, where I make a particular choice for the function g(x). Rather than have the students directly compare the variances, they should appeal to the Gauss-Markov theorem for the superiority of OLS over any other linear, unbiased estimator.

CHAPTER 4
TEACHING NOTES

At the start of this chapter is a good time to remind students that a specific error distribution played no role in the results of Chapter 3. That is because only the first two moments were derived under the full set of Gauss-Markov assumptions. Nevertheless, normality is needed to obtain exact normal sampling distributions (conditional on the explanatory variables). I emphasize that the full set of CLM assumptions are used in this chapter, but that in Chapter 5 we relax the normality assumption and still perform approximately valid inference. One could argue that the classical linear model results could be skipped entirely, and that only large-sample analysis is needed. But, from a practical perspective, students still need to know where the t distribution comes from because virtually all regression packages report t statistics and obtain p-values off of the t distribution. I then find it very easy to cover Chapter 5 quickly, by just saying we can drop normality and still use t statistics and the associated p-values as being approximately valid. Besides, occasionally students will have to analyze smaller data sets, especially if they do their own small surveys for a term project.

It is crucial to emphasize that we test hypotheses about unknown population parameters. I tell my students that they will be punished if they write something like H0: β̂1 = 0 on an exam or, even worse, H0: .632 = 0.

One useful feature of Chapter 4 is its illustration of how to rewrite a population model so that it contains the parameter of interest in testing a single restriction. I find this is easier, both theoretically and practically, than computing variances that can, in some cases, depend on numerous covariance terms. The example of testing equality of the return to two- and four-year colleges illustrates the basic method, and shows that the respecified model can have a useful interpretation. Of course, some statistical packages now provide a standard error for linear combinations of estimates with a simple command, and that should be taught, too.
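A sketch of the rewriting trick for that example (the variable names jc, univ, and exper follow the textbook's two-year/four-year college example): to test H0: \beta_1 = \beta_2 in

    \log(wage) = \beta_0 + \beta_1 jc + \beta_2 univ + \beta_3 exper + u,

define \theta_1 = \beta_1 - \beta_2, substitute \beta_1 = \theta_1 + \beta_2, and obtain

    \log(wage) = \beta_0 + \theta_1 jc + \beta_2 (jc + univ) + \beta_3 exper + u.

Regressing log(wage) on jc, (jc + univ), and exper then reports \hat{\theta}_1 and its standard error directly, which is the same quantity a "linear combination" command computes.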

One can use an F test for single linear restrictions on multiple parameters, but this is less transparent than a t test and does not immediately produce the standard error needed for a confidence interval or for testing a one-sided alternative. The trick of rewriting the population model is useful in several instances, including obtaining confidence intervals for predictions in Chapter 6, as well as for obtaining confidence intervals for marginal effects in models with interactions (also in Chapter 6).

The major league baseball player salary example illustrates the difference between individual and joint significance when explanatory variables (rbisyr and hrunsyr in this case) are highly correlated. I tend to emphasize the R-squared form of the F statistic because, in practice, it is applicable a large percentage of the time, and it is much more readily computed. I do regret that this example is biased toward students in countries where baseball is played. Still, it is one of the better examples of multicollinearity that I have come across, and students of all backgrounds seem to get the point.
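For reference, the R-squared form of the F statistic for q exclusion restrictions, with n - k - 1 degrees of freedom in the unrestricted model (standard notation), is

    F = \frac{(R_{ur}^2 - R_r^2)/q}{(1 - R_{ur}^2)/(n - k - 1)}.

The one caveat worth stating in class is that the restricted and unrestricted models must have the same dependent variable; otherwise the SSR form of the statistic is needed instead.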

CHAPTER 5
TEACHING NOTES

Chapter 5 is short, but it is conceptually more difficult than the earlier chapters, primarily because it requires some knowledge of asymptotic properties of estimators. In class, I give a brief, heuristic description of consistency and asymptotic normality before stating the consistency and asymptotic normality of OLS. (Conveniently, the same assumptions that work for finite sample analysis work for asymptotic analysis.) More advanced students can follow the proof of consistency of the slope coefficient in the bivariate regression case. Section E.4 contains a full matrix treatment of asymptotic analysis appropriate for a masters level course.
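The bivariate consistency argument can be compressed to one display (standard notation): by the law of large numbers,

    plim \hat{\beta}_1 = \beta_1 + \frac{Cov(x, u)}{Var(x)},

which equals \beta_1 when Cov(x, u) = 0, and which also shows why consistency fails whenever x and u are correlated, no matter how large the sample.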

An explicit illustration of what happens to standard errors as the sample size grows emphasizes the importance of having a larger sample. I do not usually cover the LM statistic in a first-semester course, and I only briefly mention the asymptotic efficiency result. Without full use of matrix algebra combined with limit theorems for vectors and matrices, it is very difficult to prove asymptotic efficiency of OLS.
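One way to generate such an illustration is a small simulation. The sketch below is my own illustration, not taken from the text; it assumes numpy and statsmodels are available, and it prints the OLS slope standard error for growing sample sizes, which shrinks roughly like 1/sqrt(n) even though the errors are deliberately non-normal.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(12345)
    for n in (100, 400, 1600, 6400):
        x = rng.normal(size=n)
        u = rng.standard_t(df=5, size=n)            # non-normal errors on purpose
        y = 1.0 + 0.5 * x + u                       # population slope is 0.5
        res = sm.OLS(y, sm.add_constant(x)).fit()   # simple regression of y on x
        print(n, res.bse[1])                        # standard error of the slope

Quadrupling n roughly halves the reported standard error, which is the point students should take away.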

I think the conclusions of this chapter are important for students to know, even though they may not fully grasp the details. On exams I usually include true-false type questions, with explanation, to test the students' understanding of asymptotics. For example: "In large samples we do not have to worry about omitted variable bias." (False). Or "Even if the error term is not normally distributed, in large samples we can still compute approximately valid confidence intervals under the Gauss-Markov assumptions." (True).

CHAPTER 6
TEACHING NOTES

I cover most of Chapter 6, but not all of the material in great detail. I use the example in Table 6.1 to quickly run through the effects of data scaling on the important OLS statistics. (Students should already have a feel for the effects of data scaling on the coefficients, fitted values, and R-squared because it is covered in Chapter 2.) At most, I briefly mention beta coefficients; if students have a need for them, they can read this subsection.

The functional form material is important, and I spend some time on more complicated models involving logarithms, quadratics, and interactions. An important point for models with quadratics, and especially interactions, is that we need to evaluate the partial effect at interesting values of the explanatory variables. Often, zero is not an interesting value for an explanatory variable and is well outside the range in the sample. Using the methods from Chapter 4, it is easy to obtain confidence intervals for the effects at interesting x values.
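A compact way to state the evaluation-point issue for a quadratic (standard notation): in

    y = \beta_0 + \beta_1 x + \beta_2 x^2 + u,
    \frac{\partial E(y \mid x)}{\partial x} = \beta_1 + 2 \beta_2 x,

so \beta_1 by itself is the partial effect only at x = 0. Estimating the model with (x - \bar{x})^2 in place of x^2 (or, for an interaction, centering one variable at an interesting value) turns the coefficient on x into the partial effect at that point and delivers its standard error, which is the Chapter 4 rewriting trick again.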

As far as goodness-of-fit, I only introduce the adjusted R-squared, as I think using a slew of goodness-of-fit measures to choose a model can be confusing to novices (and does not reflect empirical practice). It is important to discuss how, if we fixate on a high R-squared, we may wind up with a model that has no interesting ceteris paribus interpretation.

I often have students and colleagues ask if there is a simple way to predict y when log(y) has been used as the dependent variable, and to obtain a goodness-of-fit measure for the log(y) model that can be compared with the usual R-squared obtained when y is the dependent variable. The methods described in Section 6.4 are easy to implement and, unlike other approaches, do not require normality.
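A sketch of one version of the Section 6.4 procedure in Python (my own illustration; the arrays y and X are assumed to hold the positive dependent variable in levels and the regressors including a constant, and numpy and statsmodels are assumed available):

    import numpy as np
    import statsmodels.api as sm

    res = sm.OLS(np.log(y), X).fit()                  # estimate the log(y) model
    alpha0 = np.exp(res.resid).mean()                 # smearing-type scale correction
    y_hat = alpha0 * np.exp(res.fittedvalues)         # predictions of y in levels
    r2_levels = np.corrcoef(y, y_hat)[0, 1] ** 2      # comparable to the levels-model R-squared

The squared correlation in the last line is the goodness-of-fit measure that can be put side by side with the usual R-squared from regressing y itself on the same explanatory variables.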

The section on prediction and residual analysis contains several important topics, including constructing prediction intervals. It is useful to see how much wider the prediction intervals are than the confidence interval for the conditional mean. I usually discuss some of the residual-analysis examples, as they have real-world applicability.

CHAPTER 7
TEACHING NOTES

This is a fairly standard chapter on using qualitative information in regression analysis, although I try to emphasize examples with policy relevance (and only cross-sectional applications are included).

In allowing for different slopes, it is important, as in Chapter 6, to appropriately interpret the parameters and to decide whether they are of direct interest. For example, in the wage equation where the return to education is allowed to depend on gender, the coefficient on the female dummy variable is the wage differential between women and men at zero years of education. It is not surprising that we cannot estimate this very well, nor should we want to. In this particular example we would drop the interaction term because it is insignificant, but the issue of interpreting the parameters can arise in models where the interaction term is significant.
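In that example the estimated equation has the form (textbook-style notation, with female a binary indicator)

    \log(wage) = \beta_0 + \delta_0 female + \beta_1 educ + \delta_1 (female \cdot educ) + u,

so \delta_0 is the gender differential at educ = 0, well outside the interesting range of the data. Re-estimating with the interaction written as female \cdot (educ - c), for c near the average years of education, makes the coefficient on female the differential at educ = c and gives its standard error directly.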

In discussing the Chow test, I think it is important to discuss testing for differences in slope coefficients after allowing for an intercept difference. In many applications, a significant Chow statistic simply indicates intercept differences. (See the example in Section 7.4 on student-athlete GPAs in the text.) From a practical perspective, it is important to know whether the partial effects differ across groups or whether a constant differential is sufficient.

I admit that an unconventional feature of this chapter is its introduction of the linear probability model. I cover the LPM here for several reasons. First, the LPM is being used more and more because it is easier to interpret than probit or logit models. Plus, once the proper parameter scalings are done for probit and logit …
