Multiple Linear Regression Model: Estimation (lecture slides)

Format: PPT · 32 pages · 702KB · uploaded 2022-07-10

Parallels with Simple Regression

- $b_0$ is still the intercept; $b_1$ through $b_k$ are all called slope parameters
- $u$ is still the error term (or disturbance)
- We still need to make a zero conditional mean assumption, so now assume that $E(u \mid x_1, x_2, \dots, x_k) = 0$
- We are still minimizing the sum of squared residuals, so we have $k + 1$ first order conditions

Simple vs Multiple Regression Estimates

- Compare the simple regression $\tilde{y} = \tilde{b}_0 + \tilde{b}_1 x_1$ with the multiple regression $\hat{y} = \hat{b}_0 + \hat{b}_1 x_1 + \hat{b}_2 x_2$
- Generally $\tilde{b}_1 \neq \hat{b}_1$ unless $\hat{b}_2 = 0$ (i.e. no partial effect of $x_2$) OR $x_1$ and $x_2$ are uncorrelated in the sample

Goodness-of-Fit

- We can think of each observation as being made up of an explained part and an unexplained part, $y_i = \hat{y}_i + \hat{u}_i$. We then define the following:
  - $\sum (y_i - \bar{y})^2$ is the total sum of squares (SST)
  - $\sum (\hat{y}_i - \bar{y})^2$ is the explained sum of squares (SSE)
  - $\sum \hat{u}_i^2$ is the residual sum of squares (SSR)
- Then SST = SSE + SSR

Goodness-of-Fit (continued)

- How do we think about how well our sample regression line fits our sample data?
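The SST = SSE + SSR decomposition can be checked numerically. A minimal sketch with NumPy; the simulated data-generating process and all variable names here are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

# OLS via least squares on [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b
u_hat = y - y_hat

SST = np.sum((y - y.mean()) ** 2)
SSE = np.sum((y_hat - y.mean()) ** 2)
SSR = np.sum(u_hat ** 2)

# The identity SST = SSE + SSR holds because, with an intercept included,
# the residuals sum to zero and are orthogonal to the fitted values.
print(np.isclose(SST, SSE + SSR))  # True
```

The identity is exact algebra for OLS with an intercept, so the check holds up to floating-point precision for any simulated sample.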

- We can compute the fraction of the total sum of squares (SST) that is explained by the model, and call this the R-squared of the regression: $R^2 = \mathrm{SSE}/\mathrm{SST} = 1 - \mathrm{SSR}/\mathrm{SST}$
- We can also think of $R^2$ as being equal to the squared correlation coefficient between the actual $y_i$ and the fitted values $\hat{y}_i$:
$$R^2 = \frac{\left[\sum (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})\right]^2}{\sum (y_i - \bar{y})^2 \sum (\hat{y}_i - \bar{\hat{y}})^2}$$

More about R-squared

- $R^2$ can never decrease when another independent variable is added to a regression, and it will usually increase
- Because $R^2$ will usually increase with the number of independent variables, it is not a good way to compare models

Assumptions for Unbiasedness

- The population model is linear in parameters: $y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_k x_k + u$
- We can use a random sample of size $n$, $\{(x_{i1}, x_{i2}, \dots, x_{ik}, y_i) : i = 1, 2, \dots, n\}$, from the population model, so that the sample model is $y_i = b_0 + b_1 x_{i1} + b_2 x_{i2} + \dots + b_k x_{ik} + u_i$
- $E(u \mid x_1, x_2, \dots, x_k) = 0$, implying that all of the explanatory variables are exogenous
- None of the x's is constant, and there are no exact linear relationships among them

Too Many or Too Few Variables

- What happens if we include variables in our specification that don't belong? There is no effect on our parameter estimates, and OLS remains unbiased
- What if we exclude a variable from our specification that does belong? OLS will usually be biased
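Both claims about R-squared can be illustrated on one simulated sample; the data below (including the pure-noise regressor x3) are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)  # pure noise, unrelated to y
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

def r_squared(y, cols):
    """Return (R^2, fitted values) for an OLS fit of y on an intercept plus cols."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u_hat = y - X @ b
    r2 = 1 - np.sum(u_hat ** 2) / np.sum((y - y.mean()) ** 2)
    return r2, X @ b

r2_small, _ = r_squared(y, [x1, x2])
r2_big, y_hat = r_squared(y, [x1, x2, x3])

# R^2 can never decrease when a regressor is added, even a useless one
print(r2_big >= r2_small)  # True

# R^2 equals the squared correlation between y and the fitted values
print(np.isclose(r2_big, np.corrcoef(y, y_hat)[0, 1] ** 2))  # True
```

This is exactly why the slide warns against using R-squared to compare models: the noise variable x3 improves the in-sample fit without improving the model.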

Omitted Variable Bias

- Suppose the true model is given as $y = b_0 + b_1 x_1 + b_2 x_2 + u$, but we estimate $\tilde{y} = \tilde{b}_0 + \tilde{b}_1 x_1$. Then
$$\tilde{b}_1 = \frac{\sum (x_{i1} - \bar{x}_1)\, y_i}{\sum (x_{i1} - \bar{x}_1)^2}$$
- Recall the true model, so that $y_i = b_0 + b_1 x_{i1} + b_2 x_{i2} + u_i$; the numerator then becomes
$$\sum (x_{i1} - \bar{x}_1)(b_0 + b_1 x_{i1} + b_2 x_{i2} + u_i) = b_1 \sum (x_{i1} - \bar{x}_1)^2 + b_2 \sum (x_{i1} - \bar{x}_1)\, x_{i2} + \sum (x_{i1} - \bar{x}_1)\, u_i$$
- Therefore
$$\tilde{b}_1 = b_1 + b_2 \frac{\sum (x_{i1} - \bar{x}_1)\, x_{i2}}{\sum (x_{i1} - \bar{x}_1)^2} + \frac{\sum (x_{i1} - \bar{x}_1)\, u_i}{\sum (x_{i1} - \bar{x}_1)^2}$$
- Since $E(u_i) = 0$, taking expectations we have
$$E(\tilde{b}_1) = b_1 + b_2 \frac{\sum (x_{i1} - \bar{x}_1)\, x_{i2}}{\sum (x_{i1} - \bar{x}_1)^2}$$
- Consider the regression of $x_2$ on $x_1$: $\tilde{x}_2 = \tilde{\delta}_0 + \tilde{\delta}_1 x_1$, where $\tilde{\delta}_1 = \sum (x_{i1} - \bar{x}_1)\, x_{i2} / \sum (x_{i1} - \bar{x}_1)^2$, so $E(\tilde{b}_1) = b_1 + b_2 \tilde{\delta}_1$

Summary of Direction of Bias

             Corr(x1, x2) > 0    Corr(x1, x2) < 0
  b2 > 0     Positive bias       Negative bias
  b2 < 0     Negative bias       Positive bias

Omitted Variable Bias Summary

- Two cases where the bias is equal to zero:
  - $b_2 = 0$, that is, $x_2$ doesn't really belong in the model
  - $x_1$ and $x_2$ are uncorrelated in the sample
- If the correlation between $x_2$ and $x_1$ and the correlation between $x_2$ and $y$ have the same sign, the bias will be positive
- If the correlation between $x_2$ and $x_1$ and the correlation between $x_2$ and $y$ have opposite signs, the bias will be negative

The More General Case

- Technically, we can only sign the bias in the more general case if all of the included x's are uncorrelated
- Typically, then, we work through the bias assuming the x's are uncorrelated, as a useful guide even if this assumption is not strictly true
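In any given sample, the decomposition above holds exactly as an algebraic identity: the short-regression slope equals the long-regression slope plus the long-regression coefficient on x2 times the slope from regressing x2 on x1. A sketch on simulated data (the true parameters b1 = 2, b2 = 3 and the dependence of x2 on x1 are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)  # x2 positively correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(y, *cols):
    """OLS coefficients for a regression of y on an intercept plus cols."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_long = ols(y, x1, x2)   # [b0_hat, b1_hat, b2_hat]  (true model)
b_short = ols(y, x1)      # [b0_tilde, b1_tilde]      (x2 omitted)
delta1 = ols(x2, x1)[1]   # slope from regressing x2 on x1

# In-sample identity: b1_tilde = b1_hat + b2_hat * delta1
print(np.isclose(b_short[1], b_long[1] + b_long[2] * delta1))  # True

# With b2 > 0 and Corr(x1, x2) > 0, the bias table predicts positive bias
print(b_short[1] > b_long[1])  # True
```

The short-regression slope picks up the effect of the omitted x2 in proportion to how strongly x2 tracks x1, which is precisely the bias table's "positive bias" cell here.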

Variance of the OLS Estimators

- Now we know that the sampling distribution of our estimator is centered around the true parameter
- We want to think about how spread out this distribution is
- It is much easier to think about this variance under an additional assumption, so assume $\mathrm{Var}(u \mid x_1, x_2, \dots, x_k) = \sigma^2$ (homoskedasticity)

Variance of OLS (continued)

- Let $x$ stand for $(x_1, x_2, \dots, x_k)$. Assuming that $\mathrm{Var}(u \mid x) = \sigma^2$ also implies that $\mathrm{Var}(y \mid x) = \sigma^2$
- The four assumptions for unbiasedness, plus this homoskedasticity assumption, are known as the Gauss-Markov assumptions
- Given the Gauss-Markov assumptions,
$$\mathrm{Var}(\hat{b}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)}, \quad \text{where } SST_j = \sum (x_{ij} - \bar{x}_j)^2$$
and $R_j^2$ is the $R^2$ from regressing $x_j$ on all other x's

Components of OLS Variances

- The error variance: a larger $\sigma^2$ implies a larger variance for the OLS estimators
- The total sample variation: a larger $SST_j$ implies a smaller variance for the estimators
- Linear relationships among the independent variables: a larger $R_j^2$ implies a larger variance for the estimators

Misspecified Models

- Consider again the misspecified model $\tilde{y} = \tilde{b}_0 + \tilde{b}_1 x_1$, so that $\mathrm{Var}(\tilde{b}_1) = \sigma^2 / SST_1$
- Thus, $\mathrm{Var}(\tilde{b}_1) < \mathrm{Var}(\hat{b}_1)$ unless $x_1$ and $x_2$ are uncorrelated, in which case the two are the same
- While the variance of the estimator is smaller for the misspecified model, unless $b_2 = 0$ the misspecified model is biased
- As the sample size grows, the variance of each estimator shrinks to zero, making the variance difference less important

Estimating the Error Variance

- We don't know what the error variance $\sigma^2$ is, because we don't observe the errors $u_i$; what we observe are the residuals $\hat{u}_i$
- We can use the residuals to form an estimate of the error variance:
$$\hat{\sigma}^2 = \frac{\sum \hat{u}_i^2}{n - k - 1} = \frac{SSR}{df}, \qquad se(\hat{b}_j) = \frac{\hat{\sigma}}{\left[SST_j (1 - R_j^2)\right]^{1/2}}$$
- $df = n - (k + 1)$, or $df = n - k - 1$; the degrees of freedom equal (number of observations) − (number of estimated parameters)

The Gauss-Markov Theorem

- Given our 5 Gauss-Markov assumptions, it can be shown that OLS is "BLUE": Best Linear Unbiased Estimator
- Thus, if the assumptions hold, use OLS
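The variance formula σ²/(SST_j(1 − R_j²)) can be cross-checked against the equivalent matrix form σ̂²(X′X)⁻¹, whose j-th diagonal entry gives the same quantity. A sketch with simulated data (the data-generating process is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 2
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
u_hat = y - X @ b

# sigma^2_hat = SSR / (n - k - 1)
sigma2_hat = np.sum(u_hat ** 2) / (n - k - 1)

# Slide formula for Var(b1_hat): sigma^2_hat / (SST_1 * (1 - R_1^2)),
# where R_1^2 comes from regressing x1 on the other regressors (here just x2).
Z = np.column_stack([np.ones(n), x2])
g = np.linalg.lstsq(Z, x1, rcond=None)[0]
r1 = x1 - Z @ g
SST1 = np.sum((x1 - x1.mean()) ** 2)
R1_sq = 1 - np.sum(r1 ** 2) / SST1
var_b1_formula = sigma2_hat / (SST1 * (1 - R1_sq))

# Same quantity from the matrix formula: sigma^2_hat * [(X'X)^{-1}]_{11}
var_b1_matrix = sigma2_hat * np.linalg.inv(X.T @ X)[1, 1]

print(np.isclose(var_b1_formula, var_b1_matrix))  # True
se_b1 = np.sqrt(var_b1_formula)
```

The two agree because SST_1(1 − R_1²) is exactly the residual sum of squares from regressing x1 on the other regressors, which is what the (1, 1) entry of (X′X)⁻¹ inverts.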
