

Download the book: Introduction to Econometrics

Book details

Introduction to Econometrics

Edition: Updated 3rd
Authors:
Series: Pearson Series in Economics
ISBN: 9781292071312, 1292071311
Publisher: Pearson Education
Publication year: 2015
Pages: 841
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 13 MB

Book price (Toman): 47,000





If you need the file for Introduction to Econometrics in PDF, EPUB, AZW3, MOBI, or DJVU format, let support know and the file will be converted for you.

Please note that Introduction to Econometrics is supplied in the original English edition; it is not a Persian translation. The International Library website offers original-language books only and does not carry any books translated into or written in Persian.


Description of the book (in the original language)



Table of contents

Cover
Title Page
Copyright
Contents
Preface
PART ONE Introduction and Review
	CHAPTER 1 Economic Questions and Data
		1.1 Economic Questions We Examine
			Question #1: Does Reducing Class Size Improve Elementary School Education?
			Question #2: Is There Racial Discrimination in the Market for Home Loans?
			Question #3: How Much Do Cigarette Taxes Reduce Smoking?
			Question #4: By How Much Will U.S. GDP Grow Next Year?
			Quantitative Questions, Quantitative Answers
		1.2 Causal Effects and Idealized Experiments
			Estimation of Causal Effects
			Forecasting and Causality
		1.3 Data: Sources and Types
			Experimental Versus Observational Data
			Cross-Sectional Data
			Time Series Data
			Panel Data
	CHAPTER 2 Review of Probability
		2.1 Random Variables and Probability Distributions
			Probabilities, the Sample Space, and Random Variables
			Probability Distribution of a Discrete Random Variable
			Probability Distribution of a Continuous Random Variable
		2.2 Expected Values, Mean, and Variance
			The Expected Value of a Random Variable
			The Standard Deviation and Variance
			Mean and Variance of a Linear Function of a Random Variable
			Other Measures of the Shape of a Distribution
		2.3 Two Random Variables
			Joint and Marginal Distributions
			Conditional Distributions
			Independence
			Covariance and Correlation
			The Mean and Variance of Sums of Random Variables
		2.4 The Normal, Chi-Squared, Student t, and F Distributions
			The Normal Distribution
			The Chi-Squared Distribution
			The Student t Distribution
			The F Distribution
		2.5 Random Sampling and the Distribution of the Sample Average
			Random Sampling
			The Sampling Distribution of the Sample Average
		2.6 Large-Sample Approximations to Sampling Distributions
			The Law of Large Numbers and Consistency
			The Central Limit Theorem
		APPENDIX 2.1 Derivation of Results in Key Concept 2.3
	CHAPTER 3 Review of Statistics
		3.1 Estimation of the Population Mean
			Estimators and Their Properties
			Properties of Y̅
			The Importance of Random Sampling
		3.2 Hypothesis Tests Concerning the Population Mean
			Null and Alternative Hypotheses
			The p-Value
			Calculating the p-Value When σY Is Known
			The Sample Variance, Sample Standard Deviation, and Standard Error
			Calculating the p-Value When σY Is Unknown
			The t-Statistic
			Hypothesis Testing with a Prespecified Significance Level
			One-Sided Alternatives
		3.3 Confidence Intervals for the Population Mean
		3.4 Comparing Means from Different Populations
			Hypothesis Tests for the Difference Between Two Means
			Confidence Intervals for the Difference Between Two Population Means
		3.5 Differences-of-Means Estimation of Causal Effects Using Experimental Data
			The Causal Effect as a Difference of Conditional Expectations
			Estimation of the Causal Effect Using Differences of Means
		3.6 Using the t-Statistic When the Sample Size Is Small
			The t-Statistic and the Student t Distribution
			Use of the Student t Distribution in Practice
		3.7 Scatterplots, the Sample Covariance, and the Sample Correlation
			Scatterplots
			Sample Covariance and Correlation
		APPENDIX 3.1 The U.S. Current Population Survey
		APPENDIX 3.2 Two Proofs That Y̅ Is the Least Squares Estimator of μY
		APPENDIX 3.3 A Proof That the Sample Variance Is Consistent
PART TWO Fundamentals of Regression Analysis
	CHAPTER 4 Linear Regression with One Regressor
		4.1 The Linear Regression Model
		4.2 Estimating the Coefficients of the Linear Regression Model
			The Ordinary Least Squares Estimator
			OLS Estimates of the Relationship Between Test Scores and the Student-Teacher Ratio
			Why Use the OLS Estimator?
		4.3 Measures of Fit
			The R2
			The Standard Error of the Regression
			Application to the Test Score Data
		4.4 The Least Squares Assumptions
			Assumption #1: The Conditional Distribution of ui Given Xi Has a Mean of Zero
			Assumption #2: (Xi, Yi), i = 1,…, n, Are Independently and Identically Distributed
			Assumption #3: Large Outliers Are Unlikely
			Use of the Least Squares Assumptions
		4.5 Sampling Distribution of the OLS Estimators
			The Sampling Distribution of the OLS Estimators
		4.6 Conclusion
		APPENDIX 4.1 The California Test Score Data Set
		APPENDIX 4.2 Derivation of the OLS Estimators
		APPENDIX 4.3 Sampling Distribution of the OLS Estimator
	CHAPTER 5 Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals
		5.1 Testing Hypotheses About One of the Regression Coefficients
			Two-Sided Hypotheses Concerning β1
			One-Sided Hypotheses Concerning β1
			Testing Hypotheses About the Intercept β0
		5.2 Confidence Intervals for a Regression Coefficient
		5.3 Regression When X Is a Binary Variable
			Interpretation of the Regression Coefficients
		5.4 Heteroskedasticity and Homoskedasticity
			What Are Heteroskedasticity and Homoskedasticity?
			Mathematical Implications of Homoskedasticity
			What Does This Mean in Practice?
		5.5 The Theoretical Foundations of Ordinary Least Squares
			Linear Conditionally Unbiased Estimators and the Gauss-Markov Theorem
			Regression Estimators Other Than OLS
		5.6 Using the t-Statistic in Regression When the Sample Size Is Small
			The t-Statistic and the Student t Distribution
			Use of the Student t Distribution in Practice
		5.7 Conclusion
		APPENDIX 5.1 Formulas for OLS Standard Errors
		APPENDIX 5.2 The Gauss-Markov Conditions and a Proof of the Gauss-Markov Theorem
	CHAPTER 6 Linear Regression with Multiple Regressors
		6.1 Omitted Variable Bias
			Definition of Omitted Variable Bias
			A Formula for Omitted Variable Bias
			Addressing Omitted Variable Bias by Dividing the Data into Groups
		6.2 The Multiple Regression Model
			The Population Regression Line
			The Population Multiple Regression Model
		6.3 The OLS Estimator in Multiple Regression
			The OLS Estimator
			Application to Test Scores and the Student-Teacher Ratio
		6.4 Measures of Fit in Multiple Regression
			The Standard Error of the Regression (SER)
			The R2
			The \"Adjusted R2\"
			Application to Test Scores
		6.5 The Least Squares Assumptions in Multiple Regression
			Assumption #1: The Conditional Distribution of ui Given X1i, X2i, …, Xki Has a Mean of Zero
			Assumption #2: (X1i, X2i, …, Xki, Yi), i = 1, …, n, Are i.i.d.
			Assumption #3: Large Outliers Are Unlikely
			Assumption #4: No Perfect Multicollinearity
		6.6 The Distribution of the OLS Estimators in Multiple Regression
		6.7 Multicollinearity
			Examples of Perfect Multicollinearity
			Imperfect Multicollinearity
		6.8 Conclusion
		APPENDIX 6.1 Derivation of Equation (6.1)
		APPENDIX 6.2 Distribution of the OLS Estimators When There Are Two Regressors and Homoskedastic Errors
		APPENDIX 6.3 The Frisch-Waugh Theorem
	CHAPTER 7 Hypothesis Tests and Confidence Intervals in Multiple Regression
		7.1 Hypothesis Tests and Confidence Intervals for a Single Coefficient
			Standard Errors for the OLS Estimators
			Hypothesis Tests for a Single Coefficient
			Confidence Intervals for a Single Coefficient
			Application to Test Scores and the Student-Teacher Ratio
		7.2 Tests of Joint Hypotheses
			Testing Hypotheses on Two or More Coefficients
			The F-Statistic
			Application to Test Scores and the Student-Teacher Ratio
			The Homoskedasticity-Only F-Statistic
		7.3 Testing Single Restrictions Involving Multiple Coefficients
		7.4 Confidence Sets for Multiple Coefficients
		7.5 Model Specification for Multiple Regression
			Omitted Variable Bias in Multiple Regression
			The Role of Control Variables in Multiple Regression
			Model Specification in Theory and in Practice
			Interpreting the R2 and the Adjusted R2 in Practice
		7.6 Analysis of the Test Score Data Set
		7.7 Conclusion
		APPENDIX 7.1 The Bonferroni Test of a Joint Hypothesis
		APPENDIX 7.2 Conditional Mean Independence
	CHAPTER 8 Nonlinear Regression Functions
		8.1 A General Strategy for Modeling Nonlinear Regression Functions
			Test Scores and District Income
			The Effect on Y of a Change in X in Nonlinear Specifications
			A General Approach to Modeling Nonlinearities Using Multiple Regression
		8.2 Nonlinear Functions of a Single Independent Variable
			Polynomials
			Logarithms
			Polynomial and Logarithmic Models of Test Scores and District Income
		8.3 Interactions Between Independent Variables
			Interactions Between Two Binary Variables
			Interactions Between a Continuous and a Binary Variable
			Interactions Between Two Continuous Variables
		8.4 Nonlinear Effects on Test Scores of the Student-Teacher Ratio
			Discussion of Regression Results
			Summary of Findings
		8.5 Conclusion
		APPENDIX 8.1 Regression Functions That Are Nonlinear in the Parameters
		APPENDIX 8.2 Slopes and Elasticities for Nonlinear Regression Functions
	CHAPTER 9 Assessing Studies Based on Multiple Regression
		9.1 Internal and External Validity
			Threats to Internal Validity
			Threats to External Validity
		9.2 Threats to Internal Validity of Multiple Regression Analysis
			Omitted Variable Bias
			Misspecification of the Functional Form of the Regression Function
			Measurement Error and Errors-in-Variables Bias
			Missing Data and Sample Selection
			Simultaneous Causality
			Sources of Inconsistency of OLS Standard Errors
		9.3 Internal and External Validity When the Regression Is Used for Forecasting
			Using Regression Models for Forecasting
			Assessing the Validity of Regression Models for Forecasting
		9.4 Example: Test Scores and Class Size
			External Validity
			Internal Validity
			Discussion and Implications
		9.5 Conclusion
		APPENDIX 9.1 The Massachusetts Elementary School Testing Data
PART THREE Further Topics in Regression Analysis
	CHAPTER 10 Regression with Panel Data
		10.1 Panel Data
			Example: Traffic Deaths and Alcohol Taxes
		10.2 Panel Data with Two Time Periods: "Before and After" Comparisons
		10.3 Fixed Effects Regression
			The Fixed Effects Regression Model
			Estimation and Inference
			Application to Traffic Deaths
		10.4 Regression with Time Fixed Effects
			Time Effects Only
			Both Entity and Time Fixed Effects
		10.5 The Fixed Effects Regression Assumptions and Standard Errors for Fixed Effects Regression
			The Fixed Effects Regression Assumptions
			Standard Errors for Fixed Effects Regression
		10.6 Drunk Driving Laws and Traffic Deaths
		10.7 Conclusion
		APPENDIX 10.1 The State Traffic Fatality Data Set
		APPENDIX 10.2 Standard Errors for Fixed Effects Regression
	CHAPTER 11 Regression with a Binary Dependent Variable
		11.1 Binary Dependent Variables and the Linear Probability Model
			Binary Dependent Variables
			The Linear Probability Model
		11.2 Probit and Logit Regression
			Probit Regression
			Logit Regression
			Comparing the Linear Probability, Probit, and Logit Models
		11.3 Estimation and Inference in the Logit and Probit Models
			Nonlinear Least Squares Estimation
			Maximum Likelihood Estimation
			Measures of Fit
		11.4 Application to the Boston HMDA Data
		11.5 Conclusion
		APPENDIX 11.1 The Boston HMDA Data Set
		APPENDIX 11.2 Maximum Likelihood Estimation
		APPENDIX 11.3 Other Limited Dependent Variable Models
	CHAPTER 12 Instrumental Variables Regression
		12.1 The IV Estimator with a Single Regressor and a Single Instrument
			The IV Model and Assumptions
			The Two Stage Least Squares Estimator
			Why Does IV Regression Work?
			The Sampling Distribution of the TSLS Estimator
			Application to the Demand for Cigarettes
		12.2 The General IV Regression Model
			TSLS in the General IV Model
			Instrument Relevance and Exogeneity in the General IV Model
			The IV Regression Assumptions and Sampling Distribution of the TSLS Estimator
			Inference Using the TSLS Estimator
			Application to the Demand for Cigarettes
		12.3 Checking Instrument Validity
			Assumption #1: Instrument Relevance
			Assumption #2: Instrument Exogeneity
		12.4 Application to the Demand for Cigarettes
		12.5 Where Do Valid Instruments Come From?
			Three Examples
		12.6 Conclusion
		APPENDIX 12.1 The Cigarette Consumption Panel Data Set
		APPENDIX 12.2 Derivation of the Formula for the TSLS Estimator in Equation (12.4)
		APPENDIX 12.3 Large-Sample Distribution of the TSLS Estimator
		APPENDIX 12.4 Large-Sample Distribution of the TSLS Estimator When the Instrument Is Not Valid
		APPENDIX 12.5 Instrumental Variables Analysis with Weak Instruments
		APPENDIX 12.6 TSLS with Control Variables
	CHAPTER 13 Experiments and Quasi-Experiments
		13.1 Potential Outcomes, Causal Effects, and Idealized Experiments
			Potential Outcomes and the Average Causal Effect
			Econometric Methods for Analyzing Experimental Data
		13.2 Threats to Validity of Experiments
			Threats to Internal Validity
			Threats to External Validity
		13.3 Experimental Estimates of the Effect of Class Size Reductions
			Experimental Design
			Analysis of the STAR Data
			Comparison of the Observational and Experimental Estimates of Class Size Effects
		13.4 Quasi-Experiments
			Examples
			The Differences-in-Differences Estimator
			Instrumental Variables Estimators
			Regression Discontinuity Estimators
		13.5 Potential Problems with Quasi-Experiments
			Threats to Internal Validity
			Threats to External Validity
		13.6 Experimental and Quasi-Experimental Estimates in Heterogeneous Populations
			OLS with Heterogeneous Causal Effects
			IV Regression with Heterogeneous Causal Effects
		13.7 Conclusion
		APPENDIX 13.1 The Project STAR Data Set
		APPENDIX 13.2 IV Estimation When the Causal Effect Varies Across Individuals
		APPENDIX 13.3 The Potential Outcomes Framework for Analyzing Data from Experiments
PART FOUR Regression Analysis of Economic Time Series Data
	CHAPTER 14 Introduction to Time Series Regression and Forecasting
		14.1 Using Regression Models for Forecasting
		14.2 Introduction to Time Series Data and Serial Correlation
			Real GDP in the United States
			Lags, First Differences, Logarithms, and Growth Rates
			Autocorrelation
			Other Examples of Economic Time Series
		14.3 Autoregressions
			The First-Order Autoregressive Model
			The pth-Order Autoregressive Model
		14.4 Time Series Regression with Additional Predictors and the Autoregressive Distributed Lag Model
			Forecasting GDP Growth Using the Term Spread
			Stationarity
			Time Series Regression with Multiple Predictors
			Forecast Uncertainty and Forecast Intervals
		14.5 Lag Length Selection Using Information Criteria
			Determining the Order of an Autoregression
			Lag Length Selection in Time Series Regression with Multiple Predictors
		14.6 Nonstationarity I: Trends
			What Is a Trend?
			Problems Caused by Stochastic Trends
			Detecting Stochastic Trends: Testing for a Unit AR Root
			Avoiding the Problems Caused by Stochastic Trends
		14.7 Nonstationarity II: Breaks
			What Is a Break?
			Testing for Breaks
			Pseudo Out-of-Sample Forecasting
			Avoiding the Problems Caused by Breaks
		14.8 Conclusion
		APPENDIX 14.1 Time Series Data Used in Chapter 14
		APPENDIX 14.2 Stationarity in the AR(1) Model
		APPENDIX 14.3 Lag Operator Notation
		APPENDIX 14.4 ARMA Models
		APPENDIX 14.5 Consistency of the BIC Lag Length Estimator
	CHAPTER 15 Estimation of Dynamic Causal Effects
		15.1 An Initial Taste of the Orange Juice Data
		15.2 Dynamic Causal Effects
			Causal Effects and Time Series Data
			Two Types of Exogeneity
		15.3 Estimation of Dynamic Causal Effects with Exogenous Regressors
			The Distributed Lag Model Assumptions
			Autocorrelated ut, Standard Errors, and Inference
			Dynamic Multipliers and Cumulative Dynamic Multipliers
		15.4 Heteroskedasticity- and Autocorrelation-Consistent Standard Errors
			Distribution of the OLS Estimator with Autocorrelated Errors
			HAC Standard Errors
		15.5 Estimation of Dynamic Causal Effects with Strictly Exogenous Regressors
			The Distributed Lag Model with AR(1) Errors
			OLS Estimation of the ADL Model
			GLS Estimation
			The Distributed Lag Model with Additional Lags and AR(p) Errors
		15.6 Orange Juice Prices and Cold Weather
		15.7 Is Exogeneity Plausible? Some Examples
			U.S. Income and Australian Exports
			Oil Prices and Inflation
			Monetary Policy and Inflation
			The Growth Rate of GDP and the Term Spread
		15.8 Conclusion
		APPENDIX 15.1 The Orange Juice Data Set
		APPENDIX 15.2 The ADL Model and Generalized Least Squares in Lag Operator Notation
	CHAPTER 16 Additional Topics in Time Series Regression
		16.1 Vector Autoregressions
			The VAR Model
			A VAR Model of the Growth Rate of GDP and the Term Spread
		16.2 Multiperiod Forecasts
			Iterated Multiperiod Forecasts
			Direct Multiperiod Forecasts
			Which Method Should You Use?
		16.3 Orders of Integration and the DF-GLS Unit Root Test
			Other Models of Trends and Orders of Integration
			The DF-GLS Test for a Unit Root
			Why Do Unit Root Tests Have Nonnormal Distributions?
		16.4 Cointegration
			Cointegration and Error Correction
			How Can You Tell Whether Two Variables Are Cointegrated?
			Estimation of Cointegrating Coefficients
			Extension to Multiple Cointegrated Variables
			Application to Interest Rates
		16.5 Volatility Clustering and Autoregressive Conditional Heteroskedasticity
			Volatility Clustering
			Autoregressive Conditional Heteroskedasticity
			Application to Stock Price Volatility
		16.6 Conclusion
PART FIVE The Econometric Theory of Regression Analysis
	CHAPTER 17 The Theory of Linear Regression with One Regressor
		17.1 The Extended Least Squares Assumptions and the OLS Estimator
			The Extended Least Squares Assumptions
			The OLS Estimator
		17.2 Fundamentals of Asymptotic Distribution Theory
			Convergence in Probability and the Law of Large Numbers
			The Central Limit Theorem and Convergence in Distribution
			Slutsky's Theorem and the Continuous Mapping Theorem
			Application to the t-Statistic Based on the Sample Mean
		17.3 Asymptotic Distribution of the OLS Estimator and t-Statistic
			Consistency and Asymptotic Normality of the OLS Estimators
			Consistency of Heteroskedasticity-Robust Standard Errors
			Asymptotic Normality of the Heteroskedasticity-Robust t-Statistic
		17.4 Exact Sampling Distributions When the Errors Are Normally Distributed
			Distribution of β̂1 with Normal Errors
			Distribution of the Homoskedasticity-Only t-Statistic
		17.5 Weighted Least Squares
			WLS with Known Heteroskedasticity
			WLS with Heteroskedasticity of Known Functional Form
			Heteroskedasticity-Robust Standard Errors or WLS?
		APPENDIX 17.1 The Normal and Related Distributions and Moments of Continuous Random Variables
		APPENDIX 17.2 Two Inequalities
	CHAPTER 18 The Theory of Multiple Regression
		18.1 The Linear Multiple Regression Model and OLS Estimator in Matrix Form
			The Multiple Regression Model in Matrix Notation
			The Extended Least Squares Assumptions
			The OLS Estimator
		18.2 Asymptotic Distribution of the OLS Estimator and t-Statistic
			The Multivariate Central Limit Theorem
			Asymptotic Normality of β̂
			Heteroskedasticity-Robust Standard Errors
			Confidence Intervals for Predicted Effects
			Asymptotic Distribution of the t-Statistic
		18.3 Tests of Joint Hypotheses
			Joint Hypotheses in Matrix Notation
			Asymptotic Distribution of the F-Statistic
			Confidence Sets for Multiple Coefficients
		18.4 Distribution of Regression Statistics with Normal Errors
			Matrix Representations of OLS Regression Statistics
			Distribution of β̂ with Normal Errors
			Distribution of s²û
			Homoskedasticity-Only Standard Errors
			Distribution of the t-Statistic
			Distribution of the F-Statistic
		18.5 Efficiency of the OLS Estimator with Homoskedastic Errors
			The Gauss-Markov Conditions for Multiple Regression
			Linear Conditionally Unbiased Estimators
			The Gauss-Markov Theorem for Multiple Regression
		18.6 Generalized Least Squares
			The GLS Assumptions
			GLS When Ω Is Known
			GLS When Ω Contains Unknown Parameters
			The Zero Conditional Mean Assumption and GLS
		18.7 Instrumental Variables and Generalized Method of Moments Estimation
			The IV Estimator in Matrix Form
			Asymptotic Distribution of the TSLS Estimator
			Properties of TSLS When the Errors Are Homoskedastic
			Generalized Method of Moments Estimation in Linear Models
		APPENDIX 18.1 Summary of Matrix Algebra
		APPENDIX 18.2 Multivariate Distributions
		APPENDIX 18.3 Derivation of the Asymptotic Distribution of β̂
		APPENDIX 18.4 Derivations of Exact Distributions of OLS Test Statistics with Normal Errors
		APPENDIX 18.5 Proof of the Gauss-Markov Theorem for Multiple Regression
		APPENDIX 18.6 Proof of Selected Results for IV and GMM Estimation
Appendix
References
Glossary
Index
	A
	B
	C
	D
	E
	F
	G
	H
	I
	J
	K
	L
	M
	N
	O
	P
	Q
	R
	S
	T
	U
	V
	W
	Z



