
You can reach us by phone call or SMS at the mobile numbers below:

09117307688
09117179751

If there is no answer, contact support via SMS.

Unlimited access

For registered users

Money-back guarantee

If the description does not match the book

Support

From 7 a.m. to 10 p.m.

Download the book Regression and Other Stories

Book Details

Regression and Other Stories

Edition: [1 ed.]
Authors: , ,
Series:
ISBN: 110702398X, 9781107023987
Publisher: Cambridge University Press
Year: 2020
Pages: 548 [552]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 6 MB

Book price (toman): 48,000



Average rating:
Number of ratings: 6


If you would like the book Regression and Other Stories converted to PDF, EPUB, AZW3, MOBI, or DJVU format, let support know and they will convert the file for you.

Note that Regression and Other Stories is the original-language (English) edition, not a Persian translation. The International Library website offers only original-language books and does not carry any books translated into or written in Persian.




Description

Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression. It is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and a wide range of goals and techniques. It jumps right in to methods and computer code you can use immediately. Real examples, real stories from the authors' experience demonstrate what regression can do and its limitations, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. They make a smooth transition to logistic regression and GLM. The emphasis is on computation in R and Stan rather than derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
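The book's own code is in R and Stan and is available online; as a rough illustration of the workflow the description emphasizes (simulate fake data with known parameters, fit a simple regression, check that the fit recovers them), a minimal sketch in Python (not the book's code; all names here are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake-data simulation: y = a + b*x + noise, with a and b chosen by us
n, a, b, sigma = 100, 0.2, 0.3, 0.5
x = rng.uniform(0, 10, size=n)
y = a + b * x + rng.normal(0, sigma, size=n)

# Least-squares fit of y on x (design matrix with an intercept column)
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_hat, b_hat = coef

# The estimates should land close to the values used to simulate the data
print(f"a_hat = {a_hat:.2f}, b_hat = {b_hat:.2f}")
```

Checking a model-fitting procedure on data whose true parameters are known, before trusting it on real data, is the "fake-data simulation as a way of life" idea that the table of contents below returns to repeatedly.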



Table of Contents

Preface
Part 1: Fundamentals
	Chapter 1: Overview
		1.1 The three challenges of statistics
		1.2 Why learn regression?
		1.3 Some examples of regression
		1.4 Challenges in building, understanding, and interpreting regressions
		1.5 Classical and Bayesian inference
		1.6 Computing least squares and Bayesian regression
		1.7 Bibliographic note
		1.8 Exercises
	Chapter 2: Data and measurement
		2.1 Examining where data come from
		2.2 Validity and reliability
		2.3 All graphs are comparisons
		2.4 Data and adjustment: trends in mortality rates
		2.5 Bibliographic note
		2.6 Exercises
	Chapter 3: Some basic methods in mathematics and probability
		3.1 Weighted averages
		3.2 Vectors and matrices
		3.3 Graphing a line
		3.4 Exponential and power-law growth and decline; logarithmic and log-log relationships
		3.5 Probability distributions
		3.6 Probability modeling
		3.7 Bibliographic note
		3.8 Exercises
	Chapter 4: Statistical inference
		4.1 Sampling distributions and generative models
		4.2 Estimates, standard errors, and confidence intervals
		4.3 Bias and unmodeled uncertainty
		4.4 Statistical significance, hypothesis testing, and statistical errors
		4.5 Problems with the concept of statistical significance
		4.6 Example of hypothesis testing: 55,000 residents need your help!
		4.7 Moving beyond hypothesis testing
		4.8 Bibliographic note
		4.9 Exercises
	Chapter 5: Simulation
		5.1 Simulation of discrete probability models
		5.2 Simulation of continuous and mixed discrete/continuous models
		5.3 Summarizing a set of simulations using median and median absolute deviation
		5.4 Bootstrapping to simulate a sampling distribution
		5.5 Fake-data simulation as a way of life
		5.6 Bibliographic note
		5.7 Exercises
Part 2: Linear Regression
	Chapter 6: Background on regression modeling
		6.1 Regression models
		6.2 Fitting a simple regression to fake data
		6.3 Interpret coefficients as comparisons, not effects
		6.4 Historical origins of regression
		6.5 The paradox of regression to the mean
		6.6 Bibliographic note
		6.7 Exercises
	Chapter 7: Linear regression with a single predictor
		7.1 Example: predicting presidential vote share from the economy
		7.2 Checking the model-fitting procedure using fake-data simulation
		7.3 Formulating comparisons as regression models
		7.4 Bibliographic note
		7.5 Exercises
	Chapter 8: Fitting regression models
		8.1 Least squares, maximum likelihood, and Bayesian inference
		8.2 Influence of individual points in a fitted regression
		8.3 Least squares slope as a weighted average of slope of pairs
		8.4 Comparing two fitting functions: lm and stan_glm
		8.5 Bibliographic note
		8.6 Exercises
	Chapter 9: Prediction and Bayesian inference
		9.1 Propagating uncertainty in inference using posterior simulations
		9.2 Prediction and uncertainty: predict, posterior_linpred, and posterior_predict
		9.3 Prior information and Bayesian synthesis
		9.4 Example of Bayesian inference: beauty and sex ratio
		9.5 Uniform, weakly informative, and informative priors in regression
		9.6 Bibliographic note
		9.7 Exercises
	Chapter 10: Linear regression with multiple predictors
		10.1 Adding predictors to a model
		10.2 Interpreting regression coefficients
		10.3 Interactions
		10.4 Indicator variables
		10.5 Formulating paired or blocked designs as a regression problem
		10.6 Example: uncertainty in predicting congressional elections
		10.7 Mathematical notation and statistical inference
		10.8 Weighted regression
		10.9 Fitting the same model to many datasets
		10.10 Bibliographic note
		10.11 Exercises
	Chapter 11: Assumptions, diagnostics, and model evaluation
		11.1 Assumptions of regression analysis
		11.2 Plotting the data and fitted model
		11.3 Residual plots
		11.4 Comparing data to replications from a fitted model
		11.5 Example: predictive simulation to check the fit of a time-series model
		11.6 Residual standard deviation and explained variance
		11.7 External validation: checking fitted model on new data
		11.8 Cross validation
		11.9 Bibliographic note
		11.10 Exercises
	Chapter 12: Transformations and regression
		12.1 Linear transformations
		12.2 Centering and standardizing for models with interactions
		12.3 Correlation and "regression to the mean"
		12.4 Logarithmic transformations
		12.5 Other transformations
		12.6 Building and comparing regression models for prediction
		12.7 Models for regression coefficients
		12.8 Bibliographic note
		12.9 Exercises
Part 3: Generalized linear models
	Chapter 13: Logistic regression
		13.1 Logistic regression with a single predictor
		13.2 Interpreting logistic regression coefficients and the divide-by-4 rule
		13.3 Predictions and comparisons
		13.4 Latent-data formulation
		13.5 Maximum likelihood and Bayesian inference for logistic regression
		13.6 Cross validation and log score for logistic regression
		13.7 Building a logistic regression model: wells in Bangladesh
		13.8 Bibliographic note
		13.9 Exercises
	Chapter 14: Working with logistic regression
		14.1 Graphing logistic regression and binary data
		14.2 Logistic regression with interactions
		14.3 Predictive simulation
		14.4 Average predictive comparisons on the probability scale
		14.5 Residuals for discrete-data regression
		14.6 Identification and separation
		14.7 Bibliographic note
		14.8 Exercises
	Chapter 15: Other generalized linear models
		15.1 Definition and notation
		15.2 Poisson and negative binomial regression
		15.3 Logistic-binomial model
		15.4 Probit regression: normally distributed latent data
		15.5 Ordered and unordered categorical regression
		15.6 Robust regression using the t model
		15.7 Constructive choice models
		15.8 Going beyond generalized linear models
		15.9 Bibliographic note
		15.10 Exercises
Part 4: Before and after fitting a regression
	Chapter 16: Design and sample size decisions
		16.1 The problem with statistical power
		16.2 General principles of design, as illustrated by estimates of proportions
		16.3 Sample size and design calculations for continuous outcomes
		16.4 Interactions are harder to estimate than main effects
		16.5 Design calculations after the data have been collected
		16.6 Design analysis using fake-data simulation
		16.7 Bibliographic note
		16.8 Exercises
	Chapter 17: Poststratification and missing-data imputation
		17.1 Poststratification: using regression to generalize to a new population
		17.2 Fake-data simulation for regression and poststratification
		17.3 Models for missingness
		17.4 Simple approaches for handling missing data
		17.5 Understanding multiple imputation
		17.6 Nonignorable missing-data models
		17.7 Bibliographic note
		17.8 Exercises
Part 5: Causal inference
	Chapter 18: Causal inference and randomized experiments
		18.1 Basics of causal inference
		18.2 Average causal effects
		18.3 Randomized experiments
		18.4 Sampling distributions, randomization distributions, and bias in estimation
		18.5 Using additional information in experimental design
		18.6 Properties, assumptions, and limitations of randomized experiments
		18.7 Bibliographic note
		18.8 Exercises
	Chapter 19: Causal inference using regression on the treatment variable
		19.1 Pre-treatment covariates, treatments, and potential outcomes
		19.2 Example: the effect of showing children an educational television show
		19.3 Including pre-treatment predictors
		19.4 Varying treatment effects, interactions, and poststratification
		19.5 Challenges of interpreting regression coefficients as treatment effects
		19.6 Do not adjust for post-treatment variables
		19.7 Intermediate outcomes and causal paths
		19.8 Bibliographic note
		19.9 Exercises
	Chapter 20: Observational studies with all confounders assumed to be measured
		20.1 The challenge of causal inference
		20.2 Using regression to estimate a causal effect from observational data
		20.3 Assumption of ignorable treatment assignment in an observational study
		20.4 Imbalance and lack of complete overlap
		20.5 Example: evaluating a child care program
		20.6 Subclassification and average treatment effects
		20.7 Propensity score matching for the child care example
		20.8 Restructuring to create balanced treatment and control groups
		20.9 Additional considerations with observational studies
		20.10 Bibliographic note
		20.11 Exercises
	Chapter 21: Additional topics in causal inference
		21.1 Estimating causal effects indirectly using instrumental variables
		21.2 Instrumental variables in a regression framework
		21.3 Regression discontinuity: known assignment mechanism but no overlap
		21.4 Identification using variation within or between groups
		21.5 Causes of effects and effects of causes
		21.6 Bibliographic note
		21.7 Exercises
Part 6: What comes next?
	Chapter 22: Advanced regression and multilevel models
		22.1 Expressing the models so far in a common framework
		22.2 Incomplete data
		22.3 Correlated errors and multivariate models
		22.4 Regularization for models with many predictors
		22.5 Multilevel or hierarchical models
		22.6 Nonlinear models, a demonstration using Stan
		22.7 Nonparametric regression and machine learning
		22.8 Computational efficiency
		22.9 Bibliographic note
		22.10 Exercises
Appendixes
	Appendix A Computing in R
	Appendix B 10 quick tips to improve your regression modeling
References
Author Index
Subject Index



