
You can reach us by phone call or SMS at the mobile numbers below:

09117307688
09117179751

If calls go unanswered, contact support via SMS.


Download Statistical Inference for Engineers and Data Scientists

Book Details

Statistical Inference for Engineers and Data Scientists

Edition:
Authors:
Series:
ISBN: 9781107185920, 1107185920
Publisher: Cambridge University Press
Publication year: 2019
Pages: 421
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 at the user's request)
File size: 26 MB

Book price (toman): 52,000



Number of ratings: 11


If you would like the file of Statistical Inference for Engineers and Data Scientists converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and the file will be converted for you.

Note that Statistical Inference for Engineers and Data Scientists is the original English-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.




Book Description

This book is a mathematically accessible and up-to-date introduction to the tools needed to address modern inference problems in engineering and data science, ideal for graduate students taking courses on statistical inference and detection and estimation, and an invaluable reference for researchers and professionals. With a wealth of illustrations and examples to explain the key features of the theory and to connect with real-world applications, additional material to explore more advanced concepts, and numerous end-of-chapter problems to test the reader's knowledge, this textbook is the 'go-to' guide for learning about the core principles of statistical inference and its application in engineering and data science. The password-protected solutions manual and the image gallery from the book are available online.



Table of Contents

Contents
Preface
List of Acronyms
1 Introduction
	1.1 Background
	1.2 Notation
		1.2.1 Probability Distributions
		1.2.2 Conditional Probability Distributions
		1.2.3 Expectations and Conditional Expectations
		1.2.4 Unified Notation
		1.2.5 General Random Variables
	1.3 Statistical Inference
		1.3.1 Statistical Model
		1.3.2 Some Generic Estimation Problems
		1.3.3 Some Generic Detection Problems
	1.4 Performance Analysis
	1.5 Statistical Decision Theory
		1.5.1 Conditional Risk and Optimal Decision Rules
		1.5.2 Bayesian Approach
		1.5.3 Minimax Approach
		1.5.4 Other Non-Bayesian Rules
	1.6 Derivation of Bayes Rule
	1.7 Link Between Minimax and Bayesian Decision Theory
		1.7.1 Dual Concept
		1.7.2 Game Theory
		1.7.3 Saddlepoint
		1.7.4 Randomized Decision Rules
	Exercises
	References
Part I Hypothesis Testing
	2 Binary Hypothesis Testing
		2.1 General Framework
		2.2 Bayesian Binary Hypothesis Testing
			2.2.1 Likelihood Ratio Test
			2.2.2 Uniform Costs
			2.2.3 Examples
		2.3 Binary Minimax Hypothesis Testing
			2.3.1 Equalizer Rules
			2.3.2 Bayes Risk Line and Minimum Risk Curve
			2.3.3 Differentiable V(π₀)
			2.3.4 Nondifferentiable V(π₀)
			2.3.5 Randomized LRTs
			2.3.6 Examples
		2.4 Neyman–Pearson Hypothesis Testing
			2.4.1 Solution to the NP Optimization Problem
			2.4.2 NP Rule
			2.4.3 Receiver Operating Characteristic
			2.4.4 Examples
			2.4.5 Convex Optimization
		Exercises
	3 Multiple Hypothesis Testing
		3.1 General Framework
		3.2 Bayesian Hypothesis Testing
			3.2.1 Optimal Decision Regions
			3.2.2 Gaussian Ternary Hypothesis Testing
		3.3 Minimax Hypothesis Testing
		3.4 Generalized Neyman–Pearson Detection
		3.5 Multiple Binary Tests
			3.5.1 Bonferroni Correction
			3.5.2 False Discovery Rate
			3.5.3 Benjamini–Hochberg Procedure
			3.5.4 Connection to Bayesian Decision Theory
		Exercises
		References
	4 Composite Hypothesis Testing
		4.1 Introduction
		4.2 Random Parameter Θ
			4.2.1 Uniform Costs Over Each Hypothesis
			4.2.2 Nonuniform Costs Over Hypotheses
		4.3 Uniformly Most Powerful Test
			4.3.1 Examples
			4.3.2 Monotone Likelihood Ratio Theorem
			4.3.3 Both Composite Hypotheses
		4.4 Locally Most Powerful Test
		4.5 Generalized Likelihood Ratio Test
			4.5.1 GLRT for Gaussian Hypothesis Testing
			4.5.2 GLRT for Cauchy Hypothesis Testing
		4.6 Random versus Nonrandom θ
		4.7 Non-Dominated Tests
		4.8 Composite m-ary Hypothesis Testing
			4.8.1 Random Parameter Θ
			4.8.2 Non-Dominated Tests
			4.8.3 m-GLRT
		4.9 Robust Hypothesis Testing
			4.9.1 Robust Detection with Conditionally Independent Observations
			4.9.2 Epsilon-Contamination Class
		Exercises
		References
	5 Signal Detection
		5.1 Introduction
		5.2 Problem Formulation
		5.3 Detection of Known Signal in Independent Noise
			5.3.1 Signal in i.i.d. Gaussian Noise
			5.3.2 Signal in i.i.d. Laplacian Noise
			5.3.3 Signal in i.i.d. Cauchy Noise
			5.3.4 Approximate NP Test
		5.4 Detection of Known Signal in Correlated Gaussian Noise
			5.4.1 Reduction to i.i.d. Noise Case
			5.4.2 Performance Analysis
		5.5 m-ary Signal Detection
			5.5.1 Bayes Classification Rule
			5.5.2 Performance Analysis
		5.6 Signal Selection
			5.6.1 i.i.d. Noise
			5.6.2 Correlated Noise
		5.7 Detection of Gaussian Signals in Gaussian Noise
			5.7.1 Detection of a Gaussian Signal in White Gaussian Noise
			5.7.2 Detection of i.i.d. Zero-Mean Gaussian Signal
			5.7.3 Diagonalization of Signal Covariance
			5.7.4 Performance Analysis
			5.7.5 Gaussian Signals With Nonzero Mean
		5.8 Detection of Weak Signals
		5.9 Detection of Signal with Unknown Parameters in White Gaussian Noise
			5.9.1 General Approach
			5.9.2 Linear Gaussian Model
			5.9.3 Nonlinear Gaussian Model
			5.9.4 Discrete Parameter Set
		5.10 Deflection-Based Detection of Non-Gaussian Signal in Gaussian Noise
		Exercises
		References
	6 Convex Statistical Distances
		6.1 Kullback–Leibler Divergence
		6.2 Entropy and Mutual Information
		6.3 Chernoff Divergence, Chernoff Information, and Bhattacharyya Distance
		6.4 Ali–Silvey Distances
		6.5 Some Useful Inequalities
		Exercises
		References
	7 Performance Bounds for Hypothesis Testing
		7.1 Simple Lower Bounds on Conditional Error Probabilities
		7.2 Simple Lower Bounds on Error Probability
		7.3 Chernoff Bound
			7.3.1 Moment-Generating and Cumulant-Generating Functions
			7.3.2 Chernoff Bound
		7.4 Application of Chernoff Bound to Binary Hypothesis Testing
			7.4.1 Exponential Upper Bounds on P_F and P_M
			7.4.2 Bayesian Error Probability
			7.4.3 Lower Bound on ROC
			7.4.4 Example
		7.5 Bounds on Classification Error Probability
			7.5.1 Upper and Lower Bounds in Terms of Pairwise Error Probabilities
			7.5.2 Bonferroni’s Inequalities
			7.5.3 Generalized Fano’s Inequality
		7.6 Appendix: Proof of Theorem 7.4
		Exercises
		References
	8 Large Deviations and Error Exponents for Hypothesis Testing
		8.1 Introduction
		8.2 Chernoff Bound for Sum of i.i.d. Random Variables
			8.2.1 Cramér’s Theorem
			8.2.2 Why is the Central Limit Theorem Inapplicable Here?
		8.3 Hypothesis Testing with i.i.d. Observations
			8.3.1 Bayesian Hypothesis Testing with i.i.d. Observations
			8.3.2 Neyman–Pearson Hypothesis Testing with i.i.d. Observations
			8.3.3 Hoeffding Problem
			8.3.4 Example
		8.4 Refined Large Deviations
			8.4.1 The Method of Exponential Tilting
			8.4.2 Sum of i.i.d. Random Variables
			8.4.3 Lower Bounds on Large-Deviations Probabilities
			8.4.4 Refined Asymptotics for Binary Hypothesis Testing
			8.4.5 Non-i.i.d. Components
		8.5 Appendix: Proof of Lemma 8.1
		Exercises
		References
	9 Sequential and Quickest Change Detection
		9.1 Sequential Detection
			9.1.1 Problem Formulation
			9.1.2 Stopping Times and Decision Rules
			9.1.3 Two Formulations of the Sequential Hypothesis Testing Problem
			9.1.4 Sequential Probability Ratio Test
			9.1.5 SPRT Performance Evaluation
		9.2 Quickest Change Detection
			9.2.1 Minimax Quickest Change Detection
			9.2.2 Bayesian Quickest Change Detection
		Exercises
		References
	10 Detection of Random Processes
		10.1 Discrete-Time Random Processes
			10.1.1 Periodic Stationary Gaussian Processes
			10.1.2 Stationary Gaussian Processes
			10.1.3 Markov Processes
		10.2 Continuous-Time Processes
			10.2.1 Covariance Kernel
			10.2.2 Karhunen–Loève Transform
			10.2.3 Detection of Known Signals in Gaussian Noise
			10.2.4 Detection of Gaussian Signals in Gaussian Noise
		10.3 Poisson Processes
		10.4 General Processes
			10.4.1 Likelihood Ratio
			10.4.2 Ali–Silvey Distances
		10.5 Appendix: Proof of Proposition 10.1
		Exercises
		References
Part II Estimation
	11 Bayesian Parameter Estimation
		11.1 Introduction
		11.2 Bayesian Parameter Estimation
		11.3 MMSE Estimation
		11.4 MMAE Estimation
		11.5 MAP Estimation
		11.6 Parameter Estimation for Linear Gaussian Models
		11.7 Estimation of Vector Parameters
			11.7.1 Vector MMSE Estimation
			11.7.2 Vector MMAE Estimation
			11.7.3 Vector MAP Estimation
			11.7.4 Linear MMSE Estimation
			11.7.5 Vector Parameter Estimation in Linear Gaussian Models
			11.7.6 Other Cost Functions for Bayesian Estimation
		11.8 Exponential Families
			11.8.1 Basic Properties
			11.8.2 Conjugate Priors
		Exercises
		References
	12 Minimum Variance Unbiased Estimation
		12.1 Nonrandom Parameter Estimation
		12.2 Sufficient Statistics
		12.3 Factorization Theorem
		12.4 Rao–Blackwell Theorem
		12.5 Complete Families of Distributions
			12.5.1 Link Between Completeness and Sufficiency
			12.5.2 Link Between Completeness and MVUE
			12.5.3 Link Between Completeness and Exponential Families
		12.6 Discussion
		12.7 Examples: Gaussian Families
		Exercises
		References
	13 Information Inequality and Cramér–Rao Lower Bound
		13.1 Fisher Information and the Information Inequality
		13.2 Cramér–Rao Lower Bound
		13.3 Properties of Fisher Information
		13.4 Conditions for Equality in Information Inequality
		13.5 Vector Parameters
		13.6 Information Inequality for Random Parameters
		13.7 Biased Estimators
		13.8 Appendix: Derivation of (13.16)
		Exercises
		References
	14 Maximum Likelihood Estimation
		14.1 Introduction
		14.2 Computation of ML Estimates
		14.3 Invariance to Reparameterization
		14.4 MLE in Exponential Families
			14.4.1 Mean-Value Parameterization
			14.4.2 Relation to MVUEs
			14.4.3 Asymptotics
		14.5 Estimation of Parameters on Boundary
		14.6 Asymptotic Properties for General Families
			14.6.1 Consistency
			14.6.2 Asymptotic Efficiency and Normality
		14.7 Nonregular ML Estimation Problems
		14.8 Nonexistence of MLE
		14.9 Non-i.i.d. Observations
		14.10 M-Estimators and Least-Squares Estimators
		14.11 Expectation-Maximization (EM) Algorithm
			14.11.1 General Structure of the EM Algorithm
			14.11.2 Convergence of EM Algorithm
			14.11.3 Examples
		14.12 Recursive Estimation
			14.12.1 Recursive MLE
			14.12.2 Recursive Approximations to Least-Squares Solution
		14.13 Appendix: Proof of Theorem 14.2
		14.14 Appendix: Proof of Theorem 14.4
		Exercises
		References
	15 Signal Estimation
		15.1 Linear Innovations
		15.2 Discrete-Time Kalman Filter
			15.2.1 Time-Invariant Case
		15.3 Extended Kalman Filter
		15.4 Nonlinear Filtering for General Hidden Markov Models
		15.5 Estimation in Finite Alphabet Hidden Markov Models
			15.5.1 Viterbi Algorithm
			15.5.2 Forward-Backward Algorithm
			15.5.3 Baum–Welch Algorithm for HMM Learning
		Exercises
		References
Appendix A Matrix Analysis
Appendix B Random Vectors and Covariance Matrices
Appendix C Probability Distributions
Appendix D Convergence of Random Sequences
Index



