


Download the book Fundamentals of Stochastic Models

Book details

Fundamentals of Stochastic Models

Edition:
Authors:
Series: Operations Research Series
ISBN: 036771261X, 9780367712617
Publisher: CRC Press
Publication year: 2023
Number of pages: 814 [815]
Language: English
File format: PDF (converted to EPUB or AZW3 on user request)
File size: 34 MB

Book price (Toman): 47,000





If you would like the file of Fundamentals of Stochastic Models converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.

Please note that Fundamentals of Stochastic Models is the original-language edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


Description of the book (in the original language)



Table of contents

Cover
Half Title
Series Page
Title Page
Copyright Page
Contents
List of Figures
List of Tables
Preface
Acknowledgments
Chapter 1: Introduction
	1.1. Stochastic Process Classification
	1.2. Organization of the Book
Part I: Fundamentals of Stochastic Models
	Chapter 2: Discrete-Time Markov Chains
		2.1. Dynamics of Probability Measures
		2.2. Formulation of DTMC
		2.3. Performance Analysis of DTMC
			2.3.1. Classification of States
			2.3.2. Steady State Analysis
			2.3.3. Positive Recurrence for DTMC
			2.3.4. Transient Analysis
			2.3.5. Branching Processes
		References
	Chapter 3: Continuous-Time Markov Chains
		3.1. Formulation of CTMC
		3.2. Analyzing the First CTMC: A Birth-and-Death Process
		3.3. Transition Probability Functions for CTMC
			3.3.1. Uniformization
		3.4. Stationary Distribution of CTMC
			3.4.1. Open Jackson Networks
		3.5. Using Transforms
		3.6. Using Time Reversibility
		3.7. Some Useful Continuous-time Markov Chains
			3.7.1. Poisson Process and Its Extensions
				3.7.1.1. Non-Homogeneous Poisson Process
				3.7.1.2. Pure Birth Process
				3.7.1.3. The Yule Process
			3.7.2. Pure Death Processes
				3.7.2.1. Transient Analysis on CTMC
		References
	Chapter 4: Structured Markov Chains
		4.1. Phase-Type Distributions
		4.2. Properties of PH Distribution
			4.2.1. Closure Properties
			4.2.2. Dense Property of PH Distribution
			4.2.3. Non-Uniqueness of Representation of PH Distribution
		4.3. Fitting PH Distribution to Empirical Data or a Theoretical Distribution
			4.3.1. The EM Approach
		4.4. The EM Algorithm
			4.4.1. Convergence of EM Algorithm
			4.4.2. EM Algorithm for PH Distribution
		4.5. Markovian Arrival Processes
			4.5.1. From PH Renewal Processes to Markovian Arrival Processes
			4.5.2. Transition Probability Function Matrix
		4.6. Fitting MAP to Empirical Data
			4.6.1. Grouped Data for MAP Fitting
		4.7. Quasi-Birth-and-Death Process (QBD) – Analysis of MAP/PH/1 Queue
			4.7.1. QBD – A Structured Markov Chain
				4.7.1.1. Matrix-Geometric Solution – R-Matrix
				4.7.1.2. Fundamental Period – G-Matrix
			4.7.2. MAP/PH/1 Queue – A Continuous-Time QBD Process
		4.8. GI/M/1 Type and M/G/1 Type Markov Chains
			4.8.1. GI/M/1 Type Markov Chains
			4.8.2. M/G/1 Type Markov Chains
			4.8.3. Continuous-Time Counterpart of Discrete-Time QBD Process
		References
	Chapter 5: Renewal Processes and Embedded Markov Chains
		5.1. Renewal Processes
			5.1.1. Basic Results of Renewal Processes
			5.1.2. More on Renewal Equations
			5.1.3. Limit Theorems for Renewal Processes
			5.1.4. Blackwell’s Theorem and Key Renewal Theorem
			5.1.5. Inspection Paradox
			5.1.6. Some Variants of Renewal Processes
			5.1.7. Renewal Reward Processes
			5.1.8. Regenerative Processes
		5.2. Markov Renewal Processes
			5.2.1. Basic Results for Markov Renewal Processes
			5.2.2. Results for Semi-Markov Processes
			5.2.3. Semi-Regenerative Processes
			5.2.4. M/G/1 and GI/M/1 Queues
				5.2.4.1. M/G/1 Queue
				5.2.4.2. GI/M/1 Queue
			5.2.5. An Inventory Model
			5.2.6. Supplementary Variable Method
		References
	Chapter 6: Random Walks and Brownian Motions
		6.1. Random Walk Processes
			6.1.1. Simple Random Walk – Basics
			6.1.2. Spitzer’s Identity Linking Random Walks to Queueing Systems
			6.1.3. System Point Level Crossing Method
			6.1.4. Change of Measures in Random Walks
			6.1.5. The Binomial Securities Market Model
			6.1.6. The Arc Sine Law
			6.1.7. The Gambler’s Ruin Problem
			6.1.8. General Random Walk
				6.1.8.1. A Brief Introduction to DTMP
			6.1.9. Basic Properties of GRW
				6.1.9.1. Central Limit Theorem for GRW
		6.2. Brownian Motion
			6.2.1. Brownian Motion as a Limit of Random Walks
			6.2.2. Gaussian Processes
			6.2.3. Sample Path Properties
				6.2.3.1. Infinite Zeros and Non-Differentiability
				6.2.3.2. Re-Scaling a Process
				6.2.3.3. The Reflection Principle – Hitting Times and Maximum of BM
				6.2.3.4. Conditional Distribution of BM
				6.2.3.5. BM as a Martingale
			6.2.4. Transition Probability Function of BM
			6.2.5. The Black-Scholes Formula
		References
	Chapter 7: Reflected Brownian Motion Approximations to Simple Stochastic Systems
		7.1. Approximations to G/G/1 Queue
		7.2. Queue Length as Reflection Mapping
		7.3. Functional Strong Law of Large Numbers (FSLLN) – Fluid Limit
		7.4. Functional Central Limit Theorem (FCLT) – Diffusion Limit
		7.5. Heavy Traffic Approximation to G/G/1 Queue
		7.6. Bounds for Fluid and Diffusion Limit Approximations
		7.7. Applications of RBM Approach
			7.7.1. A Two-Station Tandem Queue
			7.7.2. A Production-Inventory Model
		References
	Chapter 8: Large Queueing Systems
		8.1. Multi-Dimensional Reflected Brownian Motion Approximation to Queueing Networks
			8.1.1. Oblique Reflection Mapping
			8.1.2. A Fluid Network
			8.1.3. A Brownian Motion Network
			8.1.4. Fluid and Diffusion Approximations to Queueing Networks
		8.2. Decomposition Approach
			8.2.1. Superposition of Flows
			8.2.2. Flowing through a Queue
			8.2.3. Splitting a Flow
			8.2.4. Decomposition of a Queueing Network
		8.3. One-Stage Queueing System with Many Servers
			8.3.1. Multi-Server Queues without Customer Abandonments
				8.3.1.1. Increasing ρ with fixed s and μ
				8.3.1.2. Increasing λ and s with fixed ρ
				8.3.1.3. Increasing λ and s with an Increasing ρ
			8.3.2. Multi-Server Queues with Customer Abandonments
		8.4. Queues with Time-Varying Parameters
			8.4.1. Fluid Approximation
			8.4.2. Diffusion Approximation
		8.5. Mean Field Method for a Large System with Many Identical Interacting Parts
		References
	Chapter 9: Static Optimization in Stochastic Models
		9.1. Optimization Based on Regenerative Cycles
			9.1.1. Optimal Age Replacement Policy for a Multi-State System
			9.1.2. Optimal Threshold Policy for an M/G/1 Queue
		9.2. Optimization Based on Stationary Performance Measures – Economic Analysis of Stable Queueing Systems
			9.2.1. Individual Optimization
			9.2.2. Social Optimization
			9.2.3. Service Provider Optimization
		9.3. Optimal Service-Order Policy for a Multi-Class Queue
			9.3.1. Preliminary Results for a Multi-Class M/G/1 Queue
			9.3.2. Optimal Service-Order Policy for a Multi-Class Queue with Nonpreemptive Priority – cμ Rule
		9.4. Customer Assignment Problem in a Queue Attended by Heterogeneous Servers
			9.4.1. Problem Description
			9.4.2. Characterization of Optimal Policy
			9.4.3. Optimal Multi-Threshold Policy – c/μ Rule
		9.5. Performance Measures in Optimization of Stochastic Models
			9.5.1. Value at Risk and Conditional Value at Risk
			9.5.2. A Newsvendor Problem
		References
	Chapter 10: Dynamic Optimization in Stochastic Models
		10.1. Discrete-Time Finite Markov Decision Process
		10.2. Computational Approach to DTMDP
			10.2.1. Value Iteration Method
			10.2.2. Policy Iteration Method
			10.2.3. Computational Complexity
			10.2.4. Average Cost MDP with Infinite Horizon
		10.3. Semi-Markov Decision Process
			10.3.1. Characterizing the Structure of Optimal Policy
			10.3.2. Computational Approach to SMDP
		10.4. Stochastic Games – An Extension of MDP
		References
	Chapter 11: Learning in Stochastic Models
		11.1. Multi-Arm Bandits Problem
			11.1.1. Sample Average Methods
			11.1.2. Effect of Initial Values
			11.1.3. Upper Confidence Bounds
			11.1.4. Action Preference Method
		11.2. Monte Carlo-Based MDP Models
			11.2.1. Model-Based Learning
			11.2.2. Model-Free Learning
			11.2.3. Model-Free Learning with Bootstrapping
			11.2.4. Q-Learning
			11.2.5. Temporal-Difference Learning
			11.2.6. Convergence of Learning Algorithms
			11.2.7. Learning in Stochastic Games – An Extension of Q-Learning
		11.3. Hidden Markov Models
		11.4. Partially Observable Markov Decision Processes
		References
Part II: Appendices: Elements of Probability and Stochastics
	Chapter A: Basics of Probability Theory
		A.1. Probability Space
		A.2. Basic Probability Rules
			A.2.0.1. Bayesian Belief Networks
		A.3. Random Variables
		A.4. Probability Distribution Function
			A.4.1. Multivariate Distribution and Copulas
			A.4.2. Transforming Distribution Functions
		A.5. Independent Random Variables
		A.6. Transforms for Random Variables
		A.7. Popular Distributions in Stochastic Models
			A.7.1. Chi-Square Distribution
			A.7.2. F Distribution
			A.7.3. t Distribution
			A.7.4. Derivation of Probability Density Function
			A.7.5. Some Comments on Degrees of Freedom
		A.8. Limits of Sets
		A.9. Borel-Cantelli Lemmas
		A.10. A Fundamental Probability Model
		A.11. Sets of Measure Zero
		A.12. Cantor Set
		A.13. Integration in Probability Measure
			A.13.1. Radon-Nikodym Theorem
		References
	Chapter B: Conditional Expectation and Martingales
		B.1. σ-Algebra Representing Amount of Information
		B.2. Conditional Expectation in Discrete Time
		B.3. Conditional Expectation in Continuous-Time
		B.4. Martingales
			B.4.1. Optional Sampling
		References
	Chapter C: Some Useful Bounds, Inequalities, and Limit Laws
		C.1. Markov Inequality
		C.2. Jensen’s Inequality
		C.3. Cauchy–Schwarz Inequality
		C.4. Hölder’s Inequality
		C.5. Chernoff Bounds and Hoeffding’s Inequality
		C.6. The Law of the Iterated Logarithm
			C.6.1. Preparation
			C.6.2. Verification
		References
	Chapter D: Non-Linear Programming in Stochastics
		D.1. Non-Linear Optimization – Multi-Linear Regressions
			D.1.1. Multiple Linear Regression
				D.1.1.1. Least Squares Method
				D.1.1.2. Maximum Likelihood Method
		D.2. Entropy and Submodular Functions Optimization
			D.2.1. Entropy
			D.2.2. Maximum Entropy Principle in Approximating a Probability Distribution
			D.2.3. Submodular Function
			D.2.4. Naive Bayes’ Model and Feature Selection Problem
		References
	Chapter E: Change of Probability Measure for a Normal Random Variable
		References
	Chapter F: Convergence of Random Variables
		F.1. Convergence in Distribution
		F.2. Convergence in Probability
		F.3. Convergence in Mean
		F.4. Almost Sure Convergence
		References
	Chapter G: Major Theorems for Stochastic Process Limits
		G.1. Skorohod Representation Theorem
		G.2. Continuous Mapping Theorem
		G.3. Random Time-Change Theorem
		G.4. Convergence-Together Theorem
		G.5. Donsker’s Theorem – FCLT
		G.6. Strong Approximation Theorem
		References
	Chapter H: A Brief Review on Stochastic Calculus
		H.1. Construction of BM – Existence of BM
		H.2. Diffusion Processes and Kolmogorov’s Equations
		H.3. Stochastic Differential Equations
		H.4. Some Stochastic Calculus Rules
			H.4.1. Variation of BM
			H.4.2. Stochastic Integration
			H.4.3. Stochastic Differentiation
		H.5. Ito’s Formula
		H.6. Some Theorems on Stochastic Integration
		H.7. More on Stochastic Differential Equations
		References
	Chapter I: Comparison of Stochastic Processes – Stochastic Orders
		I.1. Basic Stochastic Ordering
			I.1.1. Coupling
		I.2. Failure Rate Ordering
		I.3. Likelihood Ratio Ordering
		I.4. Variability Ordering
		References
	Chapter J: Matrix Algebra and Markov Chains
		J.1. Positive and Non-Negative Vectors and Matrices
		J.2. Power of Matrices
			J.2.1. Functions of Square Matrices
			J.2.2. Geometric Series of Matrices
		J.3. Stochastic Matrices and Markov Chains
			J.3.1. Positive Semi-Definite Matrices
		J.4. A Brief Look at M-Matrix
		J.5. Definitions of Derivatives in Matrix Calculus
		References
Index



