
Download the book: Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes

Book Specifications

Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes

Edition: 2nd
Authors:
Series:
ISBN: 3030474380, 9783030474386
Publisher: Springer
Publication year: 2021
Number of pages: 1225 [1233]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 63 MB

Book price (Toman): 40,000





If you would like the book Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes converted to PDF, EPUB, AZW3, MOBI, or DJVU format, please notify support so that the file can be converted for you.

Please note that Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.





Book Description

This book provides engineers and scientists in academia and industry with a thorough understanding of the underlying principles of nonlinear system identification. It equips them to apply the models and methods discussed to real problems with confidence, while also making them aware of potential difficulties that may arise in practice. 

Moreover, the book is self-contained, requiring only a basic grasp of matrix algebra, signals and systems, and statistics. Accordingly, it can also serve as an introduction to linear system identification, and provides a practical overview of the major optimization methods used in engineering. The focus is on gaining an intuitive understanding of the subject and the practical application of the techniques discussed. The book is not written in a theorem/proof style; instead, the mathematics is kept to a minimum, and the ideas covered are illustrated with numerous figures, examples, and real-world applications. 

In the past, nonlinear system identification was a field characterized by a variety of ad-hoc approaches, each applicable only to a very limited class of systems. With the advent of neural networks, fuzzy models, Gaussian process models, and modern structure optimization techniques, a much broader class of systems can now be handled. Although one major aspect of nonlinear systems is that virtually every one is unique, tools have since been developed that allow each approach to be applied to a wide variety of systems.




Table of Contents

Preface to the Second Edition
Preface to the First Edition
Contents
Notation
1 Introduction
	1.1 Relevance of Nonlinear System Identification
		1.1.1 Linear or Nonlinear?
		1.1.2 Prediction
		1.1.3 Simulation
		1.1.4 Optimization
		1.1.5 Analysis
		1.1.6 Control
		1.1.7 Fault Detection
	1.2 Views on Nonlinear System Identification
	1.3 Tasks in Nonlinear System Identification
		1.3.1 Choice of the Model Inputs
		1.3.2 Choice of the Excitation Signals
		1.3.3 Choice of the Model Architecture
		1.3.4 Choice of the Dynamics Representation
		1.3.5 Choice of the Model Order
		1.3.6 Choice of the Model Structure and Complexity
		1.3.7 Choice of the Model Parameters
		1.3.8 Model Validation
		1.3.9 The Role of Fiddle Parameters
	1.4 White Box, Black Box, and Gray Box Models
	1.5 Outline of the Book and Some Reading Suggestions
	1.6 Terminology
Part I Optimization
	2 Introduction to Optimization
		2.1 Overview of Optimization Techniques
		2.2 Kangaroos
		2.3 Loss Functions for Supervised Methods
			2.3.1 Maximum Likelihood Method
			2.3.2 Maximum A Posteriori and Bayes Method
		2.4 Loss Functions for Unsupervised Methods
	3 Linear Optimization
		3.1 Least Squares (LS)
			3.1.1 Covariance Matrix of the Parameter Estimate
			3.1.2 Errorbars
			3.1.3 Orthogonal Regressors
			3.1.4 Regularization/Ridge Regression
				3.1.4.1 Efficient Computation
				3.1.4.2 Covariances for Ridge Regression
				3.1.4.3 Prior Parameters for Ridge Regression
			3.1.5 Ridge Regression: Alternative Formulation
			3.1.6 L1 Regularization
			3.1.7 Noise Assumptions
			3.1.8 Weighted Least Squares (WLS)
			3.1.9 Robust Regression
			3.1.10 Least Squares with Equality Constraints
			3.1.11 Smoothing Kernels
				3.1.11.1 Ridge Regression
			3.1.12 Effective Number of Parameters
			3.1.13 L2 Boosting
				3.1.13.1 Shrinkage
		3.2 Recursive Least Squares (RLS)
			3.2.1 Reducing the Computational Complexity
			3.2.2 Tracking Time-Variant Processes
			3.2.3 Relationship Between the RLS and the Kalman Filter
		3.3 Linear Optimization with Inequality Constraints
		3.4 Subset Selection
			3.4.1 Methods for Subset Selection
			3.4.2 Orthogonal Least Squares (OLS) for Forward Selection
			3.4.3 Ridge Regression or Subset Selection?
		3.5 Summary
		3.6 Problems
	4 Nonlinear Local Optimization
		4.1 Batch and Sample Adaptation
			4.1.1 Mini-Batch Adaptation
			4.1.2 Sample Adaptation
		4.2 Initial Parameters
		4.3 Direct Search Algorithms
			4.3.1 Simplex Search Method
			4.3.2 Hooke-Jeeves Method
		4.4 General Gradient-Based Algorithms
			4.4.1 Line Search
				4.4.1.1 Interval Reduction
				4.4.1.2 Interval Location
			4.4.2 Finite Difference Techniques
			4.4.3 Steepest Descent
			4.4.4 Newton's Method
			4.4.5 Quasi-Newton Methods
			4.4.6 Conjugate Gradient Methods
		4.5 Nonlinear Least Squares Problems
			4.5.1 Gauss-Newton Method
			4.5.2 Levenberg-Marquardt Method
		4.6 Constrained Nonlinear Optimization
		4.7 Summary
		4.8 Problems
	5 Nonlinear Global Optimization
		5.1 Simulated Annealing (SA)
		5.2 Evolutionary Algorithms (EA)
			5.2.1 Evolution Strategies (ES)
			5.2.2 Genetic Algorithms (GA)
			5.2.3 Genetic Programming (GP)
		5.3 Branch and Bound (B&B)
		5.4 Tabu Search (TS)
		5.5 Summary
		5.6 Problems
	6 Unsupervised Learning Techniques
		6.1 Principal Component Analysis (PCA)
		6.2 Clustering Techniques
			6.2.1 k-Means Algorithm
			6.2.2 Fuzzy C-Means (FCM) Algorithm
			6.2.3 Gustafson-Kessel Algorithm
			6.2.4 Kohonen's Self-Organizing Map (SOM)
			6.2.5 Neural Gas Network
			6.2.6 Adaptive Resonance Theory (ART) Network
			6.2.7 Incorporating Information About the Output
		6.3 Summary
		6.4 Problems
	7 Model Complexity Optimization
		7.1 Introduction
		7.2 Bias/Variance Tradeoff
			7.2.1 Bias Error
			7.2.2 Variance Error
			7.2.3 Tradeoff
				7.2.3.1 Dependency on the Amount of Data
				7.2.3.2 Optimism
		7.3 Evaluating the Test Error and Alternatives
			7.3.1 Training, Validation, and Test Data
			7.3.2 Cross-Validation (CV)
				7.3.2.1 S-Fold Cross-Validation
				7.3.2.2 Leave-One-Out Error
				7.3.2.3 Leave-One-Out Versus S-Fold CV
				7.3.2.4 Bootstrapping
				7.3.2.5 Why Ensemble Methods Work
			7.3.3 Information Criteria
				7.3.3.1 Effective Number of Parameters and Effective Amount of Data
			7.3.4 Multi-Objective Optimization
			7.3.5 Statistical Tests
			7.3.6 Correlation-Based Methods
		7.4 Explicit Structure Optimization
		7.5 Regularization: Implicit Structure Optimization
			7.5.1 Effective Parameters
			7.5.2 Regularization by Non-Smoothness Penalties
				7.5.2.1 Curvature Penalty
				7.5.2.2 Ridge Regression
				7.5.2.3 Weight Decay
			7.5.3 Regularization by Early Stopping
			7.5.4 Regularization by Constraints
			7.5.5 Regularization by Staggered Optimization
			7.5.6 Regularization by Local Optimization
		7.6 Structured Models for Complexity Reduction
			7.6.1 Curse of Dimensionality
			7.6.2 Hybrid Structures
				7.6.2.1 Parallel Model
				7.6.2.2 Series Model
				7.6.2.3 Parameter Scheduling Model
			7.6.3 Projection-Based Structures
			7.6.4 Additive Structures
			7.6.5 Hierarchical Structures
			7.6.6 Input Space Decomposition with Tree Structures
		7.7 Summary
		7.8 Problems
	8 Summary of Part I
Part II Static Models
	9 Introduction to Static Models
		9.1 Multivariable Systems
		9.2 Basis Function Formulation
			9.2.1 Global and Local Basis Functions
			9.2.2 Linear and Nonlinear Parameters
		9.3 Extended Basis Function Formulation
		9.4 Static Test Process
		9.5 Evaluation Criteria
	10 Linear, Polynomial, and Look-Up Table Models
		10.1 Linear Models
		10.2 Polynomial Models
			10.2.1 Regularized Polynomials
				10.2.1.1 Penalization of Offset
			10.2.2 Orthogonal Polynomials
			10.2.3 Summary Polynomials
		10.3 Look-Up Table Models
			10.3.1 One-Dimensional Look-Up Tables
			10.3.2 Two-Dimensional Look-Up Tables
			10.3.3 Optimization of the Heights
			10.3.4 Optimization of the Grid
			10.3.5 Optimization of the Complete Look-Up Table
			10.3.6 Incorporation of Constraints
				10.3.6.1 Constraints on the Grid
				10.3.6.2 Constraints on the Heights
			10.3.7 Properties of Look-Up Table Models
		10.4 Summary
		10.5 Problems
	11 Neural Networks
		11.1 Construction Mechanisms
			11.1.1 Ridge Construction
			11.1.2 Radial Construction
			11.1.3 Tensor Product Construction
		11.2 Multilayer Perceptron (MLP) Network
			11.2.1 MLP Neuron
			11.2.2 Network Structure
			11.2.3 Backpropagation
			11.2.4 MLP Training
				11.2.4.1 Initialization
				11.2.4.2 Regulated Activation Weight Neural Network (RAWN) or Extreme Learning Machine
				11.2.4.3 Nonlinear Optimization of the MLP
				11.2.4.4 Combined Training Methods for the MLP
			11.2.5 Simulation Examples
			11.2.6 MLP Properties
			11.2.7 Projection Pursuit Regression (PPR)
			11.2.8 Multiple Hidden Layers
			11.2.9 Deep Learning
		11.3 Radial Basis Function (RBF) Networks
			11.3.1 RBF Neuron
			11.3.2 Network Structure
			11.3.3 RBF Training
				11.3.3.1 Random Center Placement
				11.3.3.2 Clustering for Center Placement
				11.3.3.3 Complexity Controlled Clustering for Center Placement
				11.3.3.4 Grid-Based Center Placement
				11.3.3.5 Subset Selection for Center Placement
				11.3.3.6 Nonlinear Optimization for Center Placement
			11.3.4 Simulation Examples
			11.3.5 RBF Properties
			11.3.6 Regularization Theory
			11.3.7 Normalized Radial Basis Function (NRBF) Networks
				11.3.7.1 Training
				11.3.7.2 Side Effects of Normalization
				11.3.7.3 Properties
		11.4 Other Neural Networks
			11.4.1 General Regression Neural Network (GRNN)
			11.4.2 Cerebellar Model Articulation Controller (CMAC)
			11.4.3 Delaunay Networks
			11.4.4 Just-In-Time Models
		11.5 Summary
		11.6 Problems
	12 Fuzzy and Neuro-Fuzzy Models
		12.1 Fuzzy Logic
			12.1.1 Membership Functions
			12.1.2 Logic Operators
			12.1.3 Rule Fulfillment
			12.1.4 Accumulation
		12.2 Types of Fuzzy Systems
			12.2.1 Linguistic Fuzzy Systems
			12.2.2 Singleton Fuzzy Systems
			12.2.3 Takagi-Sugeno Fuzzy Systems
		12.3 Neuro-Fuzzy (NF) Networks
			12.3.1 Fuzzy Basis Functions
			12.3.2 Equivalence Between RBF Networks and Fuzzy Models
			12.3.3 What to Optimize?
				12.3.3.1 Optimization of the Consequent Parameters
				12.3.3.2 Optimization of the Premise Parameters
				12.3.3.3 Optimization of the Rule Structure
				12.3.3.4 Optimization of Operators
			12.3.4 Interpretation of Neuro-Fuzzy Networks
			12.3.5 Incorporating and Preserving Prior Knowledge
			12.3.6 Simulation Examples
		12.4 Neuro-Fuzzy Learning Schemes
			12.4.1 Nonlinear Local Optimization
			12.4.2 Nonlinear Global Optimization
			12.4.3 Orthogonal Least Squares Learning
			12.4.4 Fuzzy Rule Extraction by a Genetic Algorithm (FUREGA)
				12.4.4.1 Coding of the Rule Structure
				12.4.4.2 Overcoming the Curse of Dimensionality
				12.4.4.3 Nested Least Squares Optimization of the Singletons
				12.4.4.4 Constrained Optimization of the Input Membership Functions
				12.4.4.5 Application Example
			12.4.5 Adaptive Spline Modeling of Observation Data (ASMOD)
		12.5 Summary
		12.6 Problems
	13 Local Linear Neuro-Fuzzy Models: Fundamentals
		13.1 Basic Ideas
			13.1.1 Illustration of Local Linear Neuro-Fuzzy Models
			13.1.2 Interpretation of the Local Linear Model Offsets
				13.1.2.1 Advantages of Local Description
			13.1.3 Interpretation as Takagi-Sugeno Fuzzy System
			13.1.4 Interpretation as Extended NRBF Network
		13.2 Parameter Optimization of the Rule Consequents
			13.2.1 Global Estimation
			13.2.2 Local Estimation
			13.2.3 Global Versus Local Estimation
			13.2.4 Robust Regression
			13.2.5 Regularized Regression
			13.2.6 Data Weighting
		13.3 Structure Optimization of the Rule Premises
			13.3.1 Local Linear Model Tree (LOLIMOT) Algorithm
				13.3.1.1 The LOLIMOT Algorithm
				13.3.1.2 Computational Complexity
				13.3.1.3 Two Dimensions
				13.3.1.4 Convergence Behavior
				13.3.1.5 AICC
			13.3.2 Different Objectives for Structure and Parameter Optimization
			13.3.3 Smoothness Optimization
			13.3.4 Splitting Ratio Optimization
			13.3.5 Merging of Local Models
			13.3.6 Principal Component Analysis for Preprocessing
			13.3.7 Models with Multiple Outputs
		13.4 Summary
		13.5 Problems
	14 Local Linear Neuro-Fuzzy Models: Advanced Aspects
		14.1 Different Input Spaces for Rule Premises and Consequents
			14.1.1 Identification of Processes with Direction-Dependent Behavior
			14.1.2 Piecewise Affine (PWA) Models
		14.2 More Complex Local Models
			14.2.1 From Local Neuro-Fuzzy Models to Polynomials
			14.2.2 Local Quadratic Models for Input Optimization
				14.2.2.1 Local Sparse Quadratic Models
			14.2.3 Different Types of Local Models
		14.3 Structure Optimization of the Rule Consequents
		14.4 Interpolation and Extrapolation Behavior
			14.4.1 Interpolation Behavior
			14.4.2 Extrapolation Behavior
				14.4.2.1 Ensuring Interpretable Extrapolation Behavior
				14.4.2.2 Incorporation of Prior Knowledge into the Extrapolation Behavior
		14.5 Global and Local Linearization
		14.6 Online Learning
			14.6.1 Online Adaptation of the Rule Consequents
				14.6.1.1 Local Recursive Weighted Least Squares Algorithm
				14.6.1.2 How Many Local Models to Adapt
				14.6.1.3 Convergence Behavior
				14.6.1.4 Robustness Against Insufficient Excitation
				14.6.1.5 Parameter Variances and Blow-Up Effect
				14.6.1.6 Computational Effort
				14.6.1.7 Structure Mismatch
			14.6.2 Online Construction of the Rule Premise Structure
		14.7 Oblique Partitioning
			14.7.1 Smoothness Determination
			14.7.2 Hinging Hyperplanes
			14.7.3 Smooth Hinging Hyperplanes
			14.7.4 Hinging Hyperplane Trees (HHT)
		14.8 Hierarchical Local Model Tree (HILOMOT) Algorithm
			14.8.1 Forming the Partition of Unity
			14.8.2 Split Parameter Optimization
				14.8.2.1 LOLIMOT Splits
				14.8.2.2 Local Model Center
				14.8.2.3 Convergence Behavior
			14.8.3 Building up the Hierarchy
			14.8.4 Smoothness Adjustment
			14.8.5 Separable Nonlinear Least Squares
				14.8.5.1 Idea
				14.8.5.2 Termination Criterion
				14.8.5.3 Constrained Optimization
				14.8.5.4 Robust Estimation
				14.8.5.5 Alternatives to Separable Nonlinear Least Squares
			14.8.6 Analytic Gradient Derivation
				14.8.6.1 Derivative of the Local Model Network
				14.8.6.2 Derivative of the Sigmoidal Splitting Function
				14.8.6.3 Derivative of the Local Model
				14.8.6.4 Summary
			14.8.7 Analyzing Input Relevance from Partitioning
				14.8.7.1 Relevance for One Split
				14.8.7.2 Relevance for the Whole Network
			14.8.8 HILOMOT Versus LOLIMOT
		14.9 Errorbars, Design of Excitation Signals, and Active Learning
			14.9.1 Errorbars
				14.9.1.1 Errorbars with Global Estimation
				14.9.1.2 Errorbars with Local Estimation
			14.9.2 Detecting Extrapolation
			14.9.3 Design of Excitation Signals
		14.10 Design of Experiments
			14.10.1 Unsupervised Methods
				14.10.1.1 Random
				14.10.1.2 Sobol Sequence
				14.10.1.3 Latin Hypercube (LH)
				14.10.1.4 Optimized Latin Hypercube
			14.10.2 Model Variance-Oriented Methods
				14.10.2.1 Optimal Design
				14.10.2.2 Polynomials
				14.10.2.3 Basis Function Network
				14.10.2.4 Multilayer Perceptron, Local Model Network, etc.
				14.10.2.5 Gaussian Process Regression
			14.10.3 Model Bias-Oriented Methods
				14.10.3.1 Model Committee
				14.10.3.2 Model Ensemble
				14.10.3.3 HILOMOT DoE
			14.10.4 Active Learning with HILOMOT DoE
				14.10.4.1 Active Learning in General
				14.10.4.2 Active Learning with HILOMOT DoE
				14.10.4.3 Query Optimization
				14.10.4.4 Sequential Strategy
				14.10.4.5 Comparison of HILOMOT DoE with Unsupervised Design
				14.10.4.6 Exploiting the Separation Between Premise and Consequent Input Spaces in Local Model Networks for DoE
				14.10.4.7 Semi-Batch Strategy
				14.10.4.8 Active Learning for Slow Modeling Approaches
				14.10.4.9 Applications of HILOMOT DoE
		14.11 Bagging Local Model Trees
			14.11.1 Unstable Models
			14.11.2 Bagging with HILOMOT
			14.11.3 Bootstrapping for Confidence Assessment
			14.11.4 Model Weighting
		14.12 Summary and Conclusions
		14.13 Problems
	15 Input Selection for Local Model Approaches
		15.1 Test Processes
			15.1.1 Test Process One (TP1)
			15.1.2 Test Process Two (TP2)
			15.1.3 Test Process Three (TP3)
			15.1.4 Test Process Four (TP4)
		15.2 Mixed Wrapper-Embedded Input Selection Approach: Authored by Julian Belz
			15.2.1 Investigation with Test Processes
				15.2.1.1 Test Process One
			15.2.2 Test Process Two
			15.2.3 Extensive Simulation Studies
				15.2.3.1 Evaluation Criteria
				15.2.3.2 Search Strategies
				15.2.3.3 A Priori Considerations
				15.2.3.4 Comparison Results
		15.3 Regularization-Based Input Selection Approach: Authored by Julian Belz
			15.3.1 Normalized L1 Split Regularization
			15.3.2 Investigation with Test Processes
				15.3.2.1 Test Process One
				15.3.2.2 Test Process Four
		15.4 Embedded Approach: Authored by Julian Belz
			15.4.1 Partition Analysis
			15.4.2 Investigation with Test Processes
				15.4.2.1 Test Process Three
				15.4.2.2 Test Process Two
		15.5 Visualization: Partial Dependence Plots
			15.5.1 Investigation with Test Processes
				15.5.1.1 Test Process One
				15.5.1.2 Test Process Two
		15.6 Miles per Gallon Data Set
			15.6.1 Mixed Wrapper-Embedded Input Selection
			15.6.2 Regularization-Based Input Selection
			15.6.3 Visualization: Partial Dependence Plot
			15.6.4 Critical Assessment of Partial Dependence Plots
	16 Gaussian Process Models (GPMs)
		16.1 Overview on Kernel Methods
			16.1.1 LS Kernel Methods
			16.1.2 Non-LS Kernel Methods
		16.2 Kernels
		16.3 Kernel Ridge Regression
			16.3.1 Transition to Kernels
		16.4 Regularizing Parameters and Functions
			16.4.1 Discrepancy in Penalty Terms
		16.5 Reproducing Kernel Hilbert Spaces (RKHS)
			16.5.1 Norms
			16.5.2 RKHS Objective and Solution
			16.5.3 Equivalent Kernels and Locality
			16.5.4 Two Points of View
				16.5.4.1 Similarity-Based View
				16.5.4.2 Superposition of Kernels View
		16.6 Gaussian Processes/Kriging
			16.6.1 Key Idea
			16.6.2 Some Basics
			16.6.3 Prior
			16.6.4 Posterior
			16.6.5 Incorporating Output Noise
			16.6.6 Model Variance
			16.6.7 Incorporating a Base Model
				16.6.7.1 Subsequent Optimization
				16.6.7.2 Simultaneous Optimization
			16.6.8 Relationship to RBF Networks
			16.6.9 High-Dimensional Kernels
		16.7 Hyperparameters
			16.7.1 Influence of the Hyperparameters
			16.7.2 Optimization of the Hyperparameters
				16.7.2.1 Number of Hyperparameters
				16.7.2.2 One Versus Multiple Length Scales
				16.7.2.3 Hyperparameter Optimization Methods
			16.7.3 Marginal Likelihood
				16.7.3.1 Likelihood for the Noise-Free Case
				16.7.3.2 Marginal Likelihood for the Noisy Case
				16.7.3.3 Marginal Likelihood Versus Leave-One-Out Cross Validation
			16.7.4 A Note on the Prior Variance
		16.8 Summary
		16.9 Problems
	17 Summary of Part II
Part III Dynamic Models
	18 Linear Dynamic System Identification
		18.1 Overview of Linear System Identification
		18.2 Excitation Signals
		18.3 General Model Structure
			18.3.1 Terminology and Classification
			18.3.2 Optimal Predictor
				18.3.2.1 Simulation
				18.3.2.2 Prediction
			18.3.3 Some Remarks on the Optimal Predictor
			18.3.4 Prediction Error Methods
		18.4 Time Series Models
			18.4.1 Autoregressive (AR)
			18.4.2 Moving Average (MA)
			18.4.3 Autoregressive Moving Average (ARMA)
		18.5 Models with Output Feedback
			18.5.1 Autoregressive with Exogenous Input (ARX)
				18.5.1.1 Least Squares (LS)
				18.5.1.2 Consistency Problem
				18.5.1.3 Instrumental Variables (IV) Method
				18.5.1.4 Correlation Functions Least Squares (COR-LS)
			18.5.2 Autoregressive Moving Average with Exogenous Input (ARMAX)
				18.5.2.1 Estimation of ARMAX Models
			18.5.3 Autoregressive Autoregressive with Exogenous Input (ARARX)
			18.5.4 Output Error (OE)
				18.5.4.1 Nonlinear Optimization of the OE Model Parameters
				18.5.4.2 Repeated Least Squares and Filtering for OE Model Estimation
			18.5.5 Box-Jenkins (BJ)
			18.5.6 State Space Models
			18.5.7 Simulation Example
		18.6 Models Without Output Feedback
			18.6.1 Finite Impulse Response (FIR)
				18.6.1.1 Comparison ARX Versus FIR
			18.6.2 Regularized FIR Models
				18.6.2.1 TC Kernel
				18.6.2.2 Filter Interpretation
			18.6.3 Bias and Variance of Regularized FIR Models
			18.6.4 Impulse Response Preservation (IRP) FIR Approach
				18.6.4.1 Impulse Response Preservation (IRP)
				18.6.4.2 Hyperparameter Optimization
				18.6.4.3 Order Selection
				18.6.4.4 Consequences of Undermodeling
				18.6.4.5 Summary
			18.6.5 Orthonormal Basis Functions (OBF)
				18.6.5.1 Laguerre Filters
				18.6.5.2 Poisson Filters
				18.6.5.3 Kautz Filters
				18.6.5.4 Generalized Filters
			18.6.6 Simulation Example
		18.7 Some Advanced Aspects
			18.7.1 Initial Conditions
			18.7.2 Consistency
			18.7.3 Frequency-Domain Interpretation
			18.7.4 Relationship Between Noise Model and Filtering
			18.7.5 Offsets
		18.8 Recursive Algorithms
			18.8.1 Recursive Least Squares (RLS) Method
			18.8.2 Recursive Instrumental Variables (RIV) Method
			18.8.3 Recursive Extended Least Squares (RELS) Method
			18.8.4 Recursive Prediction Error Methods (RPEM)
		18.9 Determination of Dynamic Orders
		18.10 Multivariable Systems
			18.10.1 P-Canonical Model
			18.10.2 Matrix Polynomial Model
			18.10.3 Subspace Methods
		18.11 Closed-Loop Identification
			18.11.1 Direct Methods
			18.11.2 Indirect Methods
				18.11.2.1 Two-Stage Method
				18.11.2.2 Coprime Factor Identification
			18.11.3 Identification for Control
		18.12 Summary
		18.13 Problems
	19 Nonlinear Dynamic System Identification
		19.1 From Linear to Nonlinear System Identification
		19.2 External Dynamics
			19.2.1 Illustration of the External Dynamics Approach
				19.2.1.1 Relationship Between the Input/Output Signals and the Approximator Input Space
				19.2.1.2 Principal Component Analysis and Higher-Order Differences
				19.2.1.3 One-Step Prediction Surfaces
				19.2.1.4 Effect of the Sampling Time
			19.2.2 Series-Parallel and Parallel Models
			19.2.3 Nonlinear Dynamic Input/Output Model Classes
				19.2.3.1 Models with Output Feedback
				19.2.3.2 Models Without Output Feedback
			19.2.4 Restrictions of Nonlinear Input/Output Models
		19.3 Internal Dynamics
		19.4 Parameter Scheduling Approach
		19.5 Training Recurrent Structures
			19.5.1 Backpropagation-Through-Time (BPTT) Algorithm
			19.5.2 Real-Time Recurrent Learning
		19.6 Multivariable Systems
			19.6.1 Issues with Multiple Inputs
				19.6.1.1 Asymmetry Going from ARX → OE to NARX → NOE
				19.6.1.2 Mixed Dynamic and Static Behavior
		19.7 Excitation Signals
			19.7.1 From PRBS to APRBS
				19.7.1.1 APRBS Construction
				19.7.1.2 APRBS: Smoothing the Steps
			19.7.2 Ramp
			19.7.3 Multisine
			19.7.4 Chirp
			19.7.5 APRBS
				19.7.5.1 Sinusoidal APRBS
			19.7.6 NARX and NOBF Input Spaces
			19.7.7 MISO Systems
				19.7.7.1 Excitation of One Input at a Time
				19.7.7.2 Excitation of All Inputs Simultaneously
				19.7.7.3 Hold Time
			19.7.8 Tradeoffs
		19.8 Optimal Excitation Signal Generator: Coauthored by Tim O. Heinz
			19.8.1 Approaches with Fisher Information
			19.8.2 Optimized Nonlinear Input Signal (OMNIPUS) for SISO Systems
			19.8.3 Optimized Nonlinear Input Signal (OMNIPUS) for MISO Systems
				19.8.3.1 Separate Optimization of Each Input
				19.8.3.2 Escaping the Curse of Dimensionality
				19.8.3.3 Results for Two Inputs
				19.8.3.4 Input Signal Correlation
				19.8.3.5 Input Value Distribution
				19.8.3.6 Extensions
		19.9 Determination of Dynamic Orders
		19.10 Summary
		19.11 Problems
	20 Classical Polynomial Approaches
		20.1 Properties of Dynamic Polynomial Models
		20.2 Kolmogorov-Gabor Polynomial Models
		20.3 Volterra-Series Models
		20.4 Parametric Volterra-Series Models
		20.5 NDE Models
		20.6 Hammerstein Models
		20.7 Wiener Models
		20.8 Problems
	21 Dynamic Neural and Fuzzy Models
		21.1 Curse of Dimensionality
			21.1.1 MLP Networks
			21.1.2 RBF Networks
			21.1.3 Singleton Fuzzy and NRBF Models
		21.2 Interpolation and Extrapolation Behavior
		21.3 Training
			21.3.1 MLP Networks
			21.3.2 RBF Networks
			21.3.3 Singleton Fuzzy and NRBF Models
		21.4 Integration of a Linear Model
		21.5 Simulation Examples
			21.5.1 MLP Networks
			21.5.2 RBF Networks
			21.5.3 Singleton Fuzzy and NRBF Models
		21.6 Summary
		21.7 Problems
	22 Dynamic Local Linear Neuro-Fuzzy Models
		22.1 One-Step Prediction Error Versus Simulation Error
		22.2 Determination of the Rule Premises
		22.3 Linearization
			22.3.1 Static and Dynamic Linearization
			22.3.2 Dynamics of the Linearized Model
			22.3.3 Different Rule Consequent Structures
		22.4 Model Stability
			22.4.1 Influence of Rule Premise Inputs on Stability
				22.4.1.1 Rule Premise Inputs Without Output Feedback
				22.4.1.2 Rule Premise Inputs with Output Feedback
			22.4.2 Lyapunov Stability and Linear Matrix Inequalities (LMIs)
			22.4.3 Ensuring Stable Extrapolation
		22.5 Dynamic LOLIMOT Simulation Studies
			22.5.1 Nonlinear Dynamic Test Processes
			22.5.2 Hammerstein Process
			22.5.3 Wiener Process
			22.5.4 NDE Process
			22.5.5 Dynamic Nonlinearity Process
		22.6 Advanced Local Linear Methods and Models
			22.6.1 Local Linear Instrumental Variables (IV) Method
			22.6.2 Local Linear Output Error (OE) Models
			22.6.3 Local Linear ARMAX Models
		22.7 Local Regularized Finite Impulse Response Models: Coauthored by Tobias Münker
			22.7.1 Structure
			22.7.2 Local Estimation
			22.7.3 Hyperparameter Tuning
			22.7.4 Evaluation of Performance
		22.8 Local Linear Orthonormal Basis Functions Models
		22.9 Structure Optimization of the Rule Consequents
		22.10 Summary and Conclusions
		22.11 Problems
	23 Neural Networks with Internal Dynamics
		23.1 Fully Recurrent Networks
		23.2 Partially Recurrent Networks
		23.3 State Recurrent Networks
		23.4 Locally Recurrent Globally Feedforward Networks
		23.5 Long Short-Term Memory (LSTM) Networks
		23.6 Internal Versus External Dynamics
		23.7 Problems
Part IV Applications
	24 Applications of Static Models
		24.1 Driving Cycle
			24.1.1 Process Description
			24.1.2 Smoothing of a Driving Cycle
			24.1.3 Improvements and Extensions
			24.1.4 Differentiation
			24.1.5 The Role of Look-Up Tables in Automotive Electronics
			24.1.6 Modeling of Exhaust Gases
			24.1.7 Optimization of Exhaust Gases
			24.1.8 Outlook: Dynamic Models
		24.2 Summary
	25 Applications of Dynamic Models
		25.1 Cooling Blast
			25.1.1 Process Description
			25.1.2 Experimental Results
				25.1.2.1 Excitation Signal Design
				25.1.2.2 Modeling and Identification
		25.2 Diesel Engine Turbocharger
			25.2.1 Process Description
			25.2.2 Experimental Results
				25.2.2.1 Excitation and Validation Signals
				25.2.2.2 Modeling and Identification
				25.2.2.3 Model Properties
				25.2.2.4 Choice of Sampling Time
		25.3 Thermal Plant
			25.3.1 Process Description
			25.3.2 Transport Process
				25.3.2.1 Modeling and Identification
				25.3.2.2 Model Properties
			25.3.3 Tubular Heat Exchanger
				25.3.3.1 Modeling and Identification
				25.3.3.2 Model Properties
			25.3.4 Cross-Flow Heat Exchanger
				25.3.4.1 Data
				25.3.4.2 Modeling and Identification
				25.3.4.3 Model Properties
		25.4 Summary
	26 Design of Experiments
		26.1 Practical DoE Aspects: Authored by Julian Belz
			26.1.1 Function Generator
			26.1.2 Order of Experimentation
			26.1.3 Biggest Gap Sequence
			26.1.4 Median Distance Sequence
			26.1.5 Intelligent k-Means Sequence
				26.1.5.1 Intelligent k-Means Initialization
			26.1.6 Other Determination Strategies
			26.1.7 Comparison on Synthetic Functions
			26.1.8 Summary
			26.1.9 Corner Measurement
			26.1.10 Comparison of Space-Filling Designs
		26.2 Active Learning for Structural Health Monitoring
			26.2.1 Simulation Results
			26.2.2 Experimental Results
		26.3 Active Learning for Engine Measurement
			26.3.1 Problem Setting
			26.3.2 Operating Point-Specific Engine Models
				26.3.2.1 Results
				26.3.2.2 Reducing Measurement Time with HILOMOT DoE
				26.3.2.3 Multiple Outputs
			26.3.3 Global Engine Model
		26.4 Nonlinear Dynamic Excitation Signal Design for Common Rail Injection
			26.4.1 Example: High-Pressure Fuel Supply System
			26.4.2 Identifying the Rail Pressure System
				26.4.2.1 Local Model Networks
				26.4.2.2 Gaussian Process Models (GPMs)
			26.4.3 Results
				26.4.3.1 Operating Point Depending Constraints
				26.4.3.2 Data Acquisition
				26.4.3.3 Accuracy of the Simulation Results
				26.4.3.4 Qualitative Analysis
				26.4.3.5 Quantitative Analysis
				26.4.3.6 Data Coverage of the Input Space
	27 Input Selection Applications
		27.1 Air Mass Flow Prediction
			27.1.1 Mixed Wrapper-Embedded Input Selection
			27.1.2 Partition Analysis
		27.2 Fan Metamodeling: Authored by Julian Belz
			27.2.1 Centrifugal Impeller Geometry
			27.2.2 Axial Impeller Geometry
			27.2.3 Why Metamodels?
			27.2.4 Design of Experiments: Centrifugal Fan Metamodel
			27.2.5 Design of Experiments: Axial Fan Metamodel
			27.2.6 Order of Experimentation
			27.2.7 Goal-Oriented Active Learning
			27.2.8 Mixed Wrapper-Embedded Input Selection
			27.2.9 Centrifugal Fan Metamodel
			27.2.10 Axial Fan Metamodel
			27.2.11 Summary
		27.3 Heating, Ventilating, and Air Conditioning System
			27.3.1 Problem Configuration
			27.3.2 Available Data Sets
			27.3.3 Mixed Wrapper-Embedded Input Selection
			27.3.4 Results
	28 Applications of Advanced Methods
		28.1 Nonlinear Model Predictive Control
		28.2 Online Adaptation
			28.2.1 Variable Forgetting Factor
			28.2.2 Control and Adaptation Models
			28.2.3 Parameter Transfer
			28.2.4 Systems with Multiple Inputs
			28.2.5 Experimental Results
		28.3 Fault Detection
			28.3.1 Methodology
			28.3.2 Experimental Results
		28.4 Fault Diagnosis
			28.4.1 Methodology
			28.4.2 Experimental Results
		28.5 Reconfiguration
	29 LMN Toolbox
		29.1 Termination Criteria
			29.1.1 Corrected AIC
			29.1.2 Corrected BIC
			29.1.3 Validation
			29.1.4 Maximum Number of Local Models
			29.1.5 Effective Number of Parameters
			29.1.6 Maximum Training Time
		29.2 Polynomial Degree of Local Models
		29.3 Dynamic Models
			29.3.1 Nonlinear Orthonormal Basis Function Models
		29.4 Different Input Spaces x and z
		29.5 Smoothness
		29.6 Data Weighting
		29.7 Visualization and Simplified Tool
A Vectors and Matrices
	A.1 Vector and Matrix Derivatives
	A.2 Gradient, Hessian, and Jacobian
B Statistics
	B.1 Deterministic and Random Variables
	B.2 Probability Density Function (pdf)
	B.3 Stochastic Processes and Ergodicity
	B.4 Expectation
	B.5 Variance
	B.6 Correlation and Covariance
	B.7 Properties of Estimators
References
Index



