Edition: 1
Authors: Rajamani Doraiswami, Chris Diduch, Maryhelen Stevenson
Series:
ISBN: 9781119990123, 9781118536490
Publisher: Wiley
Year: 2014
Pages: 538
Language: English
File format: PDF
File size: 7 MB
Keywords for Identification of Physical Systems: Applications to Condition Monitoring, Fault Diagnosis, Soft Sensor, and Controller Design: Systems engineering. Systems engineering -- Mathematics. Technology & Engineering / Engineering (General). Technology & Engineering / Reference.
Identification of a physical system deals with the problem of identifying its mathematical model using the measured input and output data. As the physical system is generally complex and nonlinear, and its input–output data are corrupted by noise, there are fundamental theoretical and practical issues that need to be considered.
Identification of Physical Systems addresses this need, presenting a systematic, unified approach to the problem of physical system identification and its practical applications. Starting with the least-squares method, the authors develop various schemes to address the issues of accuracy, variation in the operating regimes, closed-loop operation, and interconnected subsystems. Also presented is a non-parametric, signal- or data-based identification scheme, which provides a quick macroscopic picture of the system to complement the precise microscopic picture given by the parametric model-based scheme. Finally, a sequential integration of totally different schemes, such as the non-parametric scheme, the Kalman filter, and the parametric model, is developed to meet the speed and accuracy requirements of mission-critical systems.
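As a quick illustration of the least-squares identification approach referred to above, the following Python sketch fits a second-order ARX model to simulated noisy input-output data. It is a generic, textbook-style example rather than code from the book; the model order, coefficient values, and noise level are hypothetical.

import numpy as np

# Hypothetical second-order ARX model (not from the book):
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)                # persistently exciting input (white noise)
a1, a2, b1, b2 = 1.5, -0.7, 1.0, 0.5      # assumed "true" parameters, for illustration only
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.1*rng.standard_normal()

# Stack past outputs and inputs into a data (regression) matrix H and solve
# the least-squares problem min ||y - H*theta||^2 for the parameter vector.
H = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta_hat, *_ = np.linalg.lstsq(H, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta_hat, 3))

With white noise the estimate lands close to the assumed values; when the noise is colored or the data are collected in closed loop, the plain least-squares estimate becomes biased, which is the motivation for the higher-order, prediction-error, and closed-loop schemes developed in the book.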
Key features:
Identification of Physical Systems is a comprehensive reference for researchers and practitioners working in this field and is also a useful source of information for graduate students in electrical, computer, biomedical, chemical, and mechanical engineering.
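The description also highlights the Kalman filter's role in fusing information from the model and the measurements (treated in Chapter 6 of the contents below). The scalar sketch that follows shows the standard predict/correct recursion in which the Kalman gain weighs the model prediction against each new measurement; the first-order model and the noise variances are hypothetical, and the code illustrates the general technique rather than the book's specific formulation.

import numpy as np

# Minimal scalar Kalman filter sketch (standard recursion, hypothetical model):
# state:       x[k+1] = a*x[k] + w[k],  Var(w) = Q
# measurement: y[k]   = x[k] + v[k],    Var(v) = R
rng = np.random.default_rng(1)
a, Q, R, N = 0.95, 0.01, 0.25, 200
x = np.zeros(N)
x[0] = 1.0
for k in range(1, N):
    x[k] = a*x[k-1] + np.sqrt(Q)*rng.standard_normal()
y = x + np.sqrt(R)*rng.standard_normal(N)

x_hat, P = 0.0, 1.0                       # initial estimate and its error variance
for k in range(N):
    x_pred = a*x_hat                      # predict from the model
    P_pred = a*a*P + Q
    K = P_pred/(P_pred + R)               # Kalman gain: model vs. measurement uncertainty
    x_hat = x_pred + K*(y[k] - x_pred)    # fuse prediction with the measurement
    P = (1.0 - K)*P_pred
print("steady-state gain ~", round(K, 3), ", final estimate ~", round(x_hat, 3))

The gain settles to a constant fixed by the ratio of the process and measurement noise variances, the same ratio-of-variances view of information fusion taken in Sections 6.6.12-6.6.15 of the contents below.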
Contents:
Preface xv
Nomenclature xxi
1 Modeling of Signals and Systems 1 1.1 Introduction 1 1.2 Classification of Signals 2 1.2.1 Deterministic and Random Signals 3 1.2.2 Bounded and Unbounded Signal 3 1.2.3 Energy and Power Signals 3 1.2.4 Causal, Non-causal, and Anti-causal Signals 4 1.2.5 Causal, Non-causal, and Anti-causal Systems 4 1.3 Model of Systems and Signals 5 1.3.1 Time-Domain Model 5 1.3.2 Frequency-Domain Model 8 1.4 Equivalence of Input-Output and State-Space Models 8 1.4.1 State-Space and Transfer Function Model 8 1.4.2 Time-Domain Expression for the Output Response 8 1.4.3 State-Space and the Difference Equation Model 9 1.4.4 Observer Canonical Form 9 1.4.5 Characterization of the Model 10 1.4.6 Stability of (Discrete-Time) Systems 10 1.4.7 Minimum Phase System 11 1.4.8 Pole-Zero Locations and the Output Response 11 1.5 Deterministic Signals 11 1.5.1 Transfer Function Model 12 1.5.2 Difference Equation Model 12 1.5.3 State-Space Model 14 1.5.4 Expression for an Impulse Response 14 1.5.5 Periodic Signal 14 1.5.6 Periodic Impulse Train 15 1.5.7 A Finite Duration Signal 16 1.5.8 Model of a Class of All Signals 17 1.5.9 Examples of Deterministic Signals 18 1.6 Introduction to Random Signals 23 1.6.1 Stationary Random Signal 23 1.6.2 Joint PDF and Statistics of Random Signals 24 1.6.3 Ergodic Process 27 1.7 Model of Random Signals 28 1.7.1 White Noise Process 29 1.7.2 Colored Noise 30 1.7.3 Model of a Random Waveform 30 1.7.4 Classification of the Random Waveform 31 1.7.5 Frequency Response and Pole-Zero Locations 31 1.7.6 Illustrative Examples of Filters 36 1.7.7 Illustrative Examples of Random Signals 36 1.7.8 Pseudo Random Binary Sequence (PRBS) 38 1.8 Model of a System with Disturbance and Measurement Noise 41 1.8.1 Input-Output Model of the System 41 1.8.2 State-Space Model of the System 44 1.8.3 Illustrative Examples in Integrated System Model 47 1.9 Summary 50 References 54 Further Readings 54
2 Characterization of Signals: Correlation and Spectral Density 57 2.1 Introduction 57 2.2 Definitions of Auto- and Cross-Correlation (and Covariance) 58 2.2.1 Properties of Correlation 61 2.2.2 Normalized Correlation and Correlation Coefficient 66 2.3 Spectral Density: Correlation in the Frequency Domain 67 2.3.1 Z-transform of the Correlation Function 69 2.3.2 Expressions for Energy and Power Spectral Densities 71 2.4 Coherence Spectrum 74 2.5 Illustrative Examples in Correlation and Spectral Density 76 2.5.1 Deterministic Signals: Correlation and Spectral Density 76 2.5.2 Random Signals: Correlation and Spectral Density 87 2.6 Input-Output Correlation and Spectral Density 91 2.6.1 Generation of Random Signal from White Noise 92 2.6.2 Identification of Non-Parametric Model of a System 93 2.6.3 Identification of a Parametric Model of a Random Signal 94 2.7 Illustrative Examples: Modeling and Identification 98 2.8 Summary 109 2.9 Appendix 112 References 116
3 Estimation Theory 117 3.1 Overview 117 3.2 Map Relating Measurement and the Parameter 119 3.2.1 Mathematical Model 119 3.2.2 Probabilistic Model 120 3.2.3 Likelihood Function 122 3.3 Properties of Estimators 123 3.3.1 Indirect Approach to Estimation 123 3.3.2 Unbiasedness of the Estimator 124 3.3.3 Variance of the Estimator: Scalar Case 125 3.3.4 Median of the Data Samples 125 3.3.5 Small and Large Sample Properties 126 3.3.6 Large Sample Properties 126 3.4 Cramér-Rao Inequality 127 3.4.1 Scalar Case: and Scalars while y is a Nx1 Vector 128 3.4.2 Vector Case: is a Mx1 Vector 129 3.4.3 Illustrative Examples: Cramér-Rao Inequality 130 3.4.4 Fisher Information 138 3.5 Maximum Likelihood Estimation 139 3.5.1 Formulation of Maximum Likelihood Estimation 139 3.5.2 Illustrative Examples: Maximum Likelihood Estimation of Mean or Median 141 3.5.3 Illustrative Examples: Maximum Likelihood Estimation of Mean and Variance 148 3.5.4 Properties of Maximum Likelihood Estimator 154 3.6 Summary 154 3.7 Appendix: Cauchy-Schwarz Inequality 157 3.8 Appendix: Cramér-Rao Lower Bound 157 3.8.1 Scalar Case 158 3.8.2 Vector Case 160 3.9 Appendix: Fisher Information: Cauchy PDF 161 3.10 Appendix: Fisher Information for i.i.d. PDF 161 3.11 Appendix: Projection Operator 162 3.12 Appendix: Fisher Information: Part Gauss-Part Laplace 164 Problem 165 References 165 Further Readings 165
4 Estimation of Random Parameter 167 4.1 Overview 167 4.2 Minimum Mean-Squares Estimator (MMSE): Scalar Case 167 4.2.1 Conditional Mean: Optimal Estimator 168 4.3 MMSE Estimator: Vector Case 169 4.3.1 Covariance of the Estimation Error 171 4.3.2 Conditional Expectation and Its Properties 172 4.4 Expression for Conditional Mean 172 4.4.1 MMSE Estimator: Gaussian Random Variables 173 4.4.2 MMSE Estimator: Unknown is Gaussian and Measurement Non-Gaussian 174 4.4.3 The MMSE Estimator for Gaussian PDF 176 4.4.4 Illustrative Examples 178 4.5 Summary 183 4.6 Appendix: Non-Gaussian Measurement PDF 184 4.6.1 Expression for Conditional Expectation 184 4.6.2 Conditional Expectation for Gaussian x and Non-Gaussian y 185 References 188 Further Readings 188
5 Linear Least-Squares Estimation 189 5.1 Overview 189 5.2 Linear Least-Squares Approach 189 5.2.1 Linear Algebraic Model 190 5.2.2 Least-Squares Method 190 5.2.3 Objective Function 191 5.2.4 Optimal Least-Squares Estimate: Normal Equation 193 5.2.5 Geometric Interpretation of Least-Squares Estimate: Orthogonality Principle 194 5.3 Performance of the Least-Squares Estimator 195 5.3.1 Unbiasedness of the Least-Squares Estimate 195 5.3.2 Covariance of the Estimation Error 197 5.3.3 Properties of the Residual 198 5.3.4 Model and Systemic Errors: Bias and the Variance Errors 201 5.4 Illustrative Examples 205 5.4.1 Non-Zero-Mean Measurement Noise 209 5.5 Cramér-Rao Lower Bound 209 5.6 Maximum Likelihood Estimation 210 5.6.1 Illustrative Examples 210 5.7 Least-Squares Solution of Under-Determined System 212 5.8 Singular Value Decomposition 213 5.8.1 Illustrative Example: Singular and Eigenvalues of Square Matrices 215 5.8.2 Computation of Least-Squares Estimate Using the SVD 216 5.9 Summary 218 5.10 Appendix: Properties of the Pseudo-Inverse and the Projection Operator 221 5.10.1 Over-Determined System 221 5.10.2 Under-Determined System 222 5.11 Appendix: Positive Definite Matrices 222 5.12 Appendix: Singular Value Decomposition of a Matrix 223 5.12.1 SVD and Eigendecompositions 225 5.12.2 Matrix Norms 226 5.12.3 Least Squares Estimate for Any Arbitrary Data Matrix H 226 5.12.4 Pseudo-Inverse of Any Arbitrary Matrix 228 5.12.5 Bounds on the Residual and the Covariance of the Estimation Error 228 5.13 Appendix: Least-Squares Solution for Under-Determined System 228 5.14 Appendix: Computation of Least-Squares Estimate Using the SVD 229 References 229 Further Readings 230
6 Kalman Filter 231 6.1 Overview 231 6.2 Mathematical Model of the System 233 6.2.1 Model of the Plant 233 6.2.2 Model of the Disturbance and Measurement Noise 233 6.2.3 Integrated Model of the System 234 6.2.4 Expression for the Output of the Integrated System 235 6.2.5 Linear Regression Model 235 6.2.6 Observability 236 6.3 Internal Model Principle 236 6.3.1 Controller Design Using the Internal Model Principle 237 6.3.2 Internal Model (IM) of a Signal 237 6.3.3 Controller Design 238 6.3.4 Illustrative Example: Controller Design 241 6.4 Duality Between Controller and an Estimator Design 244 6.4.1 Estimation Problem 244 6.4.2 Estimator Design 244 6.5 Observer: Estimator for the States of a System 246 6.5.1 Problem Formulation 246 6.5.2 The Internal Model of the Output 246 6.5.3 Illustrative Example: Observer with Internal Model Structure 247 6.6 Kalman Filter: Estimator of the States of a Stochastic System 250 6.6.1 Objectives of the Kalman Filter 251 6.6.2 Necessary Structure of the Kalman Filter 252 6.6.3 Internal Model of a Random Process 252 6.6.4 Illustrative Example: Role of an Internal Model 254 6.6.5 Model of the Kalman Filter 255 6.6.6 Optimal Kalman Filter 256 6.6.7 Optimal Scalar Kalman Filter 256 6.6.8 Optimal Kalman Gain 260 6.6.9 Comparison of the Kalman Filters: Integrated and Plant Models 260 6.6.10 Steady-State Kalman Filter 261 6.6.11 Internal Model and Statistical Approaches 261 6.6.12 Optimal Information Fusion 262 6.6.13 Role of the Ratio of Variances 262 6.6.14 Fusion of Information from the Model and the Measurement 263 6.6.15 Illustrative Example: Fusion of Information 264 6.6.16 Orthogonal Properties of the Kalman Filter 266 6.6.17 Ensemble and Time Averages 266 6.6.18 Illustrative Example: Orthogonality Properties of the Kalman Filter 267 6.7 The Residual of the Kalman Filter with Model Mismatch and Non-Optimal Gain 267 6.7.1 State Estimation Error with Model Mismatch 268 6.7.2 Illustrative Example: Residual with Model Mismatch and Non-Optimal Gain 271 6.8 Summary 274 6.9 Appendix: Estimation Error Covariance and the Kalman Gain 277 6.10 Appendix: The Role of the Ratio of Plant and the Measurement Noise Variances 279 6.11 Appendix: Orthogonal Properties of the Kalman Filter 279 6.11.1 Span of a Matrix 284 6.11.2 Transfer Function Formulae 284 6.12 Appendix: Kalman Filter Residual with Model Mismatch 285 References 287
7 System Identification 289 7.1 Overview 289 7.2 System Model 291 7.2.1 State-Space Model 291 7.2.2 Assumptions 292 7.2.3 Frequency-Domain Model 292 7.2.4 Input Signal for System Identification 293 7.3 Kalman Filter-Based Identification Model Structure 297 7.3.1 Expression for the Kalman Filter Residual 298 7.3.2 Direct Form or Colored Noise Form 300 7.3.3 Illustrative Examples: Process, Predictor, and Innovation Forms 302 7.3.4 Models for System Identification 304 7.3.5 Identification Methods 305 7.4 Least-Squares Method 307 7.4.1 Linear Matrix Model: Batch Processing 308 7.4.2 The Least-Squares Estimate 308 7.4.3 Quality of the Least-Squares Estimate 312 7.4.4 Illustrative Example of the Least-Squares Identification 313 7.4.5 Computation of the Estimates Using Singular Value Decomposition 315 7.4.6 Recursive Least-Squares Identification 316 7.5 High-Order Least-Squares Method 318 7.5.1 Justification for a High-Order Model 318 7.5.2 Derivation of a Reduced-Order Model 323 7.5.3 Formulation of Model Reduction 324 7.5.4 Model Order Selection 324 7.5.5 Illustrative Example of High-Order Least-Squares Method 325 7.5.6 Performance of the High-Order Least-Squares Scheme 326 7.6 The Prediction Error Method 327 7.6.1 Residual Model 327 7.6.2 Objective Function 327 7.6.3 Iterative Prediction Algorithm 328 7.6.4 Family of Prediction Error Algorithms 330 7.7 Comparison of High-Order Least-Squares and the Prediction Error Methods 330 7.7.1 Illustrative Example: LS, High Order LS, and PEM 331 7.8 Subspace Identification Method 334 7.8.1 Identification Model: Predictor Form of the Kalman Filter 334 7.9 Summary 340 7.10 Appendix: Performance of the Least-Squares Approach 347 7.10.1 Correlated Error 347 7.10.2 Uncorrelated Error 347 7.10.3 Correlation of the Error and the Data Matrix 348 7.10.4 Residual Analysis 350 7.11 Appendix: Frequency-Weighted Model Order Reduction 352 7.11.1 Implementation of the Frequency-Weighted Estimator 354 7.11.2 Selection of the Frequencies 354 References 354
8 Closed Loop Identification 357 8.1 Overview 357 8.1.1 Kalman Filter-Based Identification Model 358 8.1.2 Closed-Loop Identification Approaches 358 8.2 Closed-Loop System 359 8.2.1 Two-Stage and Direct Approaches 359 8.3 Model of the Single Input Multi-Output System 360 8.3.1 State-Space Model of the Subsystem 360 8.3.2 State-Space Model of the Overall System 361 8.3.3 Transfer Function Model 361 8.3.4 Illustrative Example: Closed-Loop Sensor Network 362 8.4 Kalman Filter-Based Identification Model 364 8.4.1 State-Space Model of the Kalman Filter 364 8.4.2 Residual Model 365 8.4.3 The Identification Model 366 8.5 Closed-Loop Identification Schemes 366 8.5.1 The High-Order Least-Squares Method 366 8.6 Second Stage of the Two-Stage Identification 372 8.7 Evaluation on a Simulated Closed-Loop Sensor Net 372 8.7.1 The Performance of the Stage I Identification Scheme 372 8.7.2 The Performance of the Stage II Identification Scheme 373 8.8 Summary 374 References 377
9 Fault Diagnosis 379 9.1 Overview 379 9.1.1 Identification for Fault Diagnosis 380 9.1.2 Residual Generation 380 9.1.3 Fault Detection 380 9.1.4 Fault Isolation 381 9.2 Mathematical Model of the System 381 9.2.1 Linear Regression Model: Nominal System 382 9.3 Model of the Kalman Filter 382 9.4 Modeling of Faults 383 9.4.1 Linear Regression Model 383 9.5 Diagnostic Parameters and the Feature Vector 384 9.6 Illustrative Example 386 9.6.1 Mathematical Model 386 9.6.2 Feature Vector and the Influence Vectors 387 9.7 Residual of the Kalman Filter 388 9.7.1 Diagnostic Model 389 9.7.2 Key Properties of the Residual 389 9.7.3 The Role of the Kalman Filter in Fault Diagnosis 389 9.8 Fault Diagnosis 390 9.9 Fault Detection: Bayes Decision Strategy 390 9.9.1 Pattern Classification Problem: Fault Detection 391 9.9.2 Generalized Likelihood Ratio Test 392 9.9.3 Maximum Likelihood Estimate 392 9.9.4 Decision Strategy 394 9.9.5 Other Test Statistics 395 9.10 Evaluation of Detection Strategy on Simulated System 396 9.11 Formulation of Fault Isolation Problem 396 9.11.1 Pattern Classification Problem: Fault Isolation 397 9.11.2 Formulation of the Fault Isolation Scheme 398 9.11.3 Fault Isolation Tasks 399 9.12 Estimation of the Influence Vectors and Additive Fault 399 9.12.1 Parameter-Perturbed Experiment 400 9.12.2 Least-Squares Estimates 401 9.13 Fault Isolation Scheme 401 9.13.1 Sequential Fault Isolation Scheme 402 9.13.2 Isolation of the Fault 403 9.14 Isolation of a Single Fault 403 9.14.1 Fault Discriminant Function 403 9.14.2 Performance of Fault Isolation Scheme 404 9.14.3 Performance Issues and Guidelines 405 9.15 Emulators for Offline Identification 406 9.15.1 Examples of Emulators 407 9.15.2 Emulators for Multiple Input-Multiple-Output System 407 9.15.3 Role of an Emulator 408 9.15.4 Criteria for Selection 409 9.16 Illustrative Example 409 9.16.1 Mathematical Model 409 9.16.2 Selection of Emulators 410 9.16.3 Transfer Function Model 410 9.16.4 Role of the Static Emulators 411 9.16.5 Role of the Dynamic Emulator 412 9.17 Overview of Fault Diagnosis Scheme 414 9.18 Evaluation on a Simulated Example 414 9.18.1 The Kalman Filter 414 9.18.2 The Kalman Filter Residual and Its Auto-correlation 414 9.18.3 Estimation of the Influence Vectors 416 9.18.4 Fault Size Estimation 416 9.18.5 Fault Isolation 417 9.19 Summary 418 9.20 Appendix: Bayesian Multiple Composite Hypotheses Testing Problem 422 9.21 Appendix: Discriminant Function for Fault Isolation 423 9.22 Appendix: Log-Likelihood Ratio for a Sinusoid and a Constant 424 9.22.1 Determination of af, bf, and cf 424 9.22.2 Determination of the Optimal Cost 425 References 426
10 Modeling and Identification of Physical Systems 427 10.1 Overview 427 10.2 Magnetic Levitation System 427 10.2.1 Mathematic Model of a Magnetic Levitation System 427 10.2.2 Linearized Model 429 10.2.3 Discrete-Time Equivalent of Continuous-Time Models 430 10.2.4 Identification Approach 432 10.2.5 Identification of the Magnetic Levitation System 433 10.3 Two-Tank Process Control System 436 10.3.1 Model of the Two-Tank System 436 10.3.2 Identification of the Closed-Loop Two-Tank System 438 10.4 Position Control System 442 10.4.1 Experimental Setup 442 10.4.2 Mathematical Model of the Position Control System 442 10.5 Summary 444 References 446
11 Fault Diagnosis of Physical Systems 447 11.1 Overview 447 11.2 Two-Tank Physical Process Control System 448 11.2.1 Objective 448 11.2.2 Identification of the Physical System 448 11.2.3 Fault Detection 449 11.2.4 Fault Isolation 451 11.3 Position Control System 452 11.3.1 The Objective 452 11.3.2 Identification of the Physical System 452 11.3.3 Detection of Fault 455 11.3.4 Fault Isolation 455 11.3.5 Fault Isolability 455 11.4 Summary 457 References 457
12 Fault Diagnosis of a Sensor Network 459 12.1 Overview 459 12.2 Problem Formulation 461 12.3 Fault Diagnosis Using a Bank of Kalman Filters 461 12.4 Kalman Filter for Pairs of Measurements 462 12.5 Kalman Filter for the Reference Input-Measurement Pair 463 12.6 Kalman Filter Residual: A Model Mismatch Indicator 463 12.6.1 Residual for a Pair of Measurements 463 12.7 Bayes Decision Strategy 464 12.8 Truth Table of Binary Decisions 465 12.9 Illustrative Example 467 12.10 Evaluation on a Physical Process Control System 469 12.11 Fault Detection and Isolation 470 12.11.1 Comparison with Other Approaches 473 12.12 Summary 474 12.13 Appendix 475 12.13.1 Map Relating yi(z) to yj(z) 475 12.13.2 Map Relating r(z) to yj(z) 476 References 477
13 Soft Sensor 479 13.1 Review 479 13.1.1 Benefits of a Soft Sensor 479 13.1.2 Kalman Filter 479 13.1.3 Reliable Identification of the System 480 13.1.4 Robust Controller Design 480 13.1.5 Fault Tolerant System 481 13.2 Mathematical Formulation 481 13.2.1 Transfer Function Model 482 13.2.2 Uncertainty Model 482 13.3 Identification of the System 483 13.3.1 Perturbed Parameter Experiment 484 13.3.2 Least-Squares Estimation 484 13.3.3 Selection of the Model Order 485 13.3.4 Identified Nominal Model 485 13.3.5 Illustrative Example 486 13.4 Model of the Kalman Filter 488 13.4.1 Role of the Kalman Filter 488 13.4.2 Model of the Kalman Filter 489 13.4.3 Augmented Model of the Plant and the Kalman Filter 489 13.5 Robust Controller Design 489 13.5.1 Objective 489 13.5.2 Augmented Model 490 13.5.3 Closed-Loop Performance and Stability 490 13.5.4 Uncertainty Model 491 13.5.5 Mixed-sensitivity Optimization Problem 492 13.5.6 State-Space Model of the Robust Control System 493 13.6 High Performance and Fault Tolerant Control System 494 13.6.1 Residual and Model-mismatch 494 13.6.2 Bayes Decision Strategy 495 13.6.3 High Performance Control System 495 13.6.4 Fault-Tolerant Control System 496 13.7 Evaluation on a Simulated System: Soft Sensor 496 13.7.1 Offline Identification 497 13.7.2 Identified Model of the Plant 497 13.7.3 Mixed-sensitivity Optimization Problem 498 13.7.4 Performance and Robustness 499 13.7.5 Status Monitoring 499 13.8 Evaluation on a Physical Velocity Control System 500 13.9 Conclusions 502 13.10 Summary 503 References 507
Index 509