Edition: 1
Author: Gérard Dreyfus
ISBN: 3540229809, 9783540229803
Publisher: Springer
Publication year: 2005
Number of pages: 516
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 6 MB
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
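The feedforward networks with supervised training that the abstract alludes to (covered in Section 1.1.4 of the contents below) can be illustrated with a minimal sketch. The following is a generic single-hidden-layer network trained by batch gradient descent on a toy regression task; all names and settings are illustrative and are not taken from the book.

```python
import numpy as np

# Toy static-modeling task: fit y = sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

n_hidden = 8
W1 = rng.normal(0.0, 0.5, (1, n_hidden))   # input-to-hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))   # hidden-to-output weights
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    # Forward pass: tanh hidden layer, linear output neuron.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # Backward pass: gradient of the least-squares cost, averaged
    # over the training examples (backpropagation through tanh).
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"final training MSE: {mse:.4f}")
```

The book treats this training step as an optimization problem (Section 1.2.2.4) and devotes Chapter 2 to the surrounding methodology: input selection, regularization, and model selection, none of which this sketch attempts.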
NEURAL NETWORKS: METHODOLOGY AND APPLICATIONS......Page 1
Half-title......Page 2
Title Page......Page 3
Copyright Page......Page 4
Preface......Page 6
Reading Guide......Page 7
Contents......Page 10
List of Contributors......Page 18
G. Dreyfus......Page 20
1.1 Neural Networks: Definitions and Properties......Page 21
1.1.1.1 Feedforward Neural Networks......Page 22
1.1.1.3 Direct Terms......Page 25
1.1.1.4 Recurrent (Feedback) Neural Networks......Page 27
1.1.1.5 Summary......Page 30
1.1.2.2 Unsupervised Training......Page 31
1.1.3.2 Some Neural Networks Are Parsimonious......Page 32
1.1.3.3 An Elementary Example......Page 33
1.1.4 Feedforward Neural Networks with Supervised Training for Static Modeling and Discrimination (Classification)......Page 34
1.1.4.1 Static Modeling......Page 36
1.1.4.3 Classification (Discrimination)......Page 39
1.1.5 Feedforward Neural Networks with Unsupervised Training for Data Analysis and Visualization......Page 40
1.1.6.1 Semiphysical Modeling......Page 41
1.1.7 Recurrent Neural Networks Without Training for Combinatorial Optimization......Page 42
1.2.1 When to Use Neural Networks?......Page 43
1.2.2 How to Design Neural Networks?......Page 44
1.2.2.1 Relevant Inputs......Page 45
1.2.2.3 The Number of Hidden Neurons......Page 46
1.2.2.4 The Training of Feedforward Neural Networks: An Optimization Problem......Page 49
1.2.2.5 Conclusion......Page 50
1.3 Feedforward Neural Networks and Discrimination (Classification)......Page 51
1.3.2 When Is a Statistical Classifier such as a Neural Network Appropriate?......Page 52
1.3.3 Probabilistic Classification and Bayes Formula......Page 55
1.3.4 Bayes Decision Rule......Page 60
1.3.5.1 Two-Class Problems......Page 62
1.3.5.2 C-Class Problems......Page 63
1.3.5.3 Classifier Design Methodology......Page 68
1.4.1 Introduction......Page 69
1.4.2 An Application in Pattern Recognition: The Automatic Reading of Zip Codes......Page 70
1.4.3 An Application in Nondestructive Testing: Defect Detection by Eddy Currents......Page 74
1.4.4 An Application in Forecasting: The Estimation of the Probability of Election to the French Parliament......Page 75
1.4.5 An Application in Data Mining: Information Filtering......Page 76
1.4.5.1 Input Selection......Page 77
1.4.5.3 Filter Design and Training......Page 79
1.4.6 An Application in Bioengineering: Quantitative Structure-Activity Relation Prediction for Organic Molecules......Page 81
1.4.7 An Application in Formulation: The Prediction of the Liquidus Temperatures of Industrial Glasses......Page 83
1.4.8 An Application to the Modeling of an Industrial Process: The Modeling of Spot Welding......Page 84
1.4.9 An Application in Robotics: The Modeling of the Hydraulic Actuator of a Robot Arm......Page 87
1.4.10 An Application of Semiphysical Modeling to a Manufacturing Process......Page 89
1.4.11.1 Prevision of Ozone Pollution Peaks......Page 90
1.4.11.2 Modeling the Rainfall-Water Height Relation in an Urban Catchment......Page 92
1.4.12 An Application in Mobile Robotics......Page 94
1.5 Conclusion......Page 95
1.6.1.1 Neurons with Parameterized Inputs......Page 96
1.6.1.2 Neurons with Parameterized Nonlinearities......Page 97
1.6.2 The Ho and Kashyap Algorithm......Page 98
References......Page 99
2.1.1 From Black-Box Models to Knowledge-Based Models......Page 104
2.1.3 How to Deal With Uncertainty? The Statistical Context of Modeling and Machine Learning......Page 105
2.2.1 What is a Random Variable?......Page 106
2.2.1.2 Joint Distributions......Page 107
2.2.3 Unbiased Estimator of a Parameter of a Distribution......Page 108
2.2.4 Variance of a Random Variable......Page 109
2.2.4.2 Unbiased Estimator of the Variance of a Random Variable......Page 110
2.3 Static Black-Box Modeling......Page 111
2.3.1 Regression......Page 112
2.3.2 Introduction to the Design Methodology......Page 113
2.4.1 Reduction of the Dimension of Representation Space......Page 114
2.4.2.1 Input Selection Strategies......Page 115
2.4.2.2 Comparison Criteria......Page 116
2.4.2.3 Variable Selection by the Probe Feature Method......Page 117
2.4.2.4 Relation Between Fisher’s Test and the Probe Feature Method......Page 121
2.5 Estimation of the Parameters (Training) of a Static Model......Page 122
2.5.1.1 Nonadaptive (Batch) Training of Models that are Linear with Respect to Their Parameters......Page 125
2.5.1.2 Adaptive (On-Line) Training of Models that are Linear with Respect to Their Parameters: The Least Mean Squares Algorithm......Page 128
2.5.2.1 Input Normalization......Page 129
2.5.2.2 Computation of the Gradient of the Cost Function......Page 130
2.5.2.3 Updating the Parameters as a Function of the Gradient of the Cost Function......Page 134
2.5.2.4 Summary......Page 139
2.5.4 Training with Regularization......Page 140
2.5.4.1 Early Stopping......Page 141
2.5.4.2 Regularization by Weight Decay......Page 144
2.5.5 Conclusion on the Training of Static Models......Page 149
2.6 Model Selection......Page 150
2.6.1.1 Introduction......Page 152
2.6.2.1 Introduction......Page 153
2.6.2.2 Cross-Validation......Page 154
2.6.2.4 Leave-One-Out......Page 155
2.6.3.1 Local Approximation of the Least Squares Method......Page 156
2.6.3.2 The Effect of Withdrawing an Example on the Model......Page 157
2.6.4.1 Model Selection Within a Family of Models of Given Complexity: Global Criteria......Page 161
2.6.4.2 Selection of the Best Architecture: Local Criteria (LOCL Method)......Page 164
2.6.4.4 Experimental Planning......Page 167
2.7 Dynamic Black-Box Modeling......Page 168
2.7.1 State-Space Representation and Input-Output Representation......Page 169
2.7.2.1 Input-Output Representations......Page 170
2.7.2.2 Illustration......Page 174
2.7.2.5 State-Space Representations......Page 177
2.7.3 Nonadaptive Training of Dynamic Models in Canonical Form......Page 181
2.7.3.1 Nonadaptive (Batch) Training of Feedforward Input-Output Models: Directed (Teacher-Forced) Training......Page 182
2.7.3.2 Nonadaptive (Batch) Training of Recurrent Input-Output Models: Semidirected Training......Page 183
2.7.3.4 Nonadaptive (Batch) Training of Feedforward State-Space Models: Directed Training......Page 185
2.7.3.5 Adaptive (On-Line) Training of Recurrent Neural Networks......Page 186
2.7.4 What to Do in Practice? A Real Example of Dynamic Black-Box Modeling......Page 187
2.7.4.1 Input-Output Model......Page 188
2.7.4.2 State-Space Model......Page 189
2.7.5.1 Definition......Page 190
2.7.5.2 An Example of Derivation of a Canonical Form......Page 191
2.8.1.1 From Black-Box Modeling to Knowledge-Based Modeling......Page 194
2.8.1.2 Design and Training of a Dynamic Semiphysical Model......Page 195
2.8.1.3 Discretization of a Knowledge-Based Model......Page 200
2.9 Conclusion: What Tools?......Page 205
2.10.1.1 Design......Page 206
2.10.1.2 Example......Page 207
2.10.3.2 Student Distribution......Page 208
2.10.4.1 Fisher’s Test......Page 209
2.10.4.2 Computation of the Cumulative Distribution Function of the Rank of the Probe Feature......Page 211
2.10.5.2 The Levenberg–Marquardt Algorithm......Page 212
2.10.6 Line Search Methods for the Training Rate......Page 214
2.10.7 Kullback–Leibler Divergence Between Two Gaussians......Page 215
2.10.8 Computation of the Leverages......Page 216
References......Page 218
3.1 Introduction......Page 222
3.2.1 Preprocessing of Inputs......Page 223
3.2.2 Preprocessing Outputs for Supervised Classification......Page 224
3.2.3 Preprocessing Outputs for Regression......Page 225
3.4.1 Principle of PCA......Page 226
3.5 Curvilinear Component Analysis......Page 230
3.5.1 Formal Presentation of Curvilinear Component Analysis......Page 232
3.5.2 Curvilinear Component Analysis Algorithm......Page 234
3.5.3 Implementation of Curvilinear Component Analysis......Page 235
3.5.4 Quality of the Projection......Page 236
3.5.5 Difficulties of Curvilinear Component Analysis......Page 237
3.5.6 Applied to Spectrometry......Page 238
3.6 The Bootstrap and Neural Networks......Page 239
3.6.1 Principle of the Bootstrap......Page 241
3.6.2 Bootstrap Estimation of the Standard Deviation......Page 242
3.6.3 The Generalization Error Estimated by the Bootstrap......Page 243
3.6.4 The NeMo Method......Page 244
3.6.5 Testing the NeMo Method......Page 246
3.6.6 Conclusions......Page 248
References......Page 249
M. Samuelides......Page 250
4.1.1 Formal Definition of a Controlled Dynamical System by State Equation......Page 251
4.1.2 An Example of Discrete Dynamical System......Page 252
4.1.3 Example: The Linear Oscillator......Page 253
4.1.4 Example: The Inverted Pendulum......Page 254
4.1.6 Markov Chain as a Model for Discrete-Time Dynamical Systems with Noise......Page 255
4.1.7 Linear Gaussian Model as an Example of a Continuous-State Dynamical System with Noise......Page 258
4.1.8 Auto-Regressive Models......Page 259
4.2.1.1 Outline of the Algorithm......Page 261
4.2.1.2 Example of Application......Page 262
4.2.1.3 Statistical Background......Page 263
4.2.1.4 Application to a Linear System: The Harmonic Oscillator......Page 264
4.2.2.2 Network with Delay (NARX Model)......Page 265
4.3.1 Recursive Estimation of Empirical Mean......Page 269
4.3.2 Recursive Estimation of Linear Regression......Page 271
4.3.3 Recursive Identification of an AR Model......Page 272
4.3.4 General Recursive Prediction Error Method (RPEM)......Page 274
4.3.5 Application to the Linear Identification of a Controlled Dynamical System......Page 275
4.3.5.1 Addressing Measurement Inaccuracy......Page 276
4.4.1.1 Observing Dynamic Linear Systems......Page 277
4.4.1.2 Filtering State Noise and Reconstructing the State Trajectory......Page 278
4.4.1.3 Variational Approach of Optimal Filtering......Page 279
4.4.2.1 Definition of the Kalman Filter for a Linear Stationary System......Page 280
4.4.2.3 Kalman Filtering for a Time-Varying Linear System......Page 283
4.4.3.1 Case of Nonlinear Systems......Page 284
4.4.3.2 Using Extended Kalman Filter for Parametric Identification......Page 285
4.4.3.3 Adaptive Training of Neural Networks Using Kalman Filtering......Page 286
4.5.2 Neural Simulator of a Closed Loop Controlled Dynamical System......Page 289
4.5.3.1 The Elman Network......Page 291
4.5.3.2 The Hopfield Network......Page 292
4.5.4 Canonical Form for Recurrent Networks......Page 294
4.6 Learning for Recurrent Networks......Page 295
4.6.2 Unfolding of the Canonical Form and Backpropagation Through Time (BPTT)......Page 296
4.6.3 Real-Time Learning Algorithms for Recurrent Network (RTRL)......Page 300
4.6.4 Application of Recurrent Networks to Measured Controlled Dynamical System Identification......Page 301
4.7.1 Computation of the Kalman Gain and Covariance Propagation......Page 302
4.7.2 The Delay Distribution Is Crucial for Recurrent Network Dynamics......Page 304
References......Page 306
M. Samuelides......Page 308
5.1.1 Basic Model of Closed-Loop Control......Page 309
5.1.2 Controllability......Page 310
5.1.3 Stability of Controlled Dynamical Systems......Page 311
5.2.1 Straightforward Inversion......Page 313
5.2.1.1 Illustrative Example: The Inverted Pendulum......Page 314
5.2.2 Model Reference Adaptive Control......Page 316
5.2.3 Internal Model Based Control......Page 318
5.2.4.1 Using a Recurrent Neural Network to Control a Partially Observed Dynamical System......Page 320
5.3.1 Example of a Deterministic Problem in a Discrete State Space......Page 322
5.3.2 Example of a Markov Decision Problem......Page 324
5.3.3.1 Controlled Markov Chain......Page 326
5.3.3.3 Shortest Stochastic Path Problem......Page 327
5.3.3.4 Infinite Horizon Problem with Discounted Cost......Page 328
5.3.4.1 Bellman’s Optimality Principle......Page 329
5.3.4.2 Dynamic Programming Algorithm under Finite Horizon Assumption......Page 330
5.3.5.1 Bellman’s Optimality Principle......Page 331
5.3.5.3 Value-Function Iteration Method......Page 332
5.4.1 Policy Evaluation Using Monte Carlo Method and Reinforcement Learning......Page 333
5.4.2.1 TD(1) Algorithm and Temporal Difference Definition......Page 335
5.4.2.2 TD(λ) Algorithm and Eligibility Trace Method......Page 336
5.4.2.3 Back to Actor-Critics Methodology and Optimistic Iteration of Policy......Page 337
5.4.3.1 Description of the Q-Learning Algorithm......Page 338
5.4.3.2 The Choice of an Exploration Policy......Page 339
5.4.3.3 Application of Q-Learning to Partially Observed Problems......Page 340
5.4.4.1 Approximate Reinforcement Learning......Page 341
5.4.4.3 Q-Learning in a Continuous Space......Page 343
References......Page 344
M. B. Gordon......Page 348
6.1 Training for Pattern Discrimination......Page 349
6.1.1 Training and Generalization Errors......Page 350
6.1.2 Discriminant Surfaces......Page 351
6.2 Linear Separation: The Perceptron......Page 353
6.3.1 Separating Hyperplane......Page 355
6.3.2 Aligned Field......Page 356
6.3.3 Stability of an Example......Page 357
6.4.1 Perceptron Algorithm......Page 358
6.4.2 Convergence Theorem for the Perceptron Algorithm......Page 360
6.4.3 Training by Minimization of a Cost Function......Page 361
6.4.4 Cost Functions for the Perceptron......Page 363
6.4.5 Example of Application: The Classification of Sonar Signals......Page 370
6.4.7 An Interpretation of Training in Terms of Forces......Page 372
6.5.1 Spherical Perceptron......Page 374
6.5.2 Constructive Heuristics......Page 375
6.5.2.1 Constructive Algorithm NetLS......Page 377
6.5.3 Support Vector Machines (SVM)......Page 378
6.6 Problems with More than Two Classes......Page 381
6.7.1 The Probabilistic Framework......Page 383
6.7.2 A Probabilistic Interpretation of the Perceptron Cost Functions......Page 385
6.7.3 The Optimal Bayesian Classifier......Page 387
6.7.4 Vapnik’s Statistical Learning Theory......Page 388
6.7.4.1 The Vapnik–Chervonenkis Dimension......Page 390
6.7.5 Prediction of the Typical Behavior......Page 391
6.7.5.1 The Typical Capacity of the Perceptron......Page 392
6.8.1 Bounds to the Number of Iterations of the Perceptron Algorithm......Page 393
6.8.2 Number of Linearly Separable Dichotomies......Page 394
References......Page 395
F. Badran, M. Yacoub, and S. Thiria......Page 398
7.1 Notations and Definitions......Page 400
7.2.1 Outline of the k-Means Algorithm......Page 402
7.2.2 Stochastic Version of k-Means......Page 405
7.2.3 Probabilistic Interpretation of k-Means......Page 407
7.3.1 Self-Organizing Maps......Page 411
7.3.2 The Batch Optimization Algorithm for Topological Maps......Page 416
7.3.3 Kohonen’s Algorithm......Page 423
7.3.5 Neural Architecture and Topological Maps......Page 425
7.3.6 Architecture and Adaptive Topological Maps......Page 427
7.3.7 Interpretation of Topological Self-Organization......Page 428
7.3.8 Probabilistic Topological Map......Page 431
7.4 Classification and Topological Maps......Page 434
7.4.1 Labeling the Map Using Expert Data......Page 435
7.4.2 Searching a Partition that Is Appropriate to the Classes......Page 436
7.4.3 Labeling and Classification......Page 439
7.5 Applications......Page 440
7.5.1 A Satellite Remote Sensing Application......Page 441
7.5.1.2 The Data......Page 442
7.5.1.3 The Role of Encoding......Page 445
7.5.1.4 Quantization Using PRSOM......Page 446
7.5.2 Classification and PRSOM......Page 449
7.5.3.1 Information Coding......Page 458
7.5.3.2 Specific Features of Learning Process......Page 459
References......Page 460
8.1 Modelling an Optimisation Problem......Page 462
8.1.1 Examples......Page 463
8.1.2 The Travelling Salesman Problem (TSP)......Page 464
8.2 Complexity of an Optimization Problem......Page 465
8.3 Classical Approaches to Combinatorial Problems......Page 466
8.4 Introduction to Metaheuristics......Page 467
8.5 Techniques Derived from Statistical Physics......Page 468
8.5.1.1 Simulated Annealing......Page 469
8.5.1.2 Rescaled Simulated Annealing......Page 472
8.5.2.1 Microcanonical Annealing......Page 475
8.5.3 Example: Travelling Salesman Problem......Page 476
8.5.3.2 Simulated Annealing......Page 477
8.5.3.3 Rescaled Simulated Annealing......Page 478
8.5.3.4 Microcanonical Annealing......Page 479
8.6.1 Formal Neural Networks for Optimization......Page 482
8.6.2 Architectures of Neural Networks for Optimisation......Page 484
8.6.3 Energy Functions for Combinatorial Optimisation......Page 485
8.6.4.1 Binary Hopfield Neural Networks......Page 486
8.6.4.2 Analog Hopfield Neural Networks......Page 487
8.6.4.3 Application of Hopfield Neural Networks to Optimization......Page 489
8.6.4.5 Cost Function......Page 490
8.6.4.6 Constraints......Page 491
8.6.4.7 Energy of the Neural Network......Page 492
8.6.4.8 Limitations of Hopfield Neural Networks......Page 493
8.6.5.2 Analog Hopfield Networks with Annealing......Page 494
8.6.5.4 Boltzmann Machine......Page 495
8.6.5.5 Mean Field Annealing......Page 496
8.6.5.6 Pulsed Neural Networks......Page 497
8.6.5.7 High-Order Neural Networks......Page 498
8.6.5.8 Lagrangian Neural Networks......Page 499
8.6.5.10 Rangarajan Neural Networks......Page 500
8.6.5.11 Mixed-Penalty Neural Networks......Page 502
8.8 Genetic Algorithms......Page 503
8.10.1 The Choice of a Technique......Page 504
References......Page 505
About the Authors......Page 510
Index......Page 512