Edition: 4th ed.
Author: Adrian A. Hopgood
ISBN: 0367336162, 9780367336165
Publisher: CRC Press
Publication year: 2021
Pages: 488 [515]
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 14 MB
If you would like the file for the book Intelligent Systems for Engineers and Scientists: A Practical Guide to Artificial Intelligence converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.
Please note that Intelligent Systems for Engineers and Scientists: A Practical Guide to Artificial Intelligence is the original English-language edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.
The thoroughly updated fourth edition features new coverage of deep learning algorithms. Using clear and concise language, it explains the principles of artificial intelligence (AI) and its practical applications. It gives engineers and scientists a solid grounding in AI so that they can implement systems in their own domain of interest.
Table of contents:
Cover · Half Title · Title Page · Copyright Page · Dedication · Table of Contents · Preface · Acknowledgements · Author
Chapter 1: Introduction 1.1 Artificial Intelligence and Intelligent Systems 1.2 A Spectrum of Intelligent Behavior 1.3 Knowledge-Based Systems (KBSs) 1.4 The Knowledge Base 1.4.1 Rules and Facts 1.4.2 Inference Networks 1.4.3 Semantic Networks 1.5 Deduction, Abduction, and Induction 1.6 The Inference Engine 1.7 Declarative and Procedural Programming 1.8 Expert Systems 1.9 Knowledge Acquisition 1.10 Search 1.11 Computational Intelligence (CI) 1.12 Integration with Other Software Further Reading
Chapter 2: Rule-Based Systems 2.1 Rules and Facts 2.2 A Rule-Based System for Boiler Control 2.3 Rule Examination and Rule Firing 2.4 Maintaining Consistency 2.5 The Closed-World Assumption 2.6 Use of Local Variables within Rules 2.7 Forward Chaining (a Data-Driven Strategy) 2.7.1 Single and Multiple Instantiation of Local Variables 2.7.2 Rete Algorithm 2.8 Conflict Resolution 2.8.1 First Come, First Served 2.8.2 Priority Values 2.8.3 Metarules 2.9 Backward Chaining (a Goal-Driven Strategy) 2.9.1 The Backward-Chaining Mechanism 2.9.2 Implementation of Backward Chaining 2.9.3 Variations of Backward Chaining 2.9.4 Format of Backward-Chaining Rules 2.10 A Hybrid Strategy 2.11 Explanation Facilities 2.12 Summary Further Reading
Chapter 3: Handling Uncertainty: Probability and Fuzzy Logic 3.1 Sources of Uncertainty 3.2 Bayesian Updating 3.2.1 Representing Uncertainty by Probability 3.2.2 Direct Application of Bayes’ Theorem 3.2.3 Likelihood Ratios 3.2.4 Using the Likelihood Ratios 3.2.5 Dealing with Uncertain Evidence 3.2.6 Combining Evidence 3.2.7 Combining Bayesian Rules with Production Rules 3.2.8 A Worked Example of Bayesian Updating 3.2.9 Discussion of the Worked Example 3.2.10 Advantages and Disadvantages of Bayesian Updating 3.3 Certainty Theory 3.3.1 Introduction 3.3.2 Making Uncertain Hypotheses 3.3.3 Logical Combinations of Evidence 3.3.3.1 Conjunction 3.3.3.2 Disjunction 3.3.3.3 Negation 3.3.4 A Worked Example of Certainty Theory 3.3.5 Discussion of the Worked Example 3.3.6 Relating Certainty Factors to Probabilities 3.4 Fuzzy Logic: Type-1 3.4.1 Crisp Sets and Fuzzy Sets 3.4.2 Fuzzy Rules 3.4.3 Defuzzification 3.4.3.1 Stage 1: Scaling the Membership Functions 3.4.3.2 Stage 2: Finding the Centroid 3.4.3.3 Defuzzifying at the Extremes 3.4.3.4 Sugeno Defuzzification 3.4.3.5 A Defuzzification Anomaly 3.5 Fuzzy Control Systems 3.5.1 Crisp and Fuzzy Control 3.5.2 Fuzzy Control Rules 3.5.3 Defuzzification in Control Systems 3.6 Fuzzy Logic: Type-2 3.7 Other Techniques 3.7.1 Dempster–Shafer Theory of Evidence 3.7.2 Inferno 3.8 Summary Further Reading
Chapter 4: Agents, Objects, and Frames 4.1 Birds of a Feather: Agents, Objects, and Frames 4.2 Intelligent Agents 4.3 Agent Architectures 4.3.1 Logic-Based Architectures 4.3.2 Emergent Behavior Architectures 4.3.3 Knowledge-Level Architectures 4.3.4 Layered Architectures 4.4 Multiagent Systems (MASs) 4.4.1 Benefits of a Multiagent System 4.4.2 Building a Multiagent System 4.4.3 Contract Nets 4.4.4 Cooperative Problem-Solving (CPS) 4.4.5 Shifting Matrix Management (SMM) 4.4.6 Comparison of Cooperative Models 4.4.7 Communication Between Agents 4.5 Swarm Intelligence 4.6 Object-Oriented Systems 4.6.1 Introducing Object-Oriented Programming (OOP) 4.6.2 An Illustrative Example 4.6.3 Data Abstraction 4.6.3.1 Classes 4.6.3.2 Instances 4.6.3.3 Attributes (or Data Members) 4.6.3.4 Operations (or Methods or Member Functions) 4.6.3.5 Creation and Deletion of Instances 4.6.4 Inheritance 4.6.4.1 Single Inheritance 4.6.4.2 Multiple and Repeated Inheritance 4.6.4.3 Specialization of Methods 4.6.4.4 Class Browsers 4.6.5 Encapsulation 4.6.6 Unified Modeling Language (UML) 4.6.7 Dynamic (or Late) Binding 4.6.8 Message Passing and Function Calls 4.6.9 Metaclasses 4.6.10 Type Checking 4.6.11 Persistence 4.6.12 Concurrency 4.6.13 Active Values and Daemons 4.6.14 Summary of Object-Oriented Systems 4.7 Objects and Agents 4.8 Frame-Based Systems 4.9 Summary: Agents, Objects, and Frames Further Reading
Chapter 5: Symbolic Learning 5.1 Introduction 5.2 Learning by Induction 5.2.1 Overview 5.2.2 Learning Viewed as a Search Problem 5.2.3 Techniques for Generalization and Specialization 5.2.3.1 Universalization 5.2.3.2 Replacing Constants with Variables 5.2.3.3 Using Conjunctions and Disjunctions 5.2.3.4 Moving Up or Down a Hierarchy 5.2.3.5 Chunking 5.3 Case-Based Reasoning (CBR) 5.3.1 Storing Cases 5.3.1.1 Abstraction Links and Index Links 5.3.1.2 Instance-Of Links 5.3.1.3 Scene Links 5.3.1.4 Exemplar Links 5.3.1.5 Failure Links 5.3.2 Retrieving Cases 5.3.3 Adapting Case Histories 5.3.3.1 Null Adaptation 5.3.3.2 Parameterization 5.3.3.3 Reasoning by Analogy 5.3.3.4 Critics 5.3.3.5 Reinstantiation 5.3.4 Dealing with Mistaken Conclusions 5.4 Summary Further Reading
Chapter 6: Single-Candidate Optimization Algorithms 6.1 Optimization 6.2 The Search Space 6.3 Searching the Parameter Space 6.4 Hill-Climbing and Gradient-Descent Algorithms 6.4.1 Hill-Climbing 6.4.2 Steepest Gradient Descent or Ascent 6.4.3 Gradient-Proportional Descent or Ascent 6.4.4 Conjugate Gradient Descent or Ascent 6.4.5 Tabu Search 6.5 Simulated Annealing 6.6 Summary Further Reading
Chapter 7: Genetic Algorithms for Optimization 7.1 Introduction: Evolutionary Algorithms 7.2 The Basic Genetic Algorithm 7.2.1 Chromosomes 7.2.2 Algorithm Outline 7.2.3 Crossover 7.2.4 Mutation 7.2.5 Validity Check 7.3 Selection 7.3.1 Selection Pitfalls 7.3.2 Fitness-Proportionate Selection 7.3.3 Fitness Scaling for Improved Selection 7.3.3.1 Linear Fitness Scaling 7.3.3.2 Sigma Scaling 7.3.3.3 Boltzmann Fitness Scaling 7.3.3.4 Linear Rank Scaling 7.3.3.5 Nonlinear Rank Scaling 7.3.3.6 Probabilistic Nonlinear Rank Scaling 7.3.3.7 Truncation Selection 7.3.3.8 Transform Ranking 7.3.4 Tournament Selection 7.3.5 Comparison of Selection Methods 7.4 Elitism 7.5 Multiobjective Optimization 7.6 Gray Code 7.7 Variable-Length Chromosomes 7.8 Building Block Hypothesis 7.8.1 Schema Theorem 7.8.2 Inversion 7.9 Selecting GA Parameters 7.10 Monitoring Evolution 7.11 Finding Multiple Optima 7.12 Genetic Programming (GP) 7.13 Other Forms of Population-Based Optimization 7.14 Summary Further Reading
Chapter 8: Shallow Neural Networks 8.1 Introduction 8.2 Neural Network Applications 8.2.1 Classification 8.2.2 Nonlinear Estimation and Prediction 8.2.3 Clustering 8.2.4 Memory and Recall 8.3 Nodes and Interconnections 8.4 Single and Multilayer Perceptrons (SLPs and MLPs) 8.4.1 Network Topology 8.4.2 Perceptrons as Classifiers 8.4.3 Training a Perceptron 8.4.4 Hierarchical Perceptrons 8.4.5 Buffered Perceptrons 8.4.6 Some Practical Considerations 8.4.6.1 Overtraining 8.4.6.2 Leave-One-Out and K-Fold Cross-Validation 8.4.6.3 Data Scaling 8.5 Recurrent Networks 8.5.1 Simple Recurrent Network (SRN) 8.5.2 Hopfield Network 8.5.3 Maxnet 8.5.4 The Hamming Network 8.6 Unsupervised Networks 8.6.1 Adaptive Resonance Theory (ART) Networks 8.6.2 Kohonen Self-Organizing Networks 8.6.3 Radial Basis Function (RBF) Networks 8.7 Spiking Neural Networks (SNNs) 8.8 Summary Further Reading
Chapter 9: Deep Neural Networks 9.1 Deep Learning 9.2 Convolutional Neural Networks (CNNs) for Image Recognition 9.2.1 Origins 9.2.2 Motivation for Convolutional Networks 9.2.3 CNN Structure 9.2.3.1 Input Layer 9.2.3.2 Feature Maps 9.2.3.3 Pooling and Classification Layers 9.2.4 Pretrained Networks and Transfer Learning 9.2.5 CNNs in Context 9.3 Generative Networks 9.3.1 Generative Versus Discriminative Algorithms 9.3.2 Autoencoder Networks 9.3.3 Generative Adversarial Networks (GANs) 9.4 Long Short-Term Memory (LSTM) Networks 9.5 Summary Further Reading
Chapter 10: Hybrid Systems 10.1 Convergence of Techniques 10.2 Blackboard Systems for Multifaceted Problems 10.3 Parameter Setting 10.3.1 Genetic–Neural Systems 10.3.2 Genetic–Fuzzy Systems 10.3.3 Fuzzy–Genetic Systems 10.4 Capability Enhancement 10.4.1 Neuro–Fuzzy Systems 10.4.2 Memetic Algorithms: Genetic Algorithms with Local Search 10.4.3 Learning Classifier Systems (LCSs) 10.5 Clarification and Verification of Neural Network Outputs 10.6 Summary Further Reading
Chapter 11: AI Programming Languages and Tools 11.1 A Range of Intelligent Systems Tools 11.2 Features of AI Languages 11.2.1 Lists 11.2.2 Other Data Types 11.2.3 Programming Environments 11.3 Lisp 11.3.1 Background 11.3.2 Lisp Functions 11.3.3 A Worked Example 11.4 Prolog 11.4.1 Background 11.4.2 A Worked Example 11.4.3 Backtracking in Prolog 11.5 Python 11.5.1 Background 11.5.2 A Worked Example 11.6 Comparison of AI Languages 11.7 Summary Further Reading (Lisp, Prolog, Python)
Chapter 12: Systems for Interpretation and Diagnosis 12.1 Introduction 12.2 Deduction and Abduction for Diagnosis 12.2.1 Exhaustive Testing 12.2.2 Explicit Modeling of Uncertainty 12.2.3 Hypothesize-and-Test 12.3 Depth of Knowledge 12.3.1 Shallow Knowledge 12.3.2 Deep Knowledge 12.3.3 Combining Shallow and Deep Knowledge 12.4 Model-Based Reasoning 12.4.1 The Limitations of Rules 12.4.2 Modeling Function, Structure, and State 12.4.2.1 Function 12.4.2.2 Structure 12.4.2.3 State 12.4.3 Using the Model 12.4.4 Monitoring 12.4.5 Tentative Diagnosis 12.4.5.1 The Shotgun Approach 12.4.5.2 Structural Isolation 12.4.5.3 The Heuristic Approach 12.4.6 Fault Simulation 12.4.7 Fault Repair 12.4.8 Using Problem Trees 12.4.9 Summary of Model-Based Reasoning 12.5 Case Study: A Blackboard System for Interpreting Ultrasonic Images 12.5.1 Ultrasonic Imaging 12.5.2 Agents in DARBS 12.5.3 Rules in DARBS 12.5.4 The Stages of Image Interpretation 12.5.4.1 Arc Detection Using the Hough Transform 12.5.4.2 Gathering the Evidence 12.5.4.3 Defect Classification 12.5.5 The Use of Neural Networks 12.5.5.1 Defect Classification Using a Neural Network 12.5.5.2 Echodynamic Classification Using a Neural Network 12.5.5.3 Combining the Two Applications of Neural Networks 12.5.6 Rules for Verifying Neural Networks 12.6 Summary Further Reading
Chapter 13: Systems for Design and Selection 13.1 The Design Process 13.2 Design as a Search Problem 13.3 Computer-Aided Design 13.4 The Product Design Specification (PDS): A Telecommunications Case Study 13.4.1 Background 13.4.2 Alternative Views of a Network 13.4.3 Implementation 13.4.4 The Classes 13.4.4.1 Network 13.4.4.2 Link 13.4.4.3 Information Stream 13.4.4.4 Site 13.4.4.5 Equipment 13.4.5 Summary of PDS Case Study 13.5 Conceptual Design 13.6 Constraint Propagation and Truth Maintenance 13.7 Case Study: The Design of a Lightweight Beam 13.7.1 Conceptual Design 13.7.2 Optimization and Evaluation 13.7.3 Detailed Design 13.8 Design as a Selection Exercise 13.8.1 Overview 13.8.2 Merit Indices 13.8.3 The Polymer Selection Example 13.8.4 Two-Stage Selection 13.8.5 Constraint Relaxation 13.8.6 A Naive Approach to Scoring 13.8.7 A Better Approach to Scoring 13.8.8 Case Study: The Design of a Kettle 13.8.9 Reducing the Search Space by Classification 13.9 Failure Mode and Effects Analysis (FMEA) 13.10 Summary Further Reading
Chapter 14: Systems for Planning 14.1 Introduction 14.2 Classical Planning Systems 14.3 Stanford Research Institute Problem Solver (STRIPS) 14.3.1 General Description 14.3.2 An Example Problem 14.3.3 A Simple Planning System in Prolog 14.4 Considering the Side Effects of Actions 14.4.1 Maintaining a World Model 14.4.2 Deductive Rules 14.5 Hierarchical Planning 14.5.1 Description 14.5.2 Benefits of Hierarchical Planning 14.5.3 Hierarchical Planning with ABSTRIPS 14.6 Postponement of Commitment 14.6.1 Partial Ordering of Plans 14.6.2 The Use of Planning Variables 14.7 Job-Shop Scheduling 14.7.1 The Problem 14.7.2 Some Approaches to Scheduling 14.8 Constraint-Based Analysis (CBA) 14.8.1 Constraints and Preferences 14.8.2 Formalizing the Constraints 14.8.3 Identifying the Critical Sets of Operations 14.8.4 Sequencing in the Disjunctive Case 14.8.5 Sequencing in the Nondisjunctive Case 14.8.6 Updating Earliest Start Times and Latest Finish Times 14.8.7 Applying Preferences 14.8.8 Using Constraints and Preferences 14.9 Replanning and Reactive Planning 14.10 Summary Further Reading
Chapter 15: Systems for Control 15.1 Introduction 15.2 Low-Level Control 15.2.1 Open-Loop Control 15.2.2 Feedforward Control 15.2.3 Feedback Control 15.2.4 First- and Second-Order Models 15.2.5 Algorithmic Control: The PID Controller 15.2.6 Bang-Bang Control 15.3 Requirements of High-Level (Supervisory) Control 15.4 Blackboard Maintenance 15.5 Time-Constrained Reasoning 15.5.1 Prioritization of Processes 15.5.2 Approximation 15.5.2.1 Approximate Search 15.5.2.2 Data Approximations 15.5.2.3 Knowledge Approximations 15.5.3 Single and Multiple Instantiation 15.6 Fuzzy Control 15.7 The BOXES Controller 15.7.1 The Conventional BOXES Algorithm 15.7.2 Fuzzy BOXES 15.8 Neural Network Controllers 15.8.1 Direct Association of State Variables with Action Variables 15.8.2 Estimation of Critical State Variables 15.9 Statistical Process Control (SPC) 15.9.1 Applications 15.9.2 Collecting the Data 15.9.3 Using the Data 15.10 Summary Further Reading
Chapter 16: The Future of Intelligent Systems 16.1 Benefits 16.2 Trends in Implementation 16.3 Intelligent Systems and the Internet 16.4 Computational Power 16.5 Ubiquitous Intelligent Systems 16.6 Ethics 16.7 Conclusions
References
Index