Edition:
Authors: David L. Poole, Alan K. Mackworth
Series:
Publisher: Independently Published
Publication year: 2024
Number of pages: 390
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 on request)
File size: 2 MB
If you would like the file of Python code for Artificial Intelligence: Foundations of Computational Agents (Updated) converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support to have the file converted.
Please note that Python Code for Artificial Intelligence: Foundations of Computational Agents (Updated) is the original-language (English) edition, not a Persian translation. The International Library website provides only original-language books and does not offer any books translated into or written in Persian.
Contents

1 Python for Artificial Intelligence
  1.1 Why Python?
  1.2 Getting Python
  1.3 Running Python
  1.4 Pitfalls
  1.5 Features of Python
    1.5.1 f-strings
    1.5.2 Lists, Tuples, Sets, Dictionaries and Comprehensions
    1.5.3 Functions as first-class objects
    1.5.4 Generators
  1.6 Useful Libraries
    1.6.1 Timing Code
    1.6.2 Plotting: Matplotlib
  1.7 Utilities
    1.7.1 Display
    1.7.2 Argmax
    1.7.3 Probability
  1.8 Testing Code
2 Agent Architectures and Hierarchical Control
  2.1 Representing Agents and Environments
  2.2 Paper buying agent and environment
    2.2.1 The Environment
    2.2.2 The Agent
    2.2.3 Plotting
  2.3 Hierarchical Controller
    2.3.1 Environment
    2.3.2 Body
    2.3.3 Middle Layer
    2.3.4 Top Layer
    2.3.5 Plotting
3 Searching for Solutions
  3.1 Representing Search Problems
    3.1.1 Explicit Representation of Search Graph
    3.1.2 Paths
    3.1.3 Example Search Problems
  3.2 Generic Searcher and Variants
    3.2.1 Searcher
    3.2.2 GUI for Tracing Search
    3.2.3 Frontier as a Priority Queue
    3.2.4 A* Search
    3.2.5 Multiple Path Pruning
  3.3 Branch-and-bound Search
4 Reasoning with Constraints
  4.1 Constraint Satisfaction Problems
    4.1.1 Variables
    4.1.2 Constraints
    4.1.3 CSPs
    4.1.4 Examples
  4.2 A Simple Depth-first Solver
  4.3 Converting CSPs to Search Problems
  4.4 Consistency Algorithms
    4.4.1 Direct Implementation of Domain Splitting
    4.4.2 Consistency GUI
    4.4.3 Domain Splitting as an interface to graph searching
  4.5 Solving CSPs using Stochastic Local Search
    4.5.1 Any-conflict
    4.5.2 Two-Stage Choice
    4.5.3 Updatable Priority Queues
    4.5.4 Plotting Run-Time Distributions
    4.5.5 Testing
  4.6 Discrete Optimization
    4.6.1 Branch-and-bound Search
5 Propositions and Inference
  5.1 Representing Knowledge Bases
  5.2 Bottom-up Proofs (with askables)
  5.3 Top-down Proofs (with askables)
  5.4 Debugging and Explanation
  5.5 Assumables
  5.6 Negation-as-failure
6 Deterministic Planning
  6.1 Representing Actions and Planning Problems
    6.1.1 Robot Delivery Domain
    6.1.2 Blocks World
  6.2 Forward Planning
    6.2.1 Defining Heuristics for a Planner
  6.3 Regression Planning
    6.3.1 Defining Heuristics for a Regression Planner
  6.4 Planning as a CSP
  6.5 Partial-Order Planning
7 Supervised Machine Learning
  7.1 Representations of Data and Predictions
    7.1.1 Creating Boolean Conditions from Features
    7.1.2 Evaluating Predictions
    7.1.3 Creating Test and Training Sets
    7.1.4 Importing Data From File
    7.1.5 Augmented Features
  7.2 Generic Learner Interface
  7.3 Learning With No Input Features
    7.3.1 Evaluation
  7.4 Decision Tree Learning
  7.5 Cross Validation and Parameter Tuning
  7.6 Linear Regression and Classification
  7.7 Boosting
    7.7.1 Gradient Tree Boosting
8 Neural Networks and Deep Learning
  8.1 Layers
    8.1.1 Linear Layer
    8.1.2 ReLU Layer
    8.1.3 Sigmoid Layer
  8.2 Feedforward Networks
  8.3 Improved Optimization
    8.3.1 Momentum
    8.3.2 RMS-Prop
  8.4 Dropout
    8.4.1 Examples
9 Reasoning with Uncertainty
  9.1 Representing Probabilistic Models
  9.2 Representing Factors
  9.3 Conditional Probability Distributions
    9.3.1 Logistic Regression
    9.3.2 Noisy-or
    9.3.3 Tabular Factors and Prob
    9.3.4 Decision Tree Representations of Factors
  9.4 Graphical Models
    9.4.1 Showing Belief Networks
    9.4.2 Example Belief Networks
  9.5 Inference Methods
    9.5.1 Showing Posterior Distributions
  9.6 Naive Search
  9.7 Recursive Conditioning
  9.8 Variable Elimination
  9.9 Stochastic Simulation
    9.9.1 Sampling from a discrete distribution
    9.9.2 Sampling Methods for Belief Network Inference
    9.9.3 Rejection Sampling
    9.9.4 Likelihood Weighting
    9.9.5 Particle Filtering
    9.9.6 Examples
    9.9.7 Gibbs Sampling
    9.9.8 Plotting Behavior of Stochastic Simulators
  9.10 Hidden Markov Models
    9.10.1 Exact Filtering for HMMs
    9.10.2 Localization
    9.10.3 Particle Filtering for HMMs
    9.10.4 Generating Examples
  9.11 Dynamic Belief Networks
    9.11.1 Representing Dynamic Belief Networks
    9.11.2 Unrolling DBNs
    9.11.3 DBN Filtering
10 Learning with Uncertainty
  10.1 Bayesian Learning
  10.2 K-means
  10.3 EM
11 Causality
  11.1 Do Questions
  11.2 Counterfactual Example
    11.2.1 Firing Squad Example
12 Planning with Uncertainty
  12.1 Decision Networks
    12.1.1 Example Decision Networks
    12.1.2 Decision Functions
    12.1.3 Recursive Conditioning for decision networks
    12.1.4 Variable elimination for decision networks
  12.2 Markov Decision Processes
    12.2.1 Problem Domains
    12.2.2 Value Iteration
    12.2.3 Value Iteration GUI for Grid Domains
    12.2.4 Asynchronous Value Iteration
13 Reinforcement Learning
  13.1 Representing Agents and Environments
    13.1.1 Environments
    13.1.2 Agents
    13.1.3 Simulating an Environment-Agent Interaction
    13.1.4 Party Environment
    13.1.5 Environment from a Problem Domain
    13.1.6 Monster Game Environment
  13.2 Q Learning
    13.2.1 Exploration Strategies
    13.2.2 Testing Q-learning
  13.3 Q-learning with Experience Replay
  13.4 Stochastic Policy Learning Agent
  13.5 Model-based Reinforcement Learner
  13.6 Reinforcement Learning with Features
    13.6.1 Representing Features
    13.6.2 Feature-based RL learner
  13.7 GUI for RL
14 Multiagent Systems
  14.1 Minimax
    14.1.1 Creating a two-player game
    14.1.2 Minimax and α-β Pruning
  14.2 Multiagent Learning
    14.2.1 Simulating Multiagent Interaction with an Environment
    14.2.2 Example Games
    14.2.3 Testing Games and Environments
15 Individuals and Relations
  15.1 Representing Datalog and Logic Programs
  15.2 Unification
  15.3 Knowledge Bases
  15.4 Top-down Proof Procedure
  15.5 Logic Program Example
16 Knowledge Graphs and Ontologies
  16.1 Triple Store
  16.2 Integrating Datalog and Triple Store
17 Relational Learning
  17.1 Collaborative Filtering
    17.1.1 Plotting
    17.1.2 Loading Rating Sets from Files and Websites
    17.1.3 Ratings of top items and users
  17.2 Relational Probabilistic Models
18 Version History
Bibliography
Index