
You can reach us by phone call or SMS at the mobile numbers below:


09117307688
09117179751

If there is no answer, please contact support via SMS.

Unlimited access: for registered users.

Money-back guarantee: if the description does not match the book.

Support: from 7 AM to 10 PM.

Download the book: Artificial Intelligence: A Modern Approach

Book details

Artificial intelligence: a modern approach

Edition: 3rd ed.; International ed.
Authors:
Series: Prentice Hall series in artificial intelligence
ISBN: 9781408225745, 1408225751
Publisher: Addison Wesley
Publication year: 2011
Number of pages: 501
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 10 MB

Book price (Toman): 54,000



Keywords for the book Artificial Intelligence: A Modern Approach: algorithms, artificial intelligence, symbolic and mathematical logic



Average rating for this book:
Number of raters: 11

If you would like the file of Artificial intelligence: a modern approach converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert it for you.

Please note that Artificial Intelligence: A Modern Approach is the original English-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.





Table of contents

1: Artificial Intelligence --
1: Introduction --
1-1: What is AI? --
1-2: Foundations of artificial intelligence --
1-3: History of artificial intelligence --
1-4: State of the art --
1-5: Summary, bibliographical and historical notes, exercises --
2: Intelligent agents --
2-1: Agents and environments --
2-2: Good behavior: the concepts of rationality --
2-3: Nature of environments --
2-4: Structure of agents --
2-5: Summary, bibliographical and historical notes, exercises --
2: Problem-Solving --
3: Solving problems by searching --
3-1: Problem-solving agents --
3-2: Example problems --
3-3: Searching for solutions --
3-4: Uninformed search strategies --
3-5: Informed (heuristic) search strategies --
3-6: Heuristic functions --
3-7: Summary, bibliographical and historical notes, exercises --
4: Beyond classical search --
4-1: Local search algorithms and optimization problems --
4-2: Local search in continuous spaces --
4-3: Searching with nondeterministic actions --
4-4: Searching with partial observations --
4-5: Online search agents and unknown environments --
4-6: Summary, bibliographical and historical notes, exercises --
5: Adversarial search --
5-1: Games --
5-2: Optimal decisions in games --
5-3: Alpha-beta pruning --
5-4: Imperfect real-time decisions --
5-5: Stochastic games --
5-6: Partially observable games --
5-7: State-of-the-art game programs --
5-8: Alternative approaches --
5-9: Summary, bibliographical and historical notes, exercises --
6: Constraint satisfaction problems --
6-1: Defining constraint satisfaction problems --
6-2: Constraint propagation: inference in CSPs --
6-3: Backtracking search for CSPs --
6-4: Local search for CSPs --
6-5: Structure of problems --
6-6: Summary, bibliographical and historical notes, exercises --
3: Knowledge, Reasoning And Planning --
7: Logical agents --
7-1: Knowledge-based agents --
7-2: Wumpus world --
7-3: Logic --
7-4: Propositional logic: a very simple logic --
7-5: Propositional theorem proving --
7-6: Effective propositional model checking --
7-7: Agents based on propositional logic --
7-8: Summary, bibliographical and historical notes, exercises --
8: First-order logic --
8-1: Representation revisited --
8-2: Syntax and semantics of first-order logic --
8-3: Using first-order logic --
8-4: Knowledge engineering in first-order logic --
8-5: Summary, bibliographical and historical notes, exercises --
9: Inference in first-order logic --
9-1: Propositional vs first-order inference --
9-2: Unification and lifting --
9-3: Forward chaining --
9-4: Backward chaining --
9-5: Resolution --
9-6: Summary, bibliographical and historical notes, exercises --
10: Classical planning --
10-1: Definition of classical planning --
10-2: Algorithms for planning as state-space search --
10-3: Planning graphs --
10-4: Other classical planning approaches --
10-5: Analysis of planning approaches --
10-6: Summary, bibliographical and historical notes, exercises --
11: Planning and acting in the real world --
11-1: Time, schedules, and resources --
11-2: Hierarchical planning --
11-3: Planning and acting in nondeterministic domains --
11-4: Multiagent planning --
11-5: Summary, bibliographical and historical notes, exercises --
12: Knowledge representation --
12-1: Ontological engineering --
12-2: Categories and objects --
12-3: Events --
12-4: Mental events and mental objects --
12-5: Reasoning systems for categories --
12-6: Reasoning with default information --
12-7: Internet shopping world --
12-8: Summary, bibliographical and historical notes, exercises --
4: Uncertain Knowledge And Reasoning --
13: Quantifying uncertainty --
13-1: Acting under uncertainty --
13-2: Basic probability notation --
13-3: Inference using full joint distributions --
13-4: Independence --
13-5: Bayes' rule and its use --
13-6: Wumpus world revisited --
13-7: Summary, bibliographical and historical notes, exercises --
14: Probabilistic reasoning --
14-1: Representing knowledge in an uncertain domain --
14-2: Semantics of Bayesian networks --
14-3: Efficient representation of conditional distributions --
14-4: Exact inference in Bayesian networks --
14-5: Approximate inference in Bayesian networks --
14-6: Relational and first-order probability models --
14-7: Other approaches to uncertain reasoning --
14-8: Summary, bibliographical and historical notes, exercises --
15: Probabilistic reasoning over time --
15-1: Time and uncertainty --
15-2: Inference in temporal models --
15-3: Hidden Markov models --
15-4: Kalman filters --
15-5: Dynamic Bayesian Networks --
15-6: Keeping track of many objects --
15-7: Summary, bibliographical and historical notes, exercises --
16: Making simple decisions --
16-1: Combining beliefs and desires under uncertainty --
16-2: Basis of utility theory --
16-3: Utility functions --
16-4: Multiattribute utility functions --
16-5: Decision networks --
16-6: Value of information --
16-7: Decision-theoretic expert systems --
16-8: Summary, bibliographical and historical notes, exercises --
17: Making complex decisions --
17-1: Sequential decision problems --
17-2: Value iteration --
17-3: Policy iteration --
17-4: Partially observable MDPs --
17-5: Decisions with multiple agents: game theory --
17-6: Mechanism design --
17-7: Summary, bibliographical and historical notes, exercises --
5: Learning --
18: Learning from examples --
18-1: Forms of learning --
18-2: Supervised learning --
18-3: Learning decision trees --
18-4: Evaluating and choosing the best hypothesis --
18-5: Theory of learning --
18-6: Regression and classification with linear models --
18-7: Artificial neural networks --
18-8: Nonparametric models --
18-9: Support vector machines --
18-10: Ensemble learning --
18-11: Practical machine learning --
18-12: Summary, bibliographical and historical notes, exercises --
19: Knowledge in learning --
19-1: Logical formulation of learning --
19-2: Knowledge in learning --
19-3: Explanation-based learning --
19-4: Learning using relevance information --
19-5: Inductive logic programming --
19-6: Summary, bibliographical and historical notes, exercises --
20: Learning probabilistic models --
20-1: Statistical learning --
20-2: Learning with complete data --
20-3: Learning with hidden variables: the EM algorithm --
20-4: Summary, bibliographical and historical notes, exercises --
21: Reinforcement learning --
21-1: Introduction --
21-2: Passive reinforcement learning --
21-3: Active reinforcement learning --
21-4: Generalization in reinforcement learning --
21-5: Policy search --
21-6: Applications of reinforcement learning --
21-7: Summary, bibliographical and historical notes, exercises --
6: Communicating, Perceiving, And Acting --
22: Natural language processing --
22-1: Language models --
22-2: Text classification --
22-3: Information retrieval --
22-4: Information extraction --
22-5: Summary, bibliographical and historical notes, exercises --
23: Natural language for communication --
23-1: Phrase structure grammars --
23-2: Syntactic analysis (parsing) --
23-3: Augmented grammars and semantic interpretation --
23-4: Machine translation --
23-5: Speech recognition --
23-6: Summary, bibliographical and historical notes, exercises --
24: Perception --
24-1: Image formation --
24-2: Early image-processing operations --
24-3: Object recognition by appearance --
24-4: Reconstructing the 3D world --
24-5: Object recognition from structural information --
24-6: Using vision --
24-7: Summary, bibliographical and historical notes, exercises --
25: Robotics --
25-1: Introduction --
25-2: Robot hardware --
25-3: Robotic perception --
25-4: Planning to move --
25-5: Planning uncertain movements --
25-6: Moving --
25-7: Robotic software architectures --
25-8: Application domains --
25-9: Summary, bibliographical and historical notes, exercises --
7: Conclusions --
26: Philosophical foundations --
26-1: Weak AI: can machines act intelligently? --
26-2: Strong AI: can machines really think? --
26-3: Ethics and risks of developing artificial intelligence --
26-4: Summary, bibliographical and historical notes, exercises --
27: AI: Present and future --
27-1: Agent components --
27-2: Agent architectures --
27-3: Are we going in the right direction? --
27-4: What if AI does succeed? --
A: Mathematical background --
A-1: Complexity analysis and O() notation --
A-2: Vectors, matrices, and linear algebra --
A-3: Probability distributions --
B: Notes on languages and algorithms --
B-1: Defining languages with Backus-Naur form (BNF) --
B-2: Describing algorithms with pseudocode --
B-3: Online help --
Bibliography --
Index.




User comments