Edition: 1st ed.
Authors: Wen Yu, Adolfo Perrusquia
Series: IEEE Press Series on Systems Science and Engineering
ISBN: 1119782740, 9781119782742
Publisher: Wiley-IEEE Press
Publication year: 2021
Pages: 288 [289]
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 25 MB
If you would like the book Human-Robot Interaction Control Using Reinforcement Learning converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and the file will be converted for you.
Please note that this book is in its original language and has not been translated into Persian. The International Library website provides original-language books only and does not offer books translated into or written in Persian.
A comprehensive exploration of the control schemes of human-robot interaction

In Human-Robot Interaction Control Using Reinforcement Learning, an expert team of authors delivers a concise overview of human-robot interaction control schemes and insightful presentations of novel, model-free and reinforcement learning controllers. The book begins with a brief introduction to state-of-the-art human-robot interaction control and reinforcement learning before moving on to describe the typical environment model. The authors also describe some of the best-known identification techniques for parameter estimation.

Human-Robot Interaction Control Using Reinforcement Learning offers rigorous mathematical treatments and demonstrations that facilitate the understanding of control schemes and algorithms. It also presents stability and convergence analysis of human-robot interaction control and reinforcement-learning-based control. The authors discuss advanced and cutting-edge topics as well, such as inverse and velocity kinematics solutions, H2 neural control, and likely upcoming developments in the field of robotics.

Readers will also enjoy:
- A thorough introduction to model-based human-robot interaction control
- Comprehensive explorations of model-free human-robot interaction control and human-in-the-loop control using Euler angles
- Practical discussions of reinforcement learning for robot position and force control, as well as continuous-time reinforcement learning for robot force control
- In-depth examinations of robot control in worst-case uncertainty using reinforcement learning and the control of redundant robots using multi-agent reinforcement learning

Perfect for senior undergraduate and graduate students, academic researchers, and industrial practitioners studying and working in the fields of robotics, learning control systems, neural networks, and computational intelligence, Human-Robot Interaction Control Using Reinforcement Learning is also an indispensable resource for students and professionals studying reinforcement learning.
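To give a sense of the environment model referenced above, the impedance model commonly used in human-robot interaction can be sketched in its generic textbook form (the notation below is standard in the field and is not quoted from this book):

$$
M_d\,(\ddot{x} - \ddot{x}_d) + B_d\,(\dot{x} - \dot{x}_d) + K_d\,(x - x_d) = f_h
$$

where $M_d$, $B_d$, and $K_d$ are the desired inertia, damping, and stiffness matrices, $x_d$ is the desired task-space trajectory, and $f_h$ is the force applied by the human. Chapters 2 and 3 of the book develop this type of model and its use in impedance/admittance control.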
Cover
Title Page
Copyright
Contents
Author Biographies
List of Figures
List of Tables
Preface
Part I Human-Robot Interaction Control
Chapter 1 Introduction
1.1 Human-Robot Interaction Control
1.2 Reinforcement Learning for Control
1.3 Structure of the Book
References
Chapter 2 Environment Model of Human-Robot Interaction
2.1 Impedance and Admittance
2.2 Impedance Model for Human-Robot Interaction
2.3 Identification of Human-Robot Interaction Model
2.4 Conclusions
References
Chapter 3 Model Based Human-Robot Interaction Control
3.1 Task Space Impedance/Admittance Control
3.2 Joint Space Impedance Control
3.3 Accuracy and Robustness
3.4 Simulations
3.5 Conclusions
References
Chapter 4 Model Free Human-Robot Interaction Control
4.1 Task-Space Control Using Joint-Space Dynamics
4.2 Task-Space Control Using Task-Space Dynamics
4.3 Joint Space Control
4.4 Simulations
4.5 Experiments
4.6 Conclusions
References
Chapter 5 Human-in-the-Loop Control Using Euler Angles
5.1 Introduction
5.2 Joint-Space Control
5.3 Task-Space Control
5.4 Experiments
5.5 Conclusions
References
Part II Reinforcement Learning for Robot Interaction Control
Chapter 6 Reinforcement Learning for Robot Position/Force Control
6.1 Introduction
6.2 Position/Force Control Using an Impedance Model
6.3 Reinforcement Learning Based Position/Force Control
6.4 Simulations and Experiments
6.5 Conclusions
References
Chapter 7 Continuous-Time Reinforcement Learning for Force Control
7.1 Introduction
7.2 K-means Clustering for Reinforcement Learning
7.3 Position/Force Control Using Reinforcement Learning
7.4 Experiments
7.5 Conclusions
References
Chapter 8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning
8.1 Introduction
8.2 Robust Control Using Discrete-Time Reinforcement Learning
8.3 Double Q-Learning with k-Nearest Neighbors
8.4 Robust Control Using Continuous-Time Reinforcement Learning
8.5 Simulations and Experiments: Discrete-Time Case
8.6 Simulations and Experiments: Continuous-Time Case
8.7 Conclusions
References
Chapter 9 Redundant Robots Control Using Multi-Agent Reinforcement Learning
9.1 Introduction
9.2 Redundant Robot Control
9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control
9.4 Simulations and Experiments
9.5 Conclusions
References
Chapter 10 Robot ℋ2 Neural Control Using Reinforcement Learning
10.1 Introduction
10.2 ℋ2 Neural Control Using Discrete-Time Reinforcement Learning
10.3 ℋ2 Neural Control in Continuous Time
10.4 Examples
10.5 Conclusion
References
Chapter 11 Conclusions
A Robot Kinematics and Dynamics
A.1 Kinematics
A.2 Dynamics
A.3 Examples
References
B Reinforcement Learning for Control
B.1 Markov Decision Processes
B.2 Value Functions
B.3 Iterations
B.4 TD Learning
Reference
Index
EULA