
Download: Deep Learning for Unmanned Systems

Book Details

Deep Learning for Unmanned Systems

Edition: 1
Authors: 
Series: Studies in Computational Intelligence
ISBN: 3030779386, 9783030779382
Publisher: Springer
Year: 2021
Pages: 731
Language: English
File format: PDF
File size: 21 MB

Price (Toman): 30,000




Book Description

This book is intended for use at the graduate or advanced undergraduate level, among others. Manned and unmanned ground, aerial and marine vehicles enable many promising and revolutionary civilian and military applications that will change our lives in the near future. These applications include, but are not limited to, surveillance, search and rescue, environment monitoring, infrastructure monitoring, self-driving cars, contactless last-mile delivery vehicles, autonomous ships, precision agriculture and transmission line inspection. These vehicles will benefit from advances in deep learning, a subfield of machine learning able to endow vehicles with different capabilities such as perception, situation awareness, planning and intelligent control. Deep learning models also have the ability to generate actionable insights into the complex structures of large data sets.

In recent years, deep learning research has received an increasing amount of attention from researchers in academia, government laboratories and industry. These research activities have borne fruit in tackling some of the still-open challenging problems of manned and unmanned ground, aerial and marine vehicles. Moreover, deep learning methods have recently been actively developed in other areas of machine learning, including reinforcement learning and transfer/meta-learning, alongside standard deep learning methods such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs).
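To make the two standard architectures named above concrete, here is a minimal, self-contained sketch (illustrative only, not code from the book): a 1-D convolution shows the weight-sharing idea at the core of a CNN layer, and a single recurrent update shows the state-carrying idea at the core of an RNN cell. All function names and weights are invented for illustration.

```python
import math

def conv1d(signal, kernel):
    """Valid 1-D convolution: slide the same kernel over the input.
    Weight sharing across positions is the defining idea of a CNN layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn_step(x, h, w_x, w_h):
    """One recurrent update h' = tanh(w_x*x + w_h*h).
    Reusing the same weights at every time step, while carrying the
    hidden state h forward, is the defining idea of an RNN cell."""
    return math.tanh(w_x * x + w_h * h)

# CNN core: an edge-detector-like kernel applied along a 1-D signal.
features = conv1d([1, 2, 3, 4, 5], [1, 0, -1])  # -> [-2, -2, -2]

# RNN core: the hidden state summarizes the sequence seen so far.
h = 0.0
for x in [0.5, -0.5, 1.0]:
    h = rnn_step(x, h, w_x=1.0, w_h=0.5)
```

Real deep networks stack many such operations with learned weights; this sketch only isolates the single operation each architecture repeats.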

The book is primarily meant for researchers from academia and industry who work in research areas such as control engineering, robotics, mechatronics, biomedical engineering, mechanical engineering and computer science.

  • The book chapters deal with recent research problems in the areas of reinforcement learning-based control of UAVs and deep learning for unmanned aerial systems (UAS).
  • The book chapters present various techniques of deep learning for robotic applications. 
  • The book chapters contain a good literature survey with a long list of references.
  • The book chapters are well written with a good exposition of the research problem, methodology, block diagrams and mathematical techniques.
  • The book chapters are lucidly illustrated with numerical examples and simulations.
  • The book chapters discuss details of applications and future research areas.


Table of Contents

Preface
Contents
Deep Learning for Unmanned Autonomous Vehicles: A Comprehensive Review
	1 Introduction
	2 Artificial Intelligence and Machine Learning:  Promises and Limitations
		2.1 Artificial Intelligence
		2.2 Machine Learning
		2.3 Deep Learning
	3 The Cognitive Cycle of Unmanned Autonomous Vehicles
	4 Deep Learning for Situational Awareness
		4.1 Localization
		4.2 Feature Extraction
		4.3 Object Recognition
	5 Deep Learning for Decision Making
		5.1 Path Planning
		5.2 Collision Avoidance
	6 Deep Learning for Developmental Unmanned Autonomous Vehicles
	7 Deep Learning for Adaptive Unmanned Autonomous Vehicles
	8 Conclusion
	References
Deep Learning and Reinforcement Learning for Autonomous Unmanned Aerial Systems: Roadmap for Theory to Deployment
	1 Introduction
		1.1 Applications of UAS
		1.2 Classification of UAS
		1.3 Chapter Organization
		1.4 Notations
	2 Overview of Machine Learning Techniques
		2.1 Feedforward Neural Networks
		2.2 Convolutional Neural Networks
		2.3 Recurrent Neural Networks
		2.4 Reinforcement Learning
	3 Deep Learning for UAS Autonomy
		3.1 Feature Extraction from Sensor Data
		3.2 UAS Path Planning and Situational Awareness
		3.3 Open Problems and Challenges
	4 Reinforcement Learning for UAS Autonomy
		4.1 UAS Control System
		4.2 Navigation and Higher Level Tasks
		4.3 Open Problems and Challenges
	5 Simulation Platforms for UAS
		5.1 Simulation Suites
		5.2 Open Problems and Challenges
	6 UAV Hardware for Rapid Prototyping
		6.1 Classification Choice
		6.2 Flight Stack
		6.3 Computational Unit
		6.4 UAS Safety and Regulations
	7 Conclusion
	References
Reactive Obstacle Avoidance Method for a UAV
	1 Introduction
	2 Related Work
	3 Real-Time Trajectory Replanning for Quadrotor Using OctoMap and Uniform B-splines
		3.1 Local Map Building Using OctoMap and Circular Buffer
		3.2 Real-Time Local Trajectory Replanning
		3.3 Experiments and Analysis
	4 Reactive Obstacle Avoidance Method Based on Deep Reinforcement Learning for UAV
		4.1 Background of Reinforcement Learning
		4.2 Obstacle Avoidance Algorithm Based on DDQN
		4.3 Experiments and Analysis
	5 Conclusion
	References
Guaranteed Performances for Learning-Based Control Systems Using Robust Control Theory
	1 Introduction and Motivation to Guarantee Performance Specifications in Control Systems
		1.1 Related Works and Approaches
		1.2 The Contributions of Robust Control Design Framework
	2 Robust Control Framework with Learning-Based Agent in the Reference Signal Generation
		2.1 Examinations on the Reference Signals Through a Supervisor
		2.2 Design Process of the Robust Controller
	3 Robust Control Framework with Learning-Based Agent in the Control Loop
		3.1 Concept of the Design Framework
		3.2 Selection Strategy in the Supervisor
		3.3 Design of the Controller via the Robust LPV Method
	4 Applications of the Proposed Robust Control to Unmanned Control System Problems
		4.1 Cruise Control Design for Autonomous Vehicles
		4.2 Illustration of the Control Design of a Mobile Robot
	5 Conclusions
	References
A Cascaded Deep Neural Network for Position Estimation of Industrial Robots
	1 Introduction
	2 Related Work
	3 Basics of Deep Learning
		3.1 Convolutional Neural Network
		3.2 Backpropagation Algorithm
		3.3 Training Strategy of Network Model
		3.4 Anchor in Object Detection
	4 Architecture Overview of a Hand-Eye System
	5 Position and Orientation Estimation Algorithm
		5.1 Position Determination Based on SSD Network
		5.2 Orientation Determination Based on CNN
	6 Training of Cascaded Deep Neural Network
		6.1 Method of Obtaining Dataset with Different Features
		6.2 Research on Avoiding Over-Fitting Phenomenon
		6.3 L2 Regularization of Network Model
		6.4 Model Training Process
	7 Comparative Experiment
		7.1 Measuring Angle of Tested Object
		7.2 Comparison Based on Single Sample Image
		7.3 Contrast Based on Multiple Sample Images
	8 Conclusion
	References
Managing Deep Learning Uncertainty for Unmanned Systems
	1 Introduction
	2 Background
		2.1 Control Strategies
		2.2 Map Representation Techniques
		2.3 Interaction with External Systems
		2.4 Autonomous Internet of Things
		2.5 Big Data Uncertainty
		2.6 Uncertainty in Machine Learning
	3 Methods
		3.1 Bayesian Deep Learning
		3.2 Fuzzy Deep Reinforcement Learning
		3.3 Applications of Deep Reinforcement Learning in Autonomous IoT
	4 Discussion
		4.1 Uncertainty Assessment for Autonomous Systems Using Simulation
		4.2 Fuzzy DRL Simulation
		4.3 IoT Simulation
	5 Conclusions
	References
Uncertainty-Aware Autonomous Mobile Robot Navigation with Deep Reinforcement Learning
	1 Introduction
	2 Related Works
		2.1 Handling of Uncertainty
	3 Mobile Robot Navigation
		3.1 Perception
		3.2 Localization
		3.3 Path Planning
		3.4 Motion Control
		3.5 Multi-Robot Systems
	4 Uncertainty
		4.1 Uncertainty in Robotics
		4.2 Handling of Uncertainty
	5 Reinforcement Learning
		5.1 Reinforcement Learning Algorithms
	6 Deep Reinforcement Learning
		6.1 Value-Based Algorithms
		6.2 Policy Gradient Algorithms
		6.3 Model-Based Algorithms
	7 Conclusions
	References
Deep Reinforcement Learning for Autonomous Mobile Networks in Micro-grids
	1 Introduction
	2 Reference Scenario
		2.1 Energy-Aware Mobile Network Scenario
		2.2 Functional Split in MEC-H Scenario
	3 Control Problem
		3.1 Problem Statement
		3.2 Traffic Model
		3.3 Energy Harvesting Model
		3.4 Power Consumption Model
		3.5 Problem Solving
	4 Reinforcement Learning
		4.1 A Brief Overview
		4.2 Multi-agent Reinforcement Learning
		4.3 Deep Reinforcement Learning
	5 Centralized Approach
		5.1 Central Control Problem Definition
		5.2 Algorithm Details
		5.3 Numerical Results and Discussion
	6 Distributed Approach
		6.1 Distributed Control Problem Definition
		6.2 Algorithm Details
		6.3 Numerical Results
	7 Conclusions
		7.1 Summary of Results and Open Issues
		7.2 Future Research Directions
	References
Reinforcement Learning for Autonomous Morphing Control and Cooperative Operations of UAV Cluster
	1 Introduction
	2 Related Work
		2.1 The Development Status of Morphing UAV
		2.2 The Advantages and Difficulties of Cluster Combat
	3 Reinforcement Learning Algorithm
		3.1 Single-Agent Reinforcement Learning Algorithm
		3.2 Multi-agent Reinforcement Learning Algorithm
	4 Morphing Control
		4.1 Morphing Control Algorithm
		4.2 Simulation Experiments
		4.3 Discussion
	5 Cooperative Control of the UAV Cluster
		5.1 Cooperative Control Algorithm for Multiple Agents
		5.2 Simulation Experiments
		5.3 Discussion
	6 Conclusion
	References
Bioinspired Robotic Arm Planning  by τ-Jerk Theory and Recurrent Multilayered ANN
	1 Introduction
	2 Related Work
	3 Robotic Arm Kinematics
	4 τ-Jerk Trajectory Generation
	5 Vision-Based Regions Segmentation
	6 Recurrent ANN Model
	7 Multi-path Optimization
	8 ROS Simulation and Packages Issues
	9 Results Discussion
	10 Conclusion
	References
Deep Learning Based Formation Control of Drones
	1 Introduction
		1.1 Background and Motivation
		1.2 Problem Statement
		1.3 Contribution
	2 Related Work
		2.1 Vision-Based Formation Control
		2.2 Deep Learning for Robotics Applications
	3 Formation Control of Drones with CNN
		3.1 Preliminaries and Problem Formulation
		3.2 Distributed Cyclic Formation Control
		3.3 Derivation of Bearing Angles
		3.4 Training and Implementation of CNN
	4 ROS Framework
		4.1 System Modeling in ROS
		4.2 ROS Nodes and Interactions
		4.3 Structural Flow
	5 Simulations
		5.1 System Setup
		5.2 Results
	6 Discussion and Future Work
	7 Conclusion
	References
Image-Based Identification of Animal Breeds Using Deep Learning
	1 Introduction
		1.1 Motivation
		1.2 Difficulties in Animal Breed Identification
		1.3 Objectives
		1.4 Work Done
		1.5 Contribution
		1.6 Chapter Organization
	2 Related Works
	3 Animal Breed Database
		3.1 Image Capturing
		3.2 Animal Breed Dataset
	4 Methodologies
		4.1 Convolutional Neural Network
		4.2 Parameters in CNN
		4.3 Description of Deep-CNN Models
		4.4 Plan of Action
		4.5 Performance Evaluation
	5 Implementation
		5.1 Hardware
		5.2 Training Methodology
	6 Results
		6.1 Optimal Models for Animal Breed Classification
	7 Breed Classification Performance
	8 Discussion
	9 Conclusion and Future Work
	References
Image Registration Algorithm for Deep Learning-Based Stereo Visual Control of Mobile Robots
	1 Introduction
	2 Related Work
	3 Semantic Segmentation—CNN Training and Implementation
	4 Image Formation and Registration Process
		4.1 Image Formation Process
		4.2 Image Registration
		4.3 Cost Function Definition
		4.4 Optimization Algorithms
	5 Mobile Robot Vision Control Algorithm
	6 Experimental Evaluation
	7 Discussion
	8 Conclusion
	References
Search-Based Planning and Reinforcement Learning for Autonomous Systems and Robotics
	1 Introduction
	2 Path Planning
		2.1 Dijkstra's Algorithm and Best-First-Search
		2.2 A* Shortest Path Finding Algorithm
	3 Uncertainty Representation Using Kalman Filter Landmark Based SLAM
		3.1 Principle of Kalman Filter
		3.2 General Landmark Based SLAM
		3.3 From the View of Simulation
	4 Reinforcement Learning Concepts in Search-Based Planning
	5 Reinforcement Learning Concepts in Search-Based Planning
		5.1 General Landmark Based SLAM
		5.2 SLAM-Based on Unstructured Environments
		5.3 General Landmark Based SLAM
	6 Conclusions
	References
Playing Doom with Anticipator-A3C Based Agents Using Deep Reinforcement Learning and the ViZDoom Game-AI Research Platform
	1 Introduction, Significance and Research Motivation
	2 Related Research, Analysis, and Background
	3 A Brief Analysis of the Problem
	4 The Main Research Contents of this Chapter
	5 Deep Reinforcement Learning and Game AI Research Environment
		5.1 Deep Learning and Its Principles
		5.2 Reinforcement Learning and Markov Decision Process
		5.3 Image Prediction
		5.4 Research Tools and Game-AI Research Platform
		5.5 Reinforcement Learning and DQN Principles
		5.6 Principles of Actor-Critic and A3C Models
		5.7 The Basic Idea of the Anticipator-A3C Model
	6 Model Design, Implementation, and Experiments
		6.1 Model Network Structure and Implementation Details
		6.2 Machinery Environment, Experimental Conditions, and Settings
		6.3 Experimental Results and Analysis
	7 Discussion and Summary
	References
Deep Reinforcement Learning for Quadrotor Path Following and Obstacle Avoidance
	1 Introduction
	2 The UAV Control Structure
		2.1 Guidance
		2.2 Navigation
		2.3 Control
	3 Guidance, Navigation and Control Literature Review
		3.1 Guidance
		3.2 Navigation
		3.3 Control
	4 Deep Deterministic Policy Gradient
	5 Agent Environment
		5.1 Quadrotor
		5.2 Autopilot
		5.3 Obstacle Detection
		5.4 Training Environment
	6 DDPG for Path Following
		6.1 Design Process
		6.2 Training Process
		6.3 Results
	7 DDPG Agent for Obstacle Avoidance
		7.1 Design Process
		7.2 Implementation of the PF and Reactive OA Approach
		7.3 Training Process
		7.4 Results
	8 Conclusions
	References
Playing First-Person Perspective Games with Deep Reinforcement Learning Using the State-of-the-Art Game-AI Research Platforms
	1 Introduction and Motivation
	2 Related Work
	3 The Problem of Partial Observability
	4 Deep Visual Reinforcement Learning Approach
		4.1 Model-Based Algorithms
		4.2 Model-Free Algorithms
		4.3 Deep Q-Networks (DQN)
		4.4 Deep Recurrent Q-Networks (DRQN)
	5 Game-AI Research Platform-1
		5.1 Proposed Model Details
		5.2 Fully Observable Markovian Decision Process (FOMDP)
		5.3 Partially Observable Markovian Decision Process (POMDP)
		5.4 Training and Testing
	6 Game-AI Research Platform-2
		6.1 Proposed Model Details
		6.2 Partially Observable Markovian Decision Process (POMDP)
		6.3 Training and Testing
	7 Comparison to Relative Work
	8 Summary and Discussion
	9 Conclusion and Future Work
	References
Language Modeling and Text Generation Using Hybrid Recurrent Neural Network
	1 Introduction
		1.1 Background
		1.2 Problem Statement
		1.3 Motivation
		1.4 Research Objective and Challenges
		1.5 Research Applications and Advantages
		1.6 Research Contribution
		1.7 Chapter Organization
	2 Literature Review
	3 Implementation
		3.1 Hybrid RNN Design and Working
		3.2 Software Design
	4 Experimental Results
		4.1 Result Comparison with Baseline Model
		4.2 Experiments on Datasets
	5 Conclusion
		5.1 Future Work
	References
Detection and Recognition of Vehicle’s Headlights Types for Surveillance Using Deep Neural Networks
	1 Introduction
	2 Literature Review
		2.1 Object Detection and Recognition Based on Machine Learning
		2.2 Object Detection and Recognition Based on Deep Learning
	3 Materials and Methods
	4 Results and Discussion
		4.1 Output Results and Graphs
	5 Conclusion
	References
Recent Advances of Deep Learning in Biology
	1 Introduction
	2 Cell Biology and Deep Learning
		2.1 Cell Imaging and Deep Learning
		2.2 Drug Treated Cell and Deep Learning
		2.3 Diseased Cell and Deep Learning
		2.4 Cell Movement and Deep Learning
	3 Discussions
		3.1 Deep Learning and Current Innovation in Biology
		3.2 Deep Learning Tools and Techniques for Biological Research
		3.3 Attraction of Deep Learning and Biology
		3.4 Success of Deep Learning in Biology
		3.5 Future Outlook in Deep Learning and Biology
	4 Conclusion
	References



