
Download the book Deep Learning for Robot Perception and Cognition

Book details

Deep Learning for Robot Perception and Cognition

Edition: [1 ed.]
Authors:
Series:
ISBN: 0323857876, 9780323857871
Publisher: Academic Press
Year of publication: 2022
Number of pages: 634 [638]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 20 MB

Book price (toman): 36,000





If you would like the book Deep Learning for Robot Perception and Cognition converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.

Please note that the book Deep Learning for Robot Perception and Cognition is the original-language (English) edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.


About the book Deep Learning for Robot Perception and Cognition

Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition, together with end-to-end methodologies. The book provides the conceptual and mathematical background needed to approach a large number of robot perception and cognition tasks from an end-to-end learning point of view. It is suitable for students, university and industry researchers, and practitioners in robotic vision, intelligent control, mechatronics, deep learning, and robotic perception and cognition tasks.



Table of contents

Front Cover
Deep Learning for Robot Perception and Cognition
Copyright
Contents
List of contributors
Preface
Acknowledgements
Editors' biographies
1 Introduction
	1.1 Artificial intelligence and machine learning
	1.2 Real world problems representation
	1.3 Machine learning tasks
	1.4 Shallow and deep learning
	1.5 Robotics and deep learning
	References
2 Neural networks and backpropagation
	2.1 Introduction
	2.2 Activation functions
	2.3 Cost functions
	2.4 Backpropagation
	2.5 Optimizers and training
	2.6 Overfitting
		2.6.1 Early stopping
		2.6.2 Regularization
		2.6.3 Dropout
		2.6.4 Batch normalization
	2.7 Concluding remarks
	References
3 Convolutional neural networks
	3.1 Introduction
	3.2 Structure of convolutional neural networks
		3.2.1 Notation
		3.2.2 Convolutional layers
		3.2.3 Activation functions
		3.2.4 Pooling layers
		3.2.5 Fully connected and output layers
		3.2.6 Overall CNN structure
			3.2.6.1 Famous CNN architectures
	3.3 Training convolutional neural networks
		3.3.1 Backpropagation formulas on CNNs
			3.3.1.1 Backpropagation on convolutional layers
			3.3.1.2 Backpropagation on pooling layers
		3.3.2 Loss functions
		3.3.3 Batch training and optimizers
			3.3.3.1 Batch training
			3.3.3.2 Optimizers
		3.3.4 Typical challenges in CNN training
			3.3.4.1 Overfitting
			3.3.4.2 Long training time
			3.3.4.3 Vanishing and exploding gradients
			3.3.4.4 Internal covariate shift
		3.3.5 Solutions to CNN training challenges
			3.3.5.1 Learning rate scheduling
			3.3.5.2 Data augmentation
			3.3.5.3 Transfer learning
			3.3.5.4 Weight regularization
			3.3.5.5 Dropout
			3.3.5.6 Normalization
			3.3.5.7 Skip connections
	3.4 Conclusions
	References
4 Graph convolutional networks
	4.1 Introduction
		4.1.1 Graph definition
	4.2 Spectral graph convolutional network
	4.3 Spatial graph convolutional network
	4.4 Graph attention network (GAT)
	4.5 Graph convolutional networks for large graphs
		4.5.1 Layer sampling methods
		4.5.2 Graph sampling methods
	4.6 Datasets and libraries
	4.7 Conclusion
	References
5 Recurrent neural networks
	5.1 Introduction
	5.2 Vanilla RNN
	5.3 Long-short term memory
	5.4 Gated recurrent unit
	5.5 Other RNN variants
	5.6 Applications
	5.7 Concluding remarks
	References
6 Deep reinforcement learning
	6.1 Introduction
	6.2 Value-based methods
		6.2.1 Q-learning
		6.2.2 Deep Q-learning
	6.3 Policy-based methods
		6.3.1 Policy gradient
		6.3.2 Actor-critic methods
		6.3.3 Deep policy gradient-based methods
			6.3.3.1 Actor-critic
			6.3.3.2 Trust region policy optimization
	6.4 Concluding remarks
	References
7 Lightweight deep learning
	7.1 Introduction
	7.2 Lightweight convolutional neural network architectures
		7.2.1 Lightweight CNNs for classification
		7.2.2 Lightweight object detection
			7.2.2.1 Real-time generic object detection on embedded devices
			7.2.2.2 Real-time face detection
	7.3 Regularization of lightweight convolutional neural networks
		7.3.1 Graph embedded-based regularizer
			7.3.1.1 Discriminant analysis regularization
			7.3.1.2 Minimum enclosing ball regularization
			7.3.1.3 LLE-inspired regularization
			7.3.1.4 Clustering-based DA regularization
		7.3.2 Class-specific discriminant regularizer
		7.3.3 Mutual information regularizer
	7.4 Bag-of-features for improved representation learning
		7.4.1 Convolutional feature histograms for real-time tracking
	7.5 Early exits for adaptive inference
		7.5.1 Early exits using bag-of-features
		7.5.2 Adaptive inference with early exits
	7.6 Concluding remarks
	References
8 Knowledge distillation
	8.1 Introduction
	8.2 Neural network distillation
	8.3 Probabilistic knowledge transfer
	8.4 Multilayer knowledge distillation
		8.4.1 Hint-based distillation
		8.4.2 Flow of solution procedure distillation
		8.4.3 Other multilayer distillation methods
	8.5 Teacher training strategies
	8.6 Concluding remarks
	References
9 Progressive and compressive learning
	9.1 Introduction
	9.2 Progressive neural network learning
		9.2.1 Broad learning system
		9.2.2 Progressive learning network
		9.2.3 Progressive operational perceptron and its variants
		9.2.4 Heterogeneous multilayer generalized operational perceptron
		9.2.5 Subset sampling and online hyperparameter search for training enhancement
	9.3 Compressive learning
		9.3.1 Vector-based compressive learning
		9.3.2 Tensor-based compressive learning
	9.4 Conclusions
	References
10 Representation learning and retrieval
	10.1 Introduction
	10.2 Discriminative and self-supervised autoencoders
	10.3 Deep representation learning for content based image retrieval
	10.4 Model retraining methods for image retrieval
		10.4.1 Fully unsupervised retraining
		10.4.2 Retraining with relevance information
		10.4.3 Relevance feedback based retraining
	10.5 Variance preserving supervised representation learning
	10.6 Concluding remarks
	References
11 Object detection and tracking
	11.1 Object detection
		11.1.1 Object detection essentials
			11.1.1.1 Nonmaximum suppression
			11.1.1.2 Performance evaluation
			11.1.1.3 Traditional object detection methods
		11.1.2 Two-stage object detectors
		11.1.3 One-stage detectors
		11.1.4 Anchor-free detectors
	11.2 Object tracking
		11.2.1 Single object tracking
			11.2.1.1 Tracking with correlation filters
			11.2.1.2 Deep learning based tracking
				Tracking with offline pretraining
				Tracking with online training
			11.2.1.3 Tracking by similarity learning with Siamese networks
		11.2.2 Multiple object tracking
			11.2.2.1 Tracking with deep visual representations
			11.2.2.2 Tracking as a graph optimization problem
			11.2.2.3 Detection-driven tracking
	11.3 Conclusion
	References
12 Semantic scene segmentation for robotics
	12.1 Introduction
	12.2 Algorithms and architectures for semantic segmentation
		12.2.1 Traditional methods
		12.2.2 Deep learning methods
		12.2.3 Encoder variants
		12.2.4 Upsampling methods
		12.2.5 Techniques for exploiting context
			12.2.5.1 Encoder-decoder architecture
			12.2.5.2 Image pyramid
			12.2.5.3 Conditional random fields
			12.2.5.4 Spatial pyramid pooling
			12.2.5.5 Dilated convolution
		12.2.6 Real-time architectures
		12.2.7 Object detection-based methods
	12.3 Loss functions for semantic segmentation
		12.3.1 Pixelwise cross entropy loss
		12.3.2 Dice loss
	12.4 Semantic segmentation using multiple inputs
		12.4.1 Video semantic segmentation
		12.4.2 Point cloud semantic segmentation
		12.4.3 Multimodal semantic segmentation
	12.5 Semantic segmentation data sets and benchmarks
		12.5.1 Outdoor data sets
			12.5.1.1 Cityscapes
			12.5.1.2 KITTI
			12.5.1.3 Mapillary vistas
			12.5.1.4 BDD100K: a large-scale diverse driving video database
			12.5.1.5 Indian driving data set
		12.5.2 Indoor data sets
			12.5.2.1 NYU-Depth V2
			12.5.2.2 SUN 3D
			12.5.2.3 SUN RGB-D
			12.5.2.4 ScanNet
		12.5.3 General purpose data sets
			12.5.3.1 PASCAL visual object classes
			12.5.3.2 Microsoft common objects in context
			12.5.3.3 ADE20K
	12.6 Semantic segmentation metrics
		12.6.1 Accuracy
			12.6.1.1 ROC-AUC
			12.6.1.2 Pixel accuracy
			12.6.1.3 Intersection over union
			12.6.1.4 Precision-recall curve-based metrics
		12.6.2 Computational complexity
			12.6.2.1 Runtime
			12.6.2.2 Memory usage
			12.6.2.3 Floating point operations per second
	12.7 Conclusion
	References
13 3D object detection and tracking
	13.1 Introduction
	13.2 3D object detection
		13.2.1 Input data for 3D object detection
		13.2.2 3D object detection data sets and metrics
		13.2.3 Lidar-based 3D object detection methods
			13.2.3.1 VoxelNet
			13.2.3.2 PointPillars
			13.2.3.3 TANet
			13.2.3.4 HotSpotNet
			13.2.3.5 Point-based methods
			13.2.3.6 Projection-based methods
		13.2.4 Image+Lidar-based 3D object detection
		13.2.5 Monocular 3D object detection
			13.2.5.1 Prior information fusion based methods
			13.2.5.2 Depth-estimation-based methods
			13.2.5.3 Other monocular 3D object detection methods
		13.2.6 Binocular 3D object detection
	13.3 3D object tracking
		13.3.1 3D object tracking data sets and metrics
		13.3.2 3D object tracking methods
			13.3.2.1 Detection-based tracking
			13.3.2.2 Simultaneous detection and tracking
	13.4 Conclusion
	References
14 Human activity recognition
	14.1 Introduction
		14.1.1 Tasks in human activity recognition
		14.1.2 Input modalities for human activity recognition
	14.2 Trimmed action recognition
		14.2.1 2D convolutional and recurrent neural network-based architectures
		14.2.2 3D convolutional neural network architectures
		14.2.3 Inflated 3D CNN architectures
		14.2.4 Factorized (2+1)D CNN architectures
		14.2.5 Skeleton-based action recognition
			14.2.5.1 Spatial-temporal graph convolution network
		14.2.6 Multistream architectures
			14.2.6.1 Multimodal
			14.2.6.2 Multiresolution
			14.2.6.3 Multitemporal
	14.3 Temporal action localization
	14.4 Spatiotemporal action localization
	14.5 Data sets for human activity recognition
	14.6 Conclusion
	References
15 Deep learning for vision-based navigation in autonomous drone racing
	15.1 Introduction
	15.2 System decomposition approach in drone racing navigation
		15.2.1 Related work
		15.2.2 Drone hardware
		15.2.3 State estimation
		15.2.4 Control for agile quadrotor flight
			15.2.4.1 Dynamic model of a racing quadrotor
			15.2.4.2 Controller design
		15.2.5 Motion planning for agile flight
		15.2.6 Deep learning for perception
			15.2.6.1 Gate center estimation
			15.2.6.2 Global gate mapping
		15.2.7 Experimental results
	15.3 Transfer learning and end-to-end planning
		15.3.1 Related work
		15.3.2 Sim-to-real transfer with domain randomization
		15.3.3 Perceive and control with variational autoencoders
		15.3.4 Deep reinforcement learning
			15.3.4.1 RL framework
			15.3.4.2 Drone racing environment for DRL
			15.3.4.3 Curriculum learning
			15.3.4.4 Policy network architecture
			15.3.4.5 Experimental results
	15.4 Useful tools for data collection and training
		15.4.1 Simulation environments for autonomous drone racing
			15.4.1.1 AirSim
			15.4.1.2 FlightGoggles
			15.4.1.3 Flightmare
		15.4.2 Data sets
	15.5 Conclusions and future work
		15.5.1 Conclusions
		15.5.2 Future work
	References
16 Robotic grasping in agile production
	16.1 Introduction
		16.1.1 Robot tasks in agile production
		16.1.2 Deep learning in agile production
		16.1.3 Requirements in agile production
		16.1.4 Limitations in agile production
			Grasping hardware
			Grasping software
	16.2 Grasping and object manipulation
		16.2.1 Problem statement
		16.2.2 Analytical versus data-driven approaches
		16.2.3 Grasp detection with RGB-D
			Known objects
			Similar objects
			Novel objects
			PVN3D
		16.2.4 Grasp detection with point clouds
			6-DOF GraspNet
	16.3 Grasp evaluation
		16.3.1 Metrics
		16.3.2 Pose estimation with PVN3D
			Data collection and training
			Results
		16.3.3 Grasp detection with 6-DOF GraspNet
			Data collection and training
			Results
		16.3.4 Pick-and-place results
	16.4 Manipulation benchmarking
	16.5 Data sets
	16.6 Conclusion
	References
17 Deep learning in multiagent systems
	17.1 Introduction
	17.2 Setting the scene
	17.3 Challenges
	17.4 Deep learning in multiagent systems
		17.4.1 Individual learning
			17.4.1.1 Direct learning
			17.4.1.2 Learning about self
			17.4.1.3 Transfer learning
		17.4.2 Collaborative and cooperative learning
			17.4.2.1 Mentoring
			17.4.2.2 Social learning
			17.4.2.3 Federated learning
			17.4.2.4 Distributed learning and edge intelligence
	17.5 Conclusion
	References
18 Simulation environments
	18.1 Introduction
		18.1.1 Robotic simulators architecture
		18.1.2 Simulation types
		18.1.3 Qualitative characteristics
	18.2 Robotic simulators
		18.2.1 Gazebo
			18.2.1.1 Architecture
			18.2.1.2 Plugins
			18.2.1.3 Robotic models
			18.2.1.4 ROS/ROS 2 support
			18.2.1.5 Cloud simulation
			18.2.1.6 Research works
		18.2.2 AirSim
			18.2.2.1 Architecture
			18.2.2.2 Environments and models
			18.2.2.3 Research works
		18.2.3 Webots
			18.2.3.1 Architecture
			18.2.3.2 Environments and models
			18.2.3.3 Research works
		18.2.4 CARLA
			18.2.4.1 Architecture
			18.2.4.2 Environments and models
			18.2.4.3 Research works
		18.2.5 CoppeliaSim
			18.2.5.1 Overview and features
			18.2.5.2 Research works
		18.2.6 Other simulators
			18.2.6.1 MORSE
			18.2.6.2 ARGoS
			18.2.6.3 USARSim
			18.2.6.4 Nvidia's Isaac Sim
			18.2.6.5 RoboDK
	18.3 Conclusions
	References
19 Biosignal time-series analysis
	19.1 Introduction
	19.2 ECG classification and advance warning for arrhythmia
		19.2.1 Patient-specific ECG classification by 1D convolutional neural networks
			19.2.1.1 ECG data
			19.2.1.2 Methodology
			19.2.1.3 Results
		19.2.2 Personalized advance warning system for cardiac arrhythmias
			19.2.2.1 ABS filter
			19.2.2.2 ABS filter selection
			19.2.2.3 Evaluation of ABS filters
	19.3 Early prediction of mortality risk for COVID-19 patients
		19.3.1 Introduction and motivation
		19.3.2 Methodology
			19.3.2.1 Study participants
			19.3.2.2 Statistical analysis
			19.3.2.3 Imputation and feature selection
			19.3.2.4 Development and validation of classification model
			19.3.2.5 Development and validation of nomogram based scoring system
		19.3.3 Results and discussion
			19.3.3.1 Performance evaluation of the classification model
			19.3.3.2 Performance evaluation of the developed nomogram model
			19.3.3.3 Longitudinal validation of prognostic model
	19.4 Conclusion
	References
20 Medical image analysis
	20.1 Introduction
	20.2 Early detection of myocardial infarction using echocardiography
		20.2.1 Methodology
			20.2.1.1 Pseudo-labeling technique for ground-truth generation
			20.2.1.2 Segmentation of the LV wall
			20.2.1.3 Feature engineering
			20.2.1.4 Myocardial infarction detection
		20.2.2 Experimental evaluation
			20.2.2.1 HMC-QU data set
			20.2.2.2 LV wall segmentation experiments
			20.2.2.3 Myocardial infarction detection experiments
			20.2.2.4 Computational complexity analysis
	20.3 COVID-19 recognition from X-ray images via convolutional sparse support estimator based classifier
		20.3.1 Preliminaries
			20.3.1.1 Sparse signal representation
			20.3.1.2 Representation based classification
		20.3.2 CSEN-based COVID-19 recognition system
			20.3.2.1 Data set
			20.3.2.2 Feature extraction via CheXNet
			20.3.2.3 CSEN-based classifier
			20.3.2.4 Evaluation of the classifiers
		20.3.3 Experimental evaluations
			20.3.3.1 Experimental setup
			20.3.3.2 Experimental results
	20.4 Conclusion
	References
21 Deep learning for robotics examples using OpenDR
	21.1 Introduction
	21.2 Structure of OpenDR toolkit and application examples
	21.3 Cointegration of simulation and training
		21.3.1 One-node architecture
		21.3.2 Emitter-receiver architecture
		21.3.3 Design decisions
	21.4 Concluding remarks
	References
Index
Back Cover



