

Download the book Hardware Accelerator Systems for Artificial Intelligence and Machine Learning (Volume 122) (Advances in Computers, Volume 122)

Book Details

Hardware Accelerator Systems for Artificial Intelligence and Machine Learning (Volume 122) (Advances in Computers, Volume 122)

Edition: [1 ed.]
Authors:
Series:
ISBN: 0128231238, 9780128231234
Publisher: Academic Press
Publication year: 2021
Pages: 416 [417]
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 24 MB

Book price (Toman): 32,000



Average rating:
Number of ratings: 5


If you need the book Hardware Accelerator Systems for Artificial Intelligence and Machine Learning (Volume 122) (Advances in Computers, Volume 122) in PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.

Please note that Hardware Accelerator Systems for Artificial Intelligence and Machine Learning (Volume 122) (Advances in Computers, Volume 122) is the original English-language edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.


About the book Hardware Accelerator Systems for Artificial Intelligence and Machine Learning (Volume 122) (Advances in Computers, Volume 122)

Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, Volume 122 delves into artificial intelligence and the growth it has seen with the advent of Deep Neural Networks (DNNs) and Machine Learning. Updates in this release include chapters on Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, Introduction to Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, Deep Learning with GPUs, Edge Computing Optimization of Deep Learning Models for Specialized Tensor Processing Architectures, Architecture of NPU for DNN, Hardware Architecture for Convolutional Neural Network for Image Processing, FPGA based Neural Network Accelerators, and much more.



Table of Contents

Cover
Contents
	Copyright
	Contributors
	Preface
	Introduction to hardware accelerator systems for artificial intelligence and machine learning
		Introduction to artificial intelligence and machine learning in hardware acceleration
		Deep learning and neural network acceleration
			The neural processing unit
			RENO: A reconfigurable NoC accelerator
		HW accelerators for artificial neural networks and machine learning
			CNN accelerator architecture
			SVM accelerator architecture
			DNN based hardware acceleration
				Eyeriss
		SW framework for deep neural networks
		Comparison of FPGA, CPU and GPU
			Performance metrics
		Conclusion and future scope
		References
	Hardware accelerator systems for embedded systems
		Introduction
		Neural network computing in embedded systems
			Driving neural network computing into embedded systems
			Considerations for choosing embedded processing solutions
		Hardware acceleration in embedded systems
			Hardware acceleration options
			Commercial options for neural network acceleration
		Software frameworks for neural networks
		Acknowledgments
		References
	Hardware accelerator systems for artificial intelligence and machine learning
		Introduction
		Background
			Overview of convolutional neural networks
			Quantization of weights and activations
				Performance of neural networks using quantized weights and activations based on arithmetic binary shift operations
			Computational elements of hardware accelerators in deep neural networks
		Hardware inference accelerators for deep neural networks
			Architectures of hardware accelerators
			Eyeriss: hardware accelerator using a spatial architecture
			UNPU and BIT FUSION: hardware accelerators using shift-based multiplier
			Digital neuron: a multiplier-less massive parallel processor
			Power saving strategies for hardware accelerators
		Hardware inference accelerators using digital neurons
			System architecture
			Implementation and experimental results
		Summary
		Acknowledgments
			Key terminology and definitions
		References
	Generic quantum hardware accelerators for conventional systems
		Introduction
		Principles of computation
		Need and foundation for quantum hardware accelerator design
			Algorithms
			Programming paradigm and languages
			Compiler and runtime requirements
			Quantum instruction set architecture (Q-ISA)
			Quantum microarchitecture
		A generic quantum hardware accelerator (GQHA)
			Deciphering HQAP as a GQHA
			Deconstructing GQHA
		Industrially available quantum hardware accelerators
			IBM Quantum project
			Google Bristlecone
			D-Wave
			Some other development areas and derived inference
		Conclusion and future work
		References
	FPGA based neural network accelerators
		Introduction
		Background
			Deep neural network models and computations
			Field programmable gate array
			FPGA based acceleration systems
			Challenges of FPGA based neural network acceleration
		Algorithmic optimization
			Pruning
			Data quantization
			Data encoding and sharing
			Fast convolution algorithms
		Accelerator architecture
			Processing element
				DSP architecture
				PE architectures based on dataflows
			Vector architecture
			Array architecture
			Multi-FPGA architecture
			Narrow bit-precision architecture
		Design methodology
			Hardware/software co-design
			High level synthesis
			OpenCL
			Design automation framework
		Applications
			Image recognition
			Speech recognition
			Autonomous vehicle
			Cloud computing
		Evaluation
			Matrix-vector multiplication
			Deep neural networks
			Vision kernels
		Future research directions
		References
	Deep learning with GPUs
		Deep learning applications using GPU as accelerator
		Overview of graphics processing unit
			History and overview of GPU architecture
			Structure of GPGPU applications
			GPU microarchitecture
			Evolution of GPUs
		Deep learning acceleration in GPU hardware perspective
			NVIDIA tensor core: Deep learning application-specific core
			High-bandwidth memory
			Multi-GPU system
			Multiple-instance GPU
		GPU software for accelerating deep learning
			Deep learning framework for GPU
				TensorFlow
				PyTorch
				Caffe
			Software support specialized for deep learning
				cuDNN: NVIDIA CUDA deep neural network library
				TensorRT: NVIDIA SDK for accelerating deep learning inference
				cuBLAS: CUDA basic linear algebra subroutine library
				cuSPARSE: CUDA sparse matrix library
				DALI: NVIDIA data loading library
			Software to optimize data communications on multi-node GPU
		Advanced techniques for optimizing deep learning models on GPUs
			Accelerating pruned deep learning models in GPUs
				Increasing compute efficiency of GPU SIMD lanes via synapse vector elimination
				Algorithm and hardware co-design for accelerating pruned deep learning models on tensor cores
			Improving data reuse on CNN models in GPUs
			Overcoming GPU memory capacity limits with CPU memory space
		Cons and pros of GPU accelerators
		Acknowledgment
			Key terminology and definitions
		References
		Further reading/References for advance
	Architecture of neural processing unit for deep neural networks
		Introduction
		Background
		Considerations in hardware design
		NPU architectures
			NPU architectures for primitive neural networks
			NPU architectures for DNN
		Discussion
		Summary
		Acknowledgments
		References
			References for advance
		Further reading
	Energy-efficient deep learning inference on edge devices
		Introduction
		Theoretical background
			Neurons and layers
			Training and inference
			Feed-forward models
				Fully connected neural networks
				Convolutional neural networks
			Sequential models
		Deep learning frameworks and libraries
		Advantages of deep learning on the edge
		Applications of deep learning at the edge
			Computer vision
			Language and speech processing
			Time series processing
		Hardware support for deep learning inference at the edge
			Custom accelerators
			Embedded GPUs
			Embedded CPUs and MCUs
		Static optimizations for deep learning inference at the edge
			Quantization
				Quantization algorithms
				Quantization and training
				Binarization
				Benefits of quantization
			Pruning
				Pruning algorithms
				Benefits of pruning
			Knowledge distillation
			Collaborative inference
			Limitations of static optimizations
		Dynamic (input-dependent) optimizations for deep learning inference at the edge
			Ensemble learning
			Conditional inference and fast exiting
			Hierarchical inference
			Input-dependent collaborative inference
			Dynamic tuning of inference algorithm parameters
		Open challenges and future directions
		References
	"Last mile" optimization of edge computing ecosystem with deep learning models and specialized tensor processing architectures
		Introduction
		State of the art
			Background of edge computing
			Edge computing hardware implementations
			Practical edge computing use cases
				Computer vision
				Network management
				Human-computer interaction
				Internet of things
				Human activity recognition
			Challenges in edge computing
		Methodology
			Hardware used
				Google TPUs
				Intel VPUs
			Use case and data used
			Optimization methods
			Models and metrics used
		Results
			Frame sequences
				CPU
				GPU
				Horned Sungem (Intel Movidius)
				Google Coral
			Optimization effect
				Horned Sungem
				Google Coral
				Time vs frame size dependence for various USB interfaces and OSs
		Discussion
		Conclusions
		Acknowledgments
		References
		Further reading
	Hardware accelerator for training with integer backpropagation and probabilistic weight update
		Introduction
		Integer back propagation with probabilistic weight update
			Conversion from FP32 to integer for integer back propagation
			Random selection of indices of weights to be updated
			Probabilistic weight update of the randomly selected indices
		Consideration of hardware implementation of the probabilistic weight update
		Simulation results of the proposed scheme
		Discussions
		Summary
		Acknowledgments
			Key terminology and definitions
		References
	Music recommender system using restricted Boltzmann machine with implicit feedback
		Introduction
			Motivation
			Objective
			Previous work
		Types of recommender systems
			Real world examples of recommender system
			Need for recommender systems
			Distinction of music recommender system from other recommender systems
			Explicit vs implicit feedback
			Benefits of using deep learning approaches
			Unsupervised learning
				Clustering
				Neural networks
				Real life applications of neural networks
				Stochastic neural network
				Energy-based models
				Boltzmann machine
				Restricted Boltzmann machine
				Markov chain model
		Problem statement
		Explanation of RBM
		Proposed architecture
			Contrastive divergence algorithm
			Prediction of recommender system
		Minibatch size used for training and selection of weights and biases
		Types of activation function that can be used in this model
		Evaluation metrics that can be used to measure for music recommendation
		Experimental setup
		Result
		Conclusion
		Future works
		Reference
Back Cover



