

Download the book Neuromorphic Computing and Beyond: Parallel, Approximation, Near Memory, and Quantum

Book details

Neuromorphic Computing and Beyond: Parallel, Approximation, Near Memory, and Quantum

Edition:
Authors:
Series:
ISBN: 3030372235, 9783030372231
Publisher: Springer
Publication year: 2020
Pages: 241
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 on request)
File size: 11 MB

Price (Toman): 42,000





If you would like the file of the book Neuromorphic Computing and Beyond: Parallel, Approximation, Near Memory, and Quantum converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.

Note that the book Neuromorphic Computing and Beyond: Parallel, Approximation, Near Memory, and Quantum is in its original language and is not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


Description of the book (in the original language)



Table of contents

Preface
Contents
Chapter 1: An Introduction: New Trends in Computing
	1.1 Introduction
		1.1.1 Power Wall
		1.1.2 Frequency Wall
		1.1.3 Memory Wall
	1.2 Classical Computing
		1.2.1 Classical Computing Generations
		1.2.2 Types of Computers
	1.3 Computer Architectures
		1.3.1 Instruction Set Architecture (ISA)
		1.3.2 Different Computer Architectures
			1.3.2.1 Von-Neumann Architecture: General-Purpose Processors
			1.3.2.2 Harvard Architecture
			1.3.2.3 Modified Harvard Architecture
			1.3.2.4 Superscalar Architecture: Parallel Architecture
			1.3.2.5 VLIW Architecture: Parallel Architecture
	1.4 New Trends in Computing
	1.5 Conclusions
	References
Chapter 2: Numerical Computing
	2.1 Introduction
	2.2 Numerical Analysis for Electronics
		2.2.1 Why EDA
		2.2.2 Applications of Numerical Analysis
		2.2.3 Approximation Theory
	2.3 Different Methods for Solving PDEs and ODEs
		2.3.1 Iterative Methods for Solving PDEs and ODEs
			2.3.1.1 Finite Difference Method (Discretization)
			2.3.1.2 Finite Element Method (Discretization)
			2.3.1.3 Legendre Polynomials
		2.3.2 Hybrid Methods for Solving PDEs and ODEs
		2.3.3 ML-Based Methods for Solving ODEs and PDEs
		2.3.4 How to Choose a Method for Solving PDEs and ODEs
	2.4 Different Methods for Solving SNLEs
		2.4.1 Iterative Methods for Solving SNLEs
			2.4.1.1 Newton Method and Newton–Raphson Method
			2.4.1.2 Quasi-Newton Method aka Broyden’s Method
			2.4.1.3 The Secant Method
			2.4.1.4 The Muller Method
		2.4.2 Hybrid Methods for Solving SNLEs
		2.4.3 ML-Based Methods for Solving SNLEs
		2.4.4 How to Choose a Method for Solving Nonlinear Equations
	2.5 Different Methods for Solving SLEs
		2.5.1 Direct Methods for Solving SLEs
			2.5.1.1 Cramer’s Rule Method
			2.5.1.2 Gaussian Elimination Method
			2.5.1.3 Gauss–Jordan (GJ) Elimination Method
			2.5.1.4 LU Decomposition Method
			2.5.1.5 Cholesky Decomposition Method
		2.5.2 Iterative Methods for Solving SLEs
			2.5.2.1 Jacobi Method
			2.5.2.2 Gauss–Seidel Method
			2.5.2.3 Successive Over-Relaxation (SOR) Method
			2.5.2.4 Conjugate Gradient Method
			2.5.2.5 Bi-conjugate Gradient Method
			2.5.2.6 Generalized Minimal Residual Method
		2.5.3 Hybrid Methods for Solving SLEs
		2.5.4 ML-Based Methods for Solving SLEs
		2.5.5 How to Choose a Method for Solving Linear Equations
	2.6 Common Hardware Architecture for Different Numerical Solver Methods
	2.7 Software Implementation for Different Numerical Solver Methods
		2.7.1 Cramer’s Rule: Python-Implementation
		2.7.2 Newton–Raphson: C-Implementation
		2.7.3 Gauss Elimination: Python-Implementation
		2.7.4 Conjugate Gradient: MATLAB-Implementation
		2.7.5 GMRES: MATLAB-Implementation
		2.7.6 Cholesky: MATLAB-Implementation
	2.8 Conclusions
	References
Chapter 3: Parallel Computing: OpenMP, MPI, and CUDA
	3.1 Introduction
		3.1.1 Concepts
		3.1.2 Category of Processors: Flynn’s Taxonomy/Classification (1966)
			3.1.2.1 Von-Neumann Architecture (SISD)
			3.1.2.2 SIMD
			3.1.2.3 MISD
			3.1.2.4 MIMD
		3.1.3 Category of Processors: Soft/Hard/Firm
		3.1.4 Memory: Shared-Memory vs. Distributed Memory
		3.1.5 Interconnects: Between Processors and Memory
		3.1.6 Parallel Computing: Pros and Cons
	3.2 Parallel Computing: Programming
		3.2.1 Typical Steps for Constructing a Parallel Algorithm
		3.2.2 Levels of Parallelism
			3.2.2.1 Processor: Architecture Point of View
			3.2.2.2 Programmer Point of View
	3.3 Open Specifications for Multiprocessing (OpenMP) for Shared Memory
	3.4 Message-Passing Interface (MPI) for Distributed Memory
	3.5 GPU
		3.5.1 GPU Introduction
		3.5.2 GPGPU
		3.5.3 GPU Programming
			3.5.3.1 CUDA
		3.5.4 GPU Hardware
			3.5.4.1 The Parallella Board
	3.6 Parallel Computing: Overheads
	3.7 Parallel Computing: Performance
	3.8 New Trends in Parallel Computing
		3.8.1 3D Processors
		3.8.2 Network on Chip
		3.8.3 FCUDA
	3.9 Conclusions
	References
Chapter 4: Deep Learning and Cognitive Computing: Pillars and Ladders
	4.1 Introduction
		4.1.1 Artificial Intelligence
		4.1.2 Machine Learning
			4.1.2.1 Supervised Machine Learning
			4.1.2.2 Unsupervised Machine Learning
			4.1.2.3 Reinforcement Machine Learning
		4.1.3 Neural Network and Deep Learning
	4.2 Deep Learning: Basics
		4.2.1 DL: What? Deep vs. Shallow
		4.2.2 DL: Why? Applications
		4.2.3 DL: How?
		4.2.4 DL: Frameworks and Tools
			4.2.4.1 TensorFlow
			4.2.4.2 Keras
			4.2.4.3 PyTorch
			4.2.4.4 OpenCV
			4.2.4.5 Others
		4.2.5 DL: Hardware
	4.3 Deep Learning: Different Models
		4.3.1 Feedforward Neural Network
			4.3.1.1 Single-Layer Perceptron (SLP)
			4.3.1.2 Multilayer Perceptron (MLP)
			4.3.1.3 Radial Basis Function Neural Network
		4.3.2 Recurrent Neural Networks (RNNs)
			4.3.2.1 LSTMs
			4.3.2.2 GRUs
		4.3.3 Convolutional Neural Networks (CNNs): Feedforward
		4.3.4 Generative Adversarial Network (GAN)
		4.3.5 Auto Encoders Neural Network
		4.3.6 Spiking Neural Network
		4.3.7 Other Types of Neural Network
			4.3.7.1 Hopfield Networks
			4.3.7.2 Boltzmann Machine
			4.3.7.3 Restricted Boltzmann Machine
			4.3.7.4 Deep Belief Network
			4.3.7.5 Associative NN
	4.4 Challenges for Deep Learning
		4.4.1 Overfitting
		4.4.2 Underfitting
	4.5 Advances in Neuromorphic Computing
		4.5.1 Transfer Learning
		4.5.2 Quantum Machine Learning
	4.6 Applications of Deep Learning
		4.6.1 Object Detection
		4.6.2 Visual Tracking
		4.6.3 Natural Language Processing
		4.6.4 Digits Recognition
		4.6.5 Emotions Recognition
		4.6.6 Gesture Recognition
		4.6.7 Machine Learning for Communications
	4.7 Cognitive Computing: An Introduction
	4.8 Conclusions
	References
Chapter 5: Approximate Computing: Towards Ultra-Low-Power Systems Design
	5.1 Introduction
	5.2 Hardware-Level Approximation Techniques
		5.2.1 Transistor-Level Approximations
		5.2.2 Circuit-Level Approximations
		5.2.3 Gate-Level Approximations
			5.2.3.1 Approximate Multiplier Using Approximate Computing
			5.2.3.2 Approximate Multiplier Using Stochastic/Probabilistic Computing
		5.2.4 RTL-Level Approximations
			5.2.4.1 Iterative Algorithms
		5.2.5 Algorithm-Level Approximations
			5.2.5.1 Iterative Algorithms
			5.2.5.2 High-Level Synthesis (HLS) Approximations
		5.2.6 Device-Level Approximations: Memristor-Based Approximate Matrix Multiplier
	5.3 Software-Level Approximation Techniques
		5.3.1 Loop Perforation
		5.3.2 Precision Scaling
		5.3.3 Synchronization Elision
	5.4 Data-Level Approximation Techniques
		5.4.1 STT-MRAM
		5.4.2 Processing in Memory (PIM)
		5.4.3 Lossy Compression
	5.5 Evaluation: Case Studies
		5.5.1 Image Processing as a Case Study
		5.5.2 CORDIC Algorithm as a Case Study
		5.5.3 HEVC Algorithm as a Case Study
		5.5.4 Software-Based Fault Tolerance Approximation
	5.6 Conclusions
	References
Chapter 6: Near-Memory/In-Memory Computing: Pillars and Ladders
	6.1 Introduction
	6.2 Classical Computing: Processor-Centric Approach
	6.3 Near-Memory Computing: Data-Centric Approach
		6.3.1 HMC
		6.3.2 WideIO
		6.3.3 HBM
	6.4 In-Memory Computing: Data-Centric Approach
		6.4.1 Memristor-Based PIM
		6.4.2 PCM-Based PIM
		6.4.3 ReRAM-Based PIM
		6.4.4 STT-RAM-Based PIM
		6.4.5 FeRAM-Based PIM
		6.4.6 NRAM-Based PIM
		6.4.7 Comparison Between Different New Memories
	6.5 Techniques to Enhance DRAM Memory Controllers
		6.5.1 Techniques to Overcome the DRAM-Wall
			6.5.1.1 Low-Power Techniques in DRAM Interfaces
			6.5.1.2 High-Bandwidth and Low Latency Techniques in DRAM Interfaces
			6.5.1.3 High-Capacity and Small Footprint Techniques in DRAM Interfaces
	6.6 Conclusions
	References
Chapter 7: Quantum Computing and DNA Computing: Beyond Conventional Approaches
	7.1 Introduction: Beyond CMOS
	7.2 Quantum Computing
		7.2.1 Quantum Computing: History
		7.2.2 Quantum Computing: What?
		7.2.3 Quantum Computing: Why?
		7.2.4 Quantum Computing: How?
	7.3 Quantum Principles
		7.3.1 Bits Versus Qbits
		7.3.2 Quantum Uncertainty
		7.3.3 Quantum Superposition
		7.3.4 Quantum Entanglement (Nonlocality)
	7.4 Quantum Challenges
	7.5 DNA Computing: From Bits to Cells
		7.5.1 What Is DNA?
		7.5.2 Why DNA Computing?
		7.5.3 How Does DNA Work?
		7.5.4 Disadvantages of DNA Computing
		7.5.5 Traveling Salesman Problem Using DNA-Computing
	7.6 Conclusions
	References
Chapter 8: Cloud, Fog, and Edge Computing
	8.1 Cloud Computing
	8.2 Fog/Edge Computing
	8.3 Conclusions
	References
Chapter 9: Reconfigurable and Heterogeneous Computing
	9.1 Embedded Computing
		9.1.1 Categories of Embedded Systems [2–5]
		9.1.2 Embedded System Classifications
		9.1.3 Components of Embedded Systems
		9.1.4 Microprocessor vs. Microcontroller
		9.1.5 Embedded Systems Programming
		9.1.6 DSP
	9.2 Real-Time Computing
	9.3 Reconfigurable Computing
		9.3.1 FPGA
		9.3.2 High-Level Synthesis (C/C++ to RTL)
		9.3.3 High-Level Synthesis (Python to HDL)
		9.3.4 MATLAB to HDL
		9.3.5 Java to VHDL
	9.4 Heterogeneous Computing
		9.4.1 Heterogeneity vs. Homogeneity
		9.4.2 Pollack’s Rule
		9.4.3 Static vs. Dynamic Partitioning
		9.4.4 Heterogeneous Computing Programming
			9.4.4.1 Heterogeneous Computing Programming: OpenCL
	9.5 Conclusions
	References
Chapter 10: Conclusions
Index



