
Download the book Memristive Devices for Brain-Inspired Computing: From Materials, Devices, and Circuits to Applications - Computational Memory, Deep Learning, and ... Series in Electronic and Optical Materials)



Book details

Edition: 1
Authors: , , ,
Series: Woodhead Publishing Series in Electronic and Optical Materials
ISBN: 0081027826, 9780081027820
Publisher: Woodhead Publishing
Publication year: 2020
Number of pages: 553
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 on request)
File size: 16 MB

Book price (Toman): 54,000



Average rating:
Number of raters: 7


If you need the book Memristive Devices for Brain-Inspired Computing: From Materials, Devices, and Circuits to Applications - Computational Memory, Deep Learning, and ... Series in Electronic and Optical Materials) converted to PDF, EPUB, AZW3, MOBI, or DJVU, contact support and they will convert the file for you.

Please note that Memristive Devices for Brain-Inspired Computing: From Materials, Devices, and Circuits to Applications - Computational Memory, Deep Learning, and ... Series in Electronic and Optical Materials) is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.




Book description (original language)

Memristive Devices for Brain-Inspired Computing: From Materials, Devices, and Circuits to Applications―Computational Memory, Deep Learning, and Spiking Neural Networks reviews the latest in material and devices engineering for optimizing memristive devices beyond storage applications and toward brain-inspired computing. The book provides readers with an understanding of four key concepts, including materials and device aspects with a view of current materials systems and their remaining barriers, algorithmic aspects comprising basic concepts of neuroscience as well as various computing concepts, the circuits and architectures implementing those algorithms based on memristive technologies, and target applications, including brain-inspired computing, computational memory, and deep learning.

This comprehensive book is suitable for an interdisciplinary audience, including materials scientists, physicists, electrical engineers, and computer scientists.



Table of contents

Cover
Memristive Devices for Brain-Inspired Computing
Copyright
Contents
List of contributors
Preface
Part I: Memristive devices for brain-inspired computing
1 Role of resistive memory devices in brain-inspired computing
	1.1 Introduction
	1.2 Type of resistive memory devices
	1.3 Resistive memory devices for brain-inspired computing
	1.4 Conclusions and perspectives
	References
2 Resistive switching memories
	2.1 Introduction
	2.2 Basic concepts of the physics of resistive switching
		2.2.1 Resistive switching based on cation migration
		2.2.2 Resistive switching based on anion migration
			2.2.2.1 Filamentary bipolar switching
			2.2.2.2 Complementary switching
			2.2.2.3 Area-dependent switching
		2.2.3 Negative differential resistance devices
		2.2.4 Switching features related to physical processes
	2.3 Resistance switching technology: performances and industrial-level prototypes
	2.4 Advanced functionalities and programming schemes
		2.4.1 Multilevel operation
		2.4.2 Implementation of plasticity in resistive switching random access memories devices
			2.4.2.1 Plasticity by analog switching dynamics
			2.4.2.2 Plasticity by stochastic switching
			2.4.2.3 Implementation of plasticity: assessment and practical issues
		2.4.3 Rate and timing computing with resistive switching random access memories devices
		2.4.4 Oscillatory systems
	2.5 Conclusions and perspectives
	References
3 Phase-change memory
	3.1 Introduction
		3.1.1 Historical overview of phase-change memory
		3.1.2 Applications of phase-change memory
			3.1.2.1 Memory technology
			3.1.2.2 Non-von Neumann computing
	3.2 Essentials of phase-change memory
	3.3 A detailed description of the write operation
		3.3.1 SET/RESET operation
		3.3.2 Switching process
		3.3.3 Multilevel operation
	3.4 A detailed description of the read operation
		3.4.1 Subthreshold electrical transport: voltage and temperature dependence
		3.4.2 Resistance drift
		3.4.3 Noise
	3.5 Key enablers for brain-inspired computing
		3.5.1 Multilevel storage
		3.5.2 Accumulative behavior
		3.5.3 Inter and intradevice randomness
	3.6 Outlook
	References
4 Magnetic and ferroelectric memories
	4.1 Magnetic memories
		4.1.1 “Spintronics” at a glance
		4.1.2 Storing information
			4.1.2.1 Ferromagnetism
			4.1.2.2 Magnetic anisotropy and magnetic materials
		4.1.3 Reading information
			4.1.3.1 Electronic transport in magnetic structures
			4.1.3.2 Spin-valve structure and the giant magnetoresistance
			4.1.3.3 Tunneling magnetoresistance
			4.1.3.4 Device design
		4.1.4 Writing information
			4.1.4.1 Acting on the magnetization by current flow: spin transfer
			4.1.4.2 Electrical control of magnetic states
			4.1.4.3 Magnetic domains and domain walls
		4.1.5 Latest developments
			4.1.5.1 Voltage control of magnetic anisotropy
			4.1.5.2 Pure spin currents
	4.2 Ferroelectric memories
		4.2.1 Ferroelectric materials
			4.2.1.1 Ferroelectricity
			4.2.1.2 Perovskite-based ferroelectric materials
			4.2.1.3 Fluoride structure ferroelectric materials
		4.2.2 Capacitor-based ferroelectric memories
			4.2.2.1 Ferroelectric random-access memory based on a one transistor–one capacitor cell
			4.2.2.2 Antiferroelectric random-access memory
		4.2.3 Transistor-based ferroelectric memories
		4.2.4 Ferroelectric tunneling junctions
	4.3 Memories beyond the Von Neumann architectures
		4.3.1 Logic-in-memory
			4.3.1.1 Ferroelectric field effect transistor-based logic-in-memory
			4.3.1.2 Comparison with the integration of magnetic devices
		4.3.2 Perspectives for neuromorphic computing: brain-inspired architectures
			4.3.2.1 Magnetic synapse and neuron
			4.3.2.2 Ferroelectric synapse and neuron
		4.3.3 Leveraging stochastic switching: random number generation, approximate computing
		4.3.4 Summary and outlook
	References
5 Selector devices for emerging memories
	5.1 Introduction
	5.2 Insulator–metal transition selector
	5.3 Ovonic threshold switching
	5.4 CBRAM-type selector
	5.5 Conclusion
	References
Part II: Computational memory
6 Memristive devices as computational memory
	6.1 Introduction
	6.2 In-memory computing
	6.3 Future outlook
	References
7 Memristor-based in-memory logic and its application in image processing
	7.1 Introduction
	7.2 Memristor-based logic
		7.2.1 Memristor Aided loGIC (MAGIC)
		7.2.2 Digital image processing
		7.2.3 Previous attempts to accelerate image processing with memristors
	7.3 The memristive Memory Processing Unit
		7.3.1 Challenges of the memristive Memory Processing Unit
	7.4 Performing image processing in the memristive Memory Processing Unit
		7.4.1 Fixed-Point multiplication
			7.4.1.1 Performing Fixed-Point multiplication using MAGIC
		7.4.2 MAGIC-based algorithms for image processing
	7.5 Evaluation
		7.5.1 Methodology
		7.5.2 Performance
		7.5.3 Energy
	7.6 Conclusions
	References
8 Hyperdimensional computing nanosystem: in-memory computing using monolithic 3D integration of RRAM and CNFET
	8.1 Introduction
	8.2 Background on HD computing
		8.2.1 Arithmetic operations on hypervectors
		8.2.2 General and scalable model of computing
		8.2.3 Robustness of computations
		8.2.4 Memory-centric with parallel operations
	8.3 Case study: language recognition
		8.3.1 Mapping and encoding module
		8.3.2 Similarity search module
	8.4 Emerging technologies for HD computing
		8.4.1 Carbon nanotube field-effect transistors
		8.4.2 Resistive RAM
		8.4.3 Monolithic 3D integration
	8.5 Experimental demonstrations for HD computing
		8.5.1 3D VRRAM demonstration: in-memory MAP kernels
		8.5.2 System demonstration using monolithic 3D integrated CNFETs and RRAM
	8.6 Conclusion
	References
9 Vector multiplications using memristive devices and applications thereof
	9.1 Introduction
	9.2 Computing via physical laws
		9.2.1 Data mapping to the crossbar
		9.2.2 Input data encoding
		9.2.3 Output data sampling
		9.2.4 Additional design considerations
	9.3 Soft computing applications
		9.3.1 Data classification
			9.3.1.1 Bio-faithful networks
			9.3.1.2 Machine learning model implementations—classification
		9.3.2 Feature extraction
		9.3.3 Data clustering
		9.3.4 Signal processing
		9.3.5 Security applications
	9.4 Precise computing applications
		9.4.1 In-memory arithmetic accelerators
		9.4.2 Logic circuitry
	9.5 General memristor-based multiply-and-accumulate accelerators
	9.6 Conclusion
	Acknowledgments
	References
10 Computing with device dynamics
	10.1 Computation using oscillatory dynamics
	10.2 Control of memristor resistance
	10.3 Correlation detection and nonlinear solvers
	10.4 Optimization using Hopfield networks and chaotic devices
	10.5 Conclusions
	References
11 Exploiting the stochasticity of memristive devices for computing
	11.1 Harnessing randomness
		11.1.1 Trading-off reliability for low-power consumption
		11.1.2 Embracing unreliability by using noise
			11.1.2.1 Canonical model of stochastic resonance
			11.1.2.2 Various types of stochastic resonance
				Aperiodic stochastic resonance and nonlinear systems
				Suprathreshold stochastic resonance
			11.1.2.3 Relevance of stochastic resonance for computing
			11.1.2.4 Broader paradigm of stochastic facilitation
			11.1.2.5 Noise-induced synchronization for low-power computing?
		11.1.3 Computing with probabilities: stochastic computing
	11.2 Proposals of stochastic building blocks
		11.2.1 Quantum dots cellular automata
		11.2.2 Molecular approaches
			11.2.2.1 Biomolecular automata
			11.2.2.2 Resonant energy transfer between chromophores
		11.2.3 Charge-based memristive devices
			11.2.3.1 Memristors as random bitstream generators
			11.2.3.2 Memristors as stochastic integrate and fire neurons
		11.2.4 Spintronics
			11.2.4.1 Modifying the magnetic state—spin torques
				Dynamics of the magnetization of a nanomagnet
				Spin transfer torque
				Stochastic switching of magnetic tunnel junctions
	11.3 Test cases of stochastic computation: case of magnetic tunnel junction
		11.3.1 Spin dice: a true random number generator
		11.3.2 Stochastic synapses
		11.3.3 Stochastic computation with superparamagnetic tunnel junctions
		11.3.4 Population coding-based stochastic computation
	11.4 Conclusion
	References
Part III: Deep learning
12 Memristive devices for deep learning applications
	12.1 Quick introduction to deep learning
		12.1.1 Simple neural network
		12.1.2 Backpropagation
		12.1.3 Why going deep helps?
		12.1.4 Modern deep neural networks
			12.1.4.1 Multiple output neural networks
			12.1.4.2 Convolutional and recurrent neural networks
			12.1.4.3 Techniques for implementing learning
	12.2 Why do deep neural networks consume more energy than the brain, and how memristive devices can help
		12.2.1 Separation of logic and memory
		12.2.2 Reliance on approximate computing
		12.2.3 Cost of clock
		12.2.4 Is backpropagation hardware compatible?
	12.3 Conclusion
	References
13 Analog acceleration of deep learning using phase-change memory
	13.1 Introduction
	13.2 Deep learning with nonvolatile memory—an overview
	13.3 Recent progress on phase-change memory for deep learning
	13.4 Achieving software-equivalent accuracy in DNN training
		13.4.1 PCM+3T1C
		13.4.2 Polarity inversion
		13.4.3 Mixed hardware–Software experiment
		13.4.4 Results
	13.5 Nonvolatile memory device requirements for deep learning revisited
		13.5.1 Most significant pair programming
		13.5.2 Dependence of accuracy on device nonidealities
	13.6 Conclusions
	References
14 RRAM-based coprocessors for deep learning
	14.1 Introduction
	14.2 NN applications based on RRAM
		14.2.1 Related simulation work
		14.2.2 Experimental implementation
			14.2.2.1 Associative memory
			14.2.2.2 Pattern recognition
			14.2.2.3 Information processing
			14.2.2.4 Scaling the demonstrations
	14.3 Circuit and system-level implementation
		14.3.1 Latest progress on circuit and system based on RRAM for NN processing
		14.3.2 Practical challenges of implementing RRAM macros for DNN processing
			14.3.2.1 Sneak current and array architecture
			14.3.2.2 The influence of resistances of access device and memory cell
			14.3.2.3 Influence of SA offset
			14.3.2.4 Read margin degradation with increasing number of activated WLs
		14.3.3 Advanced design techniques for performance and reliability enhancement
	14.4 Summary
	References
Part IV: Spiking neural networks
15 Memristive devices for spiking neural networks
	15.1 Introduction
	15.2 Signal encoding and processing with spikes
	15.3 System architecture
	15.4 Memristive devices for spiking neural networks
	15.5 Future outlook
	References
16 Neuronal realizations based on memristive devices
	16.1 Introduction
		16.1.1 Spiking neuron network
		16.1.2 Conventional transistor-based spiking neurons
	16.2 Novel memristor-based neurons
		16.2.1 Phase-change memristor
		16.2.2 Redox and electronic memristor
		16.2.3 Ovonic chalcogenide glass
		16.2.4 Mott insulators
		16.2.5 Magnetic tunneling junction
	16.3 Unsupervised programming of the synapses
		16.3.1 Phase-change memristor neuron and synapse interaction
		16.3.2 Redox memristor neuron
	16.4 Conclusion
	References
17 Synaptic realizations based on memristive devices
	17.1 Introduction
	17.2 Biological synaptic plasticity rules
		17.2.1 Long-term spike-timing-dependent plasticity and spike-rate-dependent plasticity
		17.2.2 Short-term plasticity
		17.2.3 State-dependent synaptic modulation
	17.3 Memristive implementations
		17.3.1 Resistive switching random access memory synapses
		17.3.2 Phase-change memory synapses
		17.3.3 Spin-transfer torque magnetic random access memory synapses
	17.4 Hybrid complementary metal-oxide semiconductor/memristive synapses
		17.4.1 One-transistor/one-resistor synapses
		17.4.2 Two-transistor/one-resistor synapses
		17.4.3 Differential synapses
		17.4.4 Multimemristive synapses
	17.5 Synaptic transistors (3-terminal synapses)
	17.6 Triplet-based synapses
	17.7 Spike-rate-dependent plasticity synapses
		17.7.1 One-resistor synapses
		17.7.2 Four-transistors/one-resistor synapses
		17.7.3 One-selector/one-resistor synapses
	17.8 Self-learning networks with memristive synapses
	17.9 Conclusion
	Acknowledgments
	References
18 System-level integration in neuromorphic co-processors
	18.1 Neuromorphic computing
	18.2 Integrating memristive devices as synapses in neuromorphic computing architectures
	18.3 Spike-based learning mechanisms for hybrid memristive-CMOS neuromorphic synapses
		18.3.1 STDP mechanism
		18.3.2 Spike timing- and rate-dependent plasticity mechanism
		18.3.3 Spike-based stochastic weight update rules
		18.3.4 Comparison between the spike-based learning architectures
	18.4 Spike-based implementation of the neuronal intrinsic plasticity
	18.5 Scalable mixed memristive–CMOS multicore neuromorphic computing systems
	18.6 Conclusions and discussion
	References
19 Spiking neural networks for inference and learning: a memristor-based design perspective
	19.1 Introduction
	19.2 Spiking neural networks and synaptic plasticity
	19.3 Memristive realization and nonidealities
		19.3.1 Weight mapping
		19.3.2 RRAM endurance and retention
		19.3.3 Sneak path effect
		19.3.4 Delay
		19.3.5 Asymmetric nonlinearity conductance update model
			19.3.5.1 Asymmetric nonlinearity behavior example
			19.3.5.2 RRAM updates for training
	19.4 Synaptic plasticity and learning in SNN
		19.4.1 Gradient-based learning in SNN and three-factor rules
	19.5 Stochastic SNNs
		19.5.1 Learning in stochastic SNNs
		19.5.2 Three-factor learning in memristor arrays
	19.6 Concluding remarks
	References
Index
Back Cover



