Edition: 1st ed.
Editors: Sandeep Saini, Kusum Lata, G.R. Sinha
ISBN: 1032061715, 9781032061719
Publisher: CRC Press
Year of publication: 2022
Number of pages: 336 [329]
Language: English
File format: PDF (conversion to EPUB or AZW3 available on request)
File size: 22 MB
Machine learning is a potential solution to resolving bottleneck issues in VLSI by optimizing tasks in the design process. This book aims to provide the latest machine learning-based methods, algorithms, architectures, and frameworks designed for VLSI design. The focus is on digital, analog, and mixed-signal design techniques, device modeling, physical design, hardware implementation, testability, reconfigurable design, synthesis and verification, and related areas. It contains chapters on case studies as well as novel research ideas in the given field. Overall, the book provides practical implementations of VLSI design, IC design, and hardware realization using machine learning techniques.
This book is aimed at researchers, professionals, and graduate students in VLSI, machine learning, electrical and electronic engineering, computer engineering, and hardware systems.
Table of contents:
Cover
Half Title
Title Page
Copyright Page
Contents
Preface
About the Editors
Contributors
1. VLSI and Hardware Implementation Using Machine Learning Methods: A Systematic Literature Review
  1.1 Introduction 1.2 Motivation 1.3 Contributions 1.4 Literature Review 1.5 Methods 1.5.1 Search Strategy 1.5.2 Inclusion and Exclusion Rules 1.5.3 Data Extraction Strategy 1.5.4 Synthesis of Extracted Data 1.5.5 Results and Discussions 1.5.6 Study Overview 1.6 Hardware Implementation of ML/AI Algorithms 1.6.1 FPGA-Based Implementation 1.6.2 GPU-Based Implementation 1.6.3 ASICs-Based Implementations 1.6.4 Other Implementations 1.6.5 SLR Discussions and Recommendations 1.7 Conclusions References
2. Machine Learning for Testing of VLSI Circuit
  2.1 Introduction 2.2 Machine Learning Overview 2.3 Machine Learning Applications in IC Testing 2.4 ML in Digital Testing 2.5 ML in Analog Circuit Testing 2.6 ML in Mask Synthesis and Physical Placement 2.7 Conclusion Acknowledgment References
3. Online Checkers to Detect Hardware Trojans in AES Hardware Accelerators
  3.1 Introduction: Background and Driving Forces 3.1.1 Threat Model 3.2 Proposed Methodology: Online Monitoring for HT Detection 3.2.1 Reliability-Based Node Selection to Insert Checker 3.3 Results and Discussion 3.3.1 Results of Benchmark Circuits 3.3.2 Results for AES Encryption Unit 3.4 Conclusion References
4. Machine Learning Methods for Hardware Security
  4.1 Introduction 4.2 Preliminaries 4.2.1 Machine Learning Models Used in Hardware Security 4.2.1.1 Supervised Learning 4.2.1.1.1 Support Vector Machines 4.2.1.1.2 One-Class Classifiers 4.2.1.1.3 Bayesian Classifiers 4.2.1.1.4 Linear Regression 4.2.1.1.5 Multivariate Adaptive Regression Splines (MARS) 4.2.1.1.6 Decision Tree (DT) 4.2.1.1.7 Random Forest (RF) 4.2.1.1.8 Logistic Regression (LR) 4.2.1.1.9 AdaBoost or Adaptive Boosting 4.2.1.1.10 Artificial Neural Networks 4.2.1.1.11 Convolutional Neural Network 4.2.1.1.12 AutoEncoder 4.2.1.1.13 Recurrent Neural Network 4.2.1.1.14 Extreme Learning Machine 4.2.1.1.15 Long Short-Term Memory 4.2.1.1.16 Half-Space Trees 4.2.1.1.17 K-Nearest Neighbors (KNN) 4.2.2 Unsupervised Learning 4.2.2.1 Clustering Algorithms 4.2.2.2 K-means Clustering Algorithm 4.2.2.3 Partitioning Around Medoids (PAM) 4.2.2.4 Density-Based Spatial Clustering (DBSCAN) and Ordering Points to Identify the Clustering Structure (OPTICS) 4.2.3 Feature Selection and Dimensionality Reduction 4.2.3.1 Genetic Algorithms 4.2.3.2 Pearson's Correlation Coefficient 4.2.3.3 Minimum Redundancy Maximum Relevance (mRMR) 4.2.3.4 Principal Component Analysis 4.2.3.5 Two-Dimensional Principal Component Analysis 4.2.3.6 Self-Organizing Maps (SOMs) 4.3 Hardware Security Challenges Addressed by Machine Learning 4.3.1 Hardware Trojans 4.3.2 Reverse Engineering 4.3.3 Side-Channel Analysis 4.3.4 IC Counterfeiting 4.3.5 IC Overproduction 4.4 Present Protection Mechanisms in Hardware Security 4.4.1 Hardware Trojan Detection 4.4.2 IC Counterfeiting Countermeasures 4.4.3 Reverse Engineering Approach 4.5 Machine-Learning-Based Attacks and Threats 4.5.1 Side-Channel Analysis 4.5.1.1 Side-Channel Analysis for Cryptographic Key Extraction 4.5.1.2 Side-Channel Analysis for Instruction-Level Disassembly 4.5.2 IC Overbuilding 4.6 Emerging Challenges and New Directions References
5. Application-Driven Fault Identification in NoC Designs
  5.1 Introduction 5.2 Related Work 5.3 Identification of Vulnerable Routers 5.3.1 Proposed Mathematical Model for Router Reliability 5.3.2 Determination of the Vulnerable Routers Using Simulation 5.3.3 Look-up-Table (LuT) Generation from Experimental Data 5.4 The Proposed Methodology for the Identification of Vulnerable Routers 5.4.1 Classification of Application Traffic Using Machine Learning 5.4.1.1 Dataset Generation 5.4.1.2 Feature Vector Extraction 5.4.1.3 Training of the ML Model 5.4.1.4 Working of the Trained Model 5.4.2 Validation of the ML Model for Traffic Classification 5.4.3 Identification of Vulnerable Routers Using Look-up-Table (LuT) 5.5 Future Work and Scope 5.5.1 Pooling of Unused Routers: A Structural Redundancy Approach 5.6 Conclusion References
6. Online Test Derived from Binary Neural Network for Critical Autonomous Automotive Hardware
  6.1 Autonomous Vehicles 6.1.1 Levels of Autonomy 6.1.2 Safety Concerns 6.2 Traditional VLSI Testing 6.3 Functional Safety 6.3.1 Fault Detection Time Interval 6.4 Discussion 1: Binary Convolutional Neural Network 6.4.1 One Layer of the Convolutional Network 6.4.2 Forward Propagation 6.4.3 Binary Neural Autoencoder Model with Convolutional 1D 6.4.4 Binary Neural Network Model with Convolutional 2D 6.4.5 Backward Propagation 6.5 Discussion 2: On-Chip Compaction 6.5.1 Binary Recurrent Neural Networks 6.5.2 Forward Propagation 6.5.3 Backpropagation 6.5.4 Advantages and Limitations 6.6 Discussion 3: Binary Deep Neural Network for Controller Variance Detection 6.7 Conclusion Acknowledgment References
7. Applications of Machine Learning in VLSI Design
  7.1 Introduction 7.2 Machine Learning Preliminaries 7.3 System-Level Design 7.4 Logic Synthesis and Physical Design 7.5 Verification 7.6 Test, Diagnosis, and Validation 7.7 Challenges 7.8 Conclusions References
8. An Overview of High-Performance Computing Techniques Applied to Image Processing
  8.1 Introduction 8.1.1 Context 8.1.2 Concepts 8.2 HPC Techniques Applied to Image Treatment 8.2.1 Cloud-Based Distributed Computing 8.2.2 GPU-Accelerated Parallelization 8.2.3 Parallelization Using GPU Cluster 8.2.4 Multicore Architecture 8.3 Neural Networks 8.3.1 Convolutional Neural Network (CNN) 8.3.2 Generative Adversarial Network (GAN) 8.3.3 HPC Techniques Applied to Neural Networks 8.4 Machine Learning Applications Hardware Design 8.4.1 FPGA 8.4.2 SVM 8.5 Conclusions Notes References
9. Machine Learning Algorithms for Semiconductor Device Modeling
  9.1 Introduction 9.2 Semiconductor Device Modeling 9.3 Related Work 9.4 Challenges 9.5 Machine Learning Fundamentals 9.5.1 Supervised Machine Learning Algorithms 9.5.2 Unsupervised Machine Learning Algorithms 9.5.3 Deep Learning Algorithms 9.6 Case Study: Thermal Modeling of the GaN HEMT Device 9.6.1 Experimental Setup 9.6.2 Results 9.7 Conclusion Acknowledgments References
10. Securing IoT-Based Microservices Using Artificial Intelligence
  10.1 Introduction: Background and Driving Forces 10.2 Previous Work 10.3 Proposed Work 10.4 Results 10.4.1 Components 10.4.2 Deployment and Testing 10.5 Result and Discussion 10.6 Conclusions References
11. Applications of the Approximate Computing on ML Architecture
  11.1 Approximate Computing 11.1.1 Introduction 11.1.2 Approximation 11.1.3 Strategies of Approximation Computing 11.1.4 What to Approximate 11.1.5 Error Analysis in Approximate Computing 11.2 Machine Learning 11.2.1 Introduction 11.2.2 Neural Networks 11.2.2.1 Architecture 11.2.2.2 Abilities and Disabilities 11.2.3 Machine Learning vs. Neural Network 11.2.4 Classifications of Neural Networks in Machine Learning 11.2.4.1 Artificial Neural Network (ANN) 11.2.4.1.1 Feedforward ANN 11.2.4.1.2 Abilities of Artificial Neural Network (ANN) 11.2.4.2 Convolution Neural Network (CNN) 11.2.5 Novel Algorithm in ANN 11.2.5.1 Introduction 11.2.5.2 Weights of Neurons 11.2.5.3 Weight vs. Bias 11.2.5.4 Neuron (Node) 11.2.5.4.1 Bias (Offset) 11.2.5.4.2 Activation Function (Transfer Function) 11.3 Approximate Machine Learning Algorithms 11.3.1 Introduction 11.3.2 Approximate Computing Techniques 11.3.3 Approximate Algorithms for Machine Learning 11.3.4 Results and Analysis 11.4 Case Study 1: Energy-Efficient ANN Using Alphabet Set Multiplier 11.4.1 Introduction 11.4.2 8-bit 4 Alphabet ASM 11.4.3 Four Alphabet ASMs Using CSHM Architecture 11.4.3.1 Rounding Logic 11.4.4 Multiplier-Less Neuron 11.4.5 Results and Analysis 11.5 Case Study 2: Efficient ANN Using Approximate Multiply-Accumulate Blocks 11.5.1 Introduction 11.5.2 SMAC Neuron's Architecture 11.5.3 The Architecture of SMAC ANN 11.5.4 Approximate Adder 11.5.5 Approximate Multiplier 11.5.6 Results and Analysis 11.6 Conclusion References
12. Hardware Realization of Reinforcement Learning Algorithms for Edge Devices
  12.1 Introduction 12.1.1 Reinforcement Learning and Markov Decision Process 12.1.2 Hardware for Reinforcement Learning at the Edge 12.2 Background 12.3 Hardware Realization of Simple Reinforcement Learning Algorithm 12.3.1 Architecture-Level Description 12.3.2 Flow of Data in the Hardware Architecture 12.4 Results and Analysis of SRL Hardware Architecture 12.5 Q-Learning and SRL Algorithm Applications 12.6 Future Work: Application and Hardware Design Overview 12.6.1 Hardware Design Overview 12.7 Conclusion Acknowledgment References
13. Deep Learning Techniques for Side-Channel Analysis
  13.1 Introduction 13.2 Preliminaries 13.2.1 Framework for Implementation Vulnerability Analysis 13.3 Profiled Side-Channel Attacks 13.3.1 Deep Learning Architecture for Analysis 13.3.2 Convolutional Neural Networks 13.4 Protected Countermeasure Techniques 13.4.1 Unrolled Implementation 13.4.2 Threshold Implementation 13.5 Case Study of GIFT Cipher 13.5.1 GIFT Algorithm Description 13.5.2 Implementation Profiles 13.5.3 Round (Naive) Implementation 13.5.4 (Un)Rolled Implementation 13.5.5 Partially (Un)Rolled Implementation with Threshold Implementation Countermeasure 13.5.6 Experiment Setup 13.6 Description of PSCA on GIFT Using DeepSCA 13.6.1 Vulnerability Analysis 13.7 Conclusion and Future Work Acknowledgments References
14. Machine Learning in Hardware Security of IoT Nodes
  14.1 Introduction 14.2 Classification of Hardware Attacks 14.2.1 Hardware Trojan Taxonomy 14.2.1.1 Insertion Phase 14.2.1.2 Level of Description 14.2.1.3 Activation Mechanism 14.2.1.4 Effects of Hardware Trojans 14.2.1.5 Location 14.2.2 Types of Hardware Trojans 14.3 Countermeasures for Threats of Hardware Trojans in IoT Nodes 14.3.1 Hardware Trojan Detection Approaches 14.3.2 Hardware Trojan Diagnosis 14.3.3 Hardware Trojan Prevention 14.4 Machine Learning Models 14.4.1 Supervised Machine Learning 14.4.2 Unsupervised Machine Learning 14.4.3 Dimensionality Reduction & Feature Selection 14.4.4 Design Optimization 14.5 Proposed Methodology 14.5.1 Stage 1: Analysis of IoT Circuit Structure Features 14.5.2 Stage 2: Feature Extraction from Netlist 14.5.3 Stage 3: Hardware Trojan Classifier Training 14.5.4 Stage 4: Detection of Hardware Trojan 14.5.5 Comparison of HT Detection Models Based on ML 14.6 Conclusion References
15. Integrated Photonics for Artificial Intelligence Applications
  15.1 Introduction to Photonic Neuromorphic Computing 15.2 Classification of Photonic Neural Network 15.3 Photonic Neuron and Synapse 15.4 Conclusion References
Index