


Download the book Machine Learning Applications in Electronic Design Automation

Book Details

Machine Learning Applications in Electronic Design Automation

Edition:
Authors:
Series:
ISBN: 3031130731, 9783031130731
Publisher: Springer
Year: 2023
Pages: 584 [585]
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 19 MB

Book price (Toman): 35,000





If you would like the file for Machine Learning Applications in Electronic Design Automation converted to PDF, EPUB, AZW3, MOBI, or DJVU, let support know and they will convert it for you.

Please note that Machine Learning Applications in Electronic Design Automation is the original English-language edition, not a Persian translation. The International Library website provides books in their original language only and does not offer any books translated into or written in Persian.


About the book Machine Learning Applications in Electronic Design Automation

This book serves as a single-source reference to key machine learning (ML) applications and methods in digital and analog design and verification. Experts from academia and industry cover a wide range of the latest research on ML applications in electronic design automation (EDA), including analysis and optimization of digital designs, analysis and optimization of analog designs, as well as functional verification, FPGA and system-level design, design for manufacturing (DFM), and design space exploration. The authors also cover key ML methods such as classical ML; deep learning models such as convolutional neural networks (CNNs), graph neural networks (GNNs), and generative adversarial networks (GANs); and optimization methods such as reinforcement learning (RL) and Bayesian optimization (BO). All of these topics are valuable to chip designers, EDA developers, and researchers working in digital and analog design and verification.



Table of Contents

Preface
Contents
About the Editors
Part I Machine Learning-Based Design Prediction Techniques
	1 ML for Design QoR Prediction
		1.1 Introduction
		1.2 Challenges of Design QoR Prediction
			1.2.1 Limited Number of Samples
			1.2.2 Chaotic Behaviors of EDA Tools
			1.2.3 Actionable Predictions
			1.2.4 Infrastructure Needs
			1.2.5 The Bar for Design QoR Prediction
		1.3 ML Techniques in QoR Prediction
			1.3.1 Graph Neural Networks
			1.3.2 Long Short-Term Memory (LSTM) Networks
			1.3.3 Reinforcement Learning
			1.3.4 Other Models
		1.4 Timing Estimation
			1.4.1 Problem Formulation
			1.4.2 Estimation Flow
			1.4.3 Feature Engineering
			1.4.4 Machine Learning Engines
		1.5 Design Space Exploration
			1.5.1 Problem Formulation
			1.5.2 Estimation Flow
			1.5.3 Feature Engineering
			1.5.4 Machine Learning Engines
		1.6 Summary
		References
	2 Deep Learning for Routability
		2.1 Introduction
		2.2 Background on DL for Routability
			2.2.1 Routability Prediction Background
				2.2.1.1 Design Rule Checking (DRC) Violations
				2.2.1.2 Routing Congestion and Pin Accessibility
				2.2.1.3 Relevant Physical Design Steps
				2.2.1.4 Routability Prediction
			2.2.2 DL Techniques in Routability Prediction
				2.2.2.1 CNN Methods
				2.2.2.2 FCN Methods
				2.2.2.3 GAN Methods
				2.2.2.4 NAS Methods
			2.2.3 Why DL for Routability
		2.3 DL for Routability Prediction Methodologies
			2.3.1 Data Preparation and Augmentation
			2.3.2 Feature Engineering
				2.3.2.1 Blockage
				2.3.2.2 Wire Density
				2.3.2.3 Routing Congestion
				2.3.2.4 Pin Accessibility
				2.3.2.5 Routability Label
			2.3.3 DL Model Architecture Design
				2.3.3.1 Common Operators and Connections
				2.3.3.2 Case Study: RouteNet
				2.3.3.3 Case Study: PROS
				2.3.3.4 Case Study: J-Net
				2.3.3.5 Case Study: Painting
				2.3.3.6 Case Study: Automated Model Development
			2.3.4 DL Model Training and Inference
		2.4 DL for Routability Deployment
			2.4.1 Direct Feedback to Engineers
			2.4.2 Macro Location Optimization
			2.4.3 White Space-Driven Model-Guided Detailed Placement
			2.4.4 Pin Accessibility-Driven Model-Guided Detailed Placement
			2.4.5 Integration in Routing Flow
			2.4.6 Explicit Routability Optimization During Global Placement
			2.4.7 Visualization of Routing Utilization
			2.4.8 Optimization with Reinforcement Learning (RL)
		2.5 Summary
		References
	3 Net-Based Machine Learning-Aided Approaches for Timing and Crosstalk Prediction
		3.1 Introduction
		3.2 Backgrounds on Machine Learning-Aided Timing and Crosstalk Estimation
			3.2.1 Timing Prediction Background
			3.2.2 Crosstalk Prediction Background
			3.2.3 Relevant Design Steps
			3.2.4 ML Techniques in Net-Based Prediction
			3.2.5 Why ML for Timing and Crosstalk Prediction
		3.3 Preplacement Net Length and Timing Prediction
			3.3.1 Problem Formulation
			3.3.2 Prediction Flow
			3.3.3 Feature Engineering
				3.3.3.1 Features for Net Length Prediction
				3.3.3.2 Features for Timing Prediction
			3.3.4 Machine Learning Engines
				3.3.4.1 Machine Learning Engine for Net Length Prediction
				3.3.4.2 Machine Learning Engine for Preplacement Timing Prediction
		3.4 Pre-Routing Timing Prediction
			3.4.1 Problem Formulation
			3.4.2 Prediction Flow
			3.4.3 Feature Engineering
			3.4.4 Machine Learning Engines
		3.5 Pre-Routing Crosstalk Prediction
			3.5.1 Problem Formulation
			3.5.2 Prediction Flow
			3.5.3 Feature Engineering
				3.5.3.1 Probabilistic Congestion Estimation
				3.5.3.2 Net Physical Information
				3.5.3.3 Product of the Wirelength and Congestion
				3.5.3.4 Electrical and Logic Features
				3.5.3.5 Timing Information
				3.5.3.6 Neighboring Net Information
			3.5.4 Machine Learning Engines
		3.6 Interconnect Coupling Delay and Transition Effect Prediction at Sign-Off
			3.6.1 Problem Formulation
			3.6.2 Prediction Flow
			3.6.3 Feature Engineering
			3.6.4 Machine Learning Engines
		3.7 Summary
		References
	4 Deep Learning for Power and Switching Activity Estimation
		4.1 Introduction
		4.2 Background on Modeling Methods for Switching Activity Estimators
			4.2.1 Statistical Approaches to Switching Activity Estimators
			4.2.2 ``Cost-of-Action''-Based Power Estimation Models
			4.2.3 Learning/Regression-Based Power Estimation Models
		4.3 Deep Learning Models for Power Estimation
		4.4 A Case Study on Using Deep Learning Models for Per Design Power Estimation
			4.4.1 PRIMAL Methodology
			4.4.2 List of PRIMAL ML Models for Experimentation
				4.4.2.1 Feature Construction Techniques in PRIMAL
				4.4.2.2 Feature Encoding for Cycle-by-Cycle Power Estimation
				4.4.2.3 Mapping Registers and Signals to Pixels
		4.5 PRIMAL Experiments
			4.5.1 Power Estimation Results of PRIMAL
			4.5.2 Results Analysis
		4.6 A Case Study on Using Graph Neural Networks for Generalizable Power Estimation
			4.6.1 GRANNITE Introduction
			4.6.2 The Role of GPUs in Gate-Level Simulation and Power Estimation
			4.6.3 GRANNITE Implementation
				4.6.3.1 Toggle Rate Features
				4.6.3.2 Graph Object Creation
				4.6.3.3 GRANNITE Architecture
		4.7 GRANNITE Results
			4.7.1 Analysis
		4.8 Conclusion
		References
	5 Deep Learning for Analyzing Power Delivery Networks and Thermal Networks
		5.1 Introduction
		5.2 Deep Learning for PDN Analysis
			5.2.1 CNNs for IR Drop Estimation
				5.2.1.1 PowerNet Input Feature Representation
				5.2.1.2 PowerNet Architecture
				5.2.1.3 Evaluation of PowerNet
			5.2.2 Encoder-Decoder Networks for PDN Analysis
				5.2.2.1 PDN Analysis as an Image-to-Image Translation Task
				5.2.2.2 U-Nets for PDN Analysis
				5.2.2.3 3D U-Nets for IR Drop Sequence-to-Sequence Translation
				5.2.2.4 Regression-Like Layer for Instance-Level IR Drop Prediction
				5.2.2.5 Encoder-Decoder Network Training
				5.2.2.6 Evaluation of EDGe Networks for PDN Analysis
		5.3 Deep Learning for Thermal Analysis
			5.3.1 Problem Formulation
			5.3.2 Model Architecture for Thermal Analysis
			5.3.3 Model Training and Data Generation
			5.3.4 Evaluation of ThermEDGe
		5.4 Deep Learning for PDN Synthesis
			5.4.1 Template-Driven PDN Optimization
			5.4.2 PDN Synthesis as an Image Classification Task
			5.4.3 Principle of Locality for Region Size Selection
			5.4.4 ML-Based PDN Synthesis and Refinement Through the Design Flow
			5.4.5 Neural Network Architectures for PDN Synthesis
			5.4.6 Transfer Learning-Based CNN Training
				5.4.6.1 Synthetic Input Feature Set Generation
				5.4.6.2 Transfer Learning Model
				5.4.6.3 Training Data Generation
			5.4.7 Evaluation of OpeNPDN for PDN Synthesis
				5.4.7.1 Justification for Transfer Learning
				5.4.7.2 Validation on Real Design Testcases
		5.5 DL for PDN Benchmark Generation
			5.5.1 Introduction
			5.5.2 GANs for PDN Benchmark Generation
				5.5.2.1 Synthetic Image Generation for GAN Pretraining
				5.5.2.2 GAN Architecture and Training
				5.5.2.3 GAN Inference for Current Map Generation
			5.5.3 Evaluation of GAN-Generated PDN Benchmarks
		5.6 Conclusion
		References
	6 Machine Learning for Testability Prediction
		6.1 Introduction
		6.2 Classical Testability Measurements
			6.2.1 Approximate Measurements
				6.2.1.1 SCOAP
				6.2.1.2 Random Testability
			6.2.2 Simulation-Based Measurements
		6.3 Learning-Based Testability Prediction
			6.3.1 Node-Level Testability Prediction
				6.3.1.1 Conventional Machine Learning Methods
				6.3.1.2 Graph-Based Deep Learning Methods
			6.3.2 Circuit-Level Testability Prediction
				6.3.2.1 Fault Coverage Prediction
				6.3.2.2 Test Cost Prediction
				6.3.2.3 X-Sensitivity Prediction
		6.4 Additional Considerations
			6.4.1 Imbalanced Dataset
			6.4.2 Scalability of Graph Neural Networks
			6.4.3 Integration with Design Flow
			6.4.4 Robustness of Machine Learning Model and Metrics
		6.5 Summary
		References
Part II Machine Learning-Based Design Optimization Techniques
	7 Machine Learning for Logic Synthesis
		7.1 Introduction
		7.2 Supervised and Reinforcement Learning
			7.2.1 Supervised Learning
			7.2.2 Reinforcement Learning
		7.3 Supervised Learning for Guiding Logic Synthesis Algorithms
			7.3.1 Guiding Logic Network Type for Logic Network Optimization
			7.3.2 Guiding Logic Synthesis Flow Optimization
			7.3.3 Guiding Cut Choices for Technology Mapping
			7.3.4 Guiding Delay Constraints for Technology Mapping
		7.4 Reinforcement Learning Formulations for Logic Synthesis Algorithms
			7.4.1 Logic Network Optimization
			7.4.2 Logic Synthesis Flow Optimization
				7.4.2.1 Synthesis Flow Optimization for Circuit Area and Delay
				7.4.2.2 Synthesis Flow Optimization for Logic Network Node and Level Counts
			7.4.3 Datapath Logic Optimization
		7.5 Scalability Considerations for Reinforcement Learning
		References
	8 RL for Placement and Partitioning
		8.1 Introduction
		8.2 Background
		8.3 RL for Combinatorial Optimization
			8.3.1 How to Perform Decision-Making with RL
		8.4 RL for Placement Optimization
			8.4.1 The Action Space for Chip Placement
			8.4.2 Engineering the Reward Function
				8.4.2.1 Wirelength
				8.4.2.2 Routing Congestion
				8.4.2.3 Density and Macro Overlap
				8.4.2.4 State Representation
			8.4.3 Generating Adjacency Matrix for a Chip Netlist
			8.4.4 Learning RL Policies that Generalize
		8.5 Future Directions
			8.5.1 Top-Level Floorplanning
			8.5.2 Netlist Design Space Exploration
			8.5.3 Broader Discussions: ML for Co-optimization Across the Overall Chip Design Process
		References
	9 Deep Learning Framework for Placement
		9.1 Introduction
		9.2 DL Analogy for the Kernel Placement Problem
		9.3 Speedup Kernel Operators
			9.3.1 Wirelength
			9.3.2 Density Accumulation
			9.3.3 Discrete Cosine Transformation
		9.4 Handle Region Constraints
		9.5 Optimize Routability
			9.5.1 Instance Inflation
			9.5.2 Deep Learning-Based Optimization
		9.6 Conclusion
		References
	10 Circuit Optimization for 2D and 3D ICs with Machine Learning
		10.1 Introduction
		10.2 Graph Neural Network-Based Methods
			10.2.1 2D Placement Optimization
				10.2.1.1 Solution
				10.2.1.2 Results
			10.2.2 3D IC Tier Partitioning
				10.2.2.1 Problem Statement
				10.2.2.2 Solution
				10.2.2.3 Results
			10.2.3 Threshold Voltage Assignment
				10.2.3.1 Problem Statement
				10.2.3.2 Solution
				10.2.3.3 Results
		10.3 Reinforcement Learning-Based Methods
			10.3.1 Decoupling Capacitor Optimization
				10.3.1.1 Problem Statement
				10.3.1.2 RL Setting
				10.3.1.3 Deep Q-Learning
				10.3.1.4 Results
			10.3.2 Gate Sizing
				10.3.2.1 Problem Statement
				10.3.2.2 Solution
				10.3.2.3 Results
			10.3.3 Timing Closure Using Multiarmed Bandits
				10.3.3.1 Problem Statement
				10.3.3.2 Solution
				10.3.3.3 Innovative Ideas
				10.3.3.4 Results
		10.4 Other Methods
			10.4.1 3D IC Optimization Using Bayesian Optimization
				10.4.1.1 Problem Statement
				10.4.1.2 Solution
				10.4.1.3 Results
		10.5 Conclusions
		References
	11 Reinforcement Learning for Routing
		11.1 Introduction
		11.2 RL for Global Routing
			11.2.1 DQN Global Routing
				11.2.1.1 Problem Formulation
				11.2.1.2 Method and Results
			11.2.2 Constructing Steiner Tree via Reinforcement Learning
				11.2.2.1 Problem Formulation
				11.2.2.2 Methods and Results
		11.3 RL for Detailed Routing
			11.3.1 Attention Routing
				11.3.1.1 Problem Formulation
				11.3.1.2 Method and Results
			11.3.2 Asynchronous RL for Detailed Routing
				11.3.2.1 Problem Formulation
				11.3.2.2 Methods and Results
		11.4 RL for Standard Cell Routing
			11.4.1 Background
			11.4.2 Problem Formulation
			11.4.3 Methods and Results
		11.5 RL for Related Routing Problems
			11.5.1 RL for Communication Network Routing
			11.5.2 RL for Path Planning
		11.6 Challenges and Opportunities
		References
	12 Machine Learning for Analog Circuit Sizing
		12.1 Introduction
		12.2 Conventional Methods
			12.2.1 Equation-Based Methods
			12.2.2 Simulation-Based Methods
			12.2.3 Limitations on Conventional Methods
		12.3 Bayesian Optimization
			12.3.1 WEIBO: An Efficient Bayesian Optimization Approach for Automated Optimization of Analog Circuits
			12.3.2 EasyBO: An Efficient Asynchronous Batch Bayesian Optimization Approach for Analog Circuit Synthesis
		12.4 Improved Surrogate Model for Evolutionary Algorithm with Neural Networks
			12.4.1 An Efficient Analog Circuit Sizing Method Based on Machine Learning-Assisted Global Optimization
		12.5 Reinforcement Learning
			12.5.1 GCN-RL Circuit Designer: Transferable Transistor Sizing with Graph Neural Networks and Reinforcement Learning
			12.5.2 AutoCkt: Deep Reinforcement Learning of Analog Circuit Designs
			12.5.3 DNN-Opt: An RL-Inspired Optimization for Analog Circuit Sizing Using Deep Neural Networks
			12.5.4 Discussion: RL for Analog Sizing
		12.6 Parasitic and Layout-Aware Sizing
			12.6.1 BagNet: Layout-Aware Circuit Optimization
			12.6.2 Parasitic-Aware Sizing with Graph Neural Networks
		12.7 Conclusion and Future Directions
		References
Part III Machine Learning Applications in Various Design Domains
	13 The Interplay of Online and Offline Machine Learning for Design Flow Tuning
		13.1 Introduction
		13.2 Background
			13.2.1 Online Design Flow Tuning Implications
			13.2.2 Offline Design Flow Tuning Implications
			13.2.3 LSPD Application-Specific Considerations
			13.2.4 Design Flow Tuner Development: CAD Tool Vendor, Open-Source, or In-House
			13.2.5 Case Study Background: STS Terminology and Design Space
		13.3 Online Design Flow Tuning Approaches
			13.3.1 Approaches for HLS Flows
			13.3.2 Approaches for FPGA Flows
			13.3.3 Approaches for LSPD Flows
				13.3.3.1 Bayesian Optimization
				13.3.3.2 Reinforcement Learning
			13.3.4 Case Study: STS Online System
		13.4 Offline Design Flow Tuning Approaches
			13.4.1 Approaches for HLS Flows
			13.4.2 Approaches for FPGA Flows
			13.4.3 Approaches for LSPD Flows
				13.4.3.1 Parameter Importance
				13.4.3.2 Transfer Learning
				13.4.3.3 Reinforcement Learning
				13.4.3.4 Recommender Systems
			13.4.4 Case Study: STS Offline System
		13.5 The Interplay of Online and Offline Machine Learning
			13.5.1 Approaches for FPGA Flows
			13.5.2 Approaches for LSPD Flows
			13.5.3 Case Study: Hybrid STS System
		13.6 STS Experimental Results
			13.6.1 STS Product Impact
			13.6.2 STS Hybrid Online/Offline System Results
		13.7 Future Directions
			13.7.1 Adaptable Online and Offline Systems
			13.7.2 Enhanced Online Tuning Compute Frameworks
			13.7.3 HLS and LSPD Tuning System Collaboration
			13.7.4 Human Designer and Tuning System Collaboration
			13.7.5 Multi-Macro and Hierarchical Tuning Challenges
			13.7.6 Generalized Code Tuning
		13.8 Conclusion
		References
	14 Machine Learning in the Service of Hardware Functional Verification
		14.1 Introduction
		14.2 The Verification Cockpit
			14.2.1 Motivation and Goals
			14.2.2 Verification Cockpit Architecture
			14.2.3 Descriptive Analytics
		14.3 Coverage Closure
			14.3.1 Descriptive Coverage Analysis
				14.3.1.1 Automatic Discovery of Coverage Models Structure
			14.3.2 Coverage Directed Generation (CDG)
			14.3.3 Model-Based CDG
			14.3.4 Direct Data-Driven CDG
				14.3.4.1 Test-Case Harvesting
				14.3.4.2 Template-Aware Coverage
				14.3.4.3 Learning the Mapping
			14.3.5 Search-Based CDG
				14.3.5.1 Fast Evaluation of the Test to Coverage Mapping
				14.3.5.2 DFO-Based CDG
			14.3.6 CDG for Large Sets of Unrelated Events
		14.4 Risk Analysis
			14.4.1 Trend Analysis of Bug Discovery Rate
			14.4.2 Source Code Repository Analysis
		14.5 Summary
		References
	15 Machine Learning for Mask Synthesis and Verification
		15.1 Introduction
		15.2 Lithography Modeling
			15.2.1 Physics in Lithography Modeling
				15.2.1.1 Optical Modeling
				15.2.1.2 Resist Modeling
			15.2.2 Machine Learning Solutions for Lithography Modeling
			15.2.3 Case Studies
				15.2.3.1 Deep Lithography Simulator
				15.2.3.2 Resist Modeling with Transfer Learning and Active Data Selection
		15.3 Layout Hotspot Detection
			15.3.1 Machine Learning for Layout Hotspot Detection
			15.3.2 Case Studies
				15.3.2.1 Detecting Lithography Hotspots with Feature Tensor Generation and Batch-Biased Learning
				15.3.2.2 Faster Region-Based Hotspot Detection
		15.4 Mask Optimization
			15.4.1 Machine Learning for Mask Optimization
			15.4.2 Case Studies
				15.4.2.1 SRAF Insertion with Supervised Dictionary Learning
				15.4.2.2 GAN-OPC
		15.5 Pattern Generation
			15.5.1 Machine Learning for Layout Pattern Generation
			15.5.2 Case Study
				15.5.2.1 DeePattern
		15.6 Conclusion
		References
	16 Machine Learning for Agile FPGA Design
		16.1 Introduction
		16.2 Preliminaries and Background
			16.2.1 FPGA Design Flow
			16.2.2 Motivation of ML for FPGA Design
		16.3 Machine Learning Solutions for FPGA
			16.3.1 ML for Fast and Accurate QoR Estimation
			16.3.2 ML-Aided Decision-Making
			16.3.3 Challenges and Strategies of Data Preparation for Learning
		16.4 ML for FPGA Case Studies
			16.4.1 QoR Estimation
				16.4.1.1 Resource Usage Estimation in HLS Stage
				16.4.1.2 Operation Delay Estimation in HLS Stage
				16.4.1.3 Routing Congestion Estimation in Placement Stage
			16.4.2 Design Space Exploration
				16.4.2.1 Auto-Configuring CAD Tool Parameters
				16.4.2.2 Automatic Synthesis Flow Generation
		16.5 Future Research Scopes
		16.6 Conclusion
		References
	17 Machine Learning for Analog Layout
		17.1 Introduction
		17.2 Geometric Constraint Generation
			17.2.1 Problem Statement
			17.2.2 Subcircuit Annotation Using Graph Convolution Networks
			17.2.3 Array-Based Methods for Subcircuit Annotation
			17.2.4 System Symmetry Constraint with Graph Similarity
			17.2.5 Symmetry Constraint with Graph Neural Networks
		17.3 Constrained Placement and Routing
			17.3.1 Placement Quality Prediction
				17.3.1.1 Applying Standard ML Models
				17.3.1.2 Stratified Sampling with SVM/MLP Models
				17.3.1.3 Performance Prediction with Convolutional Neural Networks
				17.3.1.4 PEA: Pooling with Edge Attention Using a GAT
			17.3.2 Analog Placement
				17.3.2.1 Incorporating Extracted Constraints into Analog Placers
				17.3.2.2 Well Generation and Well-Aware Placement
			17.3.3 Analog Routing
				17.3.3.1 Incorporating Extracted Constraints into Analog Router
				17.3.3.2 GeniusRoute: ML-Guided Analog Routing
		17.4 Conclusion and Future Directions
		References
	18 ML for System-Level Modeling
		18.1 Introduction
		18.2 Cross-Layer Prediction
			18.2.1 Micro-Architecture Level Prediction
			18.2.2 Source-Level Prediction
		18.3 Cross-Platform Prediction
			18.3.1 CPU to CPU
			18.3.2 GPU to GPU
			18.3.3 CPU to GPU
			18.3.4 CPU to FPGA
			18.3.5 FPGA to FPGA
		18.4 Cross-Temporal Prediction
			18.4.1 Short-Term Workload Forecasting
			18.4.2 Long-Term Workload Forecasting
		18.5 Summary and Conclusions
		References
Index



