Editors: Lejla Batina, Thomas Bäck, Ileana Buhan, Stjepan Picek
Series: Lecture Notes in Computer Science, 13049
ISBN: 3030987949, 9783030987947
Publisher: Springer
Publication year: 2022
Number of pages: 364 [365]
Language: English
File format: PDF
File size: 12 MB
AI has become an emerging technology to assess security and privacy, with many challenges and potential solutions at the algorithm, architecture, and implementation levels. So far, research on AI and security has looked at subproblems in isolation, but future solutions will require sharing of experience and best practice in these domains. The editors of this State-of-the-Art Survey invited a cross-disciplinary team of researchers to a Lorentz workshop in 2019 to improve collaboration in these areas. Some contributions were initiated at the event; others were developed since through further invitations, editing, and cross-reviewing. This contributed book contains 14 invited chapters that address side-channel attacks and fault injection, cryptographic primitives, adversarial machine learning, and intrusion detection. The chapters were evaluated based on their significance, technical quality, and relevance to the topics of security and AI, and each submission was reviewed in single-blind mode and revised.
Preface
Organization
Contents

AI for Cryptography

Artificial Intelligence for the Design of Symmetric Cryptographic Primitives
1 Introduction; 2 Background; 2.1 Cryptography; 2.2 Heuristic Optimization Algorithms; 2.3 Cellular Automata; 3 Boolean Functions; 3.1 Background; 3.2 Survey of Related Works; 4 S-Boxes; 4.1 Background; 4.2 Survey of Related Works; 5 Pseudorandom Number Generators; 5.1 Background; 5.2 Survey of Related Works; 6 Conclusions and New Directions; References

Traditional Machine Learning Methods for Side-Channel Analysis
1 Introduction; 2 Side-Channel Analysis; 2.1 Types of Side-Channel Analysis; 2.2 Information-Theoretic Models; 3 Historical Overview of the Machine Learning Research for SCA; 4 Data Preprocessing for SCA; 4.1 Data Augmentation and Dimensionality Reduction Techniques; 4.2 Feature Selection Methods; 5 Supervised Learning Methods for SCA; 5.1 Naive Bayes; 5.2 Random Forests; 5.3 Support Vector Machines; 5.4 Multilayer Perceptron; 5.5 Hierarchical Classification; 5.6 Template Attack vs Traditional Machine Learning; 6 Other Learning Methods for SCA; 6.1 Unsupervised Learning; 6.2 Semi-supervised Learning; 7 Evaluation of ML Models in SCA; 8 Conclusion; References

Deep Learning on Side-Channel Analysis
1 Introduction; 2 Background; 2.1 Notations; 2.2 Profiled SCA and Deep Learning; 3 Recent Results in Deep Learning-Based Profiled Side-Channel Attacks; 3.1 From Machine Learning to Deep Learning in SCA; 3.2 Deep Learning Techniques in SCA; 4 Advantages of Deep Learning for Profiled Side-Channel Analysis; 4.1 Side-Channel Analysis Without Preprocessing; 4.2 Bypassing Desynchronization; 4.3 Deep Neural Networks Can Learn Second-Order Leakages; 4.4 Take Advantage of the Domain Knowledge; 4.5 Visualization Techniques to Identify Input Leakage; 5 Metrics for Deep Learning-Based Profiled SCA; 6 Tuning Neural Network Hyper-Parameters for SCA; 7 Different Applications of Deep Learning to Side Channel Analysis; 8 Conclusions and Perspectives; References

Artificial Neural Networks and Fault Injection Attacks
1 Introduction; 2 Assets and Threat Models; 2.1 Attack Scenarios; 2.2 AI Assets vs Cryptographic Assets; 3 Faults in Neural Network; 4 AI/Neural Network Accelerators; 4.1 GPUs; 4.2 FPGAs; 4.3 Custom AI/Neural Network Accelerators; 5 Fault Injection Attacks on AI Accelerators; 5.1 Traditional Fault Attack; 5.2 Remote Fault Attacks; 6 Conclusion; References

Physically Unclonable Functions and AI
1 Introduction; 2 Background on PUFs; 3 Attacks Against PUFs: Physical vs. Non-physical; 4 AI-Enabled Attacks; 4.1 Machine Learning Attacks; 5 Mathematical Modeling; 6 Resiliency Against ML Attacks; 6.1 How to Prove the Security of a PUF Against ML Attacks; 6.2 Metrics for Evaluating the Security of a PUF Against ML Attacks; 7 AI-Enabled Design of PUFs; 8 Conclusion; References

AI for Authentication and Privacy

Privacy-Preserving Machine Learning Using Cryptography
1 Introduction; 2 Cryptographic Protocols and Primitives; 2.1 Secure Multi-Party Computation (MPC); 2.2 Fully Homomorphic Encryption (FHE); 3 Security Models; 3.1 MPC; 3.2 FHE; 4 Settings; 4.1 MPC; 4.2 FHE; 5 Difficulties and Proposed Solutions; 6 State-of-the-Art; 6.1 MPC Training Algorithms; 6.2 MPC Classification; 6.3 HE Training Algorithms; 6.4 HE Deep Learning Classification; 7 Limitations; 8 Conclusion; References

Machine Learning Meets Data Modification
1 Introduction; 1.1 Risks and Opportunities of Machine Learning; 1.2 Scope and Outline; 2 Scenarios and Requirements; 2.1 Scenario 1: User Data Sharing; 2.2 Scenario 2: Data Set Sharing; 3 Threat Model; 3.1 Scenario 1 Threat Model; 3.2 Scenario 2 Threat Model; 3.3 Privacy Threats in the Context of ML; 3.4 Privacy Threats in the Context of Data Sharing; 4 Overview of Data Modification Techniques; 4.1 Non-perturbative Techniques; 4.2 Perturbative Techniques; 4.3 Synthetic Data Generation; 5 Summary and Future Directions; 5.1 New Types of Data; 5.2 Privacy and Fairness; 5.3 Interdisciplinarity; References

AI for Biometric Authentication Systems
1 Introduction; 2 Biometric System; 2.1 System Design; 2.2 ML-Enabled Biometric Authentication; 2.3 Attack Surface; 2.4 Evaluation Metrics; 3 Biometric Feature Extraction; 3.1 Sensors; 3.2 Pre-processing; 3.3 Feature Extraction; 3.4 Attacks and Defenses; 4 Biometric DB; 4.1 Template Enrollment; 4.2 Template Matching; 4.3 Attacks and Defenses; 5 Comparison Functions; 5.1 Distance Functions; 5.2 Learned Functions; 5.3 Attacks and Defenses; 6 Summary; 6.1 Biometric Authentication as an Open Set Problem; 6.2 Threats Linked to New Factors and Deep Learning; 6.3 Future Directions; References

Machine Learning and Deep Learning for Hardware Fingerprinting
1 Introduction; 2 Background; 3 Use Cases; 3.1 Reconnaissance; 3.2 Authentication; 3.3 Attacks to Privacy; 3.4 Indoor Positioning Systems; 3.5 Forensic Device Identification; 4 Domains of Use of ML and DL for HW Fingerprinting; 4.1 Radio Fingerprinting; 4.2 Bus Fingerprinting; 4.3 Data Fingerprinting; 5 Challenges; 6 Summary; References

AI for Intrusion Detection

Intelligent Malware Defenses
1 Introduction; 2 Malware Characterization; 2.1 Platform-Specific Malware and Defenses; 2.2 Feature Sources; 2.3 Feature Engineering Modes; 2.4 Feature Representation; 3 Malware Detection; 3.1 Statistical Approaches; 3.2 Graph-Mining Approaches; 3.3 Image Visualization Approaches; 3.4 Sequence Learning Approaches; 3.5 Performance Optimizations; 3.6 Trend; 4 Additional Research Directions; 4.1 Malware Analysis; 4.2 Adversarial Malware; 4.3 Malware Author Attribution; 5 Challenges in ML-Applied Malware Defenses; 6 Open Problems in ML-Based Malware Defenses; 7 Summary; References

Open-World Network Intrusion Detection
1 Introduction; 2 Network Intrusion Detection; 2.1 Network Threats; 2.2 Network Traffic Monitoring; 3 A Data Analysis Approach; 3.1 Machine Learning for NIDS; 3.2 Anomaly Detection for Open-World NIDS; 4 Challenges and Advances in Open-World NIDS Research; 4.1 Original Premise of Anomaly Detection; 4.2 High Error Rates and Performance Estimation; 4.3 Representative Datasets and Ground Truth; 4.4 Concept Drift; 4.5 Real-Time Detection; 4.6 Adversarial Robustness; 5 Conclusion; References

Security of AI

Adversarial Machine Learning
1 Introduction; 2 Background; 2.1 Related Work; 3 Threat Modeling and Taxonomy of Adversarial Machine Learning; 3.1 Attacks; 3.2 Defenses; 4 White-Box Attacks; 4.1 Train Time White-Box Attacks; 4.2 Test Time White-Box Attacks; 5 Black-Box Attacks; 5.1 Score-Based Attacks; 5.2 Transfer-Based Attacks; 5.3 Decision-Based Attacks; 6 Defenses; 6.1 On the Evaluation of Adversarial Defenses; 7 Domains of Adversarial Machine Learning; 7.1 Malware Detection; 7.2 Authentication; 7.3 CAPTCHAs; 7.4 Computer Vision; 7.5 Speech Recognition; 7.6 Reinforcement Learning; 7.7 Other Domains; 8 Conclusions; References

Deep Learning Backdoors
1 Introduction to Backdoors in Deep Neural Networks; 2 Backdoor Attacks; 2.1 Threat Model; 2.2 White-Box Setting; 2.3 Grey-Box Setting; 2.4 Black-Box Setting; 2.5 Trigger Stealthiness; 2.6 Application Areas; 3 Detecting and Defending Backdoors; 3.1 Pre-deployment Techniques; 3.2 Post-deployment Techniques; 4 Applications of Backdoors; 4.1 Watermarking; 4.2 Adversarial Example Detection; 4.3 Open Problems; References

On Implementation-Level Security of Edge-Based Machine Learning Models
1 Introduction; 1.1 Machine Learning for Edge Devices; 1.2 Attacks on Machine Learning; 2 Overview Side Channel Threats to Machine Learning; 2.1 State-of-the-Art; 2.2 Countermeasures; 3 Overview of Fault Injection Threats to Machine Learning; 3.1 Background; 3.2 State-of-the-Art; 3.3 Countermeasures; 4 Conclusion; 5 Open Research Problems; References

Author Index