
Download the book AI, Machine Learning and Deep Learning: A Security Perspective

AI, Machine Learning and Deep Learning: A Security Perspective

Book Details

AI, Machine Learning and Deep Learning: A Security Perspective

Edition:
Authors:
Series:
ISBN: 2022055385, 9781032034058
Publisher: CRC Press
Publication year: 2023
Number of pages: 346 [347]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 41 MB

Book price (Toman): 51,000

If the author is Iranian, the book cannot be downloaded and the payment will be refunded.



Average rating for this book:
       Number of raters: 1


If you would like the book AI, Machine Learning and Deep Learning: A Security Perspective converted to PDF, EPUB, AZW3, MOBI, or DJVU format, please notify support and they will convert the file for you.

Please note that Artificial Intelligence, Machine Learning and Deep Learning: A Security Perspective is the original-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.


Description of the book (in the original language)



Table of Contents

Cover
Half Title
Title Page
Copyright Page
Table of Contents
Preface
About the Editors
Contributors
Part I Secure AI/ML Systems: Attack Models
	1 Machine Learning Attack Models
		1.1 Introduction
		1.2 Background
			1.2.1 Notation
			1.2.2 Support Vector Machines
			1.2.3 Neural Networks
		1.3 White-Box Adversarial Attacks
			1.3.1 L-BFGS Attack
			1.3.2 Fast Gradient Sign Method
			1.3.3 Basic Iterative Method
			1.3.4 DeepFool
			1.3.5 Fast Adaptive Boundary Attack
			1.3.6 Carlini and Wagner’s Attack
			1.3.7 Shadow Attack
			1.3.8 Wasserstein Attack
		1.4 Black-Box Adversarial Attacks
			1.4.1 Transfer Attack
			1.4.2 Score-Based Black-Box Attacks
				ZOO Attack
				Square Attack
			1.4.3 Decision-Based Attack
				Boundary Attack
				HopSkipJump Attack
				Spatial Transformation Attack
		1.5 Data Poisoning Attacks
			1.5.1 Label Flipping Attacks
			1.5.2 Clean Label Data Poisoning Attack
				Feature Collision Attack
				Convex Polytope Attack and Bullseye Polytope Attack
			1.5.3 Backdoor Attack
		1.6 Conclusions
		Acknowledgment
		Note
		References
	2 Adversarial Machine Learning: A New Threat Paradigm for Next-Generation Wireless Communications
		2.1 Introduction
			2.1.1 Scope and Background
		2.2 Adversarial Machine Learning
		2.3 Challenges and Gaps
			2.3.1 Development Environment
			2.3.2 Training and Test Datasets
			2.3.3 Repeatability, Hyperparameter Optimization, and Explainability
			2.3.4 Embedded Implementation
		2.4 Conclusions and Recommendations
		References
	3 Threat of Adversarial Attacks to Deep Learning: A Survey
		3.1 Introduction
		3.2 Categories of Attacks
			3.2.1 White-Box Attacks
				FGSM-based Method
				JSMA-based Method
			3.2.2 Black-Box Attacks
				Mobility-based Approach
				Gradient Estimation-Based Approach
		3.3 Attacks Overview
			3.3.1 Attacks On Computer-Vision-Based Applications
			3.3.2 Attacks On Natural Language Processing Applications
			3.3.3 Attacks On Data Poisoning Applications
		3.4 Specific Attacks In The Real World
			3.4.1 Attacks On Natural Language Processing
			3.4.2 Attacks Using Data Poisoning
		3.5 Discussions and Open Issues
		3.6 Conclusions
		References
	4 Attack Models for Collaborative Deep Learning
		4.1 Introduction
		4.2 Background
			4.2.1 Deep Learning (DL)
				Convolutional Neural Network
			4.2.2 Collaborative Deep Learning (CDL)
				Architecture
				Collaborative Deep Learning Workflow
			4.2.3 Deep Learning Security and Collaborative Deep Learning Security
		4.3 Auror: An Automated Defense
			4.3.1 Problem Setting
			4.3.2 Threat Model
				Targeted Poisoning Attacks
			4.3.3 AUROR Defense
			4.3.4 Evaluation
		4.4 A New CDL Attack: Gan Attack
			4.4.1 Generative Adversarial Network (GAN)
			4.4.2 GAN Attack
				Main Protocol
			4.4.3 Experiment Setups
				Dataset
				System Architecture
				Hyperparameter Setup
			4.4.4 Evaluation
		4.5 Defend Against Gan Attack In IoT
			4.5.1 Threat Model
			4.5.2 Defense System
			4.5.3 Main Protocols
			4.5.4 Evaluation
		4.6 Conclusions
		Acknowledgment
		References
	5 Attacks On Deep Reinforcement Learning Systems: A Tutorial
		5.1 Introduction
		5.2 Characterizing Attacks on DRL Systems
		5.3 Adversarial Attacks
		5.4 Policy Induction Attacks
		5.5 Conclusions and Future Directions
		References
	6 Trust and Security of Deep Reinforcement Learning
		6.1 Introduction
		6.2 Deep Reinforcement Learning Overview
			6.2.1 Markov Decision Process
			6.2.2 Value-Based Methods
				V-value Function
				Q-value Function
				Advantage Function
				Bellman Equation
			6.2.3 Policy-Based Methods
			6.2.4 Actor–Critic Methods
			6.2.5 Deep Reinforcement Learning
		6.3 The Most Recent Reviews
			6.3.1 Adversarial Attack On Machine Learning
				6.3.1.1 Evasion Attack
				6.3.1.2 Poisoning Attack
			6.3.2 Adversarial Attack On Deep Learning
				6.3.2.1 Evasion Attack
				6.3.2.2 Poisoning Attack
			6.3.3 Adversarial Deep Reinforcement Learning
		6.4 Attacks On DRL Systems
			6.4.1 Attacks On Environment
			6.4.2 Attacks On States
			6.4.3 Attacks On Policy Function
			6.4.4 Attacks On Reward Function
		6.5 Defenses Against DRL System Attacks
			6.5.1 Adversarial Training
			6.5.2 Robust Learning
			6.5.3 Adversarial Detection
		6.6 Robust DRL Systems
			6.6.1 Secure Cloud Platform
			6.6.2 Robust DRL Modules
		6.7 A Scenario of Financial Stability
			6.7.1 Automatic Algorithm Trading Systems
		6.8 Conclusion and Future Work
		References
	7 IoT Threat Modeling Using Bayesian Networks
		7.1 Background
		7.2 Topics of Chapter
		7.3 Scope
		7.4 Cyber Security In IoT Networks
			7.4.1 Smart Home
			7.4.2 Attack Graphs
		7.5 Modeling With Bayesian Networks
			7.5.1 Graph Theory
			7.5.2 Probabilities and Distributions
			7.5.3 Bayesian Networks
			7.5.4 Parameter Learning
			7.5.5 Inference
		7.6 Model Implementation
			7.6.1 Network Structure
			7.6.2 Attack Simulation
				Selection Probabilities
				Vulnerability Probabilities Based On CVSS Scores
				Attack Simulation Algorithm
			7.6.3 Network Parametrization
			7.6.4 Results
		7.7 Conclusions and Future Work
		References
Part II Secure AI/ML Systems: Defenses
	8 Survey of Machine Learning Defense Strategies
		8.1 Introduction
		8.2 Security Threats
		8.3 Honeypot Defense
		8.4 Poisoned Data Defense
		8.5 Mixup Inference Against Adversarial Attacks
		8.6 Cyber-Physical Techniques
		8.7 Information Fusion Defense
		8.8 Conclusions and Future Directions
		References
	9 Defenses Against Deep Learning Attacks
		9.1 Introduction
		9.2 Categories of Defenses
			9.2.1 Modified Training Or Modified Input
				Data Preprocessing
				Data Augmentation
			9.2.2 Modifying Networks Architecture
				Network Distillation
				Model Regularization
			9.2.3 Network Add-On
				Defense Against Universal Perturbations
				MagNet Model
		9.4 Discussions and Open Issues
		9.5 Conclusions
		References
	10 Defensive Schemes for Cyber Security of Deep Reinforcement Learning
		10.1 Introduction
		10.2 Background
			10.2.1 Model-Free RL
			10.2.2 Deep Reinforcement Learning
			10.2.3 Security of DRL
		10.3 Certificated Verification For Adversarial Examples
			10.3.1 Robustness Certification
			10.3.2 System Architecture
			10.3.3 Experimental Results
		10.4 Robustness On Adversarial State Observations
			10.4.1 State-Adversarial DRL for Deterministic Policies: DDPG
			10.4.2 State-Adversarial DRL for Q-Learning: DQN
			10.4.3 Experimental Results
		10.5 Conclusion And Challenges
		Acknowledgment
		References
	11 Adversarial Attacks On Machine Learning Models in Cyber-Physical Systems
		11.1 Introduction
		11.2 Support Vector Machine (SVM) Under Evasion Attacks
			11.2.1 Adversary Model
			11.2.2 Attack Scenarios
			11.2.3 Attack Strategy
		11.3 SVM Under Causality Availability Attack
		11.4 Adversarial Label Contamination on SVM
			11.4.1 Random Label Flips
			11.4.2 Adversarial Label Flips
		11.5 Conclusions
		References
	12 Federated Learning and Blockchain: An Opportunity for Artificial Intelligence With Data Regulation
		12.1 Introduction
		12.2 Data Security And Federated Learning
		12.3 Federated Learning Context
			12.3.1 Type of Federation
				12.3.1.1 Model-Centric Federated Learning
				12.3.1.2 Data-Centric Federated Learning
			12.3.2 Techniques
				12.3.2.1 Horizontal Federated Learning
				12.3.2.2 Vertical Federated Learning
		12.4 Challenges
			12.4.1 Trade-Off Between Efficiency and Privacy
			12.4.2 Communication Bottlenecks
			12.4.3 Poisoning
		12.5 Opportunities
			12.5.1 Leveraging Blockchain
		12.6 Use Case: Leveraging Privacy, Integrity, And Availability For Data-Centric Federated Learning Using A Blockchain-Based Approach
			12.6.1 Results
		12.7 Conclusion
		References
Part III Using AI/ML Algorithms for Cyber Security
	13 Using Machine Learning for Cyber Security: Overview
		13.1 Introduction
		13.2 Is Artificial Intelligence Enough To Stop Cyber Crime?
		13.3 Corporations’ Use Of Machine Learning To Strengthen Their Cyber Security Systems
		13.4 Cyber Attack/Cyber Security Threats And Attacks
			13.4.1 Malware
			13.4.2 Data Breach
			13.4.3 Structured Query Language Injection (SQL-I)
			13.4.4 Cross-Site Scripting (XSS)
			13.4.5 Denial-Of-Service (DOS) Attack
			13.4.6 Insider Threats
			13.4.7 Birthday Attack
			13.4.8 Network Intrusions
			13.4.9 Impersonation Attacks
			13.4.10 DDoS Attacks Detection On Online Systems
		13.5 Different Machine Learning Techniques In Cyber Security
			13.5.1 Support Vector Machine (SVM)
			13.5.2 K-Nearest Neighbor (KNN)
			13.5.3 Naïve Bayes
			13.5.4 Decision Tree
			13.5.5 Random Forest (RF)
			13.5.6 Multilayer Perceptron (MLP)
		13.6 Application Of Machine Learning
			13.6.1 ML in Aviation Industry
			13.6.2 Cyber ML Under Cyber Security Monitoring
			13.6.3 Battery Energy Storage System (BESS) Cyber Attack Mitigation
			13.6.4 Energy-Based Cyber Attack Detection in Large-Scale Smart Grids
			13.6.5 IDS for Internet of Vehicles (IoV)
		13.7 Deep Learning Techniques In Cyber Security
			13.7.1 Deep Auto-Encoder
			13.7.2 Convolutional Neural Networks (CNN)
			13.7.3 Recurrent Neural Networks (RNNs)
			13.7.4 Deep Neural Networks (DNNs)
			13.7.5 Generative Adversarial Networks (GANs)
			13.7.6 Restricted Boltzmann Machine (RBM)
			13.7.7 Deep Belief Network (DBN)
		13.8 Applications Of Deep Learning In Cyber Security
			13.8.1 Keystroke Analysis
			13.8.2 Secure Communication in IoT
			13.8.3 Botnet Detection
			13.8.4 Intrusion Detection and Prevention Systems (IDS/IPS)
			13.8.5 Malware Detection in Android
			13.8.6 Cyber Security Datasets
			13.8.7 Evaluation Metrics
		13.9 Conclusion
		References
	14 Performance of Machine Learning and Big Data Analytics Paradigms in Cyber Security
		14.1 Introduction
			14.1.1 Background On Cyber Security and Machine Learning
			14.1.2 Background Perspectives to Big Data Analytics and Cyber Security
			14.1.3 Supervised Learning Algorithms
			14.1.4 Statement of the Problem
			14.1.5 Purpose of Study
			14.1.6 Research Objectives
			14.1.7 Research Questions
		14.2 Literature Review
			14.2.1 Overview
			14.2.2 Classical Machine Learning (CML)
				14.2.2.1 Logistic Regression (LR)
				14.2.2.2 Naïve Bayes (NB)
				14.2.2.3 Decision Tree (DT)
				14.2.2.4 K-Nearest Neighbor (KNN)
				14.2.2.5 AdaBoost (AB)
				14.2.2.6 Random Forest (RF)
				14.2.2.7 Support Vector Machine (SVM)
			14.2.3 Modern Machine Learning
				14.2.3.1 Deep Neural Network (DNN)
				14.2.3.2 Future of AI in the Fight Against Cyber Crimes
			14.2.4 Big Data Analytics and Cyber Security
				14.2.4.1 Big Data Analytics Issues
				14.2.4.2 Independent Variable: Big Data Analytics
				14.2.4.3 Intermediating Variables
				14.2.4.4 Conceptual Framework
				14.2.4.5 Theoretical Framework
				14.2.4.6 Big Data Analytics Application to Cyber Security
				14.2.4.7 Big Data Analytics and Cyber Security Limitations
				14.2.4.8 Limitations
			14.2.5 Advances in Cloud Computing
				14.2.5.1 Explaining Cloud Computing and How It Has Evolved to Date
			14.2.6 Cloud Characteristics
			14.2.7 Cloud Computing Service Models
				14.2.7.1 Software as a Service (SaaS)
				14.2.7.2 Platform as a Service (PaaS)
				14.2.7.3 Infrastructure as a Service (IaaS)
			14.2.8 Cloud Deployment Models
				14.2.8.1 Private Cloud
				14.2.8.2 Public Cloud
				14.2.8.3 Hybrid Cloud
				14.2.8.4 Community Cloud
				14.2.8.5 Advantages and Disadvantages of Cloud Computing
				14.2.8.6 Six Main Characteristics of Cloud Computing and How They Are Leveraged
				14.2.8.7 Some Advantages of Network Function Virtualization
				14.2.8.8 Virtualization and Containerization Compared and Contrasted
		14.3 Research Methodology
			14.3.1 Presentation of the Methodology
				14.3.1.1 Research Approach and Philosophy
				14.3.1.2 Research Design and Methods
			14.3.2 Population and Sampling
				14.3.2.1 Population
				14.3.2.2 Sample
			14.3.3 Sources and Types of Data
			14.3.4 Model for Analysis
				14.3.4.1 Big Data
				14.3.4.2 Big Data Analytics
				14.3.4.3 Insights for Action
				14.3.4.4 Predictive Analytics
			14.3.5 Validity and Reliability
			14.3.6 Summary of Research Methodology
			14.3.7 Possible Outcomes
		14.4 Analysis And Research Outcomes
			14.4.1 Overview
			14.4.2 Support Vector Machine
			14.4.3 KNN Algorithm
			14.4.4 Multilinear Discriminant Analysis (LDA)
			14.4.5 Random Forest Classifier
			14.4.6 Variable Importance
			14.4.7 Model Results
			14.4.8 Classification and Regression Trees (CART)
			14.4.9 Support Vector Machine
			14.4.10 Linear Discriminant Algorithm
			14.4.11 K-Nearest Neighbor
			14.4.12 Random Forest
			14.4.13 Challenges and Future Direction
				14.4.13.1 Model 1: Experimental/Prototype Model
				14.4.13.2 Model 2: Cloud Computing/Outsourcing
				14.4.13.3 Application of Big Data Analytics Models in Cyber Security
				14.4.13.4 Summary of Analysis
		14.5 Conclusion
		References
	15 Using ML and DL Algorithms for Intrusion Detection in the Industrial Internet of Things
		15.1 Introduction
		15.2 IDS Applications
			15.2.1 Random Forest Classifier
			15.2.2 Pearson Correlation Coefficient
			15.2.3 Related Works
		15.3 Use Of ML And DL Algorithms In IIOT Applications
		15.4 Practical Application of ML Algorithms In IIOT
			15.4.1 Results
		15.5 Conclusion
		References
Part IV Applications
	16 On Detecting Interest Flooding Attacks in Named Data Networking (NDN)–based IoT Searches
		16.1 Introduction
		16.2 Preliminaries
			16.2.1 Named Data Networking (NDN)
			16.2.2 Internet of Things Search Engine (IoTSE)
			16.2.3 Machine Learning (ML)
		16.3 Machine Learning Assisted For NDN-Based IFA Detection In IOTSE
			16.3.1 Attack Model
			16.3.2 Attack Scale
			16.3.3 Attack Scenarios
			16.3.4 Machine Learning (ML) Detection Models
		16.4 Performance Evaluation
			16.4.1 Methodology
			16.4.2 IFA Performance
				16.4.2.1 Simple Tree Topology (Small Scale)
				16.4.2.2 Rocketfuel ISP Like Topology (Large Scale)
			16.4.3 Data Processing for Detection
			16.4.4 Detection Results
				16.4.4.1 ML Detection Performance in Simple Tree Topology
				16.4.4.2 ML Detection in Rocketfuel ISP Topology
		16.5 Discussion
		16.6 Related Works
		16.7 Final Remarks
		Acknowledgment
		References
	17 Attack On Fraud Detection Systems in Online Banking Using Generative Adversarial Networks
		17.1 Introduction
			17.1.1 Problem of Fraud Detection in Banking
			17.1.2 Fraud Detection and Prevention System
		17.2 Experiment Description
			17.2.1 Research Goal
			17.2.2 Empirical Data
			17.2.3 Attack Scenario
		17.3 Generator And Discrimination Model
			17.3.1 Model Construction
				17.3.1.1 Imitation Fraud Detection System Model
				17.3.1.2 Generator Models
			17.3.2 Evaluation of Models
		17.4 Final Conclusions and Recommendations
		Notes
		References
	18 Artificial Intelligence-Assisted Security Analysis of Smart Healthcare Systems
		18.1 Introduction
		18.2 Smart Healthcare System (SHS)
			18.2.1 Formal Modeling of SHS
			18.2.2 Machine Learning (ML)–based Patient Status Classification Module (PSCM) in SHS
				18.2.2.1 Decision Tree (DT)
				18.2.2.2 Logistic Regression (LR)
				18.2.2.3 Neural Network (NN)
			18.2.3 Hyperparameter Optimization of PSCM in SHS
				18.2.3.1 Whale Optimization (WO)
				18.2.3.2 Grey Wolf Optimization (GWO)
				18.2.3.3 Firefly Optimization (FO)
				18.2.3.4 Evaluation Results
		18.3 Formal Attack Modeling of SHS
			18.3.1 Attacks in SHS
			18.3.2 Attacker’s Knowledge
			18.3.3 Attacker’s Capability
			18.3.4 Attacker’s Accessibility
			18.3.5 Attacker’s Goal
		18.4 Anomaly Detection Models (ADMS) In SHS
			18.4.1 ML-Based Anomaly Detection Model (ADM) in SHS
				18.4.1.1 Density-Based Spatial Clustering of Applications With Noise (DBSCAN)
				18.4.1.2 K-Means
				18.4.1.3 One-Class SVM (OCSVM)
				18.4.1.4 Autoencoder (AE)
			18.4.2 Ensemble-Based ADMs in SHS
				18.4.2.1 Data Collection and Preprocessing
				18.4.2.2 Model Training
				18.4.2.3 Threshold Calculation
				18.4.2.4 Anomaly Detection
				18.4.2.5 Example Case Studies
				18.4.2.6 Evaluation Result
				18.4.2.7 Hyperparameter Optimization of ADMs in SHS
		18.5 Formal Attack Analysis of Smart Healthcare Systems
			18.5.1 Example Case Studies
			18.5.2 Performance With Respect to Attacker Capability
			18.5.3 Frequency of Sensors in the Attack Vectors
			18.5.4 Scalability Analysis
		18.6 Resiliency Analysis Of Smart Healthcare System
		18.7 Conclusion and Future Works
		References
	19 A User-Centric Focus for Detecting Phishing Emails
		19.1 Introduction
		19.2 Background and Related Work
			19.2.1 Behavioral Models Related to Phishing Susceptibility
			19.2.2 User-Centric Antiphishing Measures
			19.2.3 Technical Antiphishing Measures
			19.2.4 Research Gap
		19.3 The Dataset
		19.4 Understanding The Decision Behavior Of Machine Learning Models
			19.4.1 Interpreter for Machine Learning Algorithms
			19.4.2 Local Interpretable Model-Agnostic Explanations (LIME)
			19.4.3 Anchor Explanations
				19.4.3.1 Share of Emails in the Data for Which the Rule Holds
		19.5 Designing the Artifact
			19.5.1 Background
			19.5.2 Identifying Suspected Phishing Attempts
			19.5.3 Cues in Phishing Emails
			19.5.4 Extracting Cues
			19.5.5 Examples of the Application of XAI for Extracting Cues and Phrases
		19.6 Conclusion and Future Works
			19.6.1 Completion of the Artifact
		Notes
		References



