

Download the book Federated Learning: A Comprehensive Overview of Methods and Applications

Book Specifications

Federated Learning: A Comprehensive Overview of Methods and Applications

Edition: 1st ed. 2022
Authors:
Series:
ISBN: 3030968952, 9783030968953
Publisher: Springer
Publication year: 2022
Number of pages: 540 [531]
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 14 MB

Book price (Toman): 31,000





To have the file of Federated Learning: A Comprehensive Overview of Methods and Applications converted to PDF, EPUB, AZW3, MOBI, or DJVU format, notify support and they will convert the file for you.

Please note that Federated Learning: A Comprehensive Overview of Methods and Applications is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.


About the book Federated Learning: A Comprehensive Overview of Methods and Applications


Federated Learning: A Comprehensive Overview of Methods and Applications presents an in-depth discussion of the most important issues and approaches to federated learning for researchers and practitioners. 
Federated Learning (FL) is an approach to machine learning in which the training data are not managed centrally. Data are retained by data parties that participate in the FL process and are not shared with any other entity. This makes FL an increasingly popular solution for machine learning tasks for which bringing data together in a centralized repository is problematic, either for privacy, regulatory or practical reasons.
This book explains recent progress in research and the state-of-the-art development of Federated Learning (FL), from the initial conception of the field to first applications and commercial use. To obtain this broad and deep overview, leading researchers address the different perspectives of federated learning: the core machine learning perspective, privacy and security, distributed systems, and specific application domains. Readers learn about the challenges faced in each of these areas, how they are interconnected, and how they are solved by state-of-the-art methods.
Following an overview of federated learning basics in the introduction, the reader dives deeply into various topics over the following 24 chapters. A first part addresses algorithmic questions of solving different machine learning tasks in a federated way: how to train efficiently, at scale, and fairly. Another part focuses on providing clarity on how to select privacy and security solutions in a way that can be tailored to specific use cases, while yet another considers the pragmatics of the systems where the federated learning process will run. The book also covers other important use cases for federated learning, such as split learning and vertical federated learning. Finally, the book includes chapters focusing on applying FL in real-world enterprise settings.



Table of Contents

Preface
Contents
1 Introduction to Federated Learning
	1.1 Overview
	1.2 Concepts and Terminology
	1.3 Machine Learning Perspective
		1.3.1 Deep Neural Networks
		1.3.2 Classical Machine Learning Models
		1.3.3 Horizontal, Vertical Federated Learning and Split Learning
		1.3.4 Model Personalization
	1.4 Security and Privacy
		1.4.1 Manipulation Attacks
		1.4.2 Inference Attacks
	1.5 Federated Learning Systems
	1.6 Summary and Conclusion
	References
Part I Federated Learning as a Machine Learning Problem
	2 Tree-Based Models for Federated Learning Systems
		2.1 Introduction
			2.1.1 Tree-Based Models
			2.1.2 Key Research Challenges of Tree-Based Models in FL
			2.1.3 Advantages of Tree-Based Models in FL
		2.2 Survey of Tree-Based Methods for FL
			2.2.1 Horizontal vs. Vertical FL
			2.2.2 Tree-Based Algorithm Types in Federated Learning
			2.2.3 Handling Security Requirements for Tree-Based Federated Learning
			2.2.4 Implementations of Tree-Based Models in FL
		2.3 Preliminaries on Decision Trees and Gradient Boosting
			2.3.1 The Federated Learning System
			2.3.2 Preliminaries on Centralized ID3 Models
			2.3.3 Preliminaries on Gradient Boosting
		2.4 Decision Trees for Federated Learning
		2.5 XGBoost for Federated Learning
		2.6 Open Problems and Future Research Directions
			2.6.1 Data Fidelity Threshold Policies
			2.6.2 Fairness and Bias Mitigation Methods for Tree-Based FL Models
			2.6.3 Training Tree-Based FL Models on Alternative Network Topologies
		2.7 Conclusion
		References
	3 Semantic Vectorization: Text- and Graph-Based Models
		3.1 Introduction
		3.2 Background
			3.2.1 Natural Language Processing
			3.2.2 Text Vectorizers
			3.2.3 Graph Vectorizers
		3.3 Problem Formulation
			3.3.1 Joint Learning
			3.3.2 Vector-Space Mapping
		3.4 Experimentation and Setup
			3.4.1 Datasets
			3.4.2 Implementation
		3.5 Results: Joint Learning
			3.5.1 Metrics
				3.5.1.1 Natural Language
				3.5.1.2 Graph
		3.6 Results: Vector-Space Mapping
			3.6.1 Cosine Distance
			3.6.2 Rank Similarity
		3.7 Conclusions and Future Work
		References
	4 Personalization in Federated Learning
		4.1 Introduction
		4.2 First Steps Toward Personalization
			4.2.1 Fine-Tuning Global Model for Personalization
			4.2.2 Federated Averaging as a First-Order Meta-learning Method
		4.3 Personalization Strategies
			4.3.1 Client (Party) Clustering
			4.3.2 Client Contextualization
			4.3.3 Data Augmentation
			4.3.4 Distillation
			4.3.5 Meta-learning Approach
			4.3.6 Mixture of Models
			4.3.7 Model Regularization
			4.3.8 Multi-task Learning
		4.4 Benchmarks for Personalization Techniques
			4.4.1 Synthetic Federated Datasets
			4.4.2 Simulating Federated Datasets
			4.4.3 Public Federated Datasets
		4.5 Personalization as the Incidental Parameters Problem
		4.6 Conclusion
		References
	5 Personalized, Robust Federated Learning with Fed+
		5.1 Introduction
		5.2 Literature Review
		5.3 Illustration of Federated Learning Training Failure
		5.4 Personalized Federated Learning
			5.4.1  Problem Formulation
			5.4.2 Handling Robust Aggregation
			5.4.3 Personalization
			5.4.4  Reformulation and Unification of Mean and Robust Aggregation
			5.4.5 The Fed+ Algorithm
			5.4.6 Mean and Robust Variants of Fed+
				5.4.6.1 FedAvg+
				5.4.6.2 FedGeoMed+
				5.4.6.3 FedCoMed+
				5.4.6.4 Hybridization via the Unified Fed+ Framework with Layer-Specific ϕ
			5.4.7  Deriving Existing Algorithms from Fed+
		5.5 Fixed Points of Fed+
		5.6 Convergence Analysis
		5.7 Experiments
			5.7.1 Datasets
			5.7.2 Results
		5.8 Conclusion
		References
	6 Communication-Efficient Distributed Optimization Algorithms
		6.1 Introduction
		6.2 Local-Update SGD and FedAvg
			6.2.1 Local-Update SGD and Its Variants
			6.2.2 Federated Averaging (FedAvg) Algorithm and Its Variants
		6.3 Model Compression
			6.3.1 SGD with Compressed Updates
				6.3.1.1 Unbiased Compressor Without Error Feedback
				6.3.1.2 General Compressor with Error Feedback
			6.3.2 Adaptive Compression Rate
			6.3.3 Model Pruning
		6.4 Discussion
		References
	7 Communication-Efficient Model Fusion
		7.1 Introduction
		7.2 Permutation-Invariant Structure of Models
			7.2.1 General Formulation of Matched Averaging
			7.2.2 Solving Matched Averaging
		7.3 Probabilistic Federated Neural Matching
			7.3.1 PFNM Generative Process
			7.3.2 PFNM Inference
			7.3.3 PFNM in Practice
		7.4 Unsupervised FL with SPAHM
			7.4.1 SPAHM Model
			7.4.2 SPAHM Inference
			7.4.3 SPAHM in Practice
		7.5 Model Fusion of Posterior Distributions
			7.5.1 Model Fusion with KL Divergence
			7.5.2 KL-Fusion in Practice
		7.6 Fusion of Deep Neural Networks
			7.6.1 Extending PFNM to Deep Neural Networks
			7.6.2 FedMA in Practice
		7.7 Theoretical Understanding of Model Fusion
			7.7.1 Preliminaries: Parametric Models
			7.7.2 The Benefits and Drawbacks of Model Fusion in Federated Settings
		7.8 Conclusion
		References
	8 Federated Learning and Fairness
		8.1 Introduction
		8.2 Preliminaries and Existing Mitigation Methods
			8.2.1 Notation and Terminology
			8.2.2 Types of Bias Mitigation Methods
			8.2.3 Data Privacy and Bias
		8.3 Sources of Bias
			8.3.1 Centralized and Federated Causes
			8.3.2 Federated Learning-Specific Causes
				8.3.2.1 Data Heterogeneity
				8.3.2.2 Fusion Algorithms
				8.3.2.3 Party Selection and Subsampling
		8.4 Exploring the Literature
			8.4.1 Centralized Methods
			8.4.2 Adapting Centralized Methods for FL
			8.4.3 Bias Mitigation Without Sensitive Attributes
		8.5 Measuring Bias
		8.6 Open Issues
		8.7 Conclusion
		References
Part II Systems and Frameworks
	9 Introduction to Federated Learning Systems
		9.1 Introduction
			9.1.1 Chapter Overview
		9.2 Cross-Device vs. Cross-Silo Federated Learning
		9.3 Cross-Device Federated Learning
			9.3.1 Problem Formulation
			9.3.2 System Overview
			9.3.3 Training Procedure
			9.3.4 Challenges
		9.4 Cross-Silo Federated Learning
			9.4.1 Problem Formulation
			9.4.2 System Overview
			9.4.3 Training Procedure
			9.4.4 Challenges
		9.5 Conclusion
		References
	10 Local Training and Scalability of Federated Learning Systems
		10.1 Party-Side Local Training
			10.1.1 Computation
			10.1.2 Memory
			10.1.3 Energy
			10.1.4 Network
		10.2 Large-Scale FL Systems
			10.2.1 Clustered FL
				10.2.1.1 Design Challenges
				10.2.1.2 Pros and Cons
				10.2.1.3 Notable Examples in Literature
			10.2.2 Hierarchical FL
				10.2.2.1 Design Challenges
				10.2.2.2 Pros and Cons
				10.2.2.3 Notable Examples in Literature
			10.2.3 Decentralized FL
				10.2.3.1 Design Challenges
				10.2.3.2 Pros and Cons
				10.2.3.3 Notable Examples in Literature
			10.2.4 Asynchronous FL
				10.2.4.1 Design Challenges
				10.2.4.2 Pros and Cons
				10.2.4.3 Notable Examples in Literature
		10.3 Conclusion
		References
	11 Straggler Management
		11.1 Introduction
		11.2 Heterogeneity Impact Study
			11.2.1 Formulating Standard Federated Learning
			11.2.2 Heterogeneity Impact Analysis
			11.2.3 Experimental Study
		11.3 Design of TiFL
			11.3.1 System Overview
			11.3.2 Profiling and Tiering
			11.3.3 Straw-Man Proposal: Static Tier Selection Algorithm
			11.3.4 Adaptive Tier Selection Algorithm
			11.3.5 Training Time Estimation Model
		11.4 Experimental Evaluation
			11.4.1 Experimental Setup
				11.4.1.1 Experimental Results
				11.4.1.2 Training Time Estimation via Analytical Model
			11.4.2 Resource Heterogeneity
			11.4.3 Data Heterogeneity
			11.4.4 Resource and Data Heterogeneity
			11.4.5 Adaptive Selection Policy
			11.4.6 Adaptive Selection Policy
		11.5 Conclusion
		References
	12 Systems Bias in Federated Learning
		12.1 Introduction
		12.2 Background
			12.2.1 Fairness in Machine Learning
			12.2.2 Fairness in Federated Learning
			12.2.3 Resource Usage in Federated Learning
		12.3 Characterization Study
			12.3.1 Performance Metrics
			12.3.2 Tradeoff Between Fairness and Training Time
			12.3.3 Impact of Dropout on Fairness and Model Error
			12.3.4 Tradeoff Between Cost and Model Error
		12.4 Methodology
			12.4.1 Problem Formulation
			12.4.2 DCFair Overview
			12.4.3 Selection Probability
			12.4.4 Selection Mutualism
		12.5 Evaluation
			12.5.1 Cost Analysis
			12.5.2 Model Error and Fairness Analysis
			12.5.3 Training Time Analysis
			12.5.4 Pareto Optimality Analysis
		12.6 Conclusion
		References
Part III Privacy and Security
	13 Protecting Against Data Leakage in Federated Learning: What Approach Should You Choose?
		13.1 Introduction
		13.2 System Entities, Attack Surfaces, and Inference Attacks
			13.2.1 System Setup, Assumptions, and Attack Surfaces
			13.2.2 Potential Adversaries
			13.2.3 Inference Attacks to Federated Learning
				13.2.3.1 Training Data Extraction Attacks
				13.2.3.2 Membership Inference Attacks
				13.2.3.3 Model Inversion Attacks
				13.2.3.4 Property Inference Attacks
		13.3 Mitigating Inference Threats in Federated Learning
			13.3.1 Secure Aggregation Approaches
				13.3.1.1 Homomorphic Encryption-Based Secure Aggregation
				13.3.1.2 Threshold Paillier-Based Secure Aggregation
				13.3.1.3 Pairwise Mask-Based Secure Aggregation
				13.3.1.4 Functional Encryption-Based Secure Aggregation
				13.3.1.5 Summary Secure Aggregation
			13.3.2 Syntactic and Perturbation Approaches
				13.3.2.1 K-Anonymity-Based Approaches
				13.3.2.2 Differential Privacy-Based Approaches
			13.3.3 Trusted Execution Environments (TEE)
			13.3.4 Other Techniques for Distributed Machine Learning and Vertical FL
		13.4 Selecting the Right Defense
			13.4.1 Fully Trusted Federations
			13.4.2 Ensuring that the Aggregator Can Be Trusted
			13.4.3 Federations with an Untrusted Aggregator
		13.5 Conclusions
		References
	14 Private Parameter Aggregation for Federated Learning
		14.1 Introduction
		14.2 Focus, Trust Model, and Assumptions
		14.3 Differentially Private Federated Learning
			14.3.1 Background: Differential Privacy (DP)
			14.3.2 Incorporating DP into SGD
			14.3.3 Experiments and Discussion
				14.3.3.1 Accuracy vs ε
				14.3.3.2 Accuracy vs Batch Size (Fixed ε)
		14.4 Additive Homomorphic Encryption
			14.4.1 Participants, Learners, and Administrative Domains
			14.4.2 Architecture
			14.4.3 Mystiko Algorithms
				14.4.3.1 Basic Ring-Based Algorithm
				14.4.3.2 Broadcast Algorithm
				14.4.3.3 All-Reduce
			14.4.4 Multiple Learners Per Administrative Domain
		14.5 Trusted Execution Environments
			14.5.1 Trustworthy Aggregation
		14.6 Comparing HE- and TEE-Based Aggregation with SMC
			14.6.1 Comparing Mystiko and SPDZ
			14.6.2 Overheads of Using TEEs: AMD SEV
		14.7 Concluding Remarks
		References
	15 Data Leakage in Federated Learning
		15.1 Introduction
			15.1.1 Motivation
			15.1.2 Background and Related Work
				15.1.2.1 Federated Learning
			15.1.3 Privacy Protection
		15.2 Data Leakage Attack in FL
			15.2.1 Catastrophic Data Leakage from Batch Gradients
				15.2.1.1 Why Large-Batch Data Leakage Attack Is Difficult?
		15.3 Performance Evaluation
			15.3.1 Experiment Setups and Datasets
			15.3.2 CAFE in HFL Settings
			15.3.3 CAFE in VFL Settings
			15.3.4 Attacking While Training in FL
			15.3.5 Ablation Study
		15.4 Concluding Remarks
			15.4.1 Summary
			15.4.2 Discussion
		References
	16 Security and Robustness in Federated Learning
		16.1 Introduction
			16.1.1 Notation
		16.2 Threats in Federated Learning
			16.2.1 Types of Attackers
			16.2.2 Attacker's Capabilities
				16.2.2.1 Attack Influence
				16.2.2.2 Data Manipulation Constraints
			16.2.3 Attacker's Goal
				16.2.3.1 Security Violation
				16.2.3.2 Attack Specificity
				16.2.3.3 Error Specificity
			16.2.4 Attacker's Knowledge
				16.2.4.1 Perfect Knowledge Attacks
				16.2.4.2 Limited Knowledge Attacks
			16.2.5 Attack Strategy
		16.3 Defense Strategies
			16.3.1 Defending Against Convergence Attacks
				16.3.1.1 Krum
				16.3.1.2 Median-Based Defenses
				16.3.1.3 Bulyan
				16.3.1.4 Zeno
			16.3.2 Defenses Based on Parties' Temporal Consistency
				16.3.2.1 Adaptive Model Averaging (AFA)
				16.3.2.2 PCA
				16.3.2.3 FoolsGold
				16.3.2.4 LEGATO
			16.3.3 Redundancy-Based Defenses
		16.4 Attacks
			16.4.1 Convergence Attacks
			16.4.2 Targeted Model Poisoning
		16.5 Conclusion
		References
	17 Dealing with Byzantine Threats to Neural Networks
		17.1 Background and Motivation
			17.1.1 Byzantine Threats
			17.1.2 Challenges of Mitigating the Effects of Byzantine Threats
		17.2 Gradient-Based Robustness
			17.2.1 Gradient Averaging
			17.2.2 Threat Model
			17.2.3 Coordinate-Wise Median
			17.2.4 Krum
		17.3 Layerwise Robustness to Byzantine Threats
		17.4 LEGATO: Layerwise Gradient Aggregation
			17.4.1 LEGATO
			17.4.2 Complexity Analysis of LEGATO
		17.5 Comparing Gradient-Based and Layerwise Robustness
			17.5.1 Dealing with Non-IID Party Data Distributions
			17.5.2 Dealing with Byzantine Failures
				17.5.2.1 Defense Against Fall of Empires
				17.5.2.2 Defense Against Gaussian Attacks
			17.5.3 Dealing with Overparameterized Neural Networks
			17.5.4 Effectiveness of the Log Size
		17.6 Conclusion, Open Problems, and Challenges
		References
Part IV Beyond Horizontal Federated Learning: Partitioning Models and Data in Diverse Ways
	18 Privacy-Preserving Vertical Federated Learning
		18.1 Introduction
		18.2 Understanding Vertical Federated Learning
			18.2.1 Notation, Terminology and Assumptions
			18.2.2 Two Phases of Vertical FL
				18.2.2.1 Phase I: Private Entity Resolution (PER)
				18.2.2.2 Phase II: Private Vertical Training
		18.3 Challenge of Applying Gradient Descent in Vertical FL
			18.3.1 Gradient Descent in Centralized ML
			18.3.2 Gradient Descent in Vertical FL
		18.4 Representative Vertical FL Solutions
			18.4.1 Contrasting Communication Topology and Efficiency
			18.4.2 Contrasting Privacy-Preserving Mechanisms and Their Threat Models
			18.4.3 Contrasting Supported Machine Learning Models
		18.5 FedV: An Efficient Vertical FL Framework
			18.5.1 Overview of FedV
			18.5.2 FedV Threat Model and Assumptions
			18.5.3 Vertical Training Process: FedV-SecGrad
				18.5.3.1 FedV-SecGrad for Linear Models
				18.5.3.2 FedV-SecGrad for Nonlinear Models
			18.5.4 Analysis and Discussion
		18.6 Conclusions
		References
	19 Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning
		19.1 Introduction to Split Learning
			19.1.1 Vanilla Split Learning
				19.1.1.1 Synchronization Step
				19.1.1.2 Relaxing Synchronization Requirements
		19.2 Communication Efficiency
		19.3 Latencies
		19.4 Split Learning Topologies
			19.4.1 Versatile Configurations
			19.4.2 Model Selection with ExpertMatcher
			19.4.3 Implementation Details
		19.5 Collaborative Inference with Split Learning
			19.5.1 Preventing Reconstruction Attacks in Collaborative Inference
				19.5.1.1 Channel Pruning
				19.5.1.2 Decorrelation
				19.5.1.3 Loss Function
			19.5.2 Differential Privacy for Activation Sharing
		19.6 Future Work
		References
Part V Applications
	20 Federated Learning for Collaborative Financial Crimes Detection
		20.1 Introduction: Financial Crimes Detection
			20.1.1 Combating Financial Crimes with Machine Learning and Graph Learning
			20.1.2 Need for Global Financial Crimes Detection and Contributions
		20.2 Graph Learning
		20.3 Federated Learning for Financial Crimes Detection
			20.3.1 Local Feature Computation
			20.3.2 Global Feature Computation
			20.3.3 Federated Learning
		20.4 Evaluation
			20.4.1 Data Set and Graph Modelling
			20.4.2 Graph Features for Party Relationship Graph
			20.4.3 Model Accuracy
		20.5 Concluding Remarks
		References
	21 Federated Reinforcement Learning for Portfolio Management
		21.1 Introduction
		21.2 Deep Reinforcement Learning Formulation
		21.3 Financial Portfolio Management
		21.4 Data Augmentation Methods
			21.4.1 Geometric Brownian Motion (GBM)
			21.4.2 Variable-Order Markov (VOM)
			21.4.3 Generative Adversarial Network (GAN)
		21.5 Experimental Results
			21.5.1 Experimental Setup
			21.5.2 Numerical Results
		21.6 Conclusion
		References
	22 Application of Federated Learning in Medical Imaging
		22.1 Introduction
		22.2 Image Segmentation
		22.3 3D Image Classification
		22.4 2D Image Classification
		22.5 Discussion
		22.6 Conclusions and Future Work
		References
	23 Advancing Healthcare Solutions with Federated Learning
		23.1 Introduction
		23.2 How Can Federated Learning Be Applied in Healthcare?
		23.3 Building a Healthcare FL Platform at Persistent with IBM FL
		23.4 Guiding Principles for Building Platforms and Solutions for Enabling Application of FL in Healthcare
			23.4.1 Infrastructure Design
			23.4.2 Data Connectors Design
			23.4.3 User Experience Design
			23.4.4 Deployment Considerations
		23.5 Core Technical Considerations with FL in Healthcare
			23.5.1 Data Heterogeneity
			23.5.2 Model Governance and Incentivization
			23.5.3 Trust and Privacy Considerations
			23.5.4 Conclusion
		References
	24 A Privacy-preserving Product Recommender System
		24.1 Introduction
		24.2 Related Work
		24.3 Federated Recommender System
			24.3.1 Algorithms
			24.3.2 Implementation
		24.4 Results
		24.5 Conclusion
		References
	25 Application of Federated Learning in Telecommunications and Edge Computing
		25.1 Overview
		25.2 Use Cases
			25.2.1 Vehicular Networks
			25.2.2 Cross-Border Payment
			25.2.3 Edge Computing
			25.2.4 Cyberattack
			25.2.5 6G
			25.2.6 “Emergency Services” Use Case to Demonstrate the Power of Federated Learning
		25.3 Challenges and Future Directions
			25.3.1 Security and Privacy Challenges and Considerations
			25.3.2 Environment Considerations
			25.3.3 Data Considerations
			25.3.4 Regulatory Consideration
		25.4 Concluding Remarks
		References



