

Download the book Meta-Learning: Theory, Algorithms and Applications
(Persian title: فرا یادگیری: نظریه، الگوریتم ها و کاربردها)

Book details

Meta-Learning: Theory, Algorithms and Applications

Edition:
Authors:
Series:
ISBN: 0323899315, 9780323899314
Publisher: Academic Press
Year: 2022
Pages: 402 [404]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 on request)
File size: 13 MB

Book price (toman): 48,000





If you would like the book Meta-Learning: Theory, Algorithms and Applications converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and the file will be converted for you.

Note that Meta-Learning: Theory, Algorithms and Applications (فرا یادگیری: نظریه، الگوریتم ها و کاربردها) is the original English edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


About the book Meta-Learning: Theory, Algorithms and Applications




Description (original language)

Meta-Learning: An Overview explains the fundamentals of meta-learning, providing an understanding of the concept of learning to learn. After giving a background to artificial intelligence, machine learning, deep learning, deep reinforcement learning, and meta-learning, the book presents important state-of-the-art mechanisms for meta-learning, including memory-augmented neural networks, meta-networks, convolutional Siamese neural networks, matching networks, prototypical networks, relation networks, LSTM meta-learning, model-agnostic meta-learning, and Reptile. The book then demonstrates the application of the principles and algorithms of meta-learning in computer vision, meta-reinforcement learning, robotics, speech recognition, natural language processing, finance, business management, and healthcare. A final chapter summarizes future trends. Users, including students and researchers, will find updates on the principles and state-of-the-art meta-learning algorithms, thus enabling the use of meta-learning for a range of applications.
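
To make the idea of "learning to learn" concrete, below is a minimal, first-order MAML-style sketch on toy 1-D regression tasks. It is illustrative only and not code from the book: the task distribution, the linear model, the step sizes, and the use of the first-order approximation (full MAML also differentiates through the inner update) are all assumptions made for this example.

```python
# Minimal first-order MAML-style sketch on toy 1-D linear-regression tasks.
# Illustrative only: the task distribution, model, and step sizes are assumptions,
# not code from the book; full MAML would also backpropagate through the inner step.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is y = a*x + b with its own random slope a and intercept b."""
    a, b = rng.uniform(-2.0, 2.0), rng.uniform(-1.0, 1.0)
    def sample_batch(n=10):
        x = rng.uniform(-5.0, 5.0, size=n)
        return x, a * x + b
    return sample_batch

def loss_and_grad(w, x, y):
    """MSE of the linear model y_hat = w[0]*x + w[1] and its gradient w.r.t. w."""
    err = w[0] * x + w[1] - y
    loss = np.mean(err ** 2)
    grad = np.array([np.mean(2.0 * err * x), np.mean(2.0 * err)])
    return loss, grad

meta_w = np.zeros(2)             # shared initialization being meta-learned
inner_lr, outer_lr = 0.02, 0.01

for step in range(3000):
    batch = sample_task()
    x_s, y_s = batch()           # support set: used for inner-loop adaptation
    x_q, y_q = batch()           # query set: used for the outer (meta) update

    # Inner loop: one gradient step from the shared initialization.
    _, g_inner = loss_and_grad(meta_w, x_s, y_s)
    adapted_w = meta_w - inner_lr * g_inner

    # Outer loop (first-order approximation): move the initialization toward
    # parameters that perform well *after* adaptation, using the query gradient.
    _, g_outer = loss_and_grad(adapted_w, x_q, y_q)
    meta_w = meta_w - outer_lr * g_outer

# At meta-test time a new task is handled like the inner loop: a few gradient
# steps from meta_w on a small support set produce the task-adapted model.
```

The key pattern is the pair of nested updates: the inner step adapts a copy of the shared initialization to one task's support set, while the outer step moves the initialization itself according to how well the adapted parameters perform on that task's query set.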



Table of contents

Front Cover
Meta-Learning: Theory, Algorithms and Applications
Copyright
Dedication
Contents
Preface
Acknowledgments
Chapter 1: Meta-learning basics and background
	1.1. Introduction
	1.2. Meta-learning
		1.2.1. Definitions
		1.2.2. Evaluation
		1.2.3. Datasets and benchmarks
	1.3. Machine learning
		1.3.1. Models
		1.3.2. Limitations
		1.3.3. Related concepts
		1.3.4. Further Reading
	1.4. Deep learning
		1.4.1. Models
		1.4.2. Limitations
		1.4.3. Further readings
	1.5. Transfer learning
		1.5.1. Multitask learning
	1.6. Few-shot learning
	1.7. Probabilistic modeling
	1.8. Bayesian inference
	References
Part I: Theory & mechanisms
	Chapter 2: Model-based meta-learning approaches
		2.1. Introduction
		2.2. Memory-augmented neural networks
			2.2.1. Background knowledge
			2.2.2. Methodology
				Task setup
				Memory retrieval
				Least recently used access
			2.2.3. Extended algorithm 1
			2.2.4. Extended algorithm 2
		2.3. Meta-networks
			2.3.1. Background knowledge
			2.3.2. Methodology
				Slow weights and fast weights
				Layer augmentation
			2.3.3. Main loss functions and representation loss functions
		2.4. Summary
		References
	Chapter 3: Metric-based meta-learning approaches
		3.1. Introduction
		3.2. Convolutional Siamese neural networks
			3.2.1. Background knowledge
			3.2.2. Methodology
				Combination of the twin Siamese networks
				Objective function
				Optimization
			3.2.3. Extended algorithm 1
		3.3. Matching networks
			3.3.1. Background knowledge
			3.3.2. Methodology
				The attention kernel
				Full context embedding
				Episode-based training
			3.3.3. Extended algorithm 1
		3.4. Prototypical networks
			3.4.1. Background knowledge
			3.4.2. Methodology
				Bregman divergence requirement
			3.4.3. Extended algorithm 1
			3.4.4. Extended algorithm 2
			3.4.5. Extended algorithm 3
		3.5. Relation network
			3.5.1. Background knowledge
			3.5.2. Methodology
				C-Way one-shot
				C-Way K-shot
				C-Way zero-shot
				Objective function
		3.6. Summary
		References
	Chapter 4: Optimization-based meta-learning approaches
		4.1. Introduction
		4.2. LSTM meta-learner
			4.2.1. Background knowledge
				Covariate shift
				Batch normalization
				Long short-term memory
				Gradient-based optimization
			4.2.2. Methodology
				Gradient independent assumption and initialization
				Meta-training and meta-testing batch normalization
				Parameter sharing
		4.3. Model-agnostic meta-learning
			4.3.1. Background knowledge
				Transfer learning
				Fine-tuning
			4.3.2. Methodology
				Task adaptation
			4.3.3. Illustration 1: Few-shot regression and few-shot classification
			4.3.4. Illustration 2: Policy gradient reinforcement learning
			4.3.5. Illustration 3: Meta-imitation learning
			4.3.6. Related Algorithm 1: Meta-SGD
			4.3.7. Related Algorithm 2: Feature reuse-The effectiveness of MAML
			4.3.8. Related Algorithm 3: Adaptive hyperparameter generation for fast adaptation
		4.4. Reptile
			4.4.1. Background knowledge
				First-order model-agnostic meta-learning
			4.4.2. Methodology
				4.4.2.1. Serial version
				4.4.2.2. Parallel or batch version
					The optimization assumption
					Analysis
			4.4.3. Related Algorithm 1
			4.4.4. Related Algorithm 2
			4.4.5. Related Algorithm 3
			4.4.6. Related Algorithm 4
		4.5. Summary
		References
Part II: Applications
	Chapter 5: Meta-learning for computer vision
		5.1. Introduction
			5.1.1. Limitations
		5.2. Image classification
			5.2.1. Introduction
				Development
				Approaches
				Benchmarks
				One-stage semisupervised learning
				One-stage unsupervised learning
				Multistage semisupervised learning
			5.2.2. Decision boundary sharpness and few-shot image classification
			5.2.3. Semisupervised few-shot image classification with refined prototypical network
			5.2.4. Few-shot unsupervised image classification
			5.2.5. One-shot image deformation
			5.2.6. Heterogeneous multitask learning in image classification
			5.2.7. Few-shot classification with transductive inference
			5.2.8. Closed-form base learners
			5.2.9. Long-tailed image classification
			5.2.10. Image classification via incremental learning without forgetting
				Comparison and contrast of iTAML and Reptile
				Lower bound of sample
			5.2.11. Few-shot open set recognition
			5.2.12. Deficiency of pretrained knowledge in few-shot learning
			5.2.13. Bayesian strategy with deep kernel for regression and cross-domain image classification in a few-shot setting
			5.2.14. Statistical diversity in personalized models of federated learning
			5.2.15. Meta-learning deficiency in few-shot learning
		5.3. Face recognition and face presentation attack
			5.3.1. Introduction
				Facial recognition
				Face antispoofing
			5.3.2. Person-specific talking head generation for unseen people and portrait painting in few-shot regimes
			5.3.3. Face presentation attack and domain generalization
			5.3.4. Anti-face-spoofing in few-shot and zero-shot scenarios
			5.3.5. Generalized face recognition in the unseen domain
		5.4. Object detection
			5.4.1. Introduction
				Approaches
				Benchmarks
			5.4.2. Long-tailed data object detection in few-shot scenarios
			5.4.3. Object detection in few-shot scenarios
			5.4.4. Unseen object detection and viewpoint estimation in low-data settings
		5.5. Fine-grained image recognition
			5.5.1. Introduction
				Approaches
				Benchmarks
			5.5.2. Fine-grained visual categorization
			5.5.3. One-shot fine-grained visual recognition
			5.5.4. Few-shot fine-grained image recognition
		5.6. Image segmentation
			5.6.1. Introduction
				Modern development
			5.6.2. Multiobject few-shot semantic segmentation
			5.6.3. Few-shot static object instance-level detection
		5.7. Object tracking
			5.7.1. Introduction
			5.7.2. Offline object tracking
			5.7.3. Real-time online object tracking
			5.7.4. Real-time object tracking with channel pruning
				One-shot channel pruning
			5.7.5. Object tracking via instance detection
		5.8. Label noise
			5.8.1. Introduction
				Approaches
				Benchmarks
			5.8.2. Reweighting examples through online approximation
			5.8.3. Hallucinated clean representation for noisy-labeled visual recognition
			5.8.4. Data valuation using reinforcement learning
			5.8.5. Teacher-student networks for image classification on noisy labels
			5.8.6. Sample reweighting function construction
			5.8.7. Loss correction approach
			5.8.8. Meta-relabeling through data coefficients
			5.8.9. Meta-label correction
		5.9. Superresolution
			5.9.1. Introduction
				Approaches
				Datasets and benchmarks
			5.9.2. Meta-transfer learning for zero-shot superresolution
			5.9.3. LR-HR image pair superresolution
			5.9.4. No-reference image quality assessment
		5.10. Multimodal learning
			5.10.1. Introduction
				Deep learning approaches
				Benchmarks
			5.10.2. Visual question answering system
		5.11. Other emerging topics
			5.11.1. Domain generalization
			5.11.2. High-accuracy 3D appearance-based gaze estimation in few-shot regimes
			5.11.3. Benchmark of cross-domain few-shot learning in vision tasks
			5.11.4. Latent embedding optimization in low-dimensional space
			5.11.5. Image captioning
			5.11.6. Memorization issue
			5.11.7. Meta-pseudo label
		5.12. Summary
		References
	Chapter 6: Meta-learning for natural language processing
		6.1. Introduction
			6.1.1. Limitations
		6.2. Semantic parsing
			6.2.1. Introduction
				Development
				Benchmarks
			6.2.2. Natural language to structured query generation in few-shot learning
				Implementation
			6.2.3. Semantic parsing in low-resource scenarios
			6.2.4. Context-dependent semantic parser with few-shot learning
		6.3. Machine translation
			6.3.1. Introduction
			6.3.2. Multidomain neural machine translation in low-resource scenarios
			6.3.3. Multilingual neural machine translation in few-shot scenarios
		6.4. Dialogue system
			6.4.1. Introduction
			6.4.2. Few-shot personalizing dialogue generation
			6.4.3. Domain adaptation in a dialogue system
			6.4.4. Natural language generation by few-shot learning concerning task-oriented dialogue systems
		6.5. Knowledge graph
			6.5.1. Introduction
			6.5.2. Multihop knowledge graph reasoning in few-shot scenarios
			6.5.3. Knowledge graphs link prediction in few-shot scenarios
			6.5.4. Knowledge base complex question answering
			6.5.5. Named-entity recognition in cross-lingual scenarios
		6.6. Relation extraction
			6.6.1. Introduction
			6.6.2. Few-shot supervised relation classification
			6.6.3. Relation extraction with few-shot and zero-shot learning
		6.7. Sentiment analysis
			6.7.1. Introduction
				Benchmark and dataset
			6.7.2. Text emotion distribution learning with small samples
		6.8. Emerging topics
			6.8.1. Domain-specific word embedding under lifelong learning setting
				Background knowledge
				Methodology
			6.8.2. Multilabel classification
				Background knowledge
				Methodology
			6.8.3. Representation under a low-resource setting
				Background knowledge
				Methodology
			6.8.4. Compositional generalization
				Background knowledge
				Methodology
			6.8.5. Zero-shot transfer learning for query suggestion
				Background knowledge
				Methodology
		6.9. Summary
		References
	Chapter 7: Meta-reinforcement learning
		7.1. Background knowledge
			7.1.1. Basic components of a deep reinforcement learning system
			7.1.2. Model-based and model-free approaches
			7.1.3. Simulated environments
			7.1.4. Limitations of deep reinforcement learning
		7.2. Meta-reinforcement learning introduction
			7.2.1. Early development
			7.2.2. Formalism
			7.2.3. Fundamental components
		7.3. Memory
			7.3.1. External read-write memory for agents with multiple modalities
		7.4. Meta-reinforcement learning methods
			7.4.1. Continuous adaptation in nonstationary environments
				Related Meta-RL algorithms for sample efficiency
			7.4.2. Exploration with structured noise
				Related Meta-RL approaches for exploration
			7.4.3. Credit assignment
			7.4.4. Second-order computation in MAML
				Related Meta-RL algorithms based on MAML modifications
		7.5. Reward signals and environments
			7.5.1. Sparse extrinsic reward in procedurally generated environments
				Related Meta-RL algorithms for reward signal
		7.6. Benchmark
			7.6.1. Meta-World
		7.7. Visual navigation
			7.7.1. Introduction
			7.7.2. Visual navigation to unseen scenes
			7.7.3. Transferable meta-knowledge in unsupervised visual navigation
		7.8. Summary
		References
	Chapter 8: Meta-learning for healthcare
		8.1. Introduction
		Part I: Medical imaging computing
		8.2. Image classification
			8.2.1. Breast magnetic resonance imaging
			8.2.2. Tongue identification
		8.3. Lesion classification
			8.3.1. Fine-grained skin disease classification
			8.3.2. Difficulty-aware rare disease classification
			8.3.3. Rare disease diagnostics: Skin lesion
		8.4. Image segmentation
			8.4.1. Medical ultra-resolution image segmentation
		8.5. Image reconstruction
			8.5.1. Chest and abdomen computed tomography image reconstruction
		Part II: Electronic health records analysis
			8.6. Electronic health records
				8.6.1. Disease prediction in a low-resource setting
				8.6.2. Disease classification in a few-shot setting
		Part III: Application areas
			8.7. Cardiology
				8.7.1. Remote heart rate measurement in a few-shot setting
				8.7.2. Customized pulmonary valve conduit reconstruction
				8.7.3. Cardiac arrhythmia auto-screening
			8.8. Disease diagnostics
				8.8.1. Fine-grained disease classification under task heterogeneity
				8.8.2. Clinical prognosis with Bayesian optimization
			8.9. Data modality
				8.9.1. Modality detection of biomedical images
			8.10. Future work
		References
	Chapter 9: Meta-learning for emerging applications: Finance, building materials, graph neural networks, program synthesis ...
		9.1. Introduction
		9.2. Finance and economics
			9.2.1. Introduction
				Approaches
			9.2.2. Detection of credit card transaction fraud
			9.2.3. Task-agnostic meta-learner with inequality measurement in economics
				Economic inequality measure
		9.3. Building materials
			9.3.1. Defect (crack) recognition in concrete in reinforcement learning
		9.4. Graph neural network
			9.4.1. Introduction
			9.4.2. Node classification on graphs with few-shot novel labels
			9.4.3. Local subgraphs for node classification and link prediction
			9.4.4. Adversarial attacks of node classification
				Comparison and contrast of AQ and prototypical meta-learning
			9.4.5. Dual-graph structured approach with instance- and distribution-level relations
		9.5. Program synthesis
			9.5.1. Syntax-guided synthesis
		9.6. Transportation
			9.6.1. Introduction
			9.6.2. Traffic signal control
			9.6.3. Continuous trajectory estimation for lane changes under a few-shot setting
			9.6.4. Urban traffic prediction based on spatio-temporal correlation
		9.7. Cold-start problems in recommendation systems
			9.7.1. Introduction
			9.7.2. Continuously adding new items
			9.7.3. Context-aware cross-domain recommendation cold-start under a few-shot setting
			9.7.4. User preference estimator
			9.7.5. Memory-augmented recommendation system meta-optimization
			9.7.6. Meta-learner with heterogeneous information networks
		9.8. Climate science
			9.8.1. Introduction
			9.8.2. Critical incident detection
		9.9. Summary
		References
Index
Back Cover
