

Download the book Artificial Neural Networks and Machine Learning – ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, ... II (Lecture Notes in Computer Science, 12892)


Book details


Edition: 1st ed. 2021
Authors:
Series: Lecture Notes in Computer Science, 12892
ISBN: 3030863395, 9783030863395
Publisher: Springer
Publication year: 2021
Number of pages: 674 [664]
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 at the user's request)
File size: 66 MB

Book price (toman): 52,000





If you need the book Artificial Neural Networks and Machine Learning – ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, ... II (Lecture Notes in Computer Science, 12892) converted to PDF, EPUB, AZW3, MOBI, or DJVU format, notify support and they will convert the file for you.

Please note that Artificial Neural Networks and Machine Learning – ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, ... II (Lecture Notes in Computer Science, 12892) is the original-language edition and has not been translated into Persian. The International Library website offers books in their original language only and does not provide any books translated into or written in Persian.


About the book Artificial Neural Networks and Machine Learning – ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, ... II (Lecture Notes in Computer Science, 12892)



The proceedings set LNCS 12891, LNCS 12892, LNCS 12893, LNCS 12894, and LNCS 12895 constitutes the proceedings of the 30th International Conference on Artificial Neural Networks, ICANN 2021, held in Bratislava, Slovakia, in September 2021.* The 265 full papers presented in these proceedings were carefully reviewed and selected from 496 submissions and are organized in 5 volumes.


In this volume, the papers focus on topics such as computer vision and object detection, convolutional neural networks and kernel methods, deep learning and optimization, distributed and continual learning, explainable methods, few-shot learning, and generative adversarial networks.

*The conference was held online in 2021 due to the COVID-19 pandemic.


Description of the book in the original language

The proceedings set LNCS 12891, LNCS 12892, LNCS 12893, LNCS 12894 and LNCS 12895 constitute the proceedings of the 30th International Conference on Artificial Neural Networks, ICANN 2021, held in Bratislava, Slovakia, in September 2021.* The total of 265 full papers presented in these proceedings was carefully reviewed and selected from 496 submissions, and organized in 5 volumes.


In this volume, the papers focus on topics such as computer vision and object detection, convolutional neural networks and kernel methods, deep learning and optimization, distributed and continual learning, explainable methods, few-shot learning and generative adversarial networks.

*The conference was held online in 2021 due to the COVID-19 pandemic.



Table of Contents

Preface
Organization
Contents – Part II
Computer Vision and Object Detection
Selective Multi-scale Learning for Object Detection
	1 Introduction
	2 Related Work
	3 Selective Multi-scale Learning
		3.1 Network Architecture
	4 Experiments
		4.1 Dataset and Evaluation Metrics
		4.2 Implementation Details
		4.3 Ablation Study
		4.4 Application in Pyramid Architectures
		4.5 Application in Two-Stage Detectors
		4.6 Comparisons with Mainstream Methods
	5 Conclusions
	References
DRENet: Giving Full Scope to Detection and Regression-Based Estimation for Video Crowd Counting
	1 Introduction
	2 Related Works
	3 Crowd Counting by DRENet
		3.1 Problem Formulation
		3.2 Network Architecture
	4 Experimental Results
		4.1 The Mall Dataset
		4.2 The UCSD Dataset
	5 The FDST Dataset
		5.1 Effects of Different Components in the DRENet
	6 Conclusion
	References
Sisfrutos Papaya: A Dataset for Detection and Classification of Diseases in Papaya
	1 Introduction
	2 Related Works
	3 The Sisfrutos Papaya DataSet
		3.1 Image Acquisition
		3.2 Specification of Images and Annotations
	4 Methodology
		4.1 Detection Model
		4.2 Sub Dataset
		4.3 Hardware Specification
	5 Results and Discussion
	6 Conclusion and Future Works
	References
Faster-LTN: A Neuro-Symbolic, End-to-End Object Detection Architecture
	1 Introduction
	2 Related Work
	3 The Faster-LTN Architecture
		3.1 Faster R-CNN
		3.2 Logic Tensor Network
		3.3 LTN for Object Detection
		3.4 Faster-LTN
	4 Experiments
		4.1 Dataset
		4.2 Experimental Setup
		4.3 Results
	5 Conclusion and Future Works
	References
GC-MRNet: Gated Cascade Multi-stage Regression Network for Crowd Counting
	1 Introduction
	2 Related Work
		2.1 Detection-Based Approaches
		2.2 Counts Regression-Based Approaches
		2.3 Density Map-Based Approaches
	3 Our Approach
		3.1 Architecture of GC-MRNet
		3.2 Backbone Network
		3.3 Gated Cascade Module
		3.4 Loss Function
	4 Implementation Details
		4.1 Ground Truth Density Map
		4.2 Training Details
		4.3 Evaluation Metrics
	5 Experiments
		5.1 Datasets
		5.2 Ablation Study on ShanghaiTech Part A
		5.3 Comparisons with State-of-the-Art
	6 Conclusion
	References
Latent Feature-Aware and Local Structure-Preserving Network for 3D Completion from a Single Depth View
	1 Introduction
	2 Related Work
		2.1 Single-View 3D Completion
		2.2 3D Shape Representation
	3 Proposed Method
		3.1 Overview
		3.2 Network Architecture
		3.3 Loss Functions
	4 Experimental Results and Analysis
		4.1 Comparisons with Existing Methods
		4.2 Ablation Study
	5 Conclusion
	References
Facial Expression Recognition by Expression-Specific Representation Swapping
	1 Introduction
	2 Related Work
	3 Proposed Method
		3.1 Paired Face Images
		3.2 Facial Representation Learning
		3.3 Expression-Specific Representation Swapping
		3.4 Auxiliary Face Comparison Block
		3.5 Complete Algorithm
	4 Experiments
		4.1 Datasets and Setting
		4.2 Results
		4.3 Ablation Study
	5 Conclusion
	References
Iterative Error Removal for Time-of-Flight Depth Imaging
	1 Introduction
	2 Method
		2.1 Formulating for ToF Depth Imaging
		2.2 Input and Output Defining
		2.3 Proposed Iterative CNN
	3 Datasets
		3.1 Synthetic Dataset
		3.2 Real-World Dataset
	4 Experiments
		4.1 Error Removal
		4.2 Compared to State-of-the-Art Methods
	5 Conclusion and Future Work
	References
Blurred Image Recognition: A Joint Motion Deblurring and Classification Loss-Aware Approach
	1 Introduction
		1.1 Motivation
		1.2 Contributions
	2 Related Works
		2.1 Image Classification
		2.2 Single Image Motion Deblurring
	3 Methods
		3.1 Task Formulation
		3.2 Recognition Loss
		3.3 Joint Training Framework
		3.4 Parameterized Shortcut Connection
	4 Experiments
		4.1 Dataset
		4.2 Baselines and Ablation Groups
		4.3 Implementation
		4.4 Experimental Results
	5 Conclusion
	References
Learning How to Zoom In: Weakly Supervised ROI-Based-DAM for Fine-Grained Visual Classification
	1 Introduction
	2 Related Work
		2.1 Fine-Grained Visual Classification
		2.2 Data Augmentation
	3 Methodology
		3.1 Saliency Map Generation
		3.2 Template ROI Localization
		3.3 Selective Sampling
		3.4 Multi-scale ROI-based Cropping
		3.5 Testing Strategy Based on ROI-Based-DAM
	4 Experiments
		4.1 Dataset
		4.2 Implementation Details
		4.3 Numerical Results
		4.4 Ablation Study
		4.5 Qualitative Results
	5 Conclusion
	References
Convolutional Neural Networks and Kernel Methods
(Input) Size Matters for CNN Classifiers
	1 Introduction
	2 Background
		2.1 Fully Convolutional Networks
		2.2 Probe Classifiers, Saturation and Tail Patterns
		2.3 Receptive Field Size
		2.4 Methodology
	3 Experiments
		3.1 Image Size Affects Model Performance Even with No Additional Detail
		3.2 Input Size Affects the Inference Process of the CNN
		3.3 The Role of the Size of Discriminatory Features in the Relation of Model and Input Resolution
		3.4 The Role of the Receptive Field in Relation to the Object Size
		3.5 The Role of Residual Connections
	4 Implications on Neural Architecture Design
	5 Conclusion
	References
Accelerating Depthwise Separable Convolutions with Vector Processor
	1 Introduction
	2 Related Work
	3 Algorithm Mapping
		3.1 Architecture of Vector Processor
		3.2 Data Distribution and Optimization on Multi-core DSP
		3.3 Depthwise Convolution Mapping on Single-Core DSP
		3.4 Pointwise Convolution Mapping on Single-Core DSP
	4 Experiments and Evaluation
		4.1 Performance Analysis of Depthwise Convolution
		4.2 Performance Analysis of Pointwise Convolution
		4.3 Overall Performance Evaluation
	5 Conclusion
	References
KCNet: Kernel-Based Canonicalization Network for Entities in Recruitment Domain
	1 Introduction
	2 Related Works
	3 Kernel-Based Canonicalization Network (KCNet)
		3.1 Problem Definition
		3.2 Network Architecture
	4 Datasets
		4.1 Dataset Description
		4.2 Side Information Collection
	5 Experimental Setup
	6 Results and Discussion
	7 Conclusion
	References
Deep Unitary Convolutional Neural Networks
	1 Introduction
		1.1 Problem Statement
		1.2 Proposed Solution
		1.3 Literature Review
	2 Unitary Neural Networks with Lie Algebra
		2.1 Square Unitary Weight Matrices
		2.2 Unitary Weight Matrices of Any Shapes and Dimensions
	3 Experiments
		3.1 Network Architecture
		3.2 Dataset
		3.3 Training Details
		3.4 Caching of the Unitary Weights
	4 Results and Discussion
	5 Conclusion
	References
Deep Learning and Optimization I
DPWTE: A Deep Learning Approach to Survival Analysis Using a Parsimonious Mixture of Weibull Distributions
	1 Introduction
	2 Related Work
	3 Background
		3.1 Survival Analysis
		3.2 Mixture Weibull Distributions Estimation
	4 Deep Parsimonious Weibull Time-to-Event Model
		4.1 Description
		4.2 Sparse Weibull Mixture Layer
		4.3 Post-Training Steps: Selection of Weibull Distributions to Combine for Time-to-Event Modeling
		4.4 Loss Function
	5 Experiments on Real-World Datasets
		5.1 Description of the Real-World Datasets
		5.2 Experimental Setting
		5.3 Results
		5.4 Censoring Threshold Sensitivity Experiment
	6 Conclusion
	References
First-Order and Second-Order Variants of the Gradient Descent in a Unified Framework
	1 Introduction
	2 Problem Statement and Notations
	3 Vanilla, Classical Gauss-Newton and Natural Gradient Descent
		3.1 Vanilla Gradient Descent
		3.2 Classical Gauss-Newton
		3.3 Natural Gradient
	4 Gradient Covariance Matrix, Newton's Method and Generalized Gauss-Newton
		4.1 Gradient Covariance Matrix
		4.2 Newton's Method
		4.3 Generalized Gauss-Newton
	5 Summary and Conclusion
	References
Bayesian Optimization for Backpropagation in Monte-Carlo Tree Search
	1 Introduction
	2 Preliminaries
		2.1 Monte-Carlo Tree Search
		2.2 Bayesian Optimization with a Gaussian Process Prior
	3 Methods
		3.1 Monotone MCTS
		3.2 Softmax MCTS
	4 Experiments
		4.1 Monotone MCTS and Softmax MCTS
	5 Discussion and Future Work
	References
Growing Neural Networks Achieve Flatter Minima
	1 Introduction
	2 Related Work
	3 Model Description
		3.1 Notations
		3.2 Model Presentation
	4 Experimental Results
		4.1 Experiments with Small Models
		4.2 Growing RoBERTa's Classification Head
	5 Discussion
	6 Conclusion and Future Work
	References
Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks
	1 Introduction
	2 Our Approach
	3 Experiments
	4 Results and Discussion
		4.1 Evolving Diversity and Symmetry Breaking
		4.2 Negative Correlation Learning
		4.3 Pairwise Cosine Similarity Diversification
		4.4 Reaching Linear Complexity
		4.5 Iterative Diversified Weight Initialization
	5 Conclusion
	References
Curved SDE-Net Leads to Better Generalization for Uncertainty Estimates of DNNs
	1 Introduction
	2 Describing Ensembled SDE-Net by Bezier Curve
		2.1 Connection Curves: Bezier Curve
		2.2 Definition of SDE-Net
	3 Methods
		3.1 The Objective Function of CSDE-Net
		3.2 Algorithm of CSDE-Net Model
	4 Experiments
		4.1 Datasets
		4.2 Parameter Setting
		4.3 Quantitative Analysis of ID Dataset
		4.4 Bezier Curve Finding Experiment
		4.5 Quantitative Analysis of ID Dataset with Missing Rate
	5 Discussion and Further Work
	References
EIS - Efficient and Trainable Activation Functions for Better Accuracy and Performance
	1 Introduction
	2 Related Works
	3 EIS-1, EIS-2, and EIS-3
	4 Experiments with EIS-1, EIS-2, and EIS-3
		4.1 Image Classification
		4.2 Object Detection
		4.3 Semantic Segmentation
		4.4 Machine Translation
		4.5 Computational Time Comparison
	5 Conclusion
	References
Deep Learning and Optimization II
Why Mixup Improves the Model Performance
	1 Introduction
	2 Related Works
		2.1 Mixup Variants
	3 Notations and Preliminaries
	4 Complexity Reduction of Linear Classifiers with Mixup
	5 Complexity Reduction of Neural Networks with Mixup
	6 The Optimal Parameters of Mixup
	7 Geometric Perspective of Mixup Training: Parameter Space Smoothing
	8 Conclusion and Discussion
	References
Mixup Gamblers: Learning to Abstain with Auto-Calibrated Reward for Mixed Samples
	1 Introduction
	2 Related Work
		2.1 Selective Classification
		2.2 Softmax Response
		2.3 Deep Gamblers
		2.4 Mixup Augmentation
	3 Proposed Method
		3.1 Calibrating the Rejection Reward Utilizing Mixup Augmentation
		3.2 CNN Feature Mixup
	4 Experiments
	5 Conclusion
	References
Non-iterative Phase Retrieval with Cascaded Neural Networks
	1 Introduction
		1.1 The Phase Contains the Relevant Information
		1.2 Non-iterative Phase Retrieval
		1.3 Contributions
		1.4 Related Work
	2 Proposed Method
		2.1 Loss Functions
		2.2 Training
	3 Experimental Evaluation
		3.1 Datasets
		3.2 Experimental Setup
		3.3 Metrics
		3.4 Results
		3.5 Intermediate Prediction at Full-Scale
		3.6 Ablation Study
	4 Conclusion and Future Work
	References
Incorporating Discrete Wavelet Transformation Decomposition Convolution into Deep Network to Achieve Light Training
	1 Introduction
	2 Related Work
	3 Preliminaries
	4 Discrete Wavelet Transformation Decomposition Convolution
		4.1 Feature Map DWT Decomposition
		4.2 Subbands Differential Fusion
	5 Experiments
		5.1 Datasets and Experiment Setting
		5.2 PlainNet
		5.3 DWTNet
		5.4 Experimental Results
	6 Conclusion
	References
MMF: A Loss Extension for Feature Learning in Open Set Recognition
	1 Introduction
	2 Related Work
	3 Approach
		3.1 Learning Objectives
		3.2 Training with MMF and Open Set Recognition
	4 Experimental Evaluation
		4.1 Network Architectures and Evaluation Criteria
		4.2 Experimental Results
		4.3 Analysis
	5 Conclusion
	References
On the Selection of Loss Functions Under Known Weak Label Models
	1 Introduction
	2 Formulation
		2.1 Notation
		2.2 Learning from Weak Labels
		2.3 Proper Losses
	3 Linear Transformations of Losses
		3.1 Characterization of Convex Weak Losses
		3.2 Lower-Bounded Losses
	4 Optimizing the Selection of the Weak Loss
		4.1 Optimizing Virtual Labels
		4.2 Optimizing Convexity-Preserving Virtual Labels
	5 Experiments
	6 Conclusions
	References
Distributed and Continual Learning
Bilevel Online Deep Learning in Non-stationary Environment
	1 Introduction
	2 Bilevel Online Deep Learning (BODL)
		2.1 Online Ensemble Classifier
		2.2 Bilevel Online Deep Learning
	3 Experiments
		3.1 Experiment Setup
		3.2 Datasets
		3.3 Experimental Results
	4 Related Works
	5 Conclusion and Future Work
	References
A Blockchain Based Decentralized Gradient Aggregation Design for Federated Learning
	1 Introduction
	2 Background
		2.1 Studies on Federated Learning
		2.2 Enforcement by Smart Contract Platform - Blockchain
	3 System Design and Workflow
		3.1 Terms and Entities
		3.2 System Workflow
	4 Aggregation Algorithm with Random Enforcement
	5 Evaluation
		5.1 Experiment Setup
		5.2 Baselines and Metrics
		5.3 Results
	6 Conclusion
	References
Continual Learning for Fake News Detection from Social Media
	1 Introduction
	2 Background: Fake News Detection Algorithms and Datasets
	3 Problem Description
		3.1 Propagation Patterns for Fake News Detection
	4 Dealing with Degraded Performance on New Data
		4.1 Incremental Training Reverses the Model Performance
		4.2 Continual Learning Restores Balanced Performance
		4.3 Optimise the Sampling Process to Further Minimise Performance Drop
	5 Conclusions and Future Work
	References
Balanced Softmax Cross-Entropy for Incremental Learning
	1 Introduction
	2 Related Work
	3 Proposed Method
		3.1 Incremental Learning Baseline
		3.2 Balanced Softmax Cross-Entropy
		3.3 Meta Balanced Softmax Cross-Entropy
	4 Experiments
		4.1 Experimental Setups
		4.2 Comparison Results
		4.3 Ablation Study
	5 Conclusion
	References
Generalised Controller Design Using Continual Learning
	1 Introduction
	2 Existing Research
	3 Methodology
		3.1 Methods
		3.2 Metrics to Characterise Catastrophic Forgetting
	4 Experiments and Results
		4.1 Overall Performance
		4.2 Performance per Task
		4.3 Characterisation of Catastrophic Forgetting
	5 Conclusions
	References
DRILL: Dynamic Representations for Imbalanced Lifelong Learning
	1 Introduction
	2 Related Work
		2.1 Continual Learning
		2.2 Meta-Learning
		2.3 Growing Memory and Self-organization
	3 Methods
		3.1 Task Formulation
		3.2 Progressive Imbalancing
		3.3 Episode Generation
		3.4 DRILL
		3.5 Self-supervised Sampling
	4 Experiments
		4.1 Benchmark Datasets
		4.2 Baselines
		4.3 Implementation Details
	5 Results
		5.1 Imbalanced Lifelong Text Classification
		5.2 Knowledge Integration Mechanisms
		5.3 Self-organized Networks in NLP
	6 Conclusion and Future Work
	References
Principal Gradient Direction and Confidence Reservoir Sampling for Continual Learning
	1 Introduction
	2 Methods
		2.1 Setup
		2.2 Proximal Gradient Framework
		2.3 Principal Gradient Direction
		2.4 Confidence Reservoir Sampling
	3 Experiments
		3.1 Datasets and Architectures
		3.2 Metrics
		3.3 Ablation Study
		3.4 Performance of ER-PC
	4 Conclusion
	References
Explainable Methods
Spontaneous Symmetry Breaking in Data Visualization
	1 Motivation
	2 Symmetries, Graphs, and Persistent Homology
	3 Experiments
		3.1 t-Distributed Stochastic Neighborhood Embedding (t-SNE)
		3.2 TriMap
		3.3 Kernel Principal Component Analysis (kPCA)
		3.4 Gaussian Process Latent Variable Model (GPLVM)
		3.5 Summary of Experiments
	4 Related Works
	5 Discussion
		5.1 Empirical Findings
		5.2 Faithful Representations
		5.3 Concluding Remarks
	References
Deep NLP Explainer: Using Prediction Slope to Explain NLP Models
	1 Introduction
	2 Related Work
	3 Technical Description
		3.1 Dataset Introduction and Preprocessing
		3.2 Overview of the Latest Importance Rate (Activation Maximization)
		3.3 Introduction of Prediction Slope
		3.4 Extracting Word Importance Rate from the Prediction Slope
		3.5 Comparing Importance Rates
	4 Experimental Results
		4.1 Comparing Importance Rates on the IMDb Dataset
		4.2 Comparing Importance Rates on the Stack Overflow Dataset
		4.3 Analysis of the Result
	5 Conclusion
	References
Empirically Explaining SGD from a Line Search Perspective
	1 Introduction
	2 Related Work
	3 The Empirical Method
	4 On the Similarity of the Shape of Full-Batch Losses Along Lines
	5 On the Behavior of Line Search Approaches on the Full-Batch Loss
	6 On the Influence of the Batch Size on Update Steps
	7 Discussion and Outlook
	8 Appendix
	References
Towards Ontologically Explainable Classifiers
	1 Introduction
	2 Explainability
		2.1 Post-hoc Model Explanation
		2.2 Explainablity, Semantics and Ontologies
		2.3 Positioning
	3 Ontological Explainability Approach
		3.1 Illustration Domain: Pizzas
		3.2 Problems of a Non-ontological Approach
		3.3 Proposed Approach
	4 Ontological Classifier
		4.1 DL Module: Semantic Segmentation
		4.2 Ontological Module: OntoClassifier
		4.3 Results
	5 Conclusion
	References
Few-shot Learning
Leveraging the Feature Distribution in Transfer-Based Few-Shot Learning
	1 Introduction
	2 Related Work
	3 Methodology
		3.1 Problem Statement
		3.2 Feature Extraction
		3.3 Feature Preprocessing
		3.4 MAP
	4 Experiments
		4.1 Datasets
		4.2 Implementation Details
		4.3 Comparison with State-of-the-Art Methods
		4.4 Other Experiments
	5 Conclusion
	References
One-Shot Meta-learning for Radar-Based Gesture Sequences Recognition
	1 Introduction
	2 FMCW Radar Processing
		2.1 Radar Sensor
		2.2 Time-Range Preprocessing
	3 Meta-learning Based Network
		3.1 Models and Training Procedure
		3.2 Meta-dataset and Tasks Definition
	4 Experimental Results
		4.1 Models Performance
	5 Conclusion
	References
Few-Shot Learning with Random Erasing and Task-Relevant Feature Transforming
	1 Introduction
	2 Related Work
		2.1 Optimization-Based Methods
		2.2 Metric Learning-Based Methods
	3 Methodology
		3.1 Problem Statement
		3.2 Random Erasing Network (RENet)
		3.3 Task-Relevant Feature Transforming (TRFT)
		3.4 RE-TRFT: Integration of RENet and TRFT
	4 Performance Evaluation
		4.1 Implementation Details
		4.2 Comparison with State-of-the-Arts
		4.3 Ablation Study
	5 Conclusion
	References
Fostering Compositionality in Latent, Generative Encodings to Solve the Omniglot Challenge
	1 Introduction
	2 Method
		2.1 Model and one-shot Inference Mechanism
		2.2 Dataset
	3 Results
		3.1 Experiment 1
		3.2 Experiment 2
	4 Conclusion
	References
Better Few-Shot Text Classification with Pre-trained Language Model
	1 Introduction
	2 Related Work
		2.1 Language Models
		2.2 Traditional Few-Shot Learning
		2.3 Few-Shot Learning Based on Pre-trained LM
	3 Methodology
		3.1 Text Classification
		3.2 Few-Shot Classification
	4 Problem Setup
		4.1 Datasets
		4.2 Evaluation Protocol
	5 Experiments
		5.1 Analysis of Text Classification
		5.2 Analysis of Few-Shot Learning
		5.3 Visualization of Attention
	6 Conclusion
	References
Generative Adversarial Networks
Leveraging GANs via Non-local Features
	1 Introduction
	2 Related Work
		2.1 Generative Adversarial Networks
		2.2 Graph Convolutional Networks
		2.3 Attention Mechanism
	3 Graph Convolutional Architecture
	4 Experiments
	5 Conclusion
	References
On Mode Collapse in Generative Adversarial Networks
	1 Introduction
	2 Related Work
	3 Reasons for Mode Collapse in GANs
	4 Our Method
	5 Evaluation Metrics
	6 Experiments
		6.1 Ablation Study
		6.2 SoTA Comparison
	7 Conclusions
	References
Image Inpainting Using Wasserstein Generative Adversarial Imputation Network
	1 Introduction
	2 Related Work
	3 Wasserstein Generative Imputation Network
		3.1 Training
		3.2 Architecture of Networks
	4 Experiments
		4.1 Scenarios of Missingness
		4.2 Implementation Details
		4.3 Results
	5 Conclusion
	References
COViT-GAN: Vision Transformer for COVID-19 Detection in CT Scan Images with Self-Attention GAN for Data Augmentation
	1 Introduction
	2 Methodology
		2.1 GANs for Data Augmentation
		2.2 Image Classification
	3 Results and Discussion
	4 Conclusions
	References
PhonicsGAN: Synthesizing Graphical Videos from Phonics Songs
	1 Introduction
	2 Background
		2.1 Speech to Moving Face
		2.2 Music to Moving Body
		2.3 Audio to Moving Object
	3 PhonicsGAN
		3.1 Dataset Construction
		3.2 Problem Formalization
		3.3 Model Architecture
		3.4 Implementation
	4 Results and Discussion
	5 Conclusion
	References
A Progressive Image Inpainting Algorithm with a Mask Auto-update Branch
	1 Introduction
	2 Related Work
		2.1 Image Inpainting
		2.2 Progressive Inpainting
	3 Our Method
		3.1 Network Structure
		3.2 ID-MRF Regularization
		3.3 Spatial Variant Reconstruction Loss
		3.4 Mask Auto-update Module
	4 Experiments
		4.1 Training Procedure
		4.2 Quantitative Evaluation
		4.3 Qualitative Evaluation
	5 Conclusion
	References
Hybrid Generative Models for Two-Dimensional Datasets
	1 Introduction
	2 Previous Work
	3 Representation Bases
	4 Methodology
	5 Experimental Results
	6 Conclusions
	References
Towards Compressing Efficient Generative Adversarial Networks for Image Translation via Pruning and Distilling
	1 Introduction
	2 Related Work
	3 Method
		3.1 Notations
		3.2 Filter Distance-Based Pruning Method
		3.3 Fine-Tune Compressed GAN via KD
	4 Experiments
		4.1 Experimental Settings
		4.2 Detailed Compression Results
		4.3 Ablation Study
	5 Conclusion
	References
Author Index




User comments