
To contact us, you can call or text the mobile numbers below:

09117307688
09117179751

If your call is not answered, contact support via SMS.

Unlimited access

For registered users

Money-back guarantee

If the book does not match its description

Support

From 7 AM to 10 PM

Download the book Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part II (Lecture Notes in Artificial Intelligence)


Book Details

Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part II (Lecture Notes in Artificial Intelligence)

Edition:
Authors:
Series:
ISBN: 3030461467, 9783030461461
Publisher:
Publication year:
Pages: 748
Language: English
File format: PDF (converted to EPUB or AZW3 on request)
File size: 56 MB

Price (toman): 34,000




Average rating:
Number of raters: 11


If you would like the book Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part II (Lecture Notes in Artificial Intelligence) converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the file for you.

Note that Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part II (Lecture Notes in Artificial Intelligence) is the original-language edition, not a Persian translation. The International Library website offers original-language books only and does not carry any books translated into or written in Persian.


About the book Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part II (Lecture Notes in Artificial Intelligence)




About the book (in the original language)



Table of Contents

Preface
Organization
Contents – Part II
Supervised Learning
Exploiting the Earth's Spherical Geometry to Geolocate Images
	1 Introduction
	2 Prior Work
		2.1 Image Retrieval
		2.2 Classification
	3 Geolocation via the MvMF
		3.1 The Probabilistic Interpretation
		3.2 Interpretation as a Classifier
		3.3 Interpretation as an Image Retrieval Method
		3.4 Analysis
	4 Experiments
		4.1 Procedure
		4.2 Results
	5 Conclusion
	References
Continual Rare-Class Recognition with Emerging Novel Subclasses
	1 Introduction
	2 Problem Setup and Preliminary Data Analysis
	3 Continual Rare-Class Recognition
		3.1 Model Formulation
		3.2 Convexity and Optimization
		3.3 Time and Space-Complexity Analysis
	4 Evaluation
		4.1 Experiment Setup
		4.2 Experiment Results
	5 Related Work
	6 Conclusion
	References
Unjustified Classification Regions and Counterfactual Explanations in Machine Learning
	1 Introduction
	2 Background
		2.1 Post-hoc Interpretability
		2.2 Studies of Post-hoc Interpretability Approaches
		2.3 Adversarial Examples
	3 Justification Using Ground-Truth Data
		3.1 Intuition and Definitions
		3.2 Implementation
	4 Procedures for Assessing the Risk of Unconnectedness
		4.1 LRA Procedure
		4.2 VE Procedure
	5 Experimental Study: Assessing the Risk of Unjustified Regions
		5.1 Experimental Protocol
		5.2 Defining the Problem Granularity: Choosing n and ε
		5.3 Detecting Unjustified Regions
		5.4 Vulnerability of Post-hoc Counterfactual Approaches
	6 Conclusion
	References
Shift Happens: Adjusting Classifiers
	1 Introduction
	2 Background and Related Work
		2.1 Dataset Shift and Prior Probability Adjustment
		2.2 Proper Scoring Rules and Bregman Divergences
		2.3 Adjusted Predictions and Adjustment Procedures
	3 General Adjustment
		3.1 Unbounded General Adjustment (UGA)
		3.2 Bounded General Adjustment
		3.3 Implementation
	4 Experiments
		4.1 Experimental Setup
		4.2 Results
	5 Conclusion
	References
Beyond the Selected Completely at Random Assumption for Learning from Positive and Unlabeled Data
	1 Introduction
	2 Preliminaries
	3 Labeling Mechanisms for PU Learning
	4 Learning with SAR Labeling Mechanisms
		4.1 Case 1: True Propensity Scores Known
		4.2 Case 2: Propensity Scores Estimated from Data
	5 Learning Under the SAR Assumption
		5.1 Reducing SAR to SCAR
		5.2 EM for Propensity Estimation
	6 Empirical Evaluation
		6.1 Data
		6.2 Methodology and Approaches
		6.3 Results
	7 Related Work
	8 Conclusions
	References
Cost Sensitive Evaluation of Instance Hardness in Machine Learning
	1 Introduction
	2 Notation and Basic Definitions
	3 Instance Hardness and Cost Curves
		3.1 Score-Fixed Instance Hardness
		3.2 Score-Driven Instance Hardness
		3.3 Rate-Driven Instance Hardness
		3.4 Score-Uniform Instance Hardness
		3.5 Rate-Uniform Instance Hardness
	4 Experiments
	5 Conclusion
	References
Non-parametric Bayesian Isotonic Calibration: Fighting Over-Confidence in Binary Classification
	1 Introduction
	2 Evaluation of Calibration
	3 Simple Improvement of Existing Methods
	4 Proposed Method
		4.1 Non-parametric Bayesian Isotonic Calibration
		4.2 Selecting the Prior over Isotonic Maps
		4.3 Practically Efficient Sampling from Prior
	5 Experiments
		5.1 Experiments on Synthetic Data
		5.2 Experimental Setup on Real Data
		5.3 Experiment Results on Real Data
	6 Conclusions
	References
Multi-label Learning
PP-PLL: Probability Propagation for Partial Label Learning
	1 Introduction
	2 Related Work
	3 The PP-PLL Method
	4 Optimization
		4.1 Updating F
		4.2 Updating
	5 Experiments
		5.1 Experimental Setup
		5.2 Experimental Results
		5.3 Sensitivity Analysis
	6 Conclusion
	References
Neural Message Passing for Multi-label Classification
	1 Introduction
	2 Method: LaMP Networks
		2.1 Background: Message Passing Neural Networks
		2.2 LaMP: Label Message Passing
		2.3 Readout Layer (MLC Predictions from the Label Embeddings)
		2.4 Model Details
		2.5 Loss Function
		2.6 LaMP Variation: Input Encoding with Feature Message Passing (FMP)
		2.7 Advantages of LaMP Models
		2.8 Connecting to Related Topics
	3 Experiments
		3.1 LaMP Variations
		3.2 Performance Evaluation
		3.3 Interpretability Evaluation
	4 Conclusion
	A  Appendix: MLC Background
		A.1  Background of Multi-label Classification
		A.2  Seq2Seq Models
		A.3  Drawbacks of Autoregressive Models
	B  Appendix: Dataset Details
	C  Appendix: Extra Metrics
	D  Appendix: More About Experiments
		D.1  Datasets
		D.2  Evaluation Metrics
		D.3  Model Hyperparameter Tuning
		D.4  Baseline Comparisons
	References
Assessing the Multi-labelness of Multi-label Data
	1 Introduction
	2 Background: Multi-label Data and Multicollinearity
	3 Analytical Models for Measuring Multi-labelness
		3.1 Regularisation of Analytical Models
		3.2 Split Analytical Model
	4 Analysis of Full and Split Analytical Models
		4.1 Measuring Multi-labelness
		4.2 Generating Multi-label Data
		4.3 Investigation: Full Model with l1 and l2 Regularisation
		4.4 Investigation: Split Model with l1 and l2 Regularisation
		4.5 Comparing Full and Split Regression
	5 Full and Split Analytical Models on Real Data
		5.1 Label Interdependence
		5.2 Effect of Label-Interdependence Reduction on Accuracy
	6 Conclusion
	References
Synthetic Oversampling of Multi-label Data Based on Local Label Distribution
	1 Introduction
	2 Related Work
	3 Our Approach
		3.1 Selection of Seed Instances
		3.2 Synthetic Instance Generation
		3.3 Ensemble of Multi-Label Sampling (EMLS)
		3.4 Complexity Analysis
	4 Empirical Analysis
		4.1 Setup
		4.2 Results and Analysis
	5 Conclusion
	References
Large-Scale Learning
Distributed Learning of Non-convex Linear Models with One Round of Communication
	1 Introduction
	2 Problem Setting
	3 The OWA Estimator
		3.1 Warmup: The Full OWA
		3.2 The OWA Estimator
		3.3 Implementing OWA with Existing Optimizers
		3.4 Fast Cross Validation for OWA
	4 Analysis
		4.1 The Sub-Gaussian Tail (SGT) Condition
		4.2 The Main Idea: owa Contains Good Solutions
		4.3 Bounding the Generalization Error
		4.4 Bounding the Estimation Error
	5 Other Non-interactive Estimators
	6 Experiments
		6.1 Synthetic Data
		6.2 Real World Advertising Data
	7 Conclusion
	References
SLSGD: Secure and Efficient Distributed On-device Machine Learning
	1 Introduction
	2 Related Work
	3 Problem Formulation
		3.1 Non-IID Local Datasets
		3.2 Data Poisoning
	4 Methodology
		4.1 Threat Model and Defense Technique
	5 Convergence Analysis
		5.1 Assumptions
		5.2 Convergence Without Data Poisoning
		5.3 Convergence with Data Poisoning
	6 Experiments
		6.1 Datasets and Evaluation Metrics
		6.2 SLSGD Without Attack
		6.3 SLSGD Under Data Poisoning Attack
		6.4 Acceleration by Local Updates
		6.5 Discussion
	7 Conclusion
	References
Trade-Offs in Large-Scale Distributed Tuplewise Estimation and Learning
	1 Introduction
	2 Background
		2.1 U-Statistics: Definition and Applications
		2.2 Large-Scale Tuplewise Inference with Incomplete U-Statistics
		2.3 Practices in Distributed Data Processing
	3 Distributed Tuplewise Statistical Estimation
		3.1 Naive Strategies
		3.2 Proposed Approach
		3.3 Analysis
		3.4 Practical Considerations and Other Repartitioning Schemes
	4 Extensions to Stochastic Gradient Descent for ERM
		4.1 Gradient-Based Empirical Minimization of U-statistics
		4.2 Repartitioning for Stochastic Gradient Descent
	5 Numerical Results
	6 Future Work
	References
Deep Learning
Importance Weighted Generative Networks
	1 Introduction
		1.1 Related Work
	2 Problem Formulation and Technical Approach
		2.1 Maximum Mean Discrepancy Between Two Distributions
		2.2 Importance Weighted Estimator for Known M
		2.3 Robust Importance Weighted Estimator for Known M
		2.4 Self-normalized Importance Weights for Unknown M
		2.5 Approximate Importance Weighting by Data Duplication
	3 Evaluation
		3.1 Can GANs with Importance Weighted Estimators Recover Target Distributions, Given M?
		3.2 In a High-Dimensional Image Setting, How Does Importance Weighting Compare with Conditional Generation?
		3.3 When M Is Unknown, But Can Be Estimated Up to a Normalizing Constant on a Subset of Data, Are We Able to Sample from Our Target Distribution?
	4 Conclusions and Future Work
	References
Linearly Constrained Weights: Reducing Activation Shift for Faster Training of Neural Networks
	1 Introduction
	2 Activation Shift
	3 Linearly Constrained Weights
		3.1 Learning LCW via Reparameterization
		3.2 LCW for Convolutional Layers
	4 Variance Analysis
		4.1 Variance Analysis of a Fully Connected Layer
		4.2 Variance Analysis of a Nonlinear Activation Layer
		4.3 Relationship to the Vanishing Gradient Problem
		4.4 Example
	5 Related Work
	6 Experiments
		6.1 Deep MLP with Sigmoid Activation Functions
		6.2 Deep Convolutional Networks with ReLU Activation Functions
	7 Conclusion
	References
LYRICS: A General Interface Layer to Integrate Logic Inference and Deep Learning
	1 Introduction
		1.1 Previous Work
	2 The Declarative Language
	3 From Logic to Learning
	4 Learning and Reasoning with Lyrics
	5 Conclusions
	References
Deep Eyedentification: Biometric Identification Using Micro-movements of the Eye
	1 Introduction
	2 Related Work
	3 Problem Setting
	4 Network Architecture
	5 Experiments
		5.1 Data Collection
		5.2 Reference Methods
		5.3 Hyperparameter Tuning
		5.4 Hardware and Framework
		5.5 Multi-class Classification
		5.6 Identification and Verification
		5.7 Assessing Session Bias
		5.8 Additional Exploratory Experiments
	6 Discussion
	7 Conclusion
	References
Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization
	1 Introduction
	2 Preliminary and Related Work
		2.1 Problem Statement of Domain Generalization
		2.2 Related Work
	3 Our Approach
		3.1 Domain Adversarial Networks
		3.2 Trade-Off Caused by Domain-Class Dependency
		3.3 Accuracy-Constrained Domain Invariance
		3.4 Proposed Method
	4 Experiments
		4.1 Datasets
		4.2 Baselines
		4.3 Experimental Settings
		4.4 Results
	5 Conclusion
	References
Quantile Layers: Statistical Aggregation in Deep Neural Networks for Eye Movement Biometrics
	1 Introduction
	2 Related Work
	3 The Quantile Layer
	4 Model Architectures
	5 Empirical Study
		5.1 Experimental Setup
		5.2 Results
	6 Conclusions
	References
Multitask Hopfield Networks
	1 Introduction
	2 Methods
		2.1 Problem Definition
		2.2 Previous Singletask Model
		2.3 Multitask Hopfield Networks
		2.4 Model Complexity
	3 Preliminary Results and Discussion
		3.1 Benchmark Data
		3.2 Evaluation Setting
		3.3 Model Configuration
		3.4 Model Performance
	4 Conclusions
	References
Meta-Learning for Black-Box Optimization
	1 Introduction
	2 Related Work
	3 Problem Overview
	4 RNN-Opt
		4.1 RNN-Opt Without Domain Constraints
		4.2 RNN-Opt with Domain Constraints (RNN-Opt-DC)
	5 Experimental Evaluation
		5.1 Observations
		5.2 RNN-Opt with Domain Constraints
	6 Conclusion and Future Work
	A  Generating Diverse Non-convex Synthetic Functions
	References
Training Discrete-Valued Neural Networks with Sign Activations Using Weight Distributions
	1 Introduction
	2 Related Work
	3 Neural Networks and Weight Distributions
		3.1 Discrete Neural Networks
		3.2 Relation to Variational Inference
	4 Approximation of the Expected Loss
		4.1 Approximation of the Maximum Function
	5 Model Details
		5.1 Batch Normalization
		5.2 Parameterization and Initialization of q
	6 Experiments
		6.1 Datasets
		6.2 Classification Results
		6.3 Ablation Study
	7 Conclusion
	References
Sobolev Training with Approximated Derivatives for Black-Box Function Regression with Neural Networks
	1 Introduction
	2 Sobolev Training with Approximated Target Derivatives
		2.1 Target Derivative Approximation
		2.2 Data Transformation
		2.3 Error Functions
		2.4 Derivative Approximation Using Finite-Differences
	3 Results
		3.1 Sobolev Training with Approximated Target Derivatives versus Value Training
		3.2 Sobolev Training with Approximated Derivatives Based on Finite-Differences
		3.3 Real-World Regression Problems
	4 Conclusion
	References
Hyper-Parameter-Free Generative Modelling with Deep Boltzmann Trees
	1 Introduction
	2 Notation and Background
		2.1 Graphical Models
		2.2 Deep Boltzmann Machines
	3 Deep Boltzmann Trees
		3.1 Learning the DBT Weights
	4 Experiments
	5 Conclusion
	References
L0-ARM: Network Sparsification via Stochastic Binary Optimization
	1 Introduction
	2 Formulation
	3 L0-ARM: Stochastic Binary Optimization
		3.1 Choice of g(·)
		3.2 Sparsifying Network Architectures for Inference
		3.3 Imposing Shrinkage on Model Parameters θ
		3.4 Group Sparsity Under L0 and L2 Norms
	4 Related Work
	5 Experimental Results
		5.1 Implementation Details
		5.2 MNIST Experiments
		5.3 CIFAR Experiments
	6 Conclusion
	References
Learning with Random Learning Rates
	1 Introduction
	2 Related Work
	3 Motivation and Outline
	4 All Learning Rates at Once: Description
		4.1 Notation
		4.2 Alrao Architecture
		4.3 Alrao Update for the Internal Layers: A Random Learning Rate for Each Unit
		4.4 Alrao Update for the Output Layer: Model Averaging from Output Layers Trained with Different Learning Rates
	5 Experimental Setup
		5.1 Image Classification on ImageNet and CIFAR10
		5.2 Other Tasks: Text Prediction, Reinforcement Learning
	6 Performance and Robustness of Alrao
		6.1 Alrao Compared to SGD with Optimal Learning Rate
		6.2 Robustness of Alrao, and Comparison to Default Adam
		6.3 Sensitivity Study to [η_min; η_max]
	7 Discussion, Limitations, and Perspectives
	8 Conclusion
	References
FastPoint: Scalable Deep Point Processes
	1 Introduction
	2 Background
	3 FastPoint: Scalable Deep Point Process
		3.1 Generative Model
		3.2 Sequential Monte Carlo Sampling
	4 Related Work
	5 Experiments
		5.1 Model Performance
		5.2 Sampling
	6 Conclusion
	References
Single-Path NAS: Designing Hardware-Efficient ConvNets in Less Than 4 Hours
	1 Introduction
	2 Related Work
	3 Proposed Method: Single-Path NAS
		3.1 Mobile ConvNets Search Space: A Novel View
		3.2 Proposed Methodology: Single-Path NAS Formulation
		3.3 Single-Path vs. Existing Multi-Path Assumptions
		3.4 Hardware-Aware NAS with Differentiable Runtime Loss
	4 Experiments
		4.1 Experimental Setup
		4.2 State-of-the-Art Runtime-Constrained ImageNet Classification
		4.3 Ablation Study: Kernel-Based Accuracy-Efficiency Trade-Off
	5 Conclusion
	References
Probabilistic Models
Scalable Large Margin Gaussian Process Classification
	1 Introduction
	2 Related Work
	3 Large Margin Gaussian Process
		3.1 Probabilistic Hinge Loss
		3.2 Generalised Multi-class Hinge Loss
		3.3 Scalable Variational Inference for LMGP
		3.4 LMGP-DNN
	4 Experimental Evaluation
		4.1 Classification
		4.2 Structured Data Classification
		4.3 Image Classification with LMGP-DNN
		4.4 Uncertainty Analysis
	5 Conclusions
	References
Integrating Learning and Reasoning with Deep Logic Models
	1 Introduction
	2 Model
		2.1 MAP Inference
		2.2 Learning
		2.3 Mapping Constraints into a Continuous Logic
		2.4 Potentials Expressing the Logic Knowledge
	3 Related Works
	4 Experimental Results
		4.1 The PAIRS Artificial Dataset
		4.2 Link Prediction in Knowledge Graphs
	5 Conclusions and Future Work
	References
Neural Control Variates for Monte Carlo Variance Reduction
	1 Introduction
	2 Control Variates
	3 Neural Control Variates
	4 Constrained Neural Control Variates
	5 Experiments
		5.1 Synthetic Data
		5.2 Thermodynamic Integral for Bayesian Model Evidence Evaluation
		5.3 Uncertainty Quantification in Bayesian Neural Network
	6 Conclusion
	A  Formulas for Goodwin Oscillator
	B Uncertainty Quantification in Bayesian Neural Network: Out-of-Bag Sample Detection
	References
Data Association with Gaussian Processes
	1 Introduction
	2 Data Association with Gaussian Processes
	3 Variational Approximation
		3.1 Variational Lower Bound
		3.2 Optimization of the Lower Bound
		3.3 Approximate Predictions
		3.4 Deep Gaussian Processes
	4 Experiments
		4.1 Noise Separation
		4.2 Multimodal Data
		4.3 Mixed Cart-Pole Systems
	5 Conclusion
	References
Incorporating Dependencies in Spectral Kernels for Gaussian Processes
	1 Introduction
	2 Background
	3 Related Work
	4 Dependencies Between SM Components
	5 Generalized Convolution SM Kernels
	6 Comparisons Between GCSM and SM
	7 Scalable Inference
		7.1 Hyper-parameter Initialization
	8 Experiments
		8.1 Compact Long Term Extrapolation
		8.2 Modeling Irregular Long Term Decreasing Trends
		8.3 Modeling Irregular Long Term Increasing Trends
		8.4 Prediction with Large Scale Multidimensional Data
	9 Conclusion
	References
Deep Convolutional Gaussian Processes
	1 Introduction
	2 Background
		2.1 Discrete Convolutions
		2.2 Primer on Gaussian Processes
		2.3 Variational Inference
	3 Deep Convolutional Gaussian Process
		3.1 Convolutional GP Layers
		3.2 Final Classification Layer
		3.3 Doubly Stochastic Variational Inference
		3.4 Stochastic Gradient Hamiltonian Monte Carlo
	4 Experiments
		4.1 MNIST and CIFAR-10 Results
	5 Conclusions
	References
Bayesian Generalized Horseshoe Estimation of Generalized Linear Models
	1 Introduction
		1.1 Bayesian Generalized Linear Models
		1.2 Generalized Horseshoe Priors
		1.3 Our Contributions
	2 Gradient-Based Samplers for Bayesian GLMs
		2.1 Algorithm 1: mGrad-1
		2.2 Algorithm 2: mGrad-2
		2.3 Sampling the Intercept
		2.4 Tuning the Step Size
		2.5 Implementation Details
	3 Two New Samplers for the Generalized Horseshoe
		3.1 Inverse Gamma-Inverse Gamma Sampler
		3.2 Rejection Sampling
	4 Experimental Results
		4.1 Comparison of GHS Hyperparameter Samplers
		4.2 Comparison of Samplers for Coefficients
	5 Summary
	References
Fine-Grained Explanations Using Markov Logic
	1 Introduction
	2 Background
		2.1 Markov Logic Networks
		2.2 Related Work
	3 Query Explanation
		3.1 Sampling
	4 Experiments
		4.1 User Study Setup
		4.2 Application 1: Review Spam Filter
		4.3 Application 2: Review Sentiment Prediction
		4.4 T-Test
	5 Conclusion
	References
Natural Language Processing
Unsupervised Sentence Embedding Using Document Structure-Based Context
	1 Introduction
	2 Related Work
	3 Document Structured-Based Context
		3.1 Titles
		3.2 Lists
		3.3 Links
		3.4 Window-Based Context (DWn)
	4 Neural Network Models
		4.1 Inter-sentential Dependency-Based Encoder-Decoder
		4.2 Out-Of-Vocabulary (OOV) Mapping
	5 Experiments
		5.1 Dependency Importance
		5.2 Target Sentence Prediction
		5.3 Paraphrase Detection
		5.4 Coreference Resolution
	6 Conclusion and Future Work
	References
Copy Mechanism and Tailored Training for Character-Based Data-to-Text Generation
	1 Introduction
	2 Model Description
		2.1 Summary on Encoder-Decoder Architectures with Attention
		2.2 Learning to Copy
		2.3 Switching GRUs
	3 Experiments
		3.1 Datasets
		3.2 Implementation Details
		3.3 Results and Discussion
	4 Conclusion
	References
NSEEN: Neural Semantic Embedding for Entity Normalization
	1 Introduction
	2 Related Work
	3 Approach
		3.1 Similarity Learning
		3.2 Reference Set Embedding and Storage
		3.3 Retrieval
	4 Experimental Validation
		4.1 Reference Sets
		4.2 Query Set
		4.3 Baselines
		4.4 Results
	5 Discussion
	References
Beyond Bag-of-Concepts: Vectors of Locally Aggregated Concepts
	1 Introduction
	2 Related Work
		2.1 Bag-of-Words
		2.2 Word Embeddings
		2.3 Bag-of-Concepts
		2.4 Vector of Locally Aggregated Descriptors (VLAD)
	3 Vectors of Locally Aggregated Concepts (VLAC)
	4 Experiments
		4.1 Experimental Setup
		4.2 Experiment 1
		4.3 Experiment 2
	5 Conclusion
	References
A Semi-discriminative Approach for Sub-sentence Level Topic Classification on a Small Dataset
	1 Introduction
	2 Related Work
	3 Data
		3.1 Topic Separability
	4 Methods
		4.1 Emission Probabilities
		4.2 Transition Probabilities
		4.3 Decoding
	5 Experiments and Results
		5.1 MaxEnt as Baseline
		5.2 Standard HMM
		5.3 MaxEnt Emissions for HMM (ME+HMM)
		5.4 Comparison of ME+HMM and CRF
	6 Discussion
	7 Conclusion and Future Work
	References
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model
	1 Introduction
	2 Related Work
	3 Proposed Attack Strategy
		3.1 Background and Notations
	4 Adversarial Examples Generator (AEG) Architecture
		4.1 Encoder
		4.2 Decoder
	5 Training
		5.1 Supervised Pretraining with Teacher Forcing
		5.2 Training with Reinforcement Learning
		5.3 Training Details
	6 Experiments
		6.1 Setup
		6.2 Quantitative Analysis
		6.3 Human Evaluation
		6.4 Ablation Studies
		6.5 Qualitative Analysis
	7 Conclusion
	References
Author Index



