Information Retrieval: 27th China Conference, CCIR 2021, Dalian, China, October 29–31, 2021, Proceedings

Edition: 1
Editors: Hongfei Lin, Min Zhang, Liang Pang
Series: Lecture Notes in Computer Science, vol. 13026
ISBN: 3030881881, 9783030881887
Publisher: Springer
Publication year: 2021
Number of pages: 224
Language: English
File format: EPUB (converted to PDF, EPUB, or AZW3 on user request)
File size: 14 MB
If you would like the book Information Retrieval: 27th China Conference, CCIR 2021, Dalian, China, October 29–31, 2021, Proceedings converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and the file will be converted for you.
Note that Information Retrieval: 27th China Conference, CCIR 2021, Dalian, China, October 29–31, 2021, Proceedings is the original-language edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Farsi.
The 15 full papers presented were carefully reviewed and selected from 124 submissions. The papers are organized in topical sections: search and recommendation, NLP for IR, IR in Education, and IR in Biomedicine.
Preface
Organization
Contents

Search and Recommendation

Interaction-Based Document Matching for Implicit Search Result Diversification
  1 Introduction
  2 Related Work
  3 Methodology: 3.1 Problem Formulation; 3.2 MatchingDIV; 3.3 Model Training and Inference
  4 Experiment Settings: 4.1 Datasets and Metrics; 4.2 Model Settings
  5 Experimental Results: 5.1 Overall Results and Discussion; 5.2 Effect of Different Encoder; 5.3 Inference Latency for Online Ranking
  6 Conclusion and Future Work
  References

Various Legal Factors Extraction Based on Machine Reading Comprehension
  1 Introduction
  2 Related Works
  3 The Legal Factor Extraction Framework: 3.1 Architecture of the Framework; 3.2 Strategies for Representing Extraction Fact Labels
  4 Experiments: 4.1 Datasets and Experimental Settings; 4.2 Experiments of Different Methods for Legal Factor Extraction; 4.3 The Structure of Questions and the Design of Query Expansion; 4.4 Results
  5 Analyses: 5.1 Extract Legal Factor as a Human Judge; 5.2 The Richer Semantic Information, the Higher Score; 5.3 Strategies of Query Expansion
  6 Conclusion
  7 Future Works
  References

Meta-learned ID Embeddings for Online Inductive Recommendation
  1 Introduction
  2 Preliminaries
  3 Meta-learned ID Embeddings for New Users: 3.1 Meta-learning for Inductive Recommendation; 3.2 Architecture of Meta-Learned ID Embeddings; 3.3 Meta-test Stage
  4 Experiments: 4.1 Experimental Settings; 4.2 Model Performance
  5 Related Work
  6 Conclusions and Future Work
  References

Modelling Dynamic Item Complementarity with Graph Neural Network for Recommendation
  1 Introduction
  2 Related Work: 2.1 Relation-Based Neural Recommendation Models; 2.2 Graph-Based Neural Recommendation Models
  3 Preliminaries: 3.1 Problem Definition; 3.2 Cross Elasticity of Demand
  4 Dynamic Complementary Graph Neural Network (DCGNN): 4.1 Framework Overview; 4.2 CGNN; 4.3 Time Transfer Mechanism; 4.4 Model Training
  5 Experiment: 5.1 Data Set; 5.2 Experimental Settings
  6 Experimental Results and Analysis: 6.1 Performance Comparison; 6.2 Effect of Capturing Dynamic Item Complementarity; 6.3 Effect of Transfer Mechanism
  7 Conclusion and Future Work
  References

NLP for IR

LDA-Transformer Model in Chinese Poetry Authorship Attribution
  1 Introduction
  2 Related Works
  3 Corpus
  4 Methodology: 4.1 Model Architecture; 4.2 LDA Model for Topic Feature; 4.3 Transformer
  5 Experiments: 5.1 Datasets; 5.2 Baseline; 5.3 Experimental Results; 5.4 Ablation Study; 5.5 Visualization
  6 Error Study: 6.1 Contradiction; 6.2 Same Period; 6.3 Lyric by Scene
  7 Conclusions
  References

Aspect Fusion Graph Convolutional Networks for Aspect-Based Sentiment Analysis
  1 Introduction
  2 Related Work
  3 The Proposed Model: 3.1 Producing Dependency Tree; 3.2 Producing Dependency-Position Graph; 3.3 Proximity-Weight Convolution; 3.4 Aspect Fusion Graph Convolutional Network; 3.5 Aspect Fusion Attention Mechanism; 3.6 Model Training
  4 Experiments: 4.1 Datasets and Experimental Settings; 4.2 Models for Comparison; 4.3 Performance Comparison; 4.4 Ablation Study; 4.5 Impact of GCN Layers; 4.6 Impact of the Dependency Fusion Parameter; 4.7 Case Study and Error Analysis
  5 Conclusion
  References

Iterative Strict Density-Based Clustering for News Stream
  1 Introduction
  2 Related Work
  3 Preliminaries: 3.1 Task Formulation; 3.2 Distance Definition
  4 Algorithm: 4.1 Dynamic Topic Detection Management; 4.2 Outdated Topic Detection Management
  5 Experiment: 5.1 Performance Metrics; 5.2 Dataset Analysis; 5.3 Comparison with Baseline Algorithms; 5.4 Ablation Study on the Embedding Model
  6 Conclusion
  References

A Pre-LN Transformer Network Model with Lexical Features for Fine-Grained Sentiment Classification
  1 Introduction
  2 Related Work
  3 Model: 3.1 Input Layer; 3.2 Embedding Layer; 3.3 Encoder Layer; 3.4 Fusion Layer and Objective Function
  4 Experiments: 4.1 Datasets; 4.2 Evaluation; 4.3 Training Details; 4.4 Comparison Methods; 4.5 Results; 4.6 Error Analysis
  5 Conclusion and Future Work
  References

Adversarial Context-Aware Representation Learning of Multiword Expressions
  1 Introduction
  2 Related Work
  3 Context-Aware Multiword Expressions Representation Learning: 3.1 Literal Meaning Representation; 3.2 Adversarial Context-Aware Representation Learning; 3.3 Idiomatic Token Disambiguation; 3.4 Compositionality Prediction
  4 Experiments: 4.1 Evaluation Settings; 4.2 Idiom Token Classification; 4.3 Compositionality Prediction
  5 Conclusions
  References

IR in Education

Research on the Evaluation Words Recognition in Scholarly Papers' Peer Review Texts
  1 Introduction
  2 Related Work: 2.1 Peer Review; 2.2 Opinion Mining
  3 Task and Data: 3.1 Task Description; 3.2 Dataset
  4 Evaluation Opinion Mining: 4.1 Feature Extraction, Transfer and Decoder Layer; 4.2 Transfer Module; 4.3 Adversarial Module
  5 Experiment: 5.1 Peer Review Mining; 5.2 Quantitative Analysis of Evaluation Words; 5.3 Application
  6 Conclusion
  References

Evaluation of Learning Effect Based on Online Data
  1 Introduction: 1.1 Related Work; 1.2 Factors Affecting the Quality of Online Learning; 1.3 Contributions
  2 Online Learning Data Processing: 2.1 Data Sources; 2.2 Data Processing of Canvas Teaching Platform; 2.3 Data Processing of the Questionnaire; 2.4 Data Processing of Course WeChat Group
  3 Evaluation of Learning Effect: 3.1 Linear Regression; 3.2 Lasso Regression; 3.3 Random Forest Regression; 3.4 Result Analysis
  4 Conclusion
  References

Self-training vs Pre-trained Embeddings for Automatic Essay Scoring
  1 Introduction
  2 Related Work
  3 Method: 3.1 Input for Pre-trained Embeddings; 3.2 Transformer Unit; 3.3 Input for Self-training Embeddings; 3.4 Output Module
  4 Experiment and Result Analysis: 4.1 Data and Evaluation Indexes; 4.2 Experimental Setup; 4.3 Result Analysis; 4.4 Pre-trained vs Self-training Embeddings; 4.5 Network Layer Analysis
  5 Conclusions
  References

Enhanced Hierarchical Structure Features for Automated Essay Scoring
  1 Introduction
  2 Our Method: 2.1 Overview Architecture of Our Method; 2.2 Hierarchical Neural Feature Representation; 2.3 Hierarchical Graph Feature Representation; 2.4 Interact Attention Mechanism; 2.5 Model Training
  3 Experiments: 3.1 Dataset; 3.2 Experiment Settings; 3.3 Compared Models; 3.4 Experimental Results; 3.5 Visualization Analysis
  4 Related Work
  5 Conclusion
  References

IR in Biomedicine

A Drug Repositioning Method Based on Heterogeneous Graph Neural Network
  1 Introduction
  2 Related Works
  3 Methodology: 3.1 Meta-path Model; 3.2 Node Interaction Based on Convolution; 3.3 Attention Mechanism; 3.4 Loss Function
  4 Experiment and Analysis: 4.1 Data Set; 4.2 Experimental Setup; 4.3 Interpretability Analysis; 4.4 Example of Drug Repositioning
  5 Conclusion
  References

Auto-learning Convolution-Based Graph Convolutional Network for Medical Relation Extraction
  1 Introduction
  2 Model: 2.1 Bi-LSTM; 2.2 Multi-head Attention; 2.3 2D-CNN; 2.4 GCN; 2.5 Relation Prediction
  3 Experiments: 3.1 Data; 3.2 Setup; 3.3 Main Results; 3.4 Cross-Sentence n-Ary Relation Extraction; 3.5 Sentence-Level Relation Extraction; 3.6 Analysis
  4 Conclusion
  References

Author Index