Edition:
Authors: Go Irie, Choonsung Shin, Takashi Shibata, Kazuaki Nakamura
Series: Communications in Computer and Information Science 2143
ISBN: 9789819742486, 9789819742493
Publisher: Springer
Publication year: 2024
Number of pages: [172]
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 on the user's request)
File size: 43 MB
If you would like the book Frontiers of Computer Vision. 30th International Workshop, IW-FCV 2024 Tokyo, Japan, February 19–21, 2024 Revised Selected Papers converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support to convert the file for you.
Please note that the book Frontiers of Computer Vision. 30th International Workshop, IW-FCV 2024 Tokyo, Japan, February 19–21, 2024 Revised Selected Papers is in its original language and is not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.
Preface
Organization
Contents

Tackling Background Misclassification in Box-Supervised Segmentation: A Background Constraint Approach
  1 Introduction
  2 Related Work
    2.1 Condinst
    2.2 Boxinst
    2.3 Projection Loss
    2.4 Pairwise Loss
  3 Proposed Method
    3.1 Background Similarity
    3.2 Background Loss
    3.3 Foreground Loss
  4 Experiment
  5 Result
  6 Conclusion
  References

Clustering of Face Images in Video by Using Deep Learning
  1 Introduction
  2 Related Work
    2.1 Contrastive Learning
    2.2 Tsiam
  3 Our Approach
    3.1 Center Loss
    3.2 AutoEncoder
    3.3 Online Clustering
  4 Experiments
    4.1 Dataset
    4.2 Implementation Details
    4.3 Evaluation Metric
    4.4 Main Experiment
    4.5 Online Clustering
  5 Conclusion
  References

Exploring the Impact of Various Contrastive Learning Loss Functions on Unsupervised Domain Adaptation in Person Re-identification
  1 Introduction
  2 Related Work
    2.1 Unsupervised Person Re-identification
    2.2 Contrastive Learning in Person Re-identification
  3 Methodology
    3.1 UDA Person Re-identification Pipeline
    3.2 Unsupervised Training on the Target Domain Dataset
    3.3 Supervised Training on the Source Domain Dataset
  4 Experiments
    4.1 Dataset and Evaluation Metrics
    4.2 Implementation Details
    4.3 Discussions
    4.4 Comparison with State-of-the-art Methods
  5 Conclusion
  References

Automatic Measured Drawing Generation for Mokkan Using Deep Learning
  1 Introduction
  2 Related Work
  3 Proposed Method
    3.1 Preprocessing
    3.2 Image Conversion
    3.3 Postprocessing
    3.4 Dataset
    3.5 Training of Neural Network
  4 Evaluation
  5 Results and Discussion
  6 Conclusion
  References

Monocular Absolute 3D Human Pose Estimation with an Uncalibrated Fixed Camera
  1 Introduction
  2 Related Work
    2.1 Object Pose Estimation
    2.2 Human Pose Estimation
  3 Method
    3.1 Camera Calibration and Object Pose Estimation
    3.2 Absolute 3D Human Pose Estimation
    3.3 Visualization
  4 Evaluation
    4.1 User Study for Object Pose Estimation
    4.2 Quantitative Evaluation by Public Dataset
    4.3 Qualitative Evaluation by Original Dataset
    4.4 Discussion
  5 Conclusion
  References

Technical Skill Evaluation and Training Using Motion Curved Surface in Considered Velocity and Acceleration
  1 Introduction
  2 Motion Curved Surface Evaluation
    2.1 Creation and Display
    2.2 Calculation Method of Velocity and Acceleration Surface
    2.3 Velocity and Acceleration Surface Display
  3 Analyzation and Effectiveness Using Motion Curved Surface of Technical Skill
    3.1 Measurement of Technical Skill Motion
    3.2 Analyzation Using Motion Curved Surface
    3.3 Effectiveness of Technical Skill Training
  4 Conclusion
  References

A Benchmark for 3D Reconstruction with Semantic Completion in Dynamic Environments
  1 Introduction
  2 Related Work
    2.1 Semantic Scene Completion
    2.2 3D Human Motion Generation
  3 Dynamic 3D Scene Synthesis
    3.1 Static Scene Reconstruction with Complete Geometry
    3.2 Human Dynamics Simulation
    3.3 Scene-Motion Synthesis
  4 Experiments
    4.1 Experimental Settings
    4.2 Quantitative Results
    4.3 Qualitative Results
  5 Conclusion
  References

Framework for Measuring the Similarity of Visual and Semantic Structures in Sign Languages
  1 Introduction
  2 Theoretical Background
    2.1 Subspace Representation for Sign Language Video
    2.2 Video Features for Sign Subspaces
    2.3 Vector Representation of Words
  3 Proposed Method
    3.1 3D Visual and Semantic Maps on the Visual and Semantic Spaces
    3.2 Calculation of Communicability Metric
    3.3 Tsukuba New Signs Dataset
  4 Experimental Results
    4.1 Visualization of 3D Visual and Semantic Maps
    4.2 Visualization of Extracted Image Features
    4.3 Evaluation of the Communicability Metric
  5 Conclusion
  References

Human Facial Age Group Recognizer Using Assisted Bottleneck Transformer Encoder
  1 Introduction
  2 Related Work
  3 The Proposed Method
    3.1 The Feature Extraction Module
    3.2 The Assisted Bottleneck Transformer Encoder (ABTE)
    3.3 The Classification Module
  4 Implementation Settings
  5 Experiments and Results
    5.1 Evaluation on Datasets
    5.2 Model Analysis
    5.3 Runtime Efficiency
  6 Conclusion
  References

Efficient Detection Model Using Feature Maximizer Convolution for Edge Computing
  1 Introduction
  2 Related Work
    2.1 Edge Computing
    2.2 Efficient Feature Extraction
  3 Proposed Method
    3.1 Feature Maximizer Convolution
    3.2 Lightweight Strategy
  4 Experiment
    4.1 Dataset
    4.2 Evaluation Metric
    4.3 Experimental Setting
    4.4 Result
  5 Conclusion
  References

Spatial Attention Network with High Frequency Component for Facial Expression Recognition
  1 Introduction
  2 Related Work
  3 Proposed Method
  4 Experiment
    4.1 Dataset
    4.2 Experimental Setup
    4.3 Ablation Study
    4.4 Comparison
  5 Conclusion
  References

Minor Object Recognition from Drone Image Sequence
  1 Introduction
  2 Related Work
    2.1 Deep Neural Network-Based Method
    2.2 Convolutional Neural Network-Based Method
  3 Methodology
    3.1 Proposed Network Architecture
    3.2 Loss Function
  4 Experiments
    4.1 Dataset
    4.2 Experimental Setup
    4.3 Experimental Result
    4.4 Ablation Study
  5 Conclusion
  References

Author Index