Edition: 1st ed. 2022
Editors: Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Giorgio Jansen, Panos M. Pardalos, Giovanni Giuffrida, Renato Umeton
Series: Lecture Notes in Computer Science, 13164
ISBN: 3030954692, 9783030954697
Publisher: Springer
Year: 2022
Pages: 572 [571]
Language: English
File format: PDF
File size: 46 MB
Title: Machine Learning, Optimization, and Data Science: 7th International Conference, LOD 2021, Grasmere, UK, October 4–8, 2021, Revised Selected Papers, Part II (Lecture Notes in Computer Science, 13164)
This two-volume set, LNCS 13163-13164, constitutes the refereed proceedings of the 7th International Conference on Machine Learning, Optimization, and Data Science, LOD 2021, together with the first edition of the Symposium on Artificial Intelligence and Neuroscience, ACAIN 2021.
The total of 86 full papers presented in this two-volume post-conference proceedings set was carefully reviewed and selected from 215 submissions. These research articles were written by leading scientists in the fields of machine learning, artificial intelligence, reinforcement learning, computational optimization, neuroscience, and data science, presenting a substantial array of ideas, technologies, algorithms, methods, and applications.

Contents (Part II):
- Preface
- Organization
- Contents – Part II
- Contents – Part I
- Boosted Embeddings for Time-Series Forecasting
- Deep Reinforcement Learning for Optimal Energy Management of Multi-energy Smart Grids
- A k-mer Based Sequence Similarity for Pangenomic Analyses
- A Machine Learning Approach to Daily Capacity Planning in E-Commerce Logistics
- Explainable AI for Financial Forecasting
- Online Semi-supervised Learning from Evolving Data Streams with Meta-features and Deep Reinforcement Learning
- Dissecting FLOPs Along Input Dimensions for GreenAI Cost Estimations
- Development of a Hybrid Modeling Methodology for Oscillating Systems with Friction
- Numerical Issues in Maximum Likelihood Parameter Estimation for Gaussian Process Interpolation
- KAFE: Knowledge and Frequency Adapted Embeddings
- Improved Update Rule and Sampling of Stochastic Gradient Descent with Extreme Early Stopping for Support Vector Machines
- A Hybrid Surrogate-Assisted Accelerated Random Search and Trust Region Approach for Constrained Black-Box Optimization
- Health Change Detection Using Temporal Transductive Learning
- A Large Visual Question Answering Dataset for Cultural Heritage
- Expressive Graph Informer Networks
- Zero-Shot Learning-Based Detection of Electric Insulators in the Wild
- Randomized Iterative Methods for Matrix Approximation
- Improved Migrating Birds Optimization Algorithm to Solve Hybrid Flowshop Scheduling Problem with Lot-Streaming of Random Breakdown
- Building Knowledge Base for the Domain of Economic Mobility of Older Workers
- Optimisation of a Workpiece Clamping Position with Reinforcement Learning for Complex Milling Applications
- Thresholding Procedure via Barzilai-Borwein Rules for the Steplength Selection in Stochastic Gradient Methods
- Learning Beam Search: Utilizing Machine Learning to Guide Beam Search for Solving Combinatorial Optimization Problems
- Modular Networks Prevent Catastrophic Interference in Model-Based Multi-task Reinforcement Learning
- A New Nash-Probit Model for Binary Classification
- An Optimization Method for Accurate Nonparametric Regressions on Stiefel Manifolds
- Using Statistical and Artificial Neural Networks Meta-learning Approaches for Uncertainty Isolation in Face Recognition by the Established Convolutional Models
- Multi-Asset Market Making via Multi-Task Deep Reinforcement Learning
- Evaluating Hebbian Learning in a Semi-supervised Setting
- Experiments on Properties of Hidden Structures of Sparse Neural Networks
- Active Learning for Capturing Human Decision Policies in a Data Frugal Context
- Adversarial Perturbations for Evolutionary Optimization
- Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using Off-the-Shelf ANNs
- Conditional Generative Adversarial Networks for Speed Control in Trajectory Simulation
- An Integrated Approach to Produce Robust Deep Neural Network Models with High Efficiency
- Leverage Score Sampling for Complete Mode Coverage in Generative Adversarial Networks
- Public Transport Arrival Time Prediction Based on GTFS Data
- The Optimized Social Distance Lab
- Distilling Financial Models by Symbolic Regression
- Analyzing Communication Broadcasting in the Digital Space
- Multivariate LSTM for Stock Market Volatility Prediction
- Author Index