Category: Computers
Edition:
Authors: Masayuki Murata, Kenji Leibnitz
Series:
ISBN: 9813349751, 9789813349759
Publisher: Springer
Publication year: 2021
Number of pages: 239
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 11 MB
If you would like the book Fluctuation-Induced Network Control and Learning: Applying the Yuragi Principle of Brain and Biological Systems converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support to have the file converted.

Please note that Fluctuation-Induced Network Control and Learning: Applying the Yuragi Principle of Brain and Biological Systems is the original-language edition and is not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.
The book consists of two parts. Part 1 provides, in four chapters, an introduction to the biological background of the Yuragi concept as well as how these ideas are applied to networking problems. Part 2 provides additional contributions that extend the original Yuragi concept to a Bayesian attractor model from human perceptual decision making. In the six chapters of the second part, applications to various fields in information network control and artificial intelligence are presented, ranging from virtual network reconfiguration and a software-defined Internet of Things to low-power wide-area networks.

This book will benefit those working in the fields of information networks, distributed systems, and machine learning who seek new design mechanisms for controlling large-scale, dynamically changing systems.
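The Yuragi (fluctuation) principle the description refers to is commonly written as a Langevin-type rule dx/dt = α·f(x) + η: deterministic attractor dynamics f(x) whose pull is scaled by an activity α reflecting how well the current state performs, plus noise η, so that when performance drops the noise carries the system toward a better attractor. The sketch below is a minimal, hypothetical illustration of that idea; the double-well drift, the Gaussian performance signal, and all parameter values are stand-ins chosen for the demo, not models taken from the book.

```python
# Minimal sketch of Yuragi-style attractor selection; the drift and the
# performance signal below are illustrative stand-ins, not the book's models.
import math
import random


def simulate(steps=50000, dt=0.01, noise=0.35, good_attractor=1.0):
    """Euler-Maruyama integration of dx = alpha * f(x) dt + noise dW."""
    x = -1.0      # start trapped in the "wrong" attractor at x = -1
    alpha = 1.0   # activity: high when the current state performs well
    for _ in range(steps):
        # Hypothetical performance signal: near 1 close to the good attractor,
        # with a small baseline so some restoring force always remains.
        performance = 0.1 + 0.9 * math.exp(-(x - good_attractor) ** 2)
        # Activity relaxes toward the measured performance (feedback loop).
        alpha += dt * (performance - alpha)
        # Drift f(x) = x - x**3 has stable attractors at x = -1 and x = +1;
        # scaling it by alpha weakens the pull when performance is poor,
        # letting the noise term push the state toward a better attractor.
        drift = alpha * (x - x ** 3)
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x, alpha


if __name__ == "__main__":
    final_x, final_alpha = simulate()
    print(f"final state x = {final_x:+.2f}, activity alpha = {final_alpha:.2f}")
```

On a typical run the state begins in the basin at x = -1, the activity decays because performance there is poor, the noise then dominates and carries the state across to x = +1, where rising activity locks it in.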
Table of contents

Preface
Contents
Contributors

Part I Fluctuation-Based Control Systems: Yuragi Concept

1 Introduction to Yuragi Theory and Yuragi Control
  1.1 Introduction
  1.2 Principles of Self-organization
    1.2.1 Centralized and Distributed Control
    1.2.2 Characteristics of Self-organized Systems
    1.2.3 Role of Noise in Self-organized Systems
  1.3 Examples of Nature-Inspired Models Utilizing Noise
    1.3.1 Random Walks and Brownian Motion
    1.3.2 Impact of Noise on Visual Perception and Decision-Making in the Brain
    1.3.3 Signal Enhancement Through Stochastic Resonance
    1.3.4 Evolutionary and Genetic Algorithms
    1.3.5 Routing Methods Inspired by Social Insects
      1.3.5.1 Ant Colony Optimization (ACO)
      1.3.5.2 AntNet in Packet-Switched Networks
      1.3.5.3 AntHocNet in Mobile Ad-Hoc Networks
      1.3.5.4 BeeHive for Wired Connectionless Networks
    1.3.6 Tug-of-War Model for Solving the Multi-Armed Bandit Problem
  1.4 Mathematical Formulation of Noise-Driven Systems
    1.4.1 Stability and Attractors
    1.4.2 Dynamic Systems Under the Influence of Noise
    1.4.3 Relationship Between Fluctuation and Its Response
  1.5 Yuragi Model for Attractor Selection
    1.5.1 Adaptive Response of Gene Network to Nutrient Availability
    1.5.2 Modeling the Interactions of Gene Expression and Metabolic Flux
    1.5.3 Gaussian Mixture Model Attractors
  1.6 Conclusion
  References

2 Functional Roles of Yuragi in Biosystems
  2.1 Introduction
  2.2 How Muscle Works
    2.2.1 Biological Molecular Motor in Muscle
    2.2.2 Single Molecule Imaging and Nano-Detection
    2.2.3 Bias Brownian Motion Model (Yuragi Model)
  2.3 How the Human Brain Recognizes Puzzling Figures by Means of Yuragi Activity
    2.3.1 Yuragi Activity in the Human Brain
    2.3.2 Psychophysical Experiment of Hidden-Figure Recognition
    2.3.3 Yuragi Model of Hidden-Figure Recognition
  2.4 Conclusion
  References

3 Next-Generation Bio- and Brain-Inspired Networking
  3.1 Yuragi-Based Routing and Other Network Control Methods
  3.2 Multi-Dimensional Yuragi Model
  3.3 Application to Single Network Control
    3.3.1 Multipath Routing
    3.3.2 Routing in Mobile Ad Hoc Networks
  3.4 Application to Multi-Network Control
    3.4.1 Multipath Routing in Layered Networks
    3.4.2 Cluster-Based Routing in Wireless Sensor Networks
    3.4.3 Network Resource Allocation to Multiple Applications on Multiple Vehicles
  3.5 Exploration of Better Attractors
  3.6 Conclusion
  References

4 Yuragi-Based Virtual Network Control
  4.1 Introduction
  4.2 Attractor Selection
    4.2.1 Concept of Attractor Selection
    4.2.2 Cell Model
    4.2.3 Mathematical Model of Attractor Selection
  4.3 Virtual Network Control Based on Attractor Selection
    4.3.1 Virtual Network Control
    4.3.2 Overview of Virtual Network Control Based on Attractor Selection
    4.3.3 Dynamics of Virtual Network Control
    4.3.4 Attractor Structure
  4.4 Attractor Structure Design
    4.4.1 Problem Formulation
    4.4.2 Dynamic Reconfiguration of Attractor Structure
    4.4.3 Design of Diverse Attractor Structures
    4.4.4 Scalable Design of Attractor Structure by Graph Contraction
  4.5 Related Work
  4.6 Conclusion
  References

Part II Yuragi Learning: Extension to Artificial Intelligence

5 Introduction to Yuragi Learning
  5.1 Yuragi Learning: An Introduction
  5.2 Bayesian Attractor Model for Human Perceptual Decision-Making
    5.2.1 Overview
    5.2.2 Inference Mechanism for Decision-Making by Bayesian Attractor Model
    5.2.3 Design Choices for Bayesian-Attractor-Model-Based Network Control
      5.2.3.1 Setting Parameters r and q
      5.2.3.2 How to Determine a Criterion for Decision-Making
      5.2.3.3 Preparing Attractors
  5.3 Virtual Network Reconfiguration Based on Yuragi Learning
    5.3.1 Overview of Virtual Network Reconfiguration
    5.3.2 Virtual Network Reconfiguration Algorithm
      5.3.2.1 Preparation
      5.3.2.2 (Step 1) Calculate Confidence Using the BAM-Based Approach
      5.3.2.3 (Step 2) Change the Control Phase and Execute the Control
  5.4 Performance Evaluation of Yuragi Learning
    5.4.1 Evaluation Environments
    5.4.2 Characteristics of Virtual Network Reconfiguration Framework
    5.4.3 Advantages of Virtual Network Reconfiguration Framework
    5.4.4 Impact of the Number of Attractors
  5.5 Yuragi Learning with Linear Regression
    5.5.1 Virtual Network Reconfiguration Algorithm with Linear Regression
      5.5.1.1 (Step 1) Fit the Traffic Situation by Linear Regression
      5.5.1.2 (Step 2) Calculate a New VN
    5.5.2 Effect of Linear Regression
  5.6 Preparing/Updating Attractors in Yuragi Learning
    5.6.1 Approach for Preparing Attractors
    5.6.2 Approach for Updating Attractors
  5.7 Conclusion
  References

6 Fast/Slow-Pathway Bayesian Attractor Model for IoT Networks Based on Software-Defined Networking with Virtual Network Slicing
  6.1 Introduction
  6.2 Bayesian Attractor Model
    6.2.1 Decision-Making Process of the Brain
    6.2.2 The Analytical Model of BAM
    6.2.3 Fast/Slow-Pathway Bayesian Attractor Model
  6.3 Proposed Architecture
  6.4 Simulation Results
  6.5 Conclusion
  References

7 Application to IoT Network Control: Predictive Network Control Based on Real-World Information
  7.1 Introduction
  7.2 Predictive Network Control Based on Yuragi Learning
    7.2.1 Model of Human Cognition
      7.2.1.1 Abstraction
      7.2.1.2 Generative Model
      7.2.1.3 Update of State
      7.2.1.4 Decision-Making
    7.2.2 Application of Yuragi Learning to Predictive Network Control
      7.2.2.1 Overview
      7.2.2.2 Options and Network Configuration Corresponding to Each Option
      7.2.2.3 Abstraction in Predictive Network Control
      7.2.2.4 Generative Model in Predictive Network Control
      7.2.2.5 Update in Predictive Network Control
      7.2.2.6 Decision-Making in Predictive Network Control
  7.3 Hierarchical Predictive Network Control Based on Yuragi Learning: Resource Allocation Among Network Slices
    7.3.1 Overview
    7.3.2 Network Slice Controller
      7.3.2.1 Options and Network Configuration Corresponding to Each Option
      7.3.2.2 Identification of Current Condition
      7.3.2.3 Configuration of Network Slice
    7.3.3 Resource Allocation Controller
      7.3.3.1 Observations in Resource Allocation Controller
      7.3.3.2 Options and Network Configuration Corresponding to Each Option
      7.3.3.3 Identification of Current Condition
      7.3.3.4 Resource Allocation
  7.4 Simple Example
    7.4.1 Scenario
      7.4.1.1 Network
      7.4.1.2 Network Slices
      7.4.1.3 Traffic and Real-World Information
      7.4.1.4 Controller Settings
    7.4.2 Results
  7.5 Conclusion
  References

8 Another Prediction Method and Application to Low-Power Wide-Area Networks
  8.1 Introduction
  8.2 Bayesian Attractor Model
    8.2.1 Generative Model
    8.2.2 State Estimation by Bayesian Filters
    8.2.3 Comparison of Bayesian Filters in the Bayesian Attractor Model
  8.3 Methods for Channel Congestion Prediction and Channel Assignment
  8.4 Evaluation
    8.4.1 Simulation Settings
    8.4.2 Simulation Results
  8.5 Conclusion
  References

9 Artificial Intelligence Platform for Yuragi Learning
  9.1 Introduction
  9.2 Overview of a Brain-Inspired Cognitive Computing System
    9.2.1 Conceptual Design of a Brain-Inspired Cognitive Computing System
    9.2.2 Architecture Design
  9.3 Overview of Yuragi Learning General-Purpose Data Analysis Platform (YGAP)
  9.4 Example of YGAP Usage
    9.4.1 Preparation of Analysis Data and Configuration of Data Files
    9.4.2 Setup of YGAP Console
    9.4.3 Training the Model Using Training Data
    9.4.4 Classification of Test Data
  9.5 Conclusion
  References

10 Bias-Free Yuragi Learning
  10.1 Introduction
  10.2 Classification System with Yuragi Learning
    10.2.1 Feature Extractor for Preprocessing of Classification
    10.2.2 Yuragi Learning as a Classifier
    10.2.3 Yuragi Learning: State Update
    10.2.4 Yuragi Learning: Decision-Making
  10.3 New Category Acquisition in Yuragi Learning
    10.3.1 Detecting New Category
    10.3.2 Adding New Category with Initial Data
    10.3.3 Gathering Training Data
    10.3.4 Updating New Category with Gathered Data
  10.4 Numerical Simulation
    10.4.1 Simulation Scenario
    10.4.2 Accuracy and Sensitivity
    10.4.3 Using a Neural Network as Classifier
    10.4.4 Results
  10.5 Handwritten Character Recognition
    10.5.1 Evaluation Scenario with Handwritten Character
    10.5.2 Using a Convolutional Neural Network as a Feature Extractor
    10.5.3 Using Neural Network as Classifier
    10.5.4 Results
  10.6 Summary
  References

Index