
Download the book Distributed Artificial Intelligence: Third International Conference, DAI 2021, Shanghai, China, December 17–18, 2021, Proceedings


Book details

Edition: 1
Authors: , , ,
Series: Lecture Notes in Computer Science
ISBN: 9783030946623, 9783030946616
Publisher: Springer
Year: 2022
Pages: 0
Language: English
File format: EPUB (can be converted to PDF, EPUB, or AZW3 on request)
File size: 34 MB

Price (Toman): 56,000







Book description

This book constitutes the refereed proceedings of the Third International Conference on Distributed Artificial Intelligence, DAI 2021, held in Shanghai, China, in December 2021.

The 15 full papers presented in this book were carefully reviewed and selected from 31 submissions. DAI aims at bringing together international researchers and practitioners in related areas including general AI, multiagent systems, distributed learning, computational game theory, etc., to provide a single, high-profile, internationally renowned forum for research in the theory and practice of distributed AI.



Table of contents

Preface
Organization
Contents
The Power of Signaling and Its Intrinsic Connection to the Price of Anarchy
	1 Introduction
	2 Preliminaries
		2.1 Cost-Minimization/Payoff-Maximization Games
		2.2 Signaling Schemes and Equilibrium Concepts
	3 The Power of Signaling (PoS)
	4 PoS in Cost-Minimization Games
		4.1 Proof of the PoS Upper Bounds
		4.2 Tightness of the Upper-Bound for PoS
	5 PoS in Payoff-Maximization Games
		5.1 Tightness of PoS(Pri:Pub) at the Robber's Game
	6 Discussions and Future Work
	A  Omitted Proofs for Cost-Minimization PoS Bounds
		A.1  Tightness of the Upper-Bound for PoS(Pri:Pub)
		A.2 Tightness of the Upper-Bound for PoS(exP:Pri)
	B  Omitted Proofs for Payoff-Maximization PoS Bounds
		B.1  Tightness of the Lower-Bound for PoS(Pub:FI)
		B.2  Tightness of the Lower-Bound for PoS(exP:Pri)
	C  PoS w.r.t the No Information (NI) Benchmark
	D Non-tightness of PoS in "Reverse" Routing
	References
Uncertainty-Aware Low-Rank Q-Matrix Estimation for Deep Reinforcement Learning
	1 Introduction
	2 Background
		2.1 Reinforcement Learning
		2.2 Approximate Rank and Matrix Reconstruction
	3 Low-Rank Q-Matrix in DRL
		3.1 Empirical Study of Low-Rank Q-Matrix in MuJoCo
		3.2 Q-Matrix Reconstruction for DRL
	4 DRL with Uncertainty-Aware Q-Matrix Reconstruction
		4.1 Connection Between Rank and Uncertainty
		4.2 Uncertainty-Aware Q-Matrix Reconstruction for DRL
	5 Experiment
		5.1 Experiment Setup
		5.2 Results and Analysis
	6 Conclusion
	A  More Background on Matrix Estimation
	B  Additional Experimental Details
	C  Complete Learning Curves of Table 2
	References
SEIHAI: A Sample-Efficient Hierarchical AI for the MineRL Competition
	1 Introduction
	2 Background
		2.1 The MineRL Competition
		2.2 The ObtainDiamond Task
		2.3 The ObtainDiamond MDP
	3 Method
		3.1 The Overall Framework
		3.2 Action Discretization
		3.3 The ChopTree Agent
		3.4 The CraftWoodenPickaxe Agent
		3.5 The DigStone Agent
		3.6 The CraftStonePickaxe Agent
		3.7 The RandomSearch Agent
		3.8 The Scheduler
	4 Experiments
		4.1 Overall Evaluation
		4.2 Agent-Level Evaluation
	5 Related Work
	6 Conclusion
	References
BGC: Multi-agent Group Belief with Graph Clustering
	1 Introduction
	2 Related Work
	3 Background
		3.1 Graph Attention Network
	4 Method
		4.1 Adjacent Matrix via kNN
		4.2 Belief in Graph Clustering
		4.3 Split Loss
		4.4 Decentralization Execution
	5 Experiment
		5.1 Starcraft II
		5.2 Representation
		5.3 Ablation
		5.4 Distributed Execution
	6 Conclusion
	References
Incomplete Distributed Constraint Optimization Problems: Model, Algorithms, and Heuristics
	1 Introduction
	2 Background
	3 Incomplete DCOPs
	4 Solving I-DCOPs
		4.1 SyncBB
		4.2 ALS-MGM
	5 SyncBB Cost-Estimate Heuristic
	6 ALS-MGM Cost-Estimate Heuristic
	7 Theoretical Results
	8 Related Work
	9 Empirical Evaluations
	10 Conclusions
	References
Securities Based Decision Markets
	1 Introduction
	2 Related Work and Notation
		2.1 Scoring Rules
		2.2 Sequentially Shared Scoring Rules
		2.3 Securities Based Prediction Markets
		2.4 Decision Markets
	3 Strictly Proper Securities Based Decision Markets
		3.1 Design
		3.2 Distribution of Realised Payoffs
	4 Worst-Case Losses for Participants and Market Creator
		4.1 Worst-Case Loss for Participants
		4.2 Worst-Case Loss for Market Creator
		4.3 Re-allocation of Worst-Case Losses
	5 Conclusion and Discussion
	References
MARL for Traffic Signal Control in Scenarios with Different Intersection Importance
	1 Introduction
	2 Basic Notation
		2.1 Adaptive Traffic Signal Control
		2.2 Network Markov Game
		2.3 Deep Q-Learning and HDQN
	3 Leader Follower Markov Game
	4 Breadth First Sort Hysteretic DQN
	5 Experiment
		5.1 Scenarios Setting
		5.2 Training Setting
		5.3 Performance Comparison
	6 Conclusion
	References
Safe Distributional Reinforcement Learning
	1 Introduction
	2 Related Work
	3 Background
	4 Problem Formulation
	5 Proposed Method
		5.1 General Principle
		5.2 Techniques for Efficient Implementation
	6 Experimental Results
		6.1 Safety Gym
		6.2 Stock Investment
	7 Conclusion
	A Performance Guarantee Bound
	B More Results
		B.1 Random CMDP
		B.2 Safety Gym
	C More Details on Experiments
		C.1 Random CMDP
		C.2 Stock Transaction
		C.3 Mujoco Simulator
	References
The Positive Effect of User Faults over Agent Perception in Collaborative Settings and Its Use in Agent Design
	1 Introduction
	2 Related Work
	3 The Model
	4 Research Hypotheses
	5 Experimental Framework
	6 Experimental Design
		6.1 Experimental Treatments
		6.2 Measures
	7 Results and Analysis
		7.1 The Effect of User's Own Faults over Her Satisfaction with the Collaborative Agent
		7.2 Incorporating User Faults in Agent Design - Proof of Concept
		7.3 Participants' Qualitative (Textual) Responses
	8 Discussion, Conclusions and Future Work
	A Experimental Framework Interface
	B Experimental Treatments Comparison
	C State Machines
	D Measures
	E Complementary Graphs
		E.1 Competence
		E.2 Recommendation
	F Participants' Qualitative (Textual) Responses
	References
Behavioral Stable Marriage Problems
	1 Introduction
	2 Multialternative Decision Field Theory (MDFT)
	3 Stable Marriage Problems (SMPs)
	4 Behavioral Stable Marriage Problems (BSMPs)
	5 Complexity Results
	6 Algorithms for BSMPs
	7 Experimental Results
	8 Future Work
	A Proofs of Theorems
	B Algorithms
	C Convergence Analysis for B-LS
	References
FUN-Agent: A HUMAINE Competitor
	1 Introduction
	2 Related Work
	3 2020 HUMAINE Competition
		3.1 Language Processing
		3.2 Formal Specification of the Repeated Task Negotiation Problem
	4 Strategy
		4.1 Conversational Demeanor
		4.2 Bundling
		4.3 State Tracking
		4.4 Conceding
		4.5 Obfuscation as a Competitive Strategy
		4.6 Initial Offer Generation
		4.7 Counteroffer Generation
		4.8 Guarding Against Competitor Sniping
	5 Experimentation and Design
		5.1 Competitor Agents
		5.2 Human Negotiators
		5.3 Negotiation Rounds
	6 Results and Discussion
	7 Conclusion
	References
Signal Instructed Coordination in Cooperative Multi-agent Reinforcement Learning
	1 Introduction
	2 Methods
		2.1 Preliminaries
		2.2 Joint Policy Space with Coordination Signal
		2.3 Signal Instructed Coordination
		2.4 Implementation Details
	3 Experiments
		3.1 Rock-Paper-Scissors-Well (RPSW)
		3.2 Particle Worlds
	4 Related Works
	5 Conclusions
	A Algorithm
	B Proof of Proposition 1
	C Proof of Proposition 2
	D Derivation of the Lower Bound
	E Experiment Details
		E.1 Matrix Game Experiment
		E.2 Particle World Experiment
	F Visualization for Joint Policy of Multi-step Matrix Game
	G Parameter Sensitivity
	References
A Description of the Jadescript Type System
	1 Introduction and Motivation
	2 The Jadescript Type System
		2.1 Basic Types
		2.2 Collection Types
		2.3 Concept, Action, Predicate, and Proposition Types
		2.4 Ontology Types
		2.5 Agent Types
		2.6 Behaviour Types
		2.7 Message Types
	3 Related Work
	4 Conclusions
	References
Combining M-MCTS and Deep Reinforcement Learning for General Game Playing
	1 Introduction
	2 Background
		2.1 General Game Playing
		2.2 Memory-Augmented Monte Carlo Tree Search
		2.3 Deep Reinforcement Learning
	3 Method
		3.1 M-MCTS for GGP
		3.2 Combining M-MCTS with Deep Reinforcement Learning
	4 Experimental Evaluation
		4.1 Evaluation Methodology
		4.2 Results and Analysis
	5 Conclusion
	References
A Two-Step Method for Dynamics of Abstract Argumentation
	1 Introduction
	2 Preliminaries
	3 The Update of Argumentation Frameworks
	4 Properties
	5 Related Work
	6 Conclusion and Further Work
	References
Author Index
