Authors: Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek
Series: Lecture Notes in Artificial Intelligence, 13200
ISBN: 9783031040825, 9783031040832
Publisher: Springer
Publication year: 2022
Number of pages: [397]
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 on user request)
File size: 35 MB
If the author is Iranian, the book cannot be downloaded and the payment will be refunded.
To have the file of the book xxAI - Beyond Explainable AI. International Workshop Held in Conjunction with ICML 2020 July 18, 2020, Vienna, Austria Revised and Extended Papers converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the requested file.
Note that the book xxAI - Beyond Explainable AI. International Workshop Held in Conjunction with ICML 2020 July 18, 2020, Vienna, Austria Revised and Extended Papers is the original-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.
This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNNs), have achieved ever better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable, and understandable to humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
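The contents below survey many attribution methods, including LIME, LRP, and SHAP (Shapley values). As a rough illustration of what such an attribution method computes, here is a minimal, self-contained sketch of exact Shapley-value attribution for a toy model; the baseline-masking value function, the toy linear model, and all names in it are illustrative assumptions of this sketch, not code from the book.

```python
import math
from itertools import combinations

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one input x of a black-box `predict`.

    Features absent from a coalition are replaced by `baseline` values
    (a common masking convention). Exponential in the number of
    features, so suitable only for tiny illustrative examples.
    """
    n = len(x)

    def value(coalition):
        # Evaluate the model with only the coalition's features "present".
        masked = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(masked)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                s = set(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Toy linear model: for linear models the Shapley value of feature i
# reduces to w[i] * (x[i] - baseline[i]), which makes the output easy to check.
w = [2.0, -1.0, 0.5]
predict = lambda z: sum(wi * zi for wi, zi in zip(w, z))

print(shapley_values(predict, [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]))
# -> [2.0, -3.0, 1.0]; the attributions sum to predict(x) - predict(baseline)
```

Practical libraries approximate these values by sampling rather than enumerating all coalitions; the exhaustive loop above is only meant to make the definition concrete.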
Preface
Organization
Contents

Editorial

xxAI - Beyond Explainable Artificial Intelligence
  1 Introduction and Motivation for Explainable AI
  2 Explainable AI: Past and Present
  3 Book Structure
  References

Current Methods and Challenges

Explainable AI Methods - A Brief Overview
  1 Introduction
  2 Explainable AI Methods - Overview
    2.1 LIME (Local Interpretable Model Agnostic Explanations)
    2.2 Anchors
    2.3 GraphLIME
    2.4 Method: LRP (Layer-wise Relevance Propagation)
    2.5 Deep Taylor Decomposition (DTD)
    2.6 Prediction Difference Analysis (PDA)
    2.7 TCAV (Testing with Concept Activation Vectors)
    2.8 XGNN (Explainable Graph Neural Networks)
    2.9 SHAP (Shapley Values)
    2.10 Asymmetric Shapley Values (ASV)
    2.11 Break-Down
    2.12 Shapley Flow
    2.13 Textual Explanations of Visual Models
    2.14 Integrated Gradients
    2.15 Causal Models
    2.16 Meaningful Perturbations
    2.17 EXplainable Neural-Symbolic Learning (X-NeSyL)
  3 Conclusion and Future Outlook
  References

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models
  1 Introduction
  2 Assuming One-Fits-All Interpretability
  3 Bad Model Generalization
  4 Unnecessary Use of Complex Models
  5 Ignoring Feature Dependence
    5.1 Interpretation with Extrapolation
    5.2 Confusing Linear Correlation with General Dependence
    5.3 Misunderstanding Conditional Interpretation
  6 Misleading Interpretations Due to Feature Interactions
    6.1 Misleading Feature Effects Due to Aggregation
    6.2 Failing to Separate Main from Interaction Effects
  7 Ignoring Model and Approximation Uncertainty
  8 Ignoring the Rashomon Effect
  9 Failure to Scale to High-Dimensional Settings
    9.1 Human-Intelligibility of High-Dimensional IML Output
    9.2 Computational Effort
    9.3 Ignoring Multiple Comparison Problem
  10 Unjustified Causal Interpretation
  11 Discussion
  References

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
  1 Introduction
  2 Related Work
  3 The CLEVR-X Dataset
    3.1 The CLEVR Dataset
    3.2 Dataset Generation
    3.3 Dataset Analysis
    3.4 User Study on Explanation Completeness and Relevance
  4 Experiments
    4.1 Experimental Setup
    4.2 Evaluating Explanations Generated by State-of-the-Art Methods
    4.3 Analyzing Results on CLEVR-X by Question and Answer Types
    4.4 Influence of Using Different Numbers of Ground-Truth Explanations
    4.5 Qualitative Explanation Generation Results
  5 Conclusion
  References

New Developments in Explainable AI

A Rate-Distortion Framework for Explaining Black-Box Model Decisions
  1 Introduction
  2 Related Works
  3 Rate-Distortion Explanation Framework
    3.1 General Formulation
    3.2 Implementation
  4 Experiments
    4.1 Images
    4.2 Audio
    4.3 Radio Maps
  5 Conclusion
  References

Explaining the Predictions of Unsupervised Learning Models
  1 Introduction
  2 A Brief Review of Explainable AI
    2.1 Approaches to Attribution
    2.2 Neuralization-Propagation
  3 Kernel Density Estimation
    3.1 Explaining Outlierness
    3.2 Explaining Inlierness: Direct Approach
    3.3 Explaining Inlierness: Random Features Approach
  4 K-Means Clustering
    4.1 Explaining Cluster Assignments
  5 Experiments
    5.1 Wholesale Customer Analysis
    5.2 Image Analysis
  6 Conclusion and Outlook
  A Attribution on CNN Activations
    A.1 Attributing Outlierness
    A.2 Attributing Inlierness
    A.3 Attributing Cluster Membership
  References

Towards Causal Algorithmic Recourse
  1 Introduction
    1.1 Motivating Examples
    1.2 Summary of Contributions and Structure of This Chapter
  2 Preliminaries
    2.1 XAI: Counterfactual Explanations and Algorithmic Recourse
    2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals
  3 Causal Recourse Formulation
    3.1 Limitations of CFE-Based Recourse
    3.2 Recourse Through Minimal Interventions
    3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations
  4 Recourse Under Imperfect Causal Knowledge
    4.1 Probabilistic Individualised Recourse
    4.2 Probabilistic Subpopulation-Based Recourse
    4.3 Solving the Probabilistic Recourse Optimization Problem
  5 Experiments
    5.1 Compared Methods
    5.2 Metrics
    5.3 Synthetic 3-Variable SCMs Under Different Assumptions
    5.4 Semi-synthetic 7-Variable SCM for Loan-Approval
  6 Discussion
  7 Conclusion
  References

Interpreting Generative Adversarial Networks for Interactive Image Generation
  1 Introduction
  2 Supervised Approach
  3 Unsupervised Approach
  4 Embedding-Guided Approach
  5 Concluding Remarks
  References

XAI and Strategy Extraction via Reward Redistribution
  1 Introduction
  2 Background
    2.1 Explainability Methods
    2.2 Reinforcement Learning
    2.3 Credit Assignment in Reinforcement Learning
    2.4 Methods for Credit Assignment
    2.5 Explainability Methods for Credit Assignment
    2.6 Credit Assignment via Reward Redistribution
  3 Strategy Extraction via Reward Redistribution
    3.1 Strategy Extraction with Profile Models
    3.2 Explainable Agent Behavior via Strategy Extraction
  4 Experiments
    4.1 Gridworld
    4.2 Minecraft
  5 Limitations
  6 Conclusion
  References

Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis
  1 Introduction
  2 Background on Reinforcement Learning
  3 Programmatic Policies
    3.1 Traditional Interpretable Models
    3.2 State Machine Policies
    3.3 List Processing Programs
    3.4 Neurosymbolic Policies
  4 Synthesizing Programmatic Policies
    4.1 Imitation Learning
    4.2 Q-Guided Imitation Learning
    4.3 Updating the DNN Policy
    4.4 Program Synthesis for Supervised Learning
  5 Case Studies
    5.1 Interpretability
    5.2 Verification
    5.3 Robustness
  6 Conclusions and Future Work
  References

Interpreting and Improving Deep-Learning Models with Reality Checks
  1 Interpretability: For What and For Whom?
  2 Computing Interpretations for Feature Interactions and Transformations
    2.1 Contextual Decomposition (CD) Importance Scores for General DNNs
    2.2 Agglomerative Contextual Decomposition (ACD)
    2.3 Transformation Importance with Applications to Cosmology (TRIM)
  3 Using Attributions to Improve Models
    3.1 Penalizing Explanations to Align Neural Networks with Prior Knowledge (CDEP)
    3.2 Distilling Adaptive Wavelets from Neural Networks with Interpretations
  4 Real-Data Problems Showcasing Interpretations
    4.1 Molecular Partner Prediction
    4.2 Cosmological Parameter Prediction
    4.3 Improving Skin Cancer Classification via CDEP
  5 Discussion
    5.1 Building/Distilling Accurate and Interpretable Models
    5.2 Making Interpretations Useful
  References

Beyond the Visual Analysis of Deep Model Saliency
  1 Introduction
  2 Saliency-Based XAI in Vision
    2.1 White-Box Models
    2.2 Black-Box Models
  3 XAI for Improved Models: Excitation Dropout
  4 XAI for Improved Models: Domain Generalization
  5 XAI for Improved Models: Guided Zoom
  6 Conclusion
  References

ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
  1 Introduction
  2 Related Work
  3 Neural Network Quantization
    3.1 Entropy-Constrained Quantization
  4 Explainability-Driven Quantization
    4.1 Layer-Wise Relevance Propagation
    4.2 eXplainability-Driven Entropy-Constrained Quantization
  5 Experiments
    5.1 Experimental Setup
    5.2 ECQx Results
  6 Conclusion
  References

A Whale's Tail - Finding the Right Whale in an Uncertain World
  1 Introduction
  2 Related Work
  3 Humpback Whale Data
    3.1 Image Data
    3.2 Expert Annotations
  4 Methods
    4.1 Landmark-Based Identification Framework
    4.2 Uncertainty and Sensitivity Analysis
  5 Experiments and Results
    5.1 Experimental Setup
    5.2 Uncertainty and Sensitivity Analysis of the Landmarks
    5.3 Heatmapping Results and Comparison with Whale Expert Knowledge
    5.4 Spatial Uncertainty of Individual Landmarks
  6 Conclusion and Outlook
  References

Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science
  1 Introduction
  2 XAI Applications
    2.1 XAI in Remote Sensing and Weather Forecasting
    2.2 XAI in Climate Prediction
    2.3 XAI to Extract Forced Climate Change Signals and Anthropogenic Footprint
  3 Development of Attribution Benchmarks for Geosciences
    3.1 Synthetic Framework
    3.2 Assessment of XAI Methods
  4 Conclusions
  References

An Interdisciplinary Approach to Explainable AI

Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond
  1 Introduction
    1.1 Functional Varieties of AI Explanations
    1.2 Technical Varieties of AI Explanations
    1.3 Roadmap of the Paper
  2 Explainable AI Under Current Law
    2.1 The GDPR: Rights-Enabling Transparency
    2.2 Contract and Tort Law: Technical and Protective Transparency
    2.3 Banking Law: More Technical and Protective Transparency
  3 Regulatory Proposals at the EU Level: The AIA
    3.1 AI with Limited Risk: Decision-Enabling Transparency (Art. 52 AIA)?
    3.2 AI with High Risk: Encompassing Transparency (Art. 13 AIA)?
    3.3 Limitations
  4 Beyond Explainability
    4.1 Actionable Explanations
    4.2 Connections to Algorithmic Fairness
    4.3 Quality Benchmarking
    4.4 Interventions and Co-design
  5 Conclusion
  References

Towards Explainability for AI Fairness
  1 Introduction
  2 Fairness
  3 AI Explanation
  4 Explanation for AI Fairness
    4.1 Explanation Guarantees Fairness
    4.2 Influence of Explanation on Perception of Fairness
    4.3 Fairness and Properties of Features
    4.4 Fairness and Counterfactuals
  5 Discussion
  6 Conclusion
  References

Logic and Pragmatics in AI Explanation
  1 Introduction
  2 The Logic of Explanations
  3 The Pragmatics of Explanations
    3.1 Case 1: Conversational Explanations
    3.2 Case 2: Explainable AI-Mediated Communication (XAI-MC)
  4 Usability, Explaniability and Causability
  References

Author Index