Authors: Uday Kamath, John Liu
ISBN: 9783030833565, 9783030833558
Publisher: Springer International Publishing
Language: English
File format: EPUB
File size: 54 MB
This book is written both for readers entering the field and for practitioners with a background in AI and an interest in developing real-world applications. The book is a great resource for practitioners and researchers in both industry and academia, and the discussed case studies and associated material can serve as inspiration for a variety of projects and hands-on assignments in a classroom setting. I will certainly keep this book as a personal resource for the courses I teach, and strongly recommend it to my students. --Dr. Carlotta Domeniconi, Associate Professor, Computer Science Department, GMU

This book offers a curriculum for introducing interpretability to machine learning at every stage. The authors provide compelling examples showing that a core teaching practice, such as leading interpretive discussions, can be taught and learned through sustained effort. And what better way to strengthen the quality of AI and machine learning outcomes. I hope that this book will become a primer for teachers, data science educators, and ML developers, and that together we practice the art of interpretive machine learning. --Anusha Dandapani, Chief Data and Analytics Officer, UNICC and Adjunct Faculty, NYU

This is a wonderful book! I'm pleased that the next generation of scientists will finally be able to learn this important topic. This is the first book I've seen that has up-to-date and well-rounded coverage. Thank you to the authors! --Dr. Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, and Biostatistics & Bioinformatics

Literature on Explainable AI has until now been relatively scarce and has featured mainly mainstream algorithms like SHAP and LIME. This book closes that gap by providing an extremely broad review of the algorithms proposed in scientific circles over the previous 5-10 years. This book is a great guide for anyone who is new to the field of XAI, or who is already familiar with the field and willing to expand their knowledge. A comprehensive review of state-of-the-art Explainable AI methods, starting from visualization, interpretable methods, local and global explanations, and time series methods, and finishing with deep learning, provides an unparalleled source of information currently unavailable anywhere else. Additionally, notebooks with vivid examples are a great supplement that makes the book even more attractive for practitioners of any level. Overall, the authors provide readers with an enormous breadth of coverage without losing sight of practical aspects, which makes this book truly unique and a great addition to the library of any data scientist. --Dr. Andrey Sharapov, Product Data Scientist, Explainable AI Expert and Speaker, Founder of Explainable AI-XAI Group
Foreword
Preface
  Why This Book?
  Who This Book Is For
  What This Book Covers
Acknowledgments
Contents
Notation
  Calculus
  Datasets
  Functions
  Variables
  Probability
  Sets
1 Introduction to Interpretability and Explainability
  1.1 Black-Box Problem
  1.2 Goals
  1.3 Brief History
    1.3.1 Porphyrian Tree
    1.3.2 Expert Systems
    1.3.3 Case-Based Reasoning
    1.3.4 Bayesian Networks
    1.3.5 Neural Networks
  1.4 Purpose
  1.5 Societal Impact
  1.6 Types of Explanations
  1.7 Trade-offs
  1.8 Taxonomy
    1.8.1 Scope
    1.8.2 Stage
  1.9 Flowchart for Interpretable and Explainable Techniques
  1.10 Resources for Researchers and Practitioners
    1.10.1 Books
    1.10.2 Relevant University Courses and Classes
    1.10.3 Online Resources
    1.10.4 Survey Papers
  1.11 Book Layout and Details
    1.11.1 Structure: Explainable Algorithm
      1.11.1.1 Linear Regression
  References
2 Pre-model Interpretability and Explainability
  2.1 Data Science Process and EDA
  2.2 Exploratory Data Analysis
    2.2.1 EDA Challenges for Explainability
    2.2.2 EDA: Taxonomy
    2.2.3 Role of EDA in Explainability
    2.2.4 Non-graphical: Summary Statistics and Analysis
      2.2.4.1 Tools and Libraries
      2.2.4.2 Summary Statistics and Analysis
    2.2.5 Graphical: Univariate and Multivariate Analysis
      2.2.5.1 Tools and Libraries
      2.2.5.2 Univariate Analysis
      2.2.5.3 Multivariate Analysis
    2.2.6 EDA and Time Series
      2.2.6.1 Resampling
      2.2.6.2 Seasonality and Trend Analysis
      2.2.6.3 Autocorrelation, Stationarity, and Differencing
    2.2.7 EDA and NLP
      2.2.7.1 Text Corpus Statistics
      2.2.7.2 N-Grams Analysis
      2.2.7.3 Word Cloud
      2.2.7.4 Topic Modeling
      2.2.7.5 Corpus Visualization
    2.2.8 EDA and Computer Vision
      2.2.8.1 Distributional Analysis
      2.2.8.2 2D Projections
  2.3 Feature Engineering
    2.3.1 Feature Engineering and Explainability
    2.3.2 Feature Engineering Taxonomy and Tools
      2.3.2.1 Filter-Based
      2.3.2.2 Wrapper-Based
      2.3.2.3 Unsupervised
      2.3.2.4 Embedded
  References
3 Model Visualization Techniques and Traditional Interpretable Algorithms
  3.1 Model Validation, Evaluation, and Hyperparameters
    3.1.1 Tools and Libraries
  3.2 Model Selection and Visualization
    3.2.1 Validation Curve
    3.2.2 Learning Curve
  3.3 Classification Model Visualization
    3.3.1 Confusion Matrix and Classification Report
    3.3.2 ROC and AUC
    3.3.3 PRC
    3.3.4 Discrimination Thresholds
  3.4 Regression Model Visualization
    3.4.1 Residual Plots
    3.4.2 Prediction Error Plots
    3.4.3 Alpha Selection Plots
    3.4.4 Cook's Distance
  3.5 Clustering Model Visualization
    3.5.1 Elbow Method
    3.5.2 Silhouette Coefficient Visualizer
    3.5.3 Intercluster Distance Maps
  3.6 Interpretable Machine Learning Properties
  3.7 Traditional Interpretable Algorithms
    3.7.1 Tools and Libraries
    3.7.2 Linear Regression
      3.7.2.1 Regularization
    3.7.3 Logistic Regression
    3.7.4 Generalized Linear Models
    3.7.5 Generalized Additive Models
    3.7.6 Naive Bayes
    3.7.7 Bayesian Networks
    3.7.8 Decision Trees
    3.7.9 Rule Induction
  References
4 Model Interpretability: Advances in Interpretable Machine Learning
  4.1 Interpretable vs. Explainable Algorithms
  4.2 Tools and Libraries
  4.3 Ensemble-Based
    4.3.1 Boosted Rulesets
    4.3.2 Explainable Boosting Machines (EBM)
    4.3.3 RuleFit
    4.3.4 Skope-Rules
    4.3.5 Iterative Random Forests (iRF)
  4.4 Decision Tree-Based
    4.4.1 Optimal Classification Trees
    4.4.2 Optimal Decision Trees
      4.4.2.1 Optimal Sparse Decision Trees
      4.4.2.2 DL8.5
      4.4.2.3 Generalized and Scalable Optimal Sparse Decision Trees (GOSDT)
  4.5 Rule-Based Techniques
    4.5.1 Bayesian Or's of And's (BOA)
    4.5.2 Bayesian Case Model
    4.5.3 Certifiably Optimal RulE ListS (CORELS)
    4.5.4 Bayesian Rule Lists
  4.6 Scoring System
    4.6.1 Supersparse Linear Integer Models
  References
5 Post-Hoc Interpretability and Explanations
  5.1 Tools and Libraries
  5.2 Visual Explanation
    5.2.1 Partial Dependence Plots
    5.2.2 Individual Conditional Expectation Plots
    5.2.3 Ceteris Paribus Plots
    5.2.4 Accumulated Local Effects Plots
    5.2.5 Breakdown Plots
    5.2.6 Interaction Breakdown Plots
  5.3 Feature Importance
    5.3.1 Feature Interaction
    5.3.2 Permutation Feature Importance
    5.3.3 Ablations: Leave-One-Covariate-Out
    5.3.4 Shapley Values
    5.3.5 SHAP
    5.3.6 KernelSHAP
    5.3.7 Anchors
    5.3.8 Global Surrogate
    5.3.9 LIME
  5.4 Example-Based
    5.4.1 Contrastive Explanation
    5.4.2 kNN
    5.4.3 Trust Scores
    5.4.4 Counterfactuals
    5.4.5 Prototypes/Criticisms
    5.4.6 Influential Instances
  References
6 Explainable Deep Learning
  6.1 Applications
  6.2 Tools and Libraries
  6.3 Intrinsic
    6.3.1 Attention
    6.3.2 Joint Training
  6.4 Perturbation
    6.4.1 LIME
    6.4.2 Occlusion
    6.4.3 RISE
    6.4.4 Prediction Difference Analysis
    6.4.5 Meaningful Perturbation
  6.5 Gradient/Backpropagation
    6.5.1 Activation Maximization
    6.5.2 Class Model Visualization
    6.5.3 Saliency Maps
    6.5.4 DeepLIFT
    6.5.5 DeepSHAP
    6.5.6 Deconvolution
    6.5.7 Guided Backpropagation
    6.5.8 Integrated Gradients
    6.5.9 Layer-Wise Relevance Propagation
    6.5.10 Excitation Backpropagation
    6.5.11 CAM
    6.5.12 Gradient-Weighted CAM
    6.5.13 Testing with Concept Activation Vectors
  References
7 Explainability in Time Series Forecasting, Natural Language Processing, and Computer Vision
  7.1 Time Series Forecasting
    7.1.1 Tools and Libraries
    7.1.2 Model Validation and Evaluation
    7.1.3 Model Metrics
    7.1.4 Statistical Time Series Models
      7.1.4.1 ARIMA Models
      7.1.4.2 Exponential Smoothing Models
    7.1.5 Prophet: Scalable and Interpretable Machine Learning Approach
    7.1.6 Deep Learning and Interpretable Time Series Forecasting
  7.2 Natural Language Processing
    7.2.1 Explainability, Operationalization, and Visualization Techniques
      7.2.1.1 Feature Importance
      7.2.1.2 Surrogate Model
      7.2.1.3 Example Driven
      7.2.1.4 Provenance-Based
      7.2.1.5 Declarative Induction
    7.2.2 Explanation Quality Evaluation
      7.2.2.1 Comparison to the Ground Truth
      7.2.2.2 Human Evaluation
    7.2.3 Tools and Libraries
    7.2.4 Case Study
  7.3 Computer Vision
    7.3.1 Generating Iconic Examples
    7.3.2 Attribution
    7.3.3 Semantic Identification
    7.3.4 Understanding the Networks
    7.3.5 Tools and Libraries
    7.3.6 Case Study
  References
8 XAI: Challenges and Future
  8.1 XAI: Challenges
    8.1.1 Properties of Explanation
    8.1.2 Categories of Explanation
    8.1.3 Taxonomy of Explanation Evaluation
  8.2 Future
    8.2.1 Formalization of Explanation Techniques and Evaluations
    8.2.2 Adoption of Interpretable Techniques
    8.2.3 Human-Machine Collaboration
    8.2.4 Collective Intelligence from Multiple Disciplines
    8.2.5 Responsible AI (RAI)
    8.2.6 XAI and Security
    8.2.7 Causality and XAI
  8.3 Closing Remarks
  References