Edition:
Authors: Aditya Bhattacharya
Series:
ISBN: 1803246154, 9781803246154
Publisher: Packt Publishing
Publication year: 2022
Number of pages: 306
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 8 MB
If the author is Iranian, the book cannot be downloaded and the payment will be refunded.
If you would like the file for Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more converted to PDF, EPUB, AZW3, MOBI, or DJVU, let the support team know and they will convert the file for you.
Please note that Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more is the original English-language edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.
Cover
Title page
Copyright and Credits
Dedications
Contributors
Table of Contents
Preface

Section 1 – Conceptual Exposure

Chapter 1: Foundational Concepts of Explainability Techniques
  Introduction to XAI
  Understanding the key terms
  Consequences of poor predictions
  Summarizing the need for model explainability
  Defining explanation methods and approaches
  Dimensions of explainability
  Addressing key questions of explainability
  Understanding different types of explanation methods
  Understanding the accuracy-interpretability trade-off
  Evaluating the quality of explainability methods
  Criteria for good explainable ML systems
  Auxiliary criteria of XAI for ML systems
  Taxonomy of evaluation levels for explainable ML systems
  Summary
  References

Chapter 2: Model Explainability Methods
  Technical requirements
  Types of model explainability methods
  Knowledge extraction methods
  EDA
  Result visualization methods
  Using comparison analysis
  Using Surrogate Explainer methods
  Influence-based methods
  Feature importance
  Sensitivity analysis
  PDPs
  LRP
  Representation-based explanation
  VAMs
  Example-based methods
  CFEs in structured data
  CFEs in unstructured data
  Summary
  References

Chapter 3: Data-Centric Approaches
  Technical requirements
  Introduction to data-centric XAI
  Analyzing data volume
  Analyzing data consistency
  Analyzing data purity
  Thorough data analysis and profiling process
  The need for data analysis and profiling processes
  Data analysis as a precautionary step
  Building robust data profiles
  Monitoring and anticipating drifts
  Detecting drifts
  Selection of statistical measures
  Checking adversarial robustness
  Impact of adversarial attacks
  Methods to increase adversarial robustness
  Evaluating adversarial robustness
  Measuring data forecastability
  Estimating data forecastability
  Summary
  References

Section 2 – Practical Problem Solving

Chapter 4: LIME for Model Interpretability
  Technical requirements
  Intuitive understanding of LIME
  Learning interpretable data representations
  Maintaining a balance in the fidelity-interpretability trade-off
  Searching for local explorations
  What makes LIME a good model explainer?
  SP-LIME
  A practical example of using LIME for classification problems
  Potential pitfalls
  Summary
  References

Chapter 5: Practical Exposure to Using LIME in ML
  Technical requirements
  Using LIME on tabular data
  Setting up LIME
  Discussion about the dataset
  Discussions about the model
  Application of LIME
  Explaining image classifiers with LIME
  Setting up the required Python modules
  Using a pre-trained TensorFlow model as our black-box model
  Application of LIME Image Explainers
  Using LIME on text data
  Installing the required Python modules
  Discussions about the dataset used for training the model
  Discussions about the text classification model
  Applying LIME Text Explainers
  LIME for production-level systems
  Summary
  References

Chapter 6: Model Interpretability Using SHAP
  Technical requirements
  An intuitive understanding of the SHAP and Shapley values
  Introduction to SHAP and Shapley values
  What are Shapley values?
  Shapley values in ML
  The SHAP framework
  Model explainability approaches using SHAP
  Visualizations in SHAP
  Explainers in SHAP
  Using SHAP to explain regression models
  Setting up SHAP
  Inspecting the dataset
  Training the model
  Application of SHAP
  Advantages and limitations of SHAP
  Advantages
  Limitations
  Summary
  References

Chapter 7: Practical Exposure to Using SHAP in ML
  Technical requirements
  Applying TreeExplainers to tree ensemble models
  Installing the required Python modules
  Discussion about the dataset
  Training the model
  Application of TreeExplainer in SHAP
  Explaining deep learning models using DeepExplainer and GradientExplainer
  GradientExplainer
  Discussion on the dataset used for training the model
  Using a pre-trained CNN model for this example
  Application of GradientExplainer in SHAP
  Exploring DeepExplainers
  Application of DeepExplainer in SHAP
  Model-agnostic explainability using KernelExplainer
  Application of KernelExplainer in SHAP
  Exploring LinearExplainer in SHAP
  Application of LinearExplainer in SHAP
  Explaining transformers using SHAP
  Explaining transformer-based sentiment analysis models
  Explaining a multi-class prediction transformer model using SHAP
  Explaining zero-shot learning models using SHAP
  Summary
  References

Chapter 8: Human-Friendly Explanations with TCAV
  Technical requirements
  Understanding TCAV intuitively
  What is TCAV?
  Explaining with abstract concepts
  Goals of TCAV
  Approach of TCAV
  Exploring the practical applications of TCAV
  Getting started
  About the data
  Discussions about the deep learning model used
  Model explainability using TCAV
  Advantages and limitations
  Advantages
  Limitations
  Potential applications of concept-based explanations
  Summary
  References

Chapter 9: Other Popular XAI Frameworks
  Technical requirements
  DALEX
  Setting up DALEX for model explainability
  Discussions about the dataset
  Training the model
  Model explainability using DALEX
  Model-level explanations
  Prediction-level explanations
  Evaluating model fairness
  Interactive dashboards using ARENA
  Explainerdashboard
  Setting up Explainerdashboard
  Model explainability with Explainerdashboard
  InterpretML
  Supported explanation methods
  Setting up InterpretML
  Discussions about the dataset
  Training the model
  Explainability with InterpretML
  ALIBI
  Setting up ALIBI
  Discussion about the dataset
  Training the model
  Model explainability with ALIBI
  DiCE
  CFE methods supported in DiCE
  Model explainability with DiCE
  ELI5
  Setting up ELI5
  Model explainability using ELI5
  H2O AutoML explainers
  Explainability with H2O explainers
  Quick comparison guide
  Summary
  References

Section 3 – Taking XAI to the Next Level

Chapter 10: XAI Industry Best Practices
  Open challenges of XAI
  Guidelines for designing explainable ML systems
  Adopting a data-first approach for explainability
  Emphasizing IML for explainability
  Emphasizing prescriptive insights for explainability
  Summary
  References

Chapter 11: End User-Centered Artificial Intelligence
  User-centered XAI/ML systems
  Different aspects of end user-centric XAI
  Rapid XAI prototyping using EUCA
  Efforts toward increasing user acceptance of AI/ML systems using XAI
  Providing a delightful UX
  Summary
  References

Index
Other Books You May Enjoy