Edition: 1st ed.
Author: Pradeepta Mishra
Series:
ISBN: 1484271572, 9781484271575
Publisher: Apress
Year of publication: 2022
Number of pages: 362
Language: English
File format: EPUB (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 25 MB
If you would like the file of Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.
Please note that Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.
Table of Contents

About the Author
About the Technical Reviewers
Acknowledgments
Introduction

Chapter 1: Model Explainability and Interpretability
    Establishing the Framework; Artificial Intelligence; Need for XAI; Explainability vs. Interpretability; Explainability Types; Tools for Model Explainability (SHAP, LIME, ELI5, Skater, Skope_rules); Methods of XAI for ML; XAI Compatible Models; XAI Meets Responsible AI; Evaluation of XAI; Conclusion

Chapter 2: AI Ethics, Biasness, and Reliability
    AI Ethics Primer; Biasness in AI; Data Bias; Algorithmic Bias; Bias Mitigation Process; Interpretation Bias; Training Bias; Reliability in AI; Conclusion

Chapter 3: Explainability for Linear Models
    Linear Models; Linear Regression; VIF and the Problems It Can Generate; Final Model; Model Explainability; Trust in ML Model: SHAP; Local Explanation and Individual Predictions in a ML Model; Global Explanation and Overall Predictions in ML Model; LIME Explanation and ML Model; Skater Explanation and ML Model; ELI5 Explanation and ML Model; Logistic Regression; Interpretation; LIME Inference; Conclusion

Chapter 4: Explainability for Non-Linear Models
    Non-Linear Models; Decision Tree Explanation; Data Preparation for the Decision Tree Model; Creating the Model; Decision Tree – SHAP; Partial Dependency Plot; PDP Using Scikit-Learn; Non-Linear Model Explanation – LIME; Non-Linear Explanation – Skope-Rules; Conclusion

Chapter 5: Explainability for Ensemble Models
    Ensemble Models; Types of Ensemble Models; Why Ensemble Models?; Using SHAP for Ensemble Models; Using the Interpret Explaining Boosting Model; Ensemble Classification Model: SHAP; Using SHAP to Explain Categorical Boosting Models; Using SHAP Multiclass Categorical Boosting Model; Using SHAP for Light GBM Model Explanation; Conclusion

Chapter 6: Explainability for Time Series Models
    Time Series Models; Knowing Which Model Is Good; Strategy for Forecasting; Confidence Interval of Predictions; What Happens to Trust?; Time Series: LIME; Conclusion

Chapter 7: Explainability for NLP
    Natural Language Processing Tasks; Explainability for Text Classification; Dataset for Text Classification; Explaining Using ELI5; Calculating the Feature Weights for Local Explanation; Local Explanation Example 1; Local Explanation Example 2; Local Explanation Example 3; Explanation After Stop Word Removal; N-gram-Based Text Classification; Multi-Class Label Text Classification Explainability; Local Explanation Example 1; Local Explanation Example 2; Local Explanation Example 1; Conclusion

Chapter 8: AI Model Fairness Using a What-If Scenario
    What Is the WIT?; Installing the WIT; Evaluation Metric; Conclusion

Chapter 9: Explainability for Deep Learning Models
    Explaining DL Models; Using SHAP with DL; Using Deep SHAP; Using Alibi; SHAP Explainer for Deep Learning; Another Example of Image Classification; Using SHAP Deep Explainer for Tabular Data; Conclusion

Chapter 10: Counterfactual Explanations for XAI Models
    What Are CFEs?; Implementation of CFEs; CFEs Using Alibi; Counterfactual for Regression Tasks; Conclusion

Chapter 11: Contrastive Explanations for Machine Learning
    What Is CE for ML?; CEM Using Alibi; Comparison of an Original Image vs. an Autoencoder-Generated Image; CEM for Tabular Data Explanations; Conclusion

Chapter 12: Model-Agnostic Explanations by Identifying Prediction Invariance
    What Is Model Agnostic?; What Is an Anchor?; Anchor Explanations Using Alibi; Anchor Text for Text Classification; Anchor Image for Image Classification; Conclusion

Chapter 13: Model Explainability for Rule-Based Expert Systems
    What Is an Expert System?; Backward and Forward Chaining; Rule Extraction Using Scikit-Learn; Need for a Rule-Based System; Challenges of an Expert System; Conclusion

Chapter 14: Model Explainability for Computer Vision
    Why Explainability for Image Data?; Anchor Image Using Alibi; Integrated Gradients Method; Conclusion

Index
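
For readers unfamiliar with the topic, the following is a minimal, purely illustrative Python sketch of the kind of SHAP-based explanation workflow the table of contents describes. It is not code from the book; the dataset and model choices here are assumptions made only for demonstration, and it requires the shap and scikit-learn packages.

# Illustrative sketch only (not taken from the book): a minimal SHAP workflow.
# The dataset and model below are assumptions chosen for demonstration.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Load a small bundled regression dataset as a pandas DataFrame.
X, y = load_diabetes(return_X_y=True, as_frame=True)

# Fit a simple ensemble model to explain.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: summary plot of feature contributions across all rows.
shap.summary_plot(shap_values, X)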