
Download the book Machine Learning for Practical Decision Making: A Multidisciplinary Perspective with Applications from Healthcare, Engineering and Business Analytics

Book Details

Edition: [1st ed. 2022]
Authors: , , ,
Series: International Series in Operations Research & Management Science, 334
ISBN: 3031169891, 9783031169892
Publisher: Springer
Publication year: 2022
Number of pages: 474 [475]
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 24 MB

Book price (Toman): 37,000

If the author is Iranian, the book cannot be downloaded and the payment will be refunded.





To have the file of Machine Learning for Practical Decision Making: A Multidisciplinary Perspective with Applications from Healthcare, Engineering and Business Analytics converted to PDF, EPUB, AZW3, MOBI, or DJVU format, notify support and they will convert the file for you.

Note that Machine Learning for Practical Decision Making: A Multidisciplinary Perspective with Applications from Healthcare, Engineering and Business Analytics is the original-language (English) edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


About the Book

This book provides a hands-on introduction to Machine Learning (ML) from a multidisciplinary perspective that does not require a background in data science or computer science. It explains ML using simple language and a straightforward approach guided by real-world examples in areas such as health informatics, information technology, and business analytics. The book will help readers understand the various key algorithms, major software tools, and their applications. Moreover, through examples from the healthcare and business analytics fields, it demonstrates how and when ML can help them make better decisions in their disciplines.

The book is chiefly intended for undergraduate and graduate students who are taking an introductory course in machine learning. It will also benefit data analysts and anyone interested in learning ML approaches.




Table of Contents

Preface
Contents
Chapter 1: Introduction to Machine Learning
	1.1 Introduction to Machine Learning
	1.2 Origin of Machine Learning
	1.3 Growth of Machine Learning
	1.4 How Machine Learning Works
	1.5 Machine Learning Building Blocks
		1.5.1 Data Management and Exploration
			1.5.1.1 Data, Information, and Knowledge
			1.5.1.2 Big Data
			1.5.1.3 OLAP Versus OLTP
			1.5.1.4 Databases, Data Warehouses, and Data Marts
			1.5.1.5 Multidimensional Analysis Techniques
				1.5.1.5.1 Slicing and Dicing
				1.5.1.5.2 Pivoting
				1.5.1.5.3 Drill-Down, Roll-Up, and Drill-Across
		1.5.2 The Analytics Landscape
			1.5.2.1 Types of Analytics (Descriptive, Diagnostic, Predictive, Prescriptive)
				1.5.2.1.1 Descriptive Analytics
				1.5.2.1.2 Diagnostic Analytics
				1.5.2.1.3 Predictive Analytics
				1.5.2.1.4 Prescriptive Analytics
	1.6 Conclusion
	1.7 Key Terms
	1.8 Test Your Understanding
	1.9 Read More
	1.10 Lab
		1.10.1 Introduction to R
		1.10.2 Introduction to RStudio
			1.10.2.1 RStudio Download and Installation
			1.10.2.2 Install a Package
			1.10.2.3 Activate Package
			1.10.2.4 Use Readr to Load Data
			1.10.2.5 Run a Function
			1.10.2.6 Save Status
		1.10.3 Introduction to Python and Jupyter Notebook IDE
			1.10.3.1 Python Download and Installation
			1.10.3.2 Jupyter Download and Installation
			1.10.3.3 Load Data and Plot It Visually
			1.10.3.4 Save the Execution
			1.10.3.5 Load a Saved Execution
			1.10.3.6 Upload a Jupyter Notebook File
		1.10.4 Do It Yourself
	References
Chapter 2: Statistics
	2.1 Overview of the Chapter
	2.2 Definition of General Terms
	2.3 Types of Variables
		2.3.1 Measures of Central Tendency
			2.3.1.1 Measures of Dispersion
	2.4 Inferential Statistics
		2.4.1 Data Distribution
		2.4.2 Hypothesis Testing
		2.4.3 Type I and II Errors
		2.4.4 Steps for Performing Hypothesis Testing
		2.4.5 Test Statistics
			2.4.5.1 Student's t-test
			2.4.5.2 One-Way Analysis of Variance
			2.4.5.3 Chi-Square Statistic
			2.4.5.4 Correlation
			2.4.5.5 Simple Linear Regression
	2.5 Conclusion
	2.6 Key Terms
	2.7 Test Your Understanding
	2.8 Read More
	2.9 Lab
		2.9.1 Working Example in R
			2.9.1.1 Statistical Measures Overview
			2.9.1.2 Central Tendency Measures in R
			2.9.1.3 Dispersion in R
			2.9.1.4 Statistical Test Using p-value in R
		2.9.2 Working Example in Python
			2.9.2.1 Central Tendency Measure in Python
			2.9.2.2 Dispersion Measures in Python
			2.9.2.3 Statistical Testing Using p-value in Python
		2.9.3 Do It Yourself
		2.9.4 Do More Yourself (Links to Available Datasets for Use)
	References
Chapter 3: Overview of Machine Learning Algorithms
	3.1 Introduction
	3.2 Data Mining
	3.3 Analytics and Machine Learning
		3.3.1 Terminology Used in Machine Learning
		3.3.2 Machine Learning Algorithms: A Classification
	3.4 Supervised Learning
		3.4.1 Multivariate Regression
			3.4.1.1 Multiple Linear Regression
			3.4.1.2 Multiple Logistic Regression
		3.4.2 Decision Trees
		3.4.3 Artificial Neural Networks
			3.4.3.1 Perceptron
		3.4.4 Naïve Bayes Classifier
		3.4.5 Random Forest
		3.4.6 Support Vector Machines (SVM)
	3.5 Unsupervised Learning
		3.5.1 K-Means
		3.5.2 K-Nearest Neighbors (KNN)
		3.5.3 AdaBoost
	3.6 Applications of Machine Learning
		3.6.1 Machine Learning Demand Forecasting and Supply Chain Performance [42]
		3.6.2 A Case Study on Cervical Pain Assessment with Motion Capture [43]
		3.6.3 Predicting Bank Insolvencies Using Machine Learning Techniques [44]
		3.6.4 Deep Learning with Convolutional Neural Network for Objective Skill Evaluation in Robot-Assisted Surgery [45]
	3.7 Conclusion
	3.8 Key Terms
	3.9 Test Your Understanding
	3.10 Read More
	3.11 Lab
		3.11.1 Machine Learning Overview in R
			3.11.1.1 Caret Package
			3.11.1.2 ggplot2 Package
			3.11.1.3 mlBench Package
			3.11.1.4 Class Package
			3.11.1.5 DataExplorer Package
			3.11.1.6 Dplyr Package
			3.11.1.7 KernLab Package
			3.11.1.8 Mlr3 Package
			3.11.1.9 Plotly Package
			3.11.1.10 Rpart Package
		3.11.2 Supervised Learning Overview
			3.11.2.1 KNN Diamonds Example
				3.11.2.1.1 Loading KNN Algorithm Package
				3.11.2.1.2 Loading Dataset for KNN
				3.11.2.1.3 Preprocessing Data
				3.11.2.1.4 Scaling Data
				3.11.2.1.5 Splitting Data and Applying KNN Algorithm
				3.11.2.1.6 Model Performance
		3.11.3 Unsupervised Learning Overview
			3.11.3.1 Loading K-Means Clustering Package
			3.11.3.2 Loading Dataset for K-Means Clustering Algorithm
			3.11.3.3 Preprocessing Data
			3.11.3.4 Executing K-Means Clustering Algorithm
			3.11.3.5 Results Discussion
		3.11.4 Python Scikit-Learn Package Overview
		3.11.5 Python Supervised Learning Machine (SML)
			3.11.5.1 Using Scikit-Learn Package
			3.11.5.2 Loading Diamonds Dataset Using Python
			3.11.5.3 Preprocessing Data
			3.11.5.4 Splitting Data and Executing Linear Regression Algorithm
			3.11.5.5 Model Performance Explanation
			3.11.5.6 Classification Performance
		3.11.6 Unsupervised Machine Learning (UML)
			3.11.6.1 Loading Dataset for Hierarchical Clustering Algorithm
			3.11.6.2 Running Hierarchical Algorithm and Plotting Data
		3.11.7 Do It Yourself
		3.11.8 Do More Yourself
	References
Chapter 4: Data Preprocessing
	4.1 The Problem
	4.2 Data Preprocessing Steps
		4.2.1 Data Collection
		4.2.2 Data Profiling, Discovery, and Access
		4.2.3 Data Cleansing and Validation
		4.2.4 Data Structuring
		4.2.5 Feature Selection
		4.2.6 Data Transformation and Enrichment
		4.2.7 Data Validation, Storage, and Publishing
	4.3 Feature Engineering
		4.3.1 Feature Creation
		4.3.2 Transformation
		4.3.3 Feature Extraction
	4.4 Feature Engineering Techniques
		4.4.1 Imputation
			4.4.1.1 Numerical Imputation
			4.4.1.2 Categorical Imputation
		4.4.2 Discretizing Numerical Features
		4.4.3 Converting Categorical Discrete Features to Numeric (Binarization)
		4.4.4 Log Transformation
		4.4.5 One-Hot Encoding
		4.4.6 Scaling
			4.4.6.1 Normalization (Min-Max Normalization)
			4.4.6.2 Standardization (Z-Score Normalization)
		4.4.7 Reduce the Features Dimensionality
	4.5 Overfitting
	4.6 Underfitting
	4.7 Model Selection: Selecting the Best Performing Model of an Algorithm
		4.7.1 Model Selection Using the Holdout Method
		4.7.2 Model Selection Using Cross-Validation
		4.7.3 Evaluating Model Performance in Python
	4.8 Data Quality
	4.9 Key Terms
	4.10 Test Your Understanding
	4.11 Read More
	4.12 Lab
		4.12.1 Working Example in Python
			4.12.1.1 Read the Dataset
			4.12.1.2 Split the Dataset
			4.12.1.3 Impute Data
			4.12.1.4 One-Hot-Encode Data
			4.12.1.5 Scale Numeric Data: Standardization
			4.12.1.6 Create Pipelines
			4.12.1.7 Creating Models
			4.12.1.8 Cross-Validation
			4.12.1.9 Hyperparameter Finetuning
		4.12.2 Working Example in Weka
			4.12.2.1 Missing Values
			4.12.2.2 Discretization (or Binning)
			4.12.2.3 Data Normalization and Standardization
			4.12.2.4 One-Hot-Encoding (Nominal to Numeric)
		4.12.3 Do It Yourself
			4.12.3.1 Lenses Dataset
			4.12.3.2 Nested Cross-Validation
		4.12.4 Do More Yourself
	References
Chapter 5: Data Visualization
	5.1 Introduction
	5.2 Presentation and Visualization of Information
		5.2.1 A Taxonomy of Graphs
		5.2.2 Relationships and Graphs
		5.2.3 Dashboards
		5.2.4 Infographics
	5.3 Building Effective Visualizations
	5.4 Data Visualization Software
	5.5 Conclusion
	5.6 Key Terms
	5.7 Test Your Understanding
	5.8 Read More
	5.9 Lab
		5.9.1 Working Example in Tableau
			5.9.1.1 Getting a Student Copy of Tableau Desktop
			5.9.1.2 Learning with Tableau's How-to Videos and Resources
		5.9.2 Do It Yourself
			5.9.2.1 Assignment 1: Introduction to Tableau
			5.9.2.2 Assignment 2: Data Manipulation and Basic Charts with Tableau
		5.9.3 Do More Yourself
			5.9.3.1 Assignment 3: Charts and Dashboards with Tableau
			5.9.3.2 Assignment 4: Analytics with Tableau
	References
Chapter 6: Linear Regression
	6.1 The Problem
	6.2 A Practical Example
	6.3 The Algorithm
		6.3.1 Modeling the Linear Regression
		6.3.2 Gradient Descent
		6.3.3 Gradient Descent Example
		6.3.4 Batch Versus Stochastic Gradient Descent
		6.3.5 Examples of Error Functions
		6.3.6 Gradient Descent Types
			6.3.6.1 Stochastic Gradient Descent
			6.3.6.2 Batch Gradient
	6.4 Final Notes: Advantages, Disadvantages, and Best Practices
	6.5 Key Terms
	6.6 Test Your Understanding
	6.7 Read More
	6.8 Lab
		6.8.1 Working Example in R
			6.8.1.1 Load Diabetes Dataset
			6.8.1.2 Preprocess Diabetes Dataset
			6.8.1.3 Choose Dependent and Independent Variables
			6.8.1.4 Visualize Your Dataset
			6.8.1.5 Split Data into Test and Train Datasets
			6.8.1.6 Create Linear Regression Model and Visualize It
			6.8.1.7 Calculate Confusion Matrix
			6.8.1.8 Gradient Descent
		6.8.2 Working Example in Python
			6.8.2.1 Load USA House Prices Dataset
			6.8.2.2 Explore Housing Prices Visually
			6.8.2.3 Preprocess Data
			6.8.2.4 Split Data and Scale Features
			6.8.2.5 Create and Visualize Model Using the LinearRegression Algorithm
			6.8.2.6 Evaluate Performance of LRM
			6.8.2.7 Optimize LRM Manually with Gradient Descent
			6.8.2.8 Create and Visualize a Model Using the Stochastic Gradient Descent (SGD)
		6.8.3 Working Example in Weka
		6.8.4 Do It Yourself
			6.8.4.1 Methods, Arguments, and Regularization
				6.8.4.1.1 Methods and Arguments
				6.8.4.1.2 Regularization
			6.8.4.2 Predicting House Prices
		6.8.5 Do More Yourself
	References
Chapter 7: Logistic Regression
	7.1 The Problem
	7.2 A Practical Example
	7.3 The Algorithm
	7.4 Final Notes: Advantages, Disadvantages, and Best Practices
	7.5 Key Terms
	7.6 Test Your Understanding
	7.7 Read More
	7.8 Lab
		7.8.1 Working Example in Python
			7.8.1.1 Load Pima Indians Diabetes Dataset
			7.8.1.2 Visualize Pima Indians Dataset
			7.8.1.3 Preprocess Data
			7.8.1.4 Optimize Logistic Regression Model
		7.8.2 Working Example in Weka
		7.8.3 Do It Yourself
			7.8.3.1 Predicting Online Purchases
			7.8.3.2 Predicting Click-Through Advertisements
		7.8.4 Do More Yourself
	References
Chapter 8: Decision Trees
	8.1 The Problem
	8.2 A Practical Example
	8.3 The Algorithm
		8.3.1 Tree Basics
		8.3.2 Training Decision Trees
		8.3.3 A Generic Algorithm
		8.3.4 Tree Pruning
	8.4 Final Notes: Advantages, Disadvantages, and Best Practices
	8.5 Key Terms
	8.6 Test Your Understanding
	8.7 Read More
	8.8 Lab
		8.8.1 Working Example in Python
			8.8.1.1 Load Car Evaluation Dataset
			8.8.1.2 Visualize Car Evaluation
			8.8.1.3 Split and Scale Data
			8.8.1.4 Optimize Decision Tree Model
		8.8.2 Working Example in Weka
		8.8.3 Do It Yourself
			8.8.3.1 Decision Tree: Reflections on the Car Evaluation Dataset
			8.8.3.2 Decision Trees for Regression
			8.8.3.3 Decision Trees for Classification
		8.8.4 Do More Yourself
	References
Chapter 9: Naïve Bayes
	9.1 The Problem
	9.2 The Algorithm
		9.2.1 Bayes Theorem
		9.2.2 The Naïve Bayes Classifier (NBC): Dealing with Categorical Variables
		9.2.3 Gaussian Naïve Bayes (GNB): Dealing with Continuous Variables
	9.3 A Practical Example
		9.3.1 Naïve Bayes Classifier with Categorical Variables Example
		9.3.2 Gaussian Naïve Bayes Example
	9.4 Final Notes: Advantages, Disadvantages, and Best Practices
	9.5 Key Terms
	9.6 Test Your Understanding
	9.7 Read More
	9.8 Lab
		9.8.1 Working Example in Python
			9.8.1.1 Load Social Network Ads Dataset
			9.8.1.2 Visualize Social Network Ads Dataset
			9.8.1.3 Choose Features and Normalize Data
			9.8.1.4 Optimize GNB Model Using Hyperparameter
		9.8.2 Working Example in Weka
		9.8.3 Do It Yourself
			9.8.3.1 Building a Movie Recommender System
			9.8.3.2 Predicting Flower Types
		9.8.4 Do More Yourself
	References
Chapter 10: K-Nearest Neighbors
	10.1 The Problem
	10.2 A Practical Example
		10.2.1 A Classification
		10.2.2 Regression
	10.3 The Algorithm
		10.3.1 Distance Function
			10.3.1.1 Euclidean Distance
			10.3.1.2 Manhattan Distance
			10.3.1.3 Minkowski Distance
			10.3.1.4 Cosine Similarity
			10.3.1.5 Hamming Distance
		10.3.2 KNN for Classification
		10.3.3 KNN for Regression
	10.4 Final Notes: Advantages, Disadvantages, and Best Practices
	10.5 Key Terms
	10.6 Test Your Understanding
	10.7 Read More
	10.8 Lab
		10.8.1 Working Example in Python
			10.8.1.1 Load Iris Dataset
			10.8.1.2 Data Cleaning and Visualization
			10.8.1.3 Split and Scale Data
			10.8.1.4 Optimize KNN Model Using Grid Search Cross-Validation
		10.8.2 Working Example in Weka
		10.8.3 Do It Yourself
			10.8.3.1 Iris Data Set Revisited
			10.8.3.2 Predict the Age of Abalone from Physical Measurement
			10.8.3.3 Prostate Cancer
		10.8.4 Do More Yourself
	References
Chapter 11: Neural Networks
	11.1 The Problem
	11.2 A Practical Example
		11.2.1 Example 1
	11.3 The Algorithm
		11.3.1 The McCulloch-Pitts Neuron
		11.3.2 The Perceptron
		11.3.3 The Perceptron as a Linear Function
		11.3.4 Activation Functions
			11.3.4.1 The Sigmoid Function
			11.3.4.2 The Tanh Function
			11.3.4.3 The ReLU Function
			11.3.4.4 The Leaky ReLU Function
			11.3.4.5 The Parameterized ReLU Function
			11.3.4.6 The Swish Function
			11.3.4.7 The SoftMax Function
			11.3.4.8 Which Activation Function to Choose?
		11.3.5 Training the Perceptron
		11.3.6 Perceptron Limitations: XOR Modeling
		11.3.7 Multilayer Perceptron (MLP)
		11.3.8 MLP Algorithm Overview
		11.3.9 Backpropagation
			11.3.9.1 Simple 1-1-1 Network
				11.3.9.1.1 Computation with Respect to Layer L-1
				11.3.9.1.2 Computation with Respect to Layer L-2
			11.3.9.2 Fully Connected Neural Network
				11.3.9.2.1 Computation with Respect to Layer L-1
				11.3.9.2.2 Computation with Respect to Layer L-2
		11.3.10 Backpropagation Algorithm
	11.4 Final Notes: Advantages, Disadvantages, and Best Practices
	11.5 Key Terms
	11.6 Test Your Understanding
	11.7 Read More
	11.8 Lab
		11.8.1 Working Example in Python
			11.8.1.1 Load Diabetes for Pima Indians Dataset
			11.8.1.2 Visualize Data
			11.8.1.3 Split Dataset into Training and Testing Datasets
			11.8.1.4 Create Neural Network Model
			11.8.1.5 Optimize Neural Network Model Using Hyperparameter
		11.8.2 Working Example in Weka
		11.8.3 Do It Yourself
			11.8.3.1 Diabetes Revisited
			11.8.3.2 Choose Your Own Problem
		11.8.4 Do More Yourself
	References
Chapter 12: K-Means
	12.1 The Problem
	12.2 A Practical Example
	12.3 The Algorithm
	12.4 Inertia
	12.5 Minibatch K-Means
	12.6 Final Notes: Advantages, Disadvantages, and Best Practices
	12.7 Key Terms
	12.8 Test Your Understanding
	12.9 Read More
	12.10 Lab
		12.10.1 Working Example in Python
			12.10.1.1 Load Person's Demographics
			12.10.1.2 Data Visualization and Cleaning
			12.10.1.3 Data Preprocessing
			12.10.1.4 Choosing Features and Scaling Data
			12.10.1.5 Finding the Best K for the K-Means Model
		12.10.2 Do It Yourself
			12.10.2.1 The Iris Dataset Revisited
			12.10.2.2 K-Means for Dimension Reduction
		12.10.3 Do More Yourself
	References
Chapter 13: Support Vector Machine
	13.1 The Problem
	13.2 The Algorithm
		13.2.1 Important Concepts
		13.2.2 Margin
			13.2.2.1 Functional Margin
			13.2.2.2 Geometric Margin
		13.2.3 Types of Support Vector Machines
			13.2.3.1 Linear Support Vector Machine
			13.2.3.2 Soft Margin Classifier
				13.2.3.2.1 Hard Margin Classifier
			13.2.3.3 Nonlinear Support Vector Machine
		13.2.4 Classification
		13.2.5 Regression
		13.2.6 Tuning Parameters
			13.2.6.1 Regularization
			13.2.6.2 Gamma
			13.2.6.3 Margins
		13.2.7 Kernel
			13.2.7.1 Linear Kernel
			13.2.7.2 Polynomial Kernel
			13.2.7.3 Radial Basis Function (RBF) Kernel
	13.3 Advantages, Disadvantages, and Best Practices
	13.4 Key Terms
	13.5 Test Your Understanding
	13.6 Read More
	13.7 Lab
		13.7.1 Working Example in Python
			13.7.1.1 Loading Iris Dataset
				13.7.1.1.1 Visualize Iris Dataset
			13.7.1.2 Preprocess and Scale Data
			13.7.1.3 Dimension Reduction
			13.7.1.4 Hyperparameter Tuning and Performance Measurements
			13.7.1.5 Plot the Decision Boundaries
		13.7.2 Do It Yourself
			13.7.2.1 The Iris Dataset Revisited
			13.7.2.2 Breast Cancer
			13.7.2.3 Wine Classification
			13.7.2.4 Face Recognition
			13.7.2.5 SVM Regressor: Predict House Prices with SVR
			13.7.2.6 SVM Regressor: Predict Diabetes with SVR
			13.7.2.7 Unsupervised SVM
		13.7.3 Do More Yourself
	References
Chapter 14: Voting and Bagging
	14.1 The Problem
	14.2 Voting Algorithm
	14.3 Bagging Algorithm
	14.4 Random Forest
	14.5 Voting Example
	14.6 Bagging Example: Random Forest
	14.7 Final Notes: Advantages, Disadvantages, and Best Practices
	14.8 Key Terms
	14.9 Test Your Understanding
	14.10 Read More
	14.11 Lab
		14.11.1 A Working Example in Python
			14.11.1.1 Load Titanic Dataset
			14.11.1.2 Visualizing Titanic Dataset
			14.11.1.3 Preprocess and Manipulate Data
			14.11.1.4 Create Bagging and Voting Models
			14.11.1.5 Evaluate the Bagging and Voting Models
			14.11.1.6 Optimize the Bagging and Voting Models
		14.11.2 Do It Yourself
			14.11.2.1 The Titanic Revisited
			14.11.2.2 The Diabetes Dataset
		14.11.3 Do More Yourself
	References
Chapter 15: Boosting and Stacking
	15.1 The Problem
	15.2 Boosting
	15.3 Stacking
	15.4 Boosting Example
		15.4.1 AdaBoost Algorithm
		15.4.2 AdaBoost Example
	15.5 Key Terms
	15.6 Test Your Understanding
	15.7 Read More
	15.8 Lab
		15.8.1 A Working Example in Python
			15.8.1.1 Loading Heart Dataset
			15.8.1.2 Visualizing Heart Dataset
			15.8.1.3 Preprocess Data
			15.8.1.4 Split and Scale Data
			15.8.1.5 Create AdaBoost and Stacking Models
			15.8.1.6 Evaluate the AdaBoost and the Stacking Models
			15.8.1.7 Optimizing the Stacking and AdaBoost Models
		15.8.2 Do It Yourself
			15.8.2.1 The Heart Disease Dataset Revisited
			15.8.2.2 The Iris Dataset
		15.8.3 Do More Yourself
	References
Chapter 16: Future Directions and Ethical Considerations
	16.1 Introduction
	16.2 Current AI Applications
	16.3 Future Directions
		16.3.1 Democratized AI
		16.3.2 Edge AI
		16.3.3 Responsible AI
		16.3.4 Generative AI
	16.4 Ethical Concerns
		16.4.1 Ethical Frameworks
	16.5 Conclusion
	16.6 Key Terms
	16.7 Test Your Understanding
	16.8 Read More
	References
Index



