Edition: 1
Authors: Brad Boehmke, Brandon M. Greenwell
ISBN: 9781138495685, 9781000730319
Publisher: Chapman and Hall/CRC
Publication year: 2019
Pages: 484
Language: English
File format: PDF (converted to EPUB or AZW3 on user request)
File size: 35 MB
If you need the file of Hands-On Machine Learning with R converted to PDF, EPUB, AZW3, MOBI, or DJVU, let support know and they will convert it for you.
Note that Hands-On Machine Learning with R is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not carry books translated into or written in Persian.
This book is designed to introduce the concept of advanced business analytic approaches and is the first to cover the gamut of how to use the R programming language to apply descriptive, predictive, and prescriptive analytic methodologies for problem solving.
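As a flavor of the workflow the contents below walk through (data splitting, model fitting, and model evaluation), here is a minimal R sketch. It is an illustration under our own assumptions, not code from the book, and it uses only base R and the built-in mtcars data set.

    # Split the data, fit a simple linear regression, evaluate on held-out rows.
    set.seed(123)                                         # reproducible split
    idx   <- sample(nrow(mtcars), floor(0.7 * nrow(mtcars)))
    train <- mtcars[idx, ]
    test  <- mtcars[-idx, ]

    fit  <- lm(mpg ~ wt + hp, data = train)               # ordinary least squares
    pred <- predict(fit, newdata = test)
    rmse <- sqrt(mean((test$mpg - pred)^2))                # root mean squared error
    print(rmse)

The printed RMSE is the kind of held-out error metric discussed under "Model evaluation" in the contents that follow.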
FUNDAMENTALS
Introduction to Machine Learning
Supervised learning
Regression problems
Classification problems
Unsupervised learning
Roadmap
The data sets
Modeling Process
Prerequisites
Data splitting
Simple random sampling
Stratified sampling
Class imbalances
Creating models in R
Many formula interfaces
Many engines
Resampling methods
k-fold cross validation
Bootstrapping
Alternatives
Bias variance trade-off
Bias
Variance
Hyperparameter tuning
Model evaluation
Regression models
Classification models
Putting the processes together
Feature & Target Engineering
Prerequisites
Target engineering
Dealing with missingness
Visualizing missing values
Imputation
Feature filtering
Numeric feature engineering
Skewness
Standardization
Categorical feature engineering
Lumping
One-hot & dummy encoding
Label encoding
Alternatives
Dimension reduction
Proper implementation
Sequential steps
Data leakage
Putting the process together
SUPERVISED LEARNING
Linear Regression
Prerequisites
Simple linear regression
Estimation
Inference
Multiple linear regression
Assessing model accuracy
Model concerns
Principal component regression
Partial least squares
Feature interpretation
Final thoughts
Logistic Regression
Prerequisites
Why logistic regression
Simple logistic regression
Multiple logistic regression
Assessing model accuracy
Model concerns
Feature interpretation
Final thoughts
Regularized Regression
Prerequisites
Why regularize?
Ridge penalty
Lasso penalty
Elastic nets
Implementation
Tuning
Feature interpretation
Attrition data
Final thoughts
Multivariate Adaptive Regression Splines
Prerequisites
The basic idea
Multivariate regression splines
Fitting a basic MARS model
Tuning
Feature interpretation
Attrition data
Final thoughts
K-Nearest Neighbors
Prerequisites
Measuring similarity
Distance measures
Pre-processing
Choosing k
MNIST example
Final thoughts
Decision Trees
Prerequisites
Structure
Partitioning
How deep?
Early stopping
Pruning
Ames housing example
Feature interpretation
Final thoughts
Bagging
Prerequisites
Why and when bagging works
Implementation
Easily parallelize
Feature interpretation
Final thoughts
Random Forests
Prerequisites
Extending bagging
Out-of-the-box performance
Hyperparameters
Number of trees
mtry
Tree complexity
Sampling scheme
Split rule
Tuning strategies
Feature interpretation
Final thoughts
Gradient Boosting
Prerequisites
How boosting works
A sequential ensemble approach
Gradient descent
Basic GBM
Hyperparameters
Implementation
General tuning strategy
Stochastic GBMs
Stochastic hyperparameters
Implementation
XGBoost
XGBoost hyperparameters
Tuning strategy
Feature interpretation
Final thoughts
Deep Learning
Prerequisites
Why deep learning
Feedforward DNNs
Network architecture
Layers and nodes
Activation
Backpropagation
Model training
Model tuning
Model capacity
Batch normalization
Regularization
Adjust learning rate
Grid Search
Final thoughts
Support Vector Machines
Prerequisites
Optimal separating hyperplanes
The hard margin classifier
The soft margin classifier
The support vector machine
More than two classes
Support vector regression
Job attrition example
Class weights
Class probabilities
Feature interpretation
Final thoughts
Stacked Models
Prerequisites
The Idea
Common ensemble methods
Super learner algorithm
Available packages
Stacking existing models
Stacking a grid search
Automated machine learning
Final thoughts
Interpretable Machine Learning
Prerequisites
The idea
Global interpretation
Local interpretation
Model-specific vs. model-agnostic
Permutation-based feature importance
Concept
Implementation
Partial dependence
Concept
Implementation
Alternative uses
Individual conditional expectation
Concept
Implementation
Feature interactions
Concept
Implementation
Alternatives
Local interpretable model-agnostic explanations
Concept
Implementation
Tuning
Alternative uses
Shapley values
Concept
Implementation
XGBoost and built-in Shapley values
Localized step-wise procedure
Concept
Implementation
Final thoughts
DIMENSION REDUCTION
Principal Components Analysis
Prerequisites
The idea
Finding principal components
Performing PCA in R
Selecting the number of principal components
Eigenvalue criterion
Proportion of variance explained criterion
Scree plot criterion
Final thoughts
Generalized Low Rank Models
Prerequisites
The idea
Finding the lower ranks
Alternating minimization
Loss functions
Regularization
Selecting k
Fitting GLRMs in R
Basic GLRM model
Tuning to optimize for unseen data
Final thoughts
Autoencoders
Prerequisites
Undercomplete autoencoders
Comparing PCA to an autoencoder
Stacked autoencoders
Visualizing the reconstruction
Sparse autoencoders
Denoising autoencoders
Anomaly detection
Final thoughts
CLUSTERING
K-means Clustering
Prerequisites
Distance measures
Defining clusters
k-means algorithm
Clustering digits
How many clusters?
Clustering with mixed data
Alternative partitioning methods
Final thoughts
Hierarchical Clustering
Prerequisites
Hierarchical clustering algorithms
Hierarchical clustering in R
Agglomerative hierarchical clustering
Divisive hierarchical clustering
Determining optimal clusters
Working with dendrograms
Final thoughts
Model-based Clustering
Prerequisites
Measuring probability and uncertainty
Covariance types
Model selection
My basket example
Final thoughts