
Download the book Machine Learning for Decision Sciences with Case Studies in Python

Book details

Machine Learning for Decision Sciences with Case Studies in Python

Edition: 1st
Authors:
Series:
ISBN: 1032193565, 9781032193564
Publisher: CRC Press
Year of publication: 2022
Number of pages: 454 [477]
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 18 MB

Book price (toman): 44,000





If you need the file for Machine Learning for Decision Sciences with Case Studies in Python converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.

Please note that Machine Learning for Decision Sciences with Case Studies in Python is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.




Book description

This book provides a detailed description of machine learning algorithms in data analytics, data science life cycle, Python for machine learning, linear regression, logistic regression, and so forth. It addresses the concepts of machine learning in a practical sense providing complete code and implementation for real-world examples in electrical, oil and gas, e-commerce, and hi-tech industries. The focus is on Python programming for machine learning and patterns involved in decision science for handling data.
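
For readers who want a feel for the kind of Python workflow the description refers to, the following is a minimal, self-contained sketch (not taken from the book) that fits a linear regression and a logistic regression with scikit-learn; the synthetic data and variable names are illustrative assumptions.

```python
# Illustrative sketch only (not from the book): linear and logistic regression
# with scikit-learn on small synthetic datasets.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Regression: y is a noisy linear function of a single feature x.
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0.0, 1.0, size=100)
lin = LinearRegression().fit(X, y)
print("estimated slope:", lin.coef_[0], "intercept:", lin.intercept_)

# Classification: the label is 1 when the feature exceeds a threshold of 5.
labels = (X[:, 0] > 5.0).astype(int)
clf = LogisticRegression().fit(X, labels)
print("P(label=1 | x=7):", clf.predict_proba([[7.0]])[0, 1])
```

Chapters 5 and 6 in the table of contents below cover these two models in much more depth, including the underlying optimization.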

Features:

  • Explains the basic concepts of Python and its role in machine learning.
  • Provides comprehensive coverage of feature engineering including real-time case studies.
  • Perceives the structural patterns with reference to data science and statistics and analytics.
  • Includes machine learning-based structured exercises.
  • Appreciates different algorithmic concepts of machine learning including unsupervised, supervised, and reinforcement learning.

This book is aimed at researchers, professionals, and graduate students in data science, machine learning, computer science, and electrical and computer engineering.



Table of Contents

Cover
Half Title
Title Page
Copyright Page
Table of Contents
Preface
Acknowledgment
About the Authors
Introduction
Chapter 1 Introduction
	1.1 Introduction to Data Science
		1.1.1 Mathematics
		1.1.2 Statistics
	1.2 Describing Structural Patterns
		1.2.1 Uses of Structural Patterns
	1.3 Machine Learning and Statistics
	1.4 Relation between Artificial Intelligence, Machine Learning, Neural Networks, and Deep Learning
	1.5 Data Science Life Cycle
	1.6 Key Role of Data Scientist
		1.6.1 Difference between Data Scientist and Machine Learning Engineer
	1.7 Real-World Examples
	1.8 Use Cases
		1.8.1 Financial and Insurance Industries
			1.8.1.1 Fraud Mitigation
			1.8.1.2 Personalized Pricing
			1.8.1.3 AML – Anti-Money Laundering
		1.8.2 Utility Industries
			1.8.2.1 Smart Meter and Smart Grid
			1.8.2.2 Manage Disaster and Outages
			1.8.2.3 Compliance
		1.8.3 Oil and Gas Industries
			1.8.3.1 Manage Exponential Growth
			1.8.3.2 3D Seismic Imaging and Kirchhoff
			1.8.3.3 Rapidly Process and Display Seismic Data
		1.8.4 E-Commerce and Hi-Tech Industries
			1.8.4.1 Association and Complementary Products
			1.8.4.2 Cross-Channel Analytics
			1.8.4.3 Event Analytics
	Summary
	Review Questions
Chapter 2 Overview of Python for Machine Learning
	2.1 Introduction
		2.1.1 The Flow of Program Execution in Python
	2.2 Python for Machine Learning
		2.2.1 Why Is Python Good for ML?
	2.3 Setting up Python
		2.3.1 Python on Windows
		2.3.2 Python on Linux
			2.3.2.1 Ubuntu
	2.4 Python Basics
		2.4.1 Python Operators
			2.4.1.1 Arithmetic Operators
			2.4.1.2 Comparison Operators
			2.4.1.3 Assignment Operators
			2.4.1.4 Logical Operators
			2.4.1.5 Membership Operators
		2.4.2 Python Code Samples on Basic Operators
			2.4.2.1 Arithmetic Operators
			2.4.2.2 Comparison Operators
			2.4.2.3 Logical Operators
			2.4.2.4 Membership Operators
		2.4.3 Flow Control
			2.4.3.1 If & elif Statement
			2.4.3.2 Loop Statement
			2.4.3.3 Loop Control Statements
		2.4.4 Python Code Samples on Flow Control Statements
			2.4.4.1 Conditional Statements
			2.4.4.2 Python if...else Statement
			2.4.4.3 Python if…elif…else Statement
			2.4.4.4 The For Loop
			2.4.4.5 The range() Function
			2.4.4.6 For Loop with else
			2.4.4.7 While Loop
			2.4.4.8 While Loop with else
			2.4.4.9 Python Break and Continue
			2.4.4.10 Python Break Statement
			2.4.4.11 Python Continue Statement
		2.4.5 Review of Basic Data Structures and Implementation in Python
			2.4.5.1 Array Data Structure
			2.4.5.2 Implementation of Arrays in Python
			2.4.5.3 Linked List
			2.4.5.4 Implementation of Linked List in Python
			2.4.5.5 Stacks and Queues
			2.4.5.6 Queues
			2.4.5.7 Implementation of Queue in Python
			2.4.5.8 Searching
			2.4.5.9 Implementation of Searching in Python
			2.4.5.10 Sorting
			2.4.5.11 Implementation of Bubble Sort in Python
			2.4.5.12 Insertion Sort
			2.4.5.13 Implementation of Insertion Sort in Python
			2.4.5.14 Selection Sort
			2.4.5.15 Implementation of Selection Sort in Python
			2.4.5.16 Merge Sort
			2.4.5.17 Implementation of Merge Sort in Python
			2.4.5.18 Shell Sort
			2.4.5.19 Quicksort
			2.4.5.20 Data Structures in Python with Sample Codes
			2.4.5.21 Python Code Samples for Data Structures in Python
		2.4.6 Functions in Python
			2.4.6.1 Python Code Samples for Functions
			2.4.6.2 Returning Values from Functions
			2.4.6.3 Scope of Variables
			2.4.6.4 Function Arguments
		2.4.7 File Handling
		2.4.8 Exception Handling
		2.4.9 Debugging in Python
			2.4.9.1 Packages
	2.5 Numpy Basics
		2.5.1 Introduction to Numpy
			2.5.1.1 Array Creation
			2.5.1.2 Array Slicing
		2.5.2 Numerical Operations
		2.5.3 Python Code Samples for Numpy Package
			2.5.3.1 Array Creation
			2.5.3.2 Class and Attributes of ndarray—.ndim
			2.5.3.3 Class and Attributes of ndarray—.shape
			2.5.3.4 Class and Attributes of ndarray—ndarray.size, ndarray.itemsize, ndarray.resize
			2.5.3.5 Class and Attributes of ndarray—.dtype
			2.5.3.6 Basic Operations
			2.5.3.7 Accessing Array Elements: Indexing
			2.5.3.8 Shape Manipulation
			2.5.3.9 Universal Functions (ufunc) in Numpy
			2.5.3.10 Broadcasting
			2.5.3.11 Args and Kwargs
	2.6 Matplotlib Basics
		2.6.1 Creating Graphs with Matplotlib
	2.7 Pandas Basics
		2.7.1 Getting Started with Pandas
		2.7.2 Data Frames
		2.7.3 Key Operations on Data Frames
			2.7.3.1 Data Frame from List
			2.7.3.2 Rows and Columns in Data Frame
	2.8 Computational Complexity
	2.9 Real-world Examples
		2.9.1 Implementation using Pandas
		2.9.2 Implementation using Numpy
		2.9.3 Implementation using Matplotlib
	Summary
	Review Questions
	Exercises for Practice
Chapter 3 Data Analytics Life Cycle for Machine Learning
	3.1 Introduction
	3.2 Data Analytics Life Cycle
		3.2.1 Phase 1 – Data Discovery
		3.2.2 Phase 2 – Data Preparation and Exploratory Data Analysis
			3.2.2.1 Exploratory Data Analysis
		3.2.3 Phase 3 – Model Planning
		3.2.4 Phase 4 – Model Building
		3.2.5 Phase 5 – Communicating Results
		3.2.6 Phase 6 – Optimize and Operationalize the Models
	Summary
	Review Questions
Chapter 4 Unsupervised Learning
	4.1 Introduction
	4.2 Unsupervised Learning
		4.2.1 Clustering
	4.3 Evaluation Metrics for Clustering
		4.3.1 Distance Measures
			4.3.1.1 Minkowski Metric
		4.3.2 Similarity Measures
	4.4 Clustering Algorithms
		4.4.1 Hierarchical and Partitional Clustering Approaches
		4.4.2 Agglomerative and Divisive Clustering Approaches
		4.4.3 Hard and Fuzzy Clustering Approaches
		4.4.4 Monothetic and Polythetic Clustering Approaches
		4.4.5 Deterministic and Probabilistic Clustering Approaches
	4.5 k-Means Clustering
		4.5.1 Geometric Intuition, Centroids
		4.5.2 The Algorithm
		4.5.3 Choosing k
		4.5.4 Space and Time Complexity
		4.5.5 Advantages and Disadvantages of k-Means Clustering
			4.5.5.1 Advantages
			4.5.5.2 Disadvantages
		4.5.6 k-Means Clustering in Practice Using Python
			4.5.6.1 Illustration of the k-Means Algorithm Using Python
		4.5.7 Fuzzy k-Means Clustering Algorithm
			4.5.7.1 The Algorithm
		4.5.8 Advantages and Disadvantages of Fuzzy k-Means Clustering
	4.6 Hierarchical Clustering
		4.6.1 Agglomerative Hierarchical Clustering
		4.6.2 Divisive Hierarchical Clustering
		4.6.3 Techniques to Merge Cluster
		4.6.4 Space and Time Complexity
		4.6.5 Limitations of Hierarchical Clustering
		4.6.6 Hierarchical Clustering in Practice Using Python
			4.6.6.1 DATA_SET
	4.7 Mixture of Gaussian Clustering
		4.7.1 Expectation Maximization
		4.7.2 Mixture of Gaussian Clustering in Practice Using Python
	4.8 Density-Based Clustering Algorithm
		4.8.1 DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
		4.8.2 Space and Time Complexity
		4.8.3 Advantages and Disadvantages of DBSCAN
			4.8.3.1 Advantages
			4.8.3.2 Disadvantages
		4.8.4 DBSCAN in Practice Using Python
	Summary
	Review Questions
Chapter 5 Supervised Learning: Regression
	5.1 Introduction
	5.2 Supervised Learning – Real-Life Scenario
	5.3 Types of Supervised Learning
		5.3.1 Supervised Learning – Classification
			5.3.1.1 Classification – Predictive Modeling
		5.3.2 Supervised Learning – Regression
			5.3.2.1 Regression Predictive Modeling
		5.3.3 Classification vs. Regression
		5.3.4 Conversion between Classification and Regression Problems
	5.4 Linear Regression
		5.4.1 Types of Linear Regression
			5.4.1.1 Simple Linear Regression
			5.4.1.2 Multiple Linear Regression
		5.4.2 Geometric Intuition
		5.4.3 Mathematical Formulation
		5.4.4 Solving Optimization Problem
			5.4.4.1 Maxima and Minima
			5.4.4.2 Gradient Descent
			5.4.4.3 LMS (Least Mean Square) Update Rule
			5.4.4.4 SGD Algorithm
		5.4.5 Real-World Applications
			5.4.5.1 Predictive Analysis
			5.4.5.2 Medical Outcome Prediction
			5.4.5.3 Wind Speed Prediction
			5.4.5.4 Environmental Effects Monitoring
		5.4.6 Linear Regression in Practice Using Python
			5.4.6.1 Simple Linear Regression Using Python
			5.4.6.2 Multiple Linear Regression Using Python
	Summary
	Review Questions
Chapter 6 Supervised Learning: Classification
	6.1 Introduction
	6.2 Use Cases of Classification
	6.3 Logistic Regression
		6.3.1 Geometric Intuition
		6.3.2 Variants of Logistic Regression
			6.3.2.1 Simple Logistic Regression
			6.3.2.2 Multiple Logistic Regression
			6.3.2.3 Binary Logistic Regression
			6.3.2.4 Multiclass Logistic Regression
			6.3.2.5 Nominal Logistic Regression
			6.3.2.6 Ordinal Logistic Regression
		6.3.3 Optimization Problem
		6.3.4 Regularization
		6.3.5 Real-World Applications
			6.3.5.1 Medical Diagnosis
			6.3.5.2 Text Classification
			6.3.5.3 Marketing
		6.3.6 Logistic Regression in Practice using Python
			6.3.6.1 Variable Descriptions
			6.3.6.2 Checking for Missing Values
			6.3.6.3 Converting Categorical Variables to a Dummy Indicator
	6.4 Decision Tree Classifier
		6.4.1 Important Terminology in the Decision Tree
		6.4.2 Example for Decision Tree
		6.4.3 Sample Decision Tree
		6.4.4 Decision Tree Formation
		6.4.5 Algorithms Used for Decision Trees
			6.4.5.1 ID3 Algorithm
			6.4.5.2 C4.5 Algorithm
			6.4.5.3 CART Algorithm
		6.4.6 Overfitting and Underfitting
			6.4.6.1 Overfitting
			6.4.6.2 Underfitting
			6.4.6.3 Pruning to Avoid Overfitting
		6.4.7 Advantages and Disadvantages
			6.4.7.1 Advantages
			6.4.7.2 Disadvantages
		6.4.8 Decision Tree Examples
		6.4.9 Regression Using Decision Tree
		6.4.10 Real-World Examples
			6.4.10.1 Predicting Library Book
			6.4.10.2 Identification of Tumor
			6.4.10.3 Classification of Telescope Image
			6.4.10.4 Business Management
			6.4.10.5 Fault Diagnosis
			6.4.10.6 Healthcare Management
			6.4.10.7 Decision Tree in Data Mining
		6.4.11 Decision Trees in Practice Using Python
	6.5 Random Forest Classifier
		6.5.1 Random Forest and Their Construction
		6.5.2 Sampling of the Dataset in Random Forest
			6.5.2.1 Creation of Subset Data
		6.5.3 Pseudocode for Random Forest
			6.5.3.1 Pseudocode for Prediction in Random Forest
		6.5.4 Regression Using Random Forest
		6.5.5 Classification Using Random Forest
			6.5.5.1 Random Forest Problem for Classification – Examples
		6.5.6 Features and Properties of Random Forest
			6.5.6.1 Features
			6.5.6.2 Properties
		6.5.7 Advantages and Disadvantages of Random Forest
			6.5.7.1 Advantages
			6.5.7.2 Disadvantages
		6.5.8 Calculation of Error Using Bias and Variance
			6.5.8.1 Bias
			6.5.8.2 Variance
			6.5.8.3 Properties of Bias and Variance
		6.5.9 Time Complexity
		6.5.10 Extremely Randomized Tree
		6.5.11 Real-World Examples
			6.5.11.1 Machine Fault Diagnosis
			6.5.11.2 Medical Field
			6.5.11.3 Banking
			6.5.11.4 E-Commerce
			6.5.11.5 Security
		6.5.12 Random Forest in Practice Using Python
	6.6 Support Vector Machines
		6.6.1 Geometric Intuition
		6.6.2 Mathematical Formulation
			6.6.2.1 Maximize Margin with Noise
			6.6.2.2 Slack Variable ξᵢ
		6.6.3 Loss Minimization
		6.6.4 Dual Formulation
		6.6.5 The Kernel Trick
		6.6.6 Polynomial Kernel
			6.6.6.1 Mercer’s Theorem
			6.6.6.2 Radial Basis Function (RBF) Kernel
			6.6.6.3 Other Domain-Specific Kernel
			6.6.6.4 Sigmoid Kernel
			6.6.6.5 Exponential Kernel
			6.6.6.6 ANOVA Kernel
			6.6.6.7 Rational Quadratic Kernel
			6.6.6.8 Multiquadratic Kernel
			6.6.6.9 Inverse Multiquadratic Kernel
			6.6.6.10 Circular Kernel
			6.6.6.11 Bayesian Kernel
			6.6.6.12 Chi-Square Kernel
			6.6.6.13 Histogram Intersection Kernel
			6.6.6.14 Generalized Histogram Intersection Kernel
		6.6.7 nu SVM
		6.6.8 SVM Regression
		6.6.9 One-Class SVM
		6.6.10 Multiclass SVM
			6.6.10.1 One against All
			6.6.10.2 One against One
			6.6.10.3 Directed Acyclic Graph SVM
		6.6.11 SVM Examples
		6.6.12 Real-World Applications
			6.6.12.1 Classification of Cognitive Impairment
			6.6.12.2 Preprocessing
			6.6.12.3 Feature Extraction
			6.6.12.4 SVM Classification
			6.6.12.5 Procedure
			6.6.12.6 Performance Analysis
			6.6.12.7 Text Categorization
			6.6.12.8 Handwritten Optical Character Recognition
			6.6.12.9 Natural Language Processing
			6.6.12.10 Cancer Prediction
			6.6.12.11 Stock Market Forecasting
			6.6.12.12 Protein Structure Prediction
			6.6.12.13 Face Detection Using SVM
		6.6.13 Advantages and Disadvantages of SVM
	6.7 SVM Classification in Practice Using Python
		6.7.1 Support Vectors
		6.7.2 What Is a Hyperplane?
	Summary
	Review Questions
Chapter 7 Feature Engineering
	7.1 Introduction
	7.2 Feature Selection
		7.2.1 Wrapper Methods
			7.2.1.1 Forward Selection
			7.2.1.2 Backward Elimination
			7.2.1.3 Exhaustive Feature Selection
		7.2.2 Featured Methods
	7.3 Factor Analysis
		7.3.1 Types of Factor Analysis
		7.3.2 Working of Factor Analysis
		7.3.3 Terminologies
			7.3.3.1 Definition of Factor
			7.3.3.2 Factor Loading
			7.3.3.3 Eigenvalues
			7.3.3.4 Communalities
			7.3.3.5 Factor Rotation
			7.3.3.6 Selecting the Number of Factors
	7.4 Principal Component Analysis
		7.4.1 Center the Data
		7.4.2 Normalize the Data
		7.4.3 Estimate the Eigen decomposition
		7.4.4 Project the Data
	7.5 Eigenvalues and PCA
		7.5.1 Usage of eigendecomposition in PCA
	7.6 Feature Reduction
		7.6.1 Factor Analysis Vs. Principal Component Analysis
	7.7 PCA Transformation in Practice Using Python
	7.8 Linear Discriminant Analysis
		7.8.1 Mathematical Operations in LDA
	7.9 LDA Transformation in Practice Using Python
		7.9.1 Implementation of Scatter within the Class (Sw)
		7.9.2 Implementation of Scatter between Class (Sb)
	Summary
	Review Questions
Chapter 8 Reinforcement Learning
	8.1 Introduction
	8.2 Reinforcement Learning
		8.2.1 Examples of Reinforcement Learning
	8.3 How RL Differs from Other ML Algorithms?
		8.3.1 Supervised Learning
	8.4 Elements of Reinforcement Learning
		8.4.1 Policy
		8.4.2 Reward Signal
		8.4.3 Value Function
			8.4.3.1 Examples of Rewards
		8.4.4 Model of the Environment
		8.4.5 The Reinforcement Learning Algorithm
		8.4.6 Methods to Implement Reinforcement Learning in ML
	8.5 Markov Decision Process
		8.5.1 Preliminaries
		8.5.2 Value Functions
	8.6 Dynamic Programming
		8.6.1 Policy Evaluation
		8.6.2 Policy Improvement
		8.6.3 Policy Iteration
		8.6.4 Efficiency of Dynamic Programming
		8.6.5 Dynamic Programming in Practice using Python
	Summary
	Review Questions
Chapter 9 Case Studies for Decision Sciences Using Python
	9.1 Use Case 1 − Retail Price Optimization Using Price Elasticity of Demand Method
		9.1.1 Background
		9.1.2 Understanding the Data
		9.1.3 Conclusion
	9.2 Use Case 2 − Market Basket Analysis (MBA)
		9.2.1 Introduction
		9.2.2 Understanding the Data
		9.2.3 Conclusion
	9.3 Use Case 3 − Sales Prediction of a Retailer
		9.3.1 Background
		9.3.2 Understanding the Data
		9.3.3 Conclusion
	9.4 Use Case 4 − Predicting the Cost of Insurance Claims for a Property and Casualty (P&C) Insurance Company
		9.4.1 Background
		9.4.2 Understanding the Data
	9.5 Use Case 5 − E-Commerce Product Ranking and Sentiment Analysis
		9.5.1 Background
		9.5.2 Understanding the Data
	Summary
	Review Questions
Appendix: Python Cheat Sheet for Machine Learning
Bibliography
Index



