Edition:
Authors: T. Teoh, Z. Rong
Series:
ISBN: 9789811686146, 9789811686153
Publisher:
Publication year: 2022
Number of pages: 334
Language: English
File format: PDF (converted to EPUB or AZW3 at the user's request)
File size: 5 MB
If you would like the file of Artificial Intelligence with Python. Machine Learning: Foundations, Methodologies, and Applications converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.
Please note that Artificial Intelligence with Python. Machine Learning: Foundations, Methodologies, and Applications is the original-language (English) edition, not a Persian translation. The International Library website offers books in their original language only and does not provide any books translated into or written in Persian.
Preface
Acknowledgments
Contents

Part I Python

1 Python for Artificial Intelligence
  1.1 Common Uses
    1.1.1 Relative Popularity
    1.1.2 Features
    1.1.3 Syntax and Design
  1.2 Scientific Programming
  1.3 Why Python for Artificial Intelligence
2 Getting Started
  2.1 Setting up Your Python Environment
  2.2 Anaconda
    2.2.1 Installing Anaconda
    2.2.2 Further Installation Steps
    2.2.3 Updating Anaconda
  2.3 Installing Packages
  2.4 Virtual Environment
  2.5 Jupyter Notebooks
    2.5.1 Starting the Jupyter Notebook
    2.5.2 Notebook Basics
      Running Cells
      Modal Editing
      Inserting Unicode (e.g., Greek Letters)
      A Test Program
    2.5.3 Working with the Notebook
      Tab Completion
      On-Line Help
      Other Content
    2.5.4 Sharing Notebooks
3 An Introductory Example
  3.1 Overview
  3.2 The Task: Plotting a White Noise Process
  3.3 Our First Program
    3.3.1 Imports
      Why So Many Imports?
      Packages
      Subpackages
    3.3.2 Importing Names Directly
    3.3.3 Random Draws
  3.4 Alternative Implementations
    3.4.1 A Version with a for Loop
    3.4.2 Lists
    3.4.3 The for Loop
    3.4.4 A Comment on Indentation
    3.4.5 While Loops
  3.5 Another Application
  3.6 Exercises
    3.6.1 Exercise 1
    3.6.2 Exercise 2
    3.6.3 Exercise 3
    3.6.4 Exercise 4
    3.6.5 Exercise 5
  3.7 Solutions
    3.7.1 Exercise 1
    3.7.2 Exercise 2
    3.7.3 Exercise 3
    3.7.4 Exercise 4
    3.7.5 Exercise 5
4 Basic Python
  4.1 Hello, World!
  4.2 Indentation
  4.3 Variables and Types
    4.3.1 Numbers
    4.3.2 Strings
    4.3.3 Lists
    4.3.4 Dictionaries
  4.4 Basic Operators
    4.4.1 Arithmetic Operators
    4.4.2 List Operators
    4.4.3 String Operators
  4.5 Logical Conditions
  4.6 Loops
  4.7 List Comprehensions
  4.8 Exception Handling
    4.8.1 Sets
5 Intermediate Python
  5.1 Functions
  5.2 Classes and Objects
  5.3 Modules and Packages
    5.3.1 Writing Modules
  5.4 Built-in Modules
  5.5 Writing Packages
  5.6 Closures
  5.7 Decorators
6 Advanced Python
  6.1 Python Magic Methods
    6.1.1 Exercise
    6.1.2 Solution
  6.2 Comprehension
  6.3 Functional Parts
  6.4 Iterables
  6.5 Decorators
  6.6 More on Object Oriented Programming
    6.6.1 Mixins
    6.6.2 Attribute Access Hooks
    6.6.3 Callable Objects
    6.6.4 __new__ vs __init__
  6.7 Properties
  6.8 Metaclasses
7 Python for Data Analysis
  7.1 Ethics
  7.2 Data Analysis
    7.2.1 Numpy Arrays
    7.2.2 Pandas Selections
    7.2.3 Matplotlib
  7.3 Sample Code

Part II Artificial Intelligence Basics

8 Introduction to Artificial Intelligence
  8.1 Data Exploration
  8.2 Problems with Data
  8.3 A Language and Approach to Data-Driven Story-Telling
  8.4 Example: Telling Story with Data
9 Data Wrangling
  9.1 Handling Missing Data
    9.1.1 Missing Data
    9.1.2 Removing Missing Data
  9.2 Transformation
    9.2.1 Duplicates
    9.2.2 Mapping
  9.3 Outliers
  9.4 Permutation
  9.5 Merging and Combining
  9.6 Reshaping and Pivoting
  9.7 Wide to Long
10 Regression
  10.1 Linear Regression
  10.2 Decision Tree Regression
  10.3 Random Forests
  10.4 Neural Network
  10.5 How to Improve Our Regression Model
    10.5.1 Boxplot
    10.5.2 Remove Outlier
    10.5.3 Remove NA
  10.6 Feature Importance
  10.7 Sample Code
11 Classification
  11.1 Logistic Regression
  11.2 Decision Tree and Random Forest
  11.3 Neural Network
  11.4 Logistic Regression
  11.5 Decision Tree
  11.6 Feature Importance
  11.7 Remove Outlier
  11.8 Use Top 3 Features
  11.9 SVM
    11.9.1 Important Hyper Parameters
  11.10 Naive Bayes
  11.11 Sample Code
12 Clustering
  12.1 What Is Clustering?
  12.2 K-Means
  12.3 The Elbow Method
13 Association Rules
  13.1 What Are Association Rules
  13.2 Apriori Algorithm
  13.3 Measures for Association Rules

Part III Artificial Intelligence Implementations

14 Text Mining
  14.1 Read Data
  14.2 Date Range
  14.3 Category Distribution
  14.4 Texts for Classification
  14.5 Vectorize
  14.6 CountVectorizer
  14.7 TF-IDF
  14.8 Feature Extraction with TF-IDF
  14.9 Sample Code
15 Image Processing
  15.1 Load the Dependencies
  15.2 Load Image from urls
  15.3 Image Analysis
  15.4 Image Histogram
  15.5 Contour
  15.6 Grayscale Transformation
  15.7 Histogram Equalization
  15.8 Fourier Transformation
  15.9 High pass Filtering in FFT
  15.10 Pattern Recognition
  15.11 Sample Code
16 Convolutional Neural Networks
  16.1 The Convolution Operation
  16.2 Pooling
  16.3 Flattening
  16.4 Exercise
  16.5 CNN Architectures
    16.5.1 VGG16
    16.5.2 Inception Net
    16.5.3 ResNet
  16.6 Finetuning
  16.7 Other Tasks That Use CNNs
    16.7.1 Object Detection
    16.7.2 Semantic Segmentation
17 Chatbot, Speech, and NLP
  17.1 Speech to Text
  17.2 Importing the Packages for Chatbot
  17.3 Preprocessing the Data for Chatbot
    17.3.1 Download the Data
    17.3.2 Reading the Data from the Files
    17.3.3 Preparing Data for Seq2Seq Model
  17.4 Defining the Encoder-Decoder Model
  17.5 Training the Model
  17.6 Defining Inference Models
  17.7 Talking with Our Chatbot
  17.8 Sample Code
18 Deep Convolutional Generative Adversarial Network
  18.1 What Are GANs?
  18.2 Setup
    18.2.1 Load and Prepare the Dataset
  18.3 Create the Models
    18.3.1 The Generator
    18.3.2 The Discriminator
  18.4 Define the Loss and Optimizers
    18.4.1 Discriminator Loss
    18.4.2 Generator Loss
  18.5 Save Checkpoints
  18.6 Define the Training Loop
    18.6.1 Train the Model
    18.6.2 Create a GIF
19 Neural Style Transfer
  19.1 Setup
    19.1.1 Import and Configure Modules
  19.2 Visualize the Input
  19.3 Fast Style Transfer Using TF-Hub
  19.4 Define Content and Style Representations
    19.4.1 Intermediate Layers for Style and Content
  19.5 Build the Model
  19.6 Calculate Style
  19.7 Extract Style and Content
  19.8 Run Gradient Descent
  19.9 Total Variation Loss
  19.10 Re-run the Optimization
20 Reinforcement Learning
  20.1 Reinforcement Learning Analogy
  20.2 Q-learning
  20.3 Running a Trained Taxi

Bibliography
Index