Authors: Tomasz Palczewski, Jaejun Lee, Lenin Mookiah
ISBN: 180324366X, 9781803243665
Publisher: Packt Publishing - ebooks Account
Year of publication: 2022
Pages: 301
Language: English
File format: EPUB (convertible to PDF, EPUB, or AZW3 on request)
File size: 8 MB
Supercharge your skills for tailoring deep-learning models and deploying them in production environments with ease and precision.
Machine learning engineers, deep learning specialists, and data engineers without extensive experience encounter various problems when moving their models to a production environment.
Developers will be able to transform models into a desired format and deploy them with a full understanding of the tradeoffs and possible alternative approaches. The book provides concrete, off-the-shelf implementations and associated methodologies, allowing readers to apply the knowledge from this book right away without much difficulty.
In this book, you will learn how to construct complex models in the PyTorch and TensorFlow deep-learning frameworks. You will learn how to convert your models from one framework to the other and how to tailor them to the specific requirements that the deployment setting introduces. By the end of this book, you will fully understand how to convert a PoC-like deep learning model into a ready-to-use version that is suitable for the target production environment.
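As an illustration of the kind of framework-to-framework conversion covered here (the table of contents dedicates Chapter 8 to ONNX), the minimal sketch below exports a PyTorch model to the ONNX format. It is not code from the book; the ResNet-18 model choice, input shape, and output file name are illustrative assumptions.

```python
import torch
import torchvision

# A minimal sketch of exporting a PyTorch model to ONNX so it can be
# consumed by ONNX Runtime or converted onward to other frameworks.
# The model, input shape, and file name are illustrative assumptions.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input used to trace the export
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # allow variable batch size
)
```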
Readers will have hands-on experience with commonly used deep learning frameworks and popular web services designed for data analytics at scale. You will get to grips with our collective know-how from deploying hundreds of AI-based services at large scale.
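As a hedged follow-on (again not taken from the book), a model exported as above could be loaded and queried with ONNX Runtime, one of the inference paths listed in the table of contents; the file name and input shape are assumptions carried over from the previous sketch.

```python
import numpy as np
import onnxruntime as ort

# Load the illustrative exported model and run a single inference request.
session = ort.InferenceSession("resnet18.onnx")
input_name = session.get_inputs()[0].name

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input batch
logits = session.run(None, {input_name: batch})[0]
print(logits.shape)  # (1, 1000) for the assumed ImageNet ResNet-18 classifier
```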
Machine learning engineers, deep learning specialists, and data scientists will find that this book closes the gap between theory and application with detailed examples. Readers with beginner-level knowledge of machine learning or software engineering will find the contents easy to follow.
Cover; Title Page; Copyright and credits; Contributors; Table of Contents; Preface

Part 1 – Building a Minimum Viable Product

Chapter 1: Effective Planning of Deep Learning-Driven Projects
  Technical requirements; What is DL?; Understanding the role of DL in our daily lives; Overview of DL projects; Project planning; Building minimum viable products; Building fully featured products; Deployment and maintenance; Project evaluation; Planning a DL project; Defining goal and evaluation metrics; Stakeholder identification; Task organization; Resource allocation; Defining a timeline; Managing a project; Summary; Further reading

Chapter 2: Data Preparation for Deep Learning Projects
  Technical requirements; Setting up notebook environments; Setting up a Python environment; Installing Anaconda; Setting up a DL project using Anaconda; Data collection, data cleaning, and data preprocessing; Collecting data; Cleaning data; Data preprocessing; Extracting features from data; Converting text using bag-of-words; Applying term frequency-inverse document frequency (TF-IDF) transformation; Creating one-hot encoding (one-of-k); Creating ordinal encoding; Converting a colored image into a grayscale image; Performing dimensionality reduction; Applying fuzzy matching to handle similarity between strings; Performing data visualization; Performing basic visualizations using Matplotlib; Drawing statistical graphs using Seaborn; Introduction to Docker; Introduction to dockerfiles; Building a custom Docker image; Summary

Chapter 3: Developing a Powerful Deep Learning Model
  Technical requirements; Going through the basic theory of DL; How does DL work?; DL model training; Components of DL frameworks; The data loading logic; The model definition; Model training logic; Implementing and training a model in PyTorch; PyTorch data loading logic; PyTorch model definition; PyTorch model training; Implementing and training a model in TF; TF data loading logic; TF model definition; TF model training; An understanding of a complex, state-of-the-art model; StyleGAN; Implementation in PyTorch; Implementation in TF; Summary

Chapter 4: Experiment Tracking, Model Management, and Dataset Versioning
  Technical requirements; Overview of DL project tracking; Components of DL project tracking; Tools for DL project tracking; DL project tracking with Weights & Biases; Setting up W&B; DL project tracking with MLflow and DVC; Setting up MLflow; Setting up MLflow with DVC; Dataset versioning – beyond Weights & Biases, MLflow, and DVC; Summary

Part 2 – Building a Fully Featured Product

Chapter 5: Data Preparation in the Cloud
  Technical requirements; Data processing in the cloud; Introduction to ETL; Data processing system architecture; Introduction to Apache Spark; Resilient distributed datasets and DataFrames; Loading data; Processing data using Spark operations; Processing data using user-defined functions; Exporting data; Setting up a single-node EC2 instance for ETL; Setting up an EMR cluster for ETL; Creating a Glue job for ETL; Creating a Glue Data Catalog; Setting up a Glue context; Reading data; Defining the data processing logic; Writing data; Utilizing SageMaker for ETL; Creating a SageMaker notebook; Running a Spark job through a SageMaker notebook; Running a job from a custom container through a SageMaker notebook; Comparing the ETL solutions in AWS; Summary

Chapter 6: Efficient Model Training
  Technical requirements; Training a model on a single machine; Utilizing multiple devices for training in TensorFlow; Utilizing multiple devices for training in PyTorch; Training a model on a cluster; Model parallelism; Data parallelism; Training a model using SageMaker; Setting up model training for SageMaker; Training a TensorFlow model using SageMaker; Training a PyTorch model using SageMaker; Training a model in a distributed fashion using SageMaker; SageMaker with Horovod; Training a model using Horovod; Setting up a Horovod cluster; Configuring a TensorFlow training script for Horovod; Configuring a PyTorch training script for Horovod; Training a DL model on a Horovod cluster; Training a model using Ray; Setting up a Ray cluster; Training a model in a distributed fashion using Ray; Training a model using Kubeflow; Introducing Kubernetes; Setting up model training for Kubeflow; Training a TensorFlow model in a distributed fashion using Kubeflow; Training a PyTorch model in a distributed fashion using Kubeflow; Summary

Chapter 7: Revealing the Secret of Deep Learning Models
  Technical requirements; Obtaining the best performing model using hyperparameter tuning; Hyperparameter tuning techniques; Hyperparameter tuning tools; Understanding the behavior of the model with Explainable AI; Permutation Feature Importance; Feature Importance; SHapley Additive exPlanations (SHAP); Local Interpretable Model-agnostic Explanations (LIME); Summary

Part 3 – Deployment and Maintenance

Chapter 8: Simplifying Deep Learning Model Deployment
  Technical requirements; Introduction to ONNX; Running inference using ONNX Runtime; Conversion between TensorFlow and ONNX; Converting a TensorFlow model into an ONNX model; Converting an ONNX model into a TensorFlow model; Conversion between PyTorch and ONNX; Converting a PyTorch model into an ONNX model; Converting an ONNX model into a PyTorch model; Summary

Chapter 9: Scaling a Deep Learning Pipeline
  Technical requirements; Inferencing using Elastic Kubernetes Service; Preparing an EKS cluster; Configuring EKS; Creating an inference endpoint using the TensorFlow model on EKS; Creating an inference endpoint using a PyTorch model on EKS; Communicating with an endpoint on EKS; Improving EKS endpoint performance using Amazon Elastic Inference; Resizing EKS cluster dynamically using autoscaling; Inferencing using SageMaker; Setting up an inference endpoint using the Model class; Setting up a TensorFlow inference endpoint; Setting up a PyTorch inference endpoint; Setting up an inference endpoint from an ONNX model; Handling prediction requests in batches using Batch Transform; Improving SageMaker endpoint performance using AWS SageMaker Neo; Improving SageMaker endpoint performance using Amazon Elastic Inference; Resizing SageMaker endpoints dynamically using autoscaling; Hosting multiple models on a single SageMaker inference endpoint; Summary

Chapter 10: Improving Inference Efficiency
  Technical requirements; Network quantization – reducing the number of bits used for model parameters; Performing post-training quantization; Performing quantization-aware training; Weight sharing – reducing the number of distinct weight values; Performing weight sharing in TensorFlow; Performing weight sharing in PyTorch; Network pruning – eliminating unnecessary connections within the network; Network pruning in TensorFlow; Network pruning in PyTorch; Knowledge distillation – obtaining a smaller network by mimicking the prediction; Network Architecture Search – finding the most efficient network architecture; Summary

Chapter 11: Deep Learning on Mobile Devices
  Preparing DL models for mobile devices; Generating a TF Lite model; Generating a TorchScript model; Creating iOS apps with a DL model; Running TF Lite model inference on iOS; Running TorchScript model inference on iOS; Creating Android apps with a DL model; Running TF Lite model inference on Android; Running TorchScript model inference on Android; Summary

Chapter 12: Monitoring Deep Learning Endpoints in Production
  Technical requirements; Introduction to DL endpoint monitoring in production; Exploring tools for monitoring; Exploring tools for alerting; Monitoring using CloudWatch; Monitoring a SageMaker endpoint using CloudWatch; Monitoring a model throughout the training process in SageMaker; Monitoring a live inference endpoint from SageMaker; Monitoring an EKS endpoint using CloudWatch; Summary

Chapter 13: Reviewing the Completed Deep Learning Project
  Reviewing a DL project; Conducting a post-implementation review; Understanding the true value of the project; Gathering the reusable knowledge, concepts, and artifacts for future projects; Summary

Index
Other Books You May Enjoy