Edition: 2nd ed.
Author: Umberto Michelucci
Series:
ISBN: 9781484280201, 1484280202
Publisher: Apress
Publication year: 2022
Number of pages: 397
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 17 MB
If you would like the file of the book APPLIED DEEP LEARNING WITH TENSORFLOW 2: Learn to Implement Advanced Deep Learning Techniques... with Python converted to PDF, EPUB, AZW3, MOBI, or DJVU, please notify support and they will convert the file for you.
Please note that Applied Deep Learning with TensorFlow 2: Learn to Implement Advanced Deep Learning Techniques... with Python is the original English-language edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.
Table of Contents

About the Author
About the Contributing Author
About the Technical Reviewer
Acknowledgments
Foreword
Introduction

Chapter 1: Optimization and Neural Networks
  A Basic Understanding of Neural Networks
  The Problem of Learning
  A First Definition of Learning
  [Advanced Section] Assumption in the Formulation
  A Definition of Learning for Neural Networks
  Constrained vs. Unconstrained Optimization
  [Advanced Section] Reducing a Constrained Problem to an Unconstrained Optimization Problem
  Absolute and Local Minima of a Function
  Optimization Algorithms
  Line Search and Trust Region
  Steepest Descent
  The Gradient Descent Algorithm
  Choosing the Right Learning Rate
  Variations of GD
  Mini-Batch GD
  Stochastic GD
  How to Choose the Right Mini-Batch Size
  [Advanced Section] SGD and Fractals
  Exercises
  Conclusion

Chapter 2: Hands-on with a Single Neuron
  A Short Overview of a Neuron's Structure
  A Short Introduction to Matrix Notation
  An Overview of the Most Common Activation Functions
  Identity Function
  Sigmoid Function
  Tanh (Hyperbolic Tangent) Activation Function
  ReLU (Rectified Linear Unit) Activation Function
  Leaky ReLU
  The Swish Activation Function
  Other Activation Functions
  How to Implement a Neuron in Keras
  Python Implementation Tips: Loops and NumPy
  Linear Regression with a Single Neuron
  The Dataset for the Real-World Example
  Dataset Splitting
  Linear Regression Model
  Keras Implementation
  The Model's Learning Phase
  Model's Performance Evaluation on Unseen Data
  Logistic Regression with a Single Neuron
  The Dataset for the Classification Problem
  Dataset Splitting
  The Logistic Regression Model
  Keras Implementation
  The Model's Learning Phase
  The Model's Performance Evaluation
  Conclusion
  Exercises
  References

Chapter 3: Feed-Forward Neural Networks
  A Short Review of Network's Architecture and Matrix Notation
  Output of Neurons
  A Short Summary of Matrix Dimensions
  Example: Equations for a Network with Three Layers
  Hyper-Parameters in Fully Connected Networks
  A Short Review of the Softmax Activation Function for Multiclass Classifications
  A Brief Digression: Overfitting
  A Practical Example of Overfitting
  Basic Error Analysis
  Implementing a Feed-Forward Neural Network in Keras
  Multiclass Classification with Feed-Forward Neural Networks
  The Zalando Dataset for the Real-World Example
  Modifying Labels for the Softmax Function: One-Hot Encoding
  The Feed-Forward Network Model
  Keras Implementation
  Gradient Descent Variations Performances
  Comparing the Variations
  Examples of Wrong Predictions
  Weight Initialization
  Adding Many Layers Efficiently
  Advantages of Additional Hidden Layers
  Comparing Different Networks
  Tips for Choosing the Right Network
  Estimating the Memory Requirements of Models
  General Formula for the Memory Footprint
  Exercises
  References

Chapter 4: Regularization
  Complex Networks and Overfitting
  What Is Regularization
  About Network Complexity
  ℓp Norm
  ℓ2 Regularization
  Theory of ℓ2 Regularization
  Keras Implementation
  ℓ1 Regularization
  Theory of ℓ1 Regularization and Keras Implementation
  Are the Weights Really Going to Zero?
  Dropout
  Early Stopping
  Additional Methods
  Exercises
  References

Chapter 5: Advanced Optimizers
  Available Optimizers in Keras in TensorFlow 2.5
  Advanced Optimizers
  Exponentially Weighted Averages
  Momentum
  RMSProp
  Adam
  Comparison of the Optimizers' Performance
  Small Coding Digression
  Which Optimizer Should You Use?

Chapter 6: Hyper-Parameter Tuning
  Black-Box Optimization
  Notes on Black-Box Functions
  The Problem of Hyper-Parameter Tuning
  Sample Black-Box Problem
  Grid Search
  Random Search
  Coarse to Fine Optimization
  Bayesian Optimization
  Nadaraya-Watson Regression
  Gaussian Process
  Stationary Process
  Prediction with Gaussian Processes
  Acquisition Function
  Upper Confidence Bound (UCB)
  Example
  Sampling on a Logarithmic Scale
  Hyper-Parameter Tuning with the Zalando Dataset
  A Quick Note about the Radial Basis Function
  Exercises
  References

Chapter 7: Convolutional Neural Networks
  Kernels and Filters
  Convolution
  Examples of Convolution
  Pooling
  Padding
  Building Blocks of a CNN
  Convolutional Layers
  Pooling Layers
  Stacking Layers Together
  An Example of a CNN
  Conclusion
  Exercises
  References

Chapter 8: A Brief Introduction to Recurrent Neural Networks
  Introduction to RNNs
  Notation
  The Basic Idea of RNNs
  Why the Name Recurrent
  Learning to Count
  Conclusion
  Further Readings

Chapter 9: Autoencoders
  Introduction
  Regularization in Autoencoders
  Feed-Forward Autoencoders
  Activation Function of the Output Layer
  ReLU
  Sigmoid
  The Loss Function
  Mean Square Error
  Binary Cross-Entropy
  The Reconstruction Error
  Example: Reconstructing Handwritten Digits
  Autoencoder Applications
  Dimensionality Reduction
  Equivalence with PCA
  Classification
  Classification with Latent Features
  The Curse of Dimensionality: A Small Detour
  Anomaly Detection
  Model Stability: A Short Note
  Denoising Autoencoders
  Beyond FFA: Autoencoders with Convolutional Layers
  Implementation in Keras
  Exercises
  Further Readings

Chapter 10: Metric Analysis
  Human-Level Performance and Bayes Error
  A Short Story About Human-Level Performance
  Human-Level Performance on MNIST
  Bias
  Metric Analysis Diagram
  Training Set Overfitting
  Test Set
  How to Split Your Dataset
  Unbalanced Class Distribution: What Can Happen
  Datasets with Different Distributions
  k-fold Cross Validation
  Manual Metric Analysis: An Example
  Exercises
  References

Chapter 11: Generative Adversarial Networks (GANs)
  Introduction to GANs
  Training Algorithm for GANs
  A Practical Example with Keras and MNIST
  A Note on Training
  Conditional GANs
  Conclusion

Appendix A: Introduction to Keras
  Some History
  Understanding the Sequential Model
  Understanding Keras Layers
  Setting the Activation Function
  Using Functional APIs
  Specifying Loss Functions and Metrics
  Putting It All Together and Training
  Modeling evaluate() and predict()
  Using Callback Functions
  Saving and Loading Models
  Saving Your Weights Manually
  Saving the Entire Model
  Conclusion

Appendix B: Customizing Keras
  Customizing Callback Classes
  Example of a Custom Callback Class
  Custom Training Loops
  Calculating Gradients
  Custom Training Loop for a Neural Network

Index