Edition:
Author: Jay Dawani
Series:
ISBN: 1838647295, 9781838647292
Publisher: Packt Publishing
Publication year: 2020
Number of pages: 0
Language: English
File format: EPUB (can be converted to PDF, EPUB, or AZW3 on request)
File size: 83 MB
If you would like the book Hands-On Mathematics for Deep Learning: Build a solid mathematical foundation for training efficient deep neural networks converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.
Please note that Hands-On Mathematics for Deep Learning: Build a solid mathematical foundation for training efficient deep neural networks is the original English-language edition, not a Persian translation. The International Library website provides books in their original language only and does not offer any books translated into or written in Persian.
The main aim of this book is to make an advanced mathematical background accessible to readers with a programming background. It equips readers not only with deep learning architectures but also with the mathematics behind them. With this book, you will understand the mathematics that underlies building deep learning models.
Title Page; Copyright and Credits; About Packt; Contributors; Table of Contents; Preface

Section 1: Essential Mathematics for Deep Learning
  Linear Algebra: Comparing scalars and vectors; Linear equations; Solving linear equations in n-dimensions; Solving linear equations using elimination; Matrix operations; Adding matrices; Multiplying matrices; Inverse matrices; Matrix transpose; Permutations; Vector spaces and subspaces; Spaces; Subspaces; Linear maps; Image and kernel; Metric space and normed space; Inner product space; Matrix decompositions; Determinant; Eigenvalues and eigenvectors; Trace; Orthogonal matrices; Diagonalization and symmetric matrices; Singular value decomposition; Cholesky decomposition; Summary
  Vector Calculus: Single variable calculus; Derivatives; Sum rule; Power rule; Trigonometric functions; First and second derivatives; Product rule; Quotient rule; Chain rule; Antiderivative; Integrals; The fundamental theorem of calculus; Substitution rule; Areas between curves; Integration by parts; Multivariable calculus; Partial derivatives; Chain rule; Integrals; Vector calculus; Derivatives; Vector fields; Inverse functions; Summary
  Probability and Statistics: Understanding the concepts in probability; Classical probability; Sampling with or without replacement; Multinomial coefficient; Stirling's formula; Independence; Discrete distributions; Conditional probability; Random variables; Variance; Multiple random variables; Continuous random variables; Joint distributions; More probability distributions; Normal distribution; Multivariate normal distribution; Bivariate normal distribution; Gamma distribution; Essential concepts in statistics; Estimation; Mean squared error; Sufficiency; Likelihood; Confidence intervals; Bayesian estimation; Hypothesis testing; Simple hypotheses; Composite hypothesis; The multivariate normal theory; Linear models; Hypothesis testing; Summary
  Optimization: Understanding optimization and its different types; Constrained optimization; Unconstrained optimization; Convex optimization; Convex sets; Affine sets; Convex functions; Optimization problems; Non-convex optimization; Exploring the various optimization methods; Least squares; Lagrange multipliers; Newton's method; The secant method; The quasi-Newton method; Game theory; Descent methods; Gradient descent; Stochastic gradient descent; Loss functions; Gradient descent with momentum; Nesterov's accelerated gradient; Adaptive gradient descent; Simulated annealing; Natural evolution; Exploring population methods; Genetic algorithms; Particle swarm optimization; Summary
  Graph Theory: Understanding the basic concepts and terminology; Adjacency matrix; Types of graphs; Weighted graphs; Directed graphs; Directed acyclic graphs; Multilayer and dynamic graphs; Tree graphs; Graph Laplacian; Summary

Section 2: Essential Neural Networks
  Linear Neural Networks: Linear regression; Polynomial regression; Logistic regression; Summary
  Feedforward Neural Networks: Understanding biological neural networks; Comparing the perceptron and the McCulloch-Pitts neuron; The MP neuron; Perceptron; Pros and cons of the MP neuron and perceptron; MLPs; Layers; Activation functions; Sigmoid; Hyperbolic tangent; Softmax; Rectified linear unit; Leaky ReLU; Parametric ReLU; Exponential linear unit; The loss function; Mean absolute error; Mean squared error; Root mean squared error; The Huber loss; Cross entropy; Kullback-Leibler divergence; Jensen-Shannon divergence; Backpropagation; Training neural networks; Parameter initialization; All zeros; Random initialization; Xavier initialization; The data; Deep neural networks; Summary
  Regularization: The need for regularization; Norm penalties; L2 regularization; L1 regularization; Early stopping; Parameter tying and sharing; Dataset augmentation; Dropout; Adversarial training; Summary
  Convolutional Neural Networks: The inspiration behind ConvNets; Types of data used in ConvNets; Convolutions and pooling; Two-dimensional convolutions; One-dimensional convolutions; 1 × 1 convolutions; Three-dimensional convolutions; Separable convolutions; Transposed convolutions; Pooling; Global average pooling; Convolution and pooling size; Working with the ConvNet architecture; Training and optimization; Exploring popular ConvNet architectures; VGG-16; Inception-v1; Summary
  Recurrent Neural Networks: The need for RNNs; The types of data used in RNNs; Understanding RNNs; Vanilla RNNs; Bidirectional RNNs; Long short-term memory; Gated recurrent units; Deep RNNs; Training and optimization; Popular architecture; Clockwork RNNs; Summary

Section 3: Advanced Deep Learning Concepts
  Simplified Attention Mechanisms: Overview of attention; Understanding neural Turing machines; Reading; Writing; Addressing mechanisms; Content-based addressing mechanism; Location-based address mechanism; Exploring the types of attention; Self-attention; Comparing hard and soft attention; Comparing global and local attention; Transformers; Summary
  Generative Models: Why we need generative models; Autoencoders; The denoising autoencoder; The variational autoencoder; Generative adversarial networks; Wasserstein GANs; Flow-based networks; Normalizing flows; Real-valued non-volume preserving; Summary
  Transfer and Meta Learning: Transfer learning; Meta learning; Approaches to meta learning; Model-based meta learning; Memory-augmented neural networks; Meta Networks; Metric-based meta learning; Prototypical networks; Siamese neural networks; Optimization-based meta learning; Long Short-Term Memory meta learners; Model-agnostic meta learning; Summary
  Geometric Deep Learning: Comparing Euclidean and non-Euclidean data; Manifolds; Discrete manifolds; Spectral decomposition; Graph neural networks; Spectral graph CNNs; Mixture model networks; Facial recognition in 3D; Summary

Other Books You May Enjoy; Index