Edition:
Authors: Johnson I. Agbinya
Series:
ISBN: 9788770220965, 9788770220958
Publisher:
Year:
Pages: 370
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 77 MB
If you need the file of Applied Data Analytics - Principles and Applications converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.
Note that Applied Data Analytics - Principles and Applications is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not carry any books translated into or written in Persian.
Front Cover
Applied Data Analytics – Principles and Applications
Contents
Preface
Acknowledgement
List of Contributors
List of Figures
List of Tables
List of Abbreviations
1 Markov Chain and its Applications
  1.1 Introduction
  1.2 Definitions
    1.2.1 State Space
    1.2.2 Trajectory
      1.2.2.1 Transition probability
      1.2.2.2 State transition matrix
  1.3 Prediction Using Markov Chain
    1.3.1 Initial State
    1.3.2 Long-run Probability
      1.3.2.1 Algebraic solution
      1.3.2.2 Matrix method
  1.4 Applications of Markov Chains
    1.4.1 Absorbing Nodes in a Markov Chain
2 Hidden Markov Modelling (HMM)
  2.1 HMM Notation
  2.2 Emission Probabilities
  2.3 A Hidden Markov Model
    2.3.1 Setting up HMM Model
    2.3.2 HMM in Pictorial Form
  2.4 The Three Great Problems in HMM
    2.4.1 Notation
      2.4.1.1 Problem 1: Classification or the likelihood problem (find p(O|λ))
      2.4.1.2 Problem 2: Trajectory estimation problem
      2.4.1.3 Problem 3: System identification problem
    2.4.2 Solution to Problem 1: Estimation of Likelihood
      2.4.2.1 Naïve solution
      2.4.2.2 Forward recursion
      2.4.2.3 Backward recursion
      2.4.2.4 Solution to Problem 2: Trajectory estimation problem
  2.5 State Transition Table
    2.5.1 Input Symbol Table
    2.5.2 Output Symbol Table
  2.6 Solution to Problem 3: Find the Optimal HMM
    2.6.1 The Algorithm
  2.7 Exercises
3 Introduction to Kalman Filters
  3.1 Introduction
  3.2 Scalar Form
    3.2.1 Step (1): Calculate Kalman Gain
  3.3 Matrix Form
    3.3.1 Models of the State Variables
      3.3.1.1 Using prediction and measurements in Kalman filters
    3.3.2 Gaussian Representation of State
  3.4 The State Matrix
    3.4.1 State Matrix for Object Moving in a Single Direction
      3.4.1.1 Tracking including measurements
    3.4.2 State Matrix of an Object Moving in Two Dimensions
    3.4.3 Objects Moving in Three-Dimensional Space
  3.5 Kalman Filter Models with Noise
  References
4 Kalman Filter II
  4.1 Introduction
  4.2 Processing Steps in Kalman Filter
    4.2.1 Covariance Matrices
    4.2.2 Computation Methods for Covariance Matrix
      4.2.2.1 Manual method
      4.2.2.2 Deviation matrix computation method
    4.2.3 Iterations in Kalman Filter
5 Genetic Algorithm
  5.1 Introduction
  5.2 Steps in Genetic Algorithm
  5.3 Terminology of Genetic Algorithms (GAs)
  5.4 Fitness Function
    5.4.1 Generic Requirements of a Fitness Function
  5.5 Selection
    5.5.1 The Roulette Wheel
    5.5.2 Crossover
      5.5.2.1 Single-position crossover
      5.5.2.2 Double crossover
      5.5.2.3 Mutation
      5.5.2.4 Inversion
  5.6 Maximizing a Function of a Single Variable
  5.7 Continuous Genetic Algorithms
    5.7.1 Lowest Elevation on Topographical Maps
    5.7.2 Application of GA to Temperature Recording with Sensors
  References
6 Calculus on Computational Graphs
  6.1 Introduction
    6.1.1 Elements of Computational Graphs
  6.2 Compound Expressions
  6.3 Computing Partial Derivatives
    6.3.1 Partial Derivatives: Two Cases of the Chain Rule
      6.3.1.1 Linear chain rule
      6.3.1.2 Loop chain rule
      6.3.1.3 Multiple loop chain rule
  6.4 Computing of Integrals
    6.4.1 Trapezoidal Rule
    6.4.2 Simpson Rule
  6.5 Multipath Compound Derivatives
7 Support Vector Machines
  7.1 Introduction
  7.2 Essential Mathematics of SVM
    7.2.1 Introduction to Hyperplanes
    7.2.2 Parallel Hyperplanes
    7.2.3 Distance between Two Parallel Planes
  7.3 Support Vector Machines
    7.3.1 Problem Definition
    7.3.2 Linearly Separable Case
  7.4 Location of Optimal Hyperplane (Primal Problem)
    7.4.1 Finding the Margin
    7.4.2 Distance of a Point i from Separating Hyperplane
      7.4.2.1 Margin for support vector points
    7.4.3 Finding Optimal Hyperplane Problem
      7.4.3.1 Hard margin
  7.5 The Lagrangian Optimization Function
    7.5.1 Optimization Involving Single Constraint
    7.5.2 Optimization with Multiple Constraints
      7.5.2.1 Single inequality constraint
      7.5.2.2 Multiple inequality constraints
    7.5.3 Karush–Kuhn–Tucker Conditions
  7.6 SVM Optimization Problems
    7.6.1 The Primal SVM Optimization Problem
    7.6.2 The Dual Optimization Problem
      7.6.2.1 Reformulation of the dual algorithm
  7.7 Linear SVM (Non-linearly Separable) Data
    7.7.1 Slack Variables
      7.7.1.1 Primal formulation including slack variable
      7.7.1.2 Dual formulation including slack variable
      7.7.1.3 Choosing C in soft margin cases
    7.7.2 Non-linear Data Classification Using Kernels
      7.7.2.1 Polynomial kernel function
      7.7.2.2 Multi-layer perceptron (Sigmoidal) kernel
      7.7.2.3 Gaussian radial basis function
      7.7.2.4 Creating new kernels
  References
8 Artificial Neural Networks
  8.1 Introduction
  8.2 Neuron
    8.2.1 Activation Functions
      8.2.1.1 Sigmoid
      8.2.1.2 Hyperbolic tangent
      8.2.1.3 Rectified Linear Unit (ReLU)
      8.2.1.4 Leaky ReLU
      8.2.1.5 Parametric rectifier
      8.2.1.6 Maxout neuron
      8.2.1.7 The Gaussian
      8.2.1.8 Error calculation
      8.2.1.9 Output layer node
      8.2.1.10 Hidden layer nodes
      8.2.1.11 Summary of derivations
9 Training of Neural Networks
  9.1 Introduction
  9.2 Practical Neural Network
  9.3 Backpropagation Model
    9.3.1 Computational Graph
  9.4 Backpropagation Example with Computational Graphs
  9.5 Back Propagation
  9.6 Practical Training of Neural Networks
    9.6.1 Forward Propagation
    9.6.2 Backward Propagation
      9.6.2.1 Adapting the weights
  9.7 Initialisation of Weights Methods
    9.7.1 Xavier Initialisation
    9.7.2 Batch Normalisation
  9.8 Conclusion
  References
10 Recurrent Neural Networks
  10.1 Introduction
  10.2 Introduction to Recurrent Neural Networks
  10.3 Recurrent Neural Network
11 Convolutional Neural Networks
  11.1 Introduction
  11.2 Convolution Matrices
    11.2.1 Three-Dimensional Convolution in CNN
  11.3 Convolution Kernels
    11.3.1 Design of Convolution Kernel
      11.3.1.1 Separable Gaussian kernel
      11.3.1.2 Separable Sobel kernel
      11.3.1.3 Computation advantage
  11.4 Convolutional Neural Networks
    11.4.1 Concepts and Hyperparameters
      11.4.1.1 Depth (D)
      11.4.1.2 Zero-padding (P)
      11.4.1.3 Receptive field (R)
      11.4.1.4 Stride (S)
      11.4.1.5 Activation function using rectified linear unit
    11.4.2 CNN Processing Stages
      11.4.2.1 Convolution layer
    11.4.3 The Pooling Layer
    11.4.4 The Fully Connected Layer
  11.5 CNN Design Principles
  11.6 Conclusion
  Reference
12 Principal Component Analysis
  12.1 Introduction
  12.2 Definitions
    12.2.1 Covariance Matrices
  12.3 Computation of Principal Components
    12.3.1 PCA Using Vector Projection
    12.3.2 PCA Computation Using Covariance Matrices
    12.3.3 PCA Using Singular-Value Decomposition
    12.3.4 Applications of PCA
      12.3.4.1 Face recognition
  Reference
13 Moment-Generating Functions
  13.1 Moments of Random Variables
    13.1.1 Central Moments of Random Variables
    13.1.2 Properties of Moments
  13.2 Univariate Moment-Generating Functions
  13.3 Series Representation of MGF
    13.3.1 Properties of Probability Mass Functions
    13.3.2 Properties of Probability Distribution Functions f(x)
  13.4 Moment-Generating Functions of Discrete Random Variables
    13.4.1 Bernoulli Random Variable
    13.4.2 Binomial Random Variables
    13.4.3 Geometric Random Variables
    13.4.4 Poisson Random Variable
  13.5 Moment-Generating Functions of Continuous Random Variables
    13.5.1 Exponential Distributions
    13.5.2 Normal Distribution
    13.5.3 Gamma Distribution
  13.6 Properties of Moment-Generating Functions
  13.7 Multivariate Moment-Generating Functions
    13.7.1 The Law of Large Numbers
  13.8 Applications of MGF
14 Characteristic Functions
  14.1 Characteristic Functions
    14.1.1 Properties of Characteristic Functions
  14.2 Characteristic Functions of Discrete Single Random Variables
    14.2.1 Characteristic Function of a Poisson Random Variable
    14.2.2 Characteristic Function of Binomial Random Variable
    14.2.3 Characteristic Functions of Continuous Random Variables
15 Probability-Generating Functions
  15.1 Probability-Generating Functions
  15.2 Discrete Probability-Generating Functions
    15.2.1 Properties of PGF
    15.2.2 Probability-Generating Function of Bernoulli Random Variable
    15.2.3 Probability-Generating Function for Binomial Random Variables
    15.2.4 Probability-Generating Function for Poisson Random Variable
    15.2.5 Probability-Generating Functions of Geometric Random Variables
    15.2.6 Probability-Generating Function of Negative Binomial Random Variable
      15.2.6.1 Negative binomial probability law
  15.3 Applications of Probability-Generating Functions in Data Analytics
    15.3.1 Discrete Event Applications
      15.3.1.1 Coin tossing
      15.3.1.2 Rolling a die
    15.3.2 Modelling of Infectious Diseases
      15.3.2.1 Early extinction probability
        15.3.2.1.1 Models of extinction probability
  References
16 Digital Identity Management System Using Artificial Neural Networks
  16.1 Introduction
  16.2 Digital Identity Metrics
  16.3 Identity Resolution
    16.3.1 Fingerprint and Face Verification Challenges
      16.3.1.1 Fingerprint
      16.3.1.2 Face
  16.4 Biometrics System Architecture
    16.4.1 Fingerprint Recognition
    16.4.2 Face Recognition
  16.5 Information Fusion
  16.6 Artificial Neural Networks
    16.6.1 Artificial Neural Networks Implementation
  16.7 Multimodal Digital Identity Management System Implementation
    16.7.1 Terminal, Fingerprint Scanner and Camera
    16.7.2 Fingerprint and Face Recognition SDKs
    16.7.3 Database
    16.7.4 Verification: Connect to Host and Select Verification
      16.7.4.1 Verifying user
      16.7.4.2 Successful verification
  16.8 Conclusion
  References
17 Probabilistic Neural Network Classifiers for IoT Data Classification
  17.1 Introduction
  17.2 Probabilistic Neural Network (PNN)
  17.3 Generalized Regression Neural Network (GRNN)
  17.4 Vector Quantized GRNN (VQ-GRNN)
  17.5 Experimental Works
  17.6 Conclusion and Future Works
  References
18 MML Learning and Inference of Hierarchical Probabilistic Finite State Machines
  18.1 Introduction
  18.2 Finite State Machines (FSMs) and PFSMs
    18.2.1 Mathematical Definition of a Finite State Machine
    18.2.2 Representation of an FSM in a State Diagram
  18.3 MML Encoding and Inference of PFSMs
    18.3.1 Modelling a PFSM
      18.3.1.1 Assertion code for hypothesis H
      18.3.1.2 Assertion code for data D generated by hypothesis H
    18.3.2 Inference of PFSM Using MML
      18.3.2.1 Inference of PFSM by ordered merging (OM)
        18.3.2.1.1 First stage merge
        18.3.2.1.2 Second stage merge
        18.3.2.1.3 Third stage merge
        18.3.2.1.4 Ordered merging (OM) algorithm
      18.3.2.2 Inference of PFSM using simulated annealing (SA)
        18.3.2.2.1 Simulated annealing (SA)
        18.3.2.2.2 Simulated annealing (SA) algorithm
  18.4 Hierarchical Probabilistic Finite State Machine (HPFSM)
    18.4.1 Defining an HPFSM
    18.4.2 MML Assertion Code for the Hypothesis H of HPFSM
    18.4.3 Encoding the Transitions of HPFSM
  18.5 Experiments
    18.5.1 Experiments on Artificial Datasets
      18.5.1.1 Example-1
      18.5.1.2 Example-2
    18.5.2 Experiments on ADL Datasets
  18.6 Summary
  References
Solution to Exercises
Index
About the Author
Back Cover