
Download the book A Matrix Algebra Approach to Artificial Intelligence

Book Details

A Matrix Algebra Approach to Artificial Intelligence

Edition: 1
Authors:
Series:
ISBN: 9811527695, 9789811527692
Publisher: Springer
Publication year: 2020
Pages: 844
Language: English
File format: PDF (convertible to EPUB or AZW3 on user request)
File size: 8 MB

Price (Toman): 36,000





To have the book A Matrix Algebra Approach to Artificial Intelligence converted to PDF, EPUB, AZW3, MOBI, or DJVU format, contact support and the file will be converted for you.

Note that the book A Matrix Algebra Approach to Artificial Intelligence is in its original language; it is not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.


About the book A Matrix Algebra Approach to Artificial Intelligence

Matrix algebra plays an important role in many core artificial intelligence (AI) areas, including machine learning, neural networks, support vector machines (SVMs) and evolutionary computation. This book offers a comprehensive and in-depth discussion of matrix algebra theory and methods for these four core areas of AI, while also approaching AI from a theoretical matrix algebra perspective.

The book consists of two parts: the first discusses the fundamentals of matrix algebra in detail, while the second focuses on the applications of matrix algebra approaches in AI. Highlighting matrix algebra in graph-based learning and embedding, network embedding, convolutional neural networks and Pareto optimization theory, and discussing recent topics and advances, the book offers a valuable resource for scientists, engineers, and graduate students in various disciplines, including, but not limited to, computer science, mathematics and engineering.  



Table of Contents

Preface
	Structure and Contents
	Features and Contributions
	Audience
	Acknowledgments
A Note from the Family of Dr. Zhang
Contents
List of Notations
List of Figures
List of Tables
List of Algorithms
Part I Introduction to Matrix Algebra
	1 Basic Matrix Computation
		1.1 Basic Concepts of Vectors and Matrices
			1.1.1 Vectors and Matrices
			1.1.2 Basic Vector Calculus
			1.1.3 Basic Matrix Calculus
		1.2 Sets and Linear Mapping
			1.2.1 Sets
			1.2.2 Linear Mapping
		1.3 Norms
			1.3.1 Vector Norms
			1.3.2 Matrix Norms
		1.4 Random Vectors
			1.4.1 Statistical Interpretation of Random Vectors
			1.4.2 Gaussian Random Vectors
		1.5 Basic Performance of Matrices
			1.5.1 Quadratic Forms
			1.5.2 Determinants
			1.5.3 Matrix Eigenvalues
			1.5.4 Matrix Trace
			1.5.5 Matrix Rank
		1.6 Inverse Matrices and Moore–Penrose Inverse Matrices
			1.6.1 Inverse Matrices
			1.6.2 Left and Right Pseudo-Inverse Matrices
			1.6.3 Moore–Penrose Inverse Matrices
		1.7 Direct Sum and Hadamard Product
			1.7.1 Direct Sum of Matrices
			1.7.2 Hadamard Product
		1.8 Kronecker Products
			1.8.1 Definitions of Kronecker Products
			1.8.2 Performance of Kronecker Products
		1.9 Vectorization and Matricization
			1.9.1 Vectorization and Commutation Matrix
			1.9.2 Matricization of Vectors
		Brief Summary of This Chapter
		References
	2 Matrix Differential
		2.1 Jacobian Matrix and Gradient Matrix
			2.1.1 Jacobian Matrix
			2.1.2 Gradient Matrix
			2.1.3 Calculation of Partial Derivative and Gradient
		2.2 Real Matrix Differential
			2.2.1 Calculation of Real Matrix Differential
			2.2.2 Jacobian Matrix Identification
			2.2.3 Jacobian Matrix of Real Matrix Functions
		2.3 Complex Gradient Matrices
			2.3.1 Holomorphic Function and Complex Partial Derivative
			2.3.2 Complex Matrix Differential
			2.3.3 Complex Gradient Matrix Identification
		Brief Summary of This Chapter
		References
	3 Gradient and Optimization
		3.1 Real Gradient
			3.1.1 Stationary Points and Extreme Points
			3.1.2 Real Gradient of f(X)
		3.2 Gradient of Complex Variable Function
			3.2.1 Extreme Point of Complex Variable Function
			3.2.2 Complex Gradient
		3.3 Convex Sets and Convex Function Identification
			3.3.1 Standard Constrained Optimization Problems
			3.3.2 Convex Sets and Convex Functions
			3.3.3 Convex Function Identification
		3.4 Gradient Methods for Smooth Convex Optimization
			3.4.1 Gradient Method
			3.4.2 Projected Gradient Method
			3.4.3 Convergence Rates
		3.5 Nesterov Optimal Gradient Method
			3.5.1 Lipschitz Continuous Function
			3.5.2 Nesterov Optimal Gradient Algorithms
		3.6 Nonsmooth Convex Optimization
			3.6.1 Subgradient and Subdifferential
			3.6.2 Proximal Operator
			3.6.3 Proximal Gradient Method
		3.7 Constrained Convex Optimization
			3.7.1 Penalty Function Method
			3.7.2 Augmented Lagrange Multiplier Method
			3.7.3 Lagrange Dual Method
			3.7.4 Karush–Kuhn–Tucker Conditions
			3.7.5 Alternating Direction Method of Multipliers
		3.8 Newton Methods
			3.8.1 Newton Method for Unconstrained Optimization
			3.8.2 Newton Method for Constrained Optimization
		Brief Summary of This Chapter
		References
	4 Solution of Linear Systems
		4.1 Gauss Elimination
			4.1.1 Elementary Row Operations
			4.1.2 Gauss Elimination for Solving Matrix Equations
			4.1.3 Gauss Elimination for Matrix Inversion
		4.2 Conjugate Gradient Methods
			4.2.1 Conjugate Gradient Algorithm
			4.2.2 Biconjugate Gradient Algorithm
			4.2.3 Preconditioned Conjugate Gradient Algorithm
		4.3 Condition Number of Matrices
		4.4 Singular Value Decomposition (SVD)
			4.4.1 Singular Value Decomposition
			4.4.2 Properties of Singular Values
			4.4.3 Singular Value Thresholding
		4.5 Least Squares Method
			4.5.1 Least Squares Solution
			4.5.2 Rank-Deficient Least Squares Solutions
		4.6 Tikhonov Regularization and Gauss–Seidel Method
			4.6.1 Tikhonov Regularization
			4.6.2 Gauss–Seidel Method
		4.7 Total Least Squares Method
			4.7.1 Total Least Squares Solution
			4.7.2 Performances of TLS Solution
			4.7.3 Generalized Total Least Squares
		4.8 Solution of Under-Determined Systems
			4.8.1 1-Norm Minimization
			4.8.2 Lasso
			4.8.3 LARS
		Brief Summary of This Chapter
		References
	5 Eigenvalue Decomposition
		5.1 Eigenvalue Problem and Characteristic Equation
			5.1.1 Eigenvalue Problem
			5.1.2 Characteristic Polynomial
		5.2 Eigenvalues and Eigenvectors
			5.2.1 Eigenvalues
			5.2.2 Eigenvectors
		5.3 Generalized Eigenvalue Decomposition (GEVD)
			5.3.1 Generalized Eigenvalue Decomposition
			5.3.2 Total Least Squares Method for GEVD
		5.4 Rayleigh Quotient and Generalized Rayleigh Quotient
			5.4.1 Rayleigh Quotient
			5.4.2 Generalized Rayleigh Quotient
			5.4.3 Effectiveness of Class Discrimination
		Brief Summary of This Chapter
		References
Part II Artificial Intelligence
	6 Machine Learning
		6.1 Machine Learning Tree
		6.2 Optimization in Machine Learning
			6.2.1 Single-Objective Composite Optimization
			6.2.2 Gradient Aggregation Methods
			6.2.3 Coordinate Descent Methods
			6.2.4 Benchmark Functions for Single-Objective Optimization
		6.3 Majorization-Minimization Algorithms
			6.3.1 MM Algorithm Framework
			6.3.2 Examples of Majorization-Minimization Algorithms
		6.4 Boosting and Probably Approximately Correct Learning
			6.4.1 Boosting for Weak Learners
			6.4.2 Probably Approximately Correct Learning
		6.5 Basic Theory of Machine Learning
			6.5.1 Learning Machine
			6.5.2 Machine Learning Methods
			6.5.3 Expected Performance of Machine Learning Algorithms
		6.6 Classification and Regression
			6.6.1 Pattern Recognition and Classification
			6.6.2 Regression
		6.7 Feature Selection
			6.7.1 Supervised Feature Selection
			6.7.2 Unsupervised Feature Selection
			6.7.3 Nonlinear Joint Unsupervised Feature Selection
		6.8 Principal Component Analysis
			6.8.1 Principal Component Analysis Basis
			6.8.2 Minor Component Analysis
			6.8.3 Principal Subspace Analysis
			6.8.4 Robust Principal Component Analysis
			6.8.5 Sparse Principal Component Analysis
		6.9 Supervised Learning Regression
			6.9.1 Principal Component Regression
			6.9.2 Partial Least Squares Regression
			6.9.3 Penalized Regression
			6.9.4 Gradient Projection for Sparse Reconstruction
		6.10 Supervised Learning Classification
			6.10.1 Binary Linear Classifiers
			6.10.2 Multiclass Linear Classifiers
		6.11 Supervised Tensor Learning (STL)
			6.11.1 Tensor Algebra Basics
			6.11.2 Supervised Tensor Learning Problems
			6.11.3 Tensor Fisher Discriminant Analysis
			6.11.4 Tensor Learning for Regression
			6.11.5 Tensor K-Means Clustering
		6.12 Unsupervised Clustering
			6.12.1 Similarity Measures
			6.12.2 Hierarchical Clustering
			6.12.3 Fisher Discriminant Analysis (FDA)
			6.12.4 K-Means Clustering
		6.13 Spectral Clustering
			6.13.1 Spectral Clustering Algorithms
			6.13.2 Constrained Spectral Clustering
			6.13.3 Fast Spectral Clustering
		6.14 Semi-Supervised Learning Algorithms
			6.14.1 Semi-Supervised Inductive/Transductive Learning
			6.14.2 Self-Training
			6.14.3 Co-training
		6.15 Canonical Correlation Analysis
			6.15.1 Canonical Correlation Analysis Algorithm
			6.15.2 Kernel Canonical Correlation Analysis
			6.15.3 Penalized Canonical Correlation Analysis
		6.16 Graph Machine Learning
			6.16.1 Graphs
			6.16.2 Graph Laplacian Matrices
			6.16.3 Graph Spectrum
			6.16.4 Graph Signal Processing
			6.16.5 Semi-Supervised Graph Learning: Harmonic Function Method
			6.16.6 Semi-Supervised Graph Learning: Min-Cut Method
			6.16.7 Unsupervised Graph Learning: Sparse Coding Method
		6.17 Active Learning
			6.17.1 Active Learning: Background
			6.17.2 Statistical Active Learning
			6.17.3 Active Learning Algorithms
			6.17.4 Active Learning Based Binary Linear Classifiers
			6.17.5 Active Learning Using Extreme Learning Machine
		6.18 Reinforcement Learning
			6.18.1 Basic Concepts and Theory
			6.18.2 Markov Decision Process (MDP)
		6.19 Q-Learning
			6.19.1 Basic Q-Learning
			6.19.2 Double Q-Learning and Weighted Double Q-Learning
			6.19.3 Online Connectionist Q-Learning Algorithm
			6.19.4 Q-Learning with Experience Replay
		6.20 Transfer Learning
			6.20.1 Notations and Definitions
			6.20.2 Categorization of Transfer Learning
			6.20.3 Boosting for Transfer Learning
			6.20.4 Multitask Learning
			6.20.5 EigenTransfer
		6.21 Domain Adaptation
			6.21.1 Feature Augmentation Method
			6.21.2 Cross-Domain Transform Method
			6.21.3 Transfer Component Analysis Method
		Brief Summary of This Chapter
		References
	7 Neural Networks
		7.1 Neural Network Tree
		7.2 From Modern Neural Networks to Deep Learning
		7.3 Optimization of Neural Networks
			7.3.1 Online Optimization Problems
			7.3.2 Adaptive Gradient Algorithm
			7.3.3 Adaptive Moment Estimation
		7.4 Activation Functions
			7.4.1 Logistic Regression and Sigmoid Function
			7.4.2 Softmax Regression and Softmax Function
			7.4.3 Other Activation Functions
		7.5 Recurrent Neural Networks
			7.5.1 Conventional Recurrent Neural Networks
			7.5.2 Backpropagation Through Time (BPTT)
			7.5.3 Jordan Network and Elman Network
			7.5.4 Bidirectional Recurrent Neural Networks
			7.5.5 Long Short-Term Memory (LSTM)
			7.5.6 Improvement of Long Short-Term Memory
		7.6 Boltzmann Machines
			7.6.1 Hopfield Network and Boltzmann Machines
			7.6.2 Restricted Boltzmann Machine
			7.6.3 Contrastive Divergence Learning
			7.6.4 Multiple Restricted Boltzmann Machines
		7.7 Bayesian Neural Networks
			7.7.1 Naive Bayesian Classification
			7.7.2 Bayesian Classification Theory
			7.7.3 Sparse Bayesian Learning
		7.8 Convolutional Neural Networks
			7.8.1 Hankel Matrix and Convolution
			7.8.2 Pooling Layer
			7.8.3 Activation Functions in CNNs
			7.8.4 Loss Function
		7.9 Dropout Learning
			7.9.1 Dropout for Shallow and Deep Learning
			7.9.2 Dropout Spherical K-Means
			7.9.3 DropConnect
		7.10 Autoencoders
			7.10.1 Basic Autoencoder
			7.10.2 Stacked Sparse Autoencoder
			7.10.3 Stacked Denoising Autoencoders
			7.10.4 Convolutional Autoencoders (CAE)
			7.10.5 Stacked Convolutional Denoising Autoencoder
			7.10.6 Nonnegative Sparse Autoencoder
		7.11 Extreme Learning Machine
			7.11.1 Single-Hidden Layer Feedforward Networks with Random Hidden Nodes
			7.11.2 Extreme Learning Machine Algorithm for Regression and Binary Classification
			7.11.3 Extreme Learning Machine Algorithm for Multiclass Classification
		7.12 Graph Embedding
			7.12.1 Proximity Measures and Graph Embedding
			7.12.2 Multidimensional Scaling
			7.12.3 Manifold Learning: Isometric Map
			7.12.4 Manifold Learning: Locally Linear Embedding
			7.12.5 Manifold Learning: Laplacian Eigenmap
		7.13 Network Embedding
			7.13.1 Structure and Property Preserving Network Embedding
			7.13.2 Community Preserving Network Embedding
			7.13.3 Higher-Order Proximity Preserved Network Embedding
		7.14 Neural Networks on Graphs
			7.14.1 Graph Neural Networks (GNNs)
			7.14.2 DeepWalk and GraphSAGE
				DeepWalk
				GraphSAGE
			7.14.3 Graph Convolutional Networks (GCNs)
		7.15 Batch Normalization Networks
			7.15.1 Batch Normalization
				The Effect of BatchNorm on the Lipschitzness
				The Effect of BatchNorm on Smoothness
				BatchNorm Leads to a Favorable Initialization
			7.15.2 Variants and Extensions of Batch Normalization
				Batch Renormalization
				Layer Normalization (LN)
				Instance Normalization (IN)
				Group Normalization (GN)
		7.16 Generative Adversarial Networks (GANs)
			7.16.1 Generative Adversarial Network Framework
			7.16.2 Bidirectional Generative Adversarial Networks
			7.16.3 Variational Autoencoders
		Brief Summary of This Chapter
		References
	8 Support Vector Machines
		8.1 Support Vector Machines: Basic Theory
			8.1.1 Statistical Learning Theory
			8.1.2 Linear Support Vector Machines
		8.2 Kernel Regression Methods
			8.2.1 Reproducing Kernel and Mercer Kernel
			8.2.2 Representer Theorem and Kernel Regression
			8.2.3 Semi-Supervised and Graph Regression
			8.2.4 Kernel Partial Least Squares Regression
			8.2.5 Laplacian Support Vector Machines
		8.3 Support Vector Machine Regression
			8.3.1 Support Vector Machine Regressor
			8.3.2 ε-Support Vector Regression
			8.3.3 ν-Support Vector Machine Regression
		8.4 Support Vector Machine Binary Classification
			8.4.1 Support Vector Machine Binary Classifier
			8.4.2 ν-Support Vector Machine Binary Classifier
			8.4.3 Least Squares SVM Binary Classifier
			8.4.4 Proximal Support Vector Machine Binary Classifier
			8.4.5 SVM-Recursive Feature Elimination
		8.5 Support Vector Machine Multiclass Classification
			8.5.1 Decomposition Methods for Multiclass Classification
			8.5.2 Least Squares SVM Multiclass Classifier
			8.5.3 Proximal Support Vector Machine Multiclass Classifier
		8.6 Gaussian Process for Regression and Classification
			8.6.1 Joint, Marginal, and Conditional Probabilities
			8.6.2 Gaussian Process
			8.6.3 Gaussian Process Regression
			8.6.4 Gaussian Process Classification
		8.7 Relevance Vector Machine
			8.7.1 Sparse Bayesian Regression
			8.7.2 Sparse Bayesian Classification
			8.7.3 Fast Marginal Likelihood Maximization
		Brief Summary of This Chapter
		References
	9 Evolutionary Computation
		9.1 Evolutionary Computation Tree
		9.2 Multiobjective Optimization
			9.2.1 Multiobjective Combinatorial Optimization
			9.2.2 Multiobjective Optimization Problems
		9.3 Pareto Optimization Theory
			9.3.1 Pareto Concepts
			9.3.2 Fitness Selection Approach
			9.3.3 Nondominated Sorting Approach
			9.3.4 Crowding Distance Assignment Approach
			9.3.5 Hierarchical Clustering Approach
			9.3.6 Benchmark Functions for Multiobjective Optimization
		9.4 Noisy Multiobjective Optimization
			9.4.1 Pareto Concepts for Noisy Multiobjective Optimization
			9.4.2 Performance Metrics for Approximation Sets
		9.5 Multiobjective Simulated Annealing
			9.5.1 Principle of Simulated Annealing
			9.5.2 Multiobjective Simulated Annealing Algorithm
			9.5.3 Archived Multiobjective Simulated Annealing
		9.6 Genetic Algorithm
			9.6.1 Basic Genetic Algorithm Operations
			9.6.2 Genetic Algorithm with Gene Rearrangement Clustering
		9.7 Nondominated Multiobjective Genetic Algorithms
			9.7.1 Fitness Functions
			9.7.2 Fitness Selection
			9.7.3 Nondominated Sorting Genetic Algorithms
			9.7.4 Elitist Nondominated Sorting Genetic Algorithm
		9.8 Evolutionary Algorithms (EAs)
			9.8.1 (1+1) Evolutionary Algorithm
			9.8.2 Theoretical Analysis on Evolutionary Algorithms
		9.9 Multiobjective Evolutionary Algorithms
			9.9.1 Classical Methods for Solving Multiobjective Optimization Problems
			9.9.2 MOEA Based on Decomposition (MOEA/D)
			9.9.3 Strength Pareto Evolutionary Algorithm
			9.9.4 Achievement Scalarizing Functions
		9.10 Evolutionary Programming
			9.10.1 Classical Evolutionary Programming
			9.10.2 Fast Evolutionary Programming
			9.10.3 Hybrid Evolutionary Programming
		9.11 Differential Evolution
			9.11.1 Classical Differential Evolution
			9.11.2 Differential Evolution Variants
		9.12 Ant Colony Optimization
			9.12.1 Real Ants and Artificial Ants
			9.12.2 Typical Ant Colony Optimization Problems
			9.12.3 Ant System and Ant Colony System
		9.13 Multiobjective Artificial Bee Colony Algorithms
			9.13.1 Artificial Bee Colony Algorithms
			9.13.2 Variants of ABC Algorithms
		9.14 Particle Swarm Optimization
			9.14.1 Basic Concepts
			9.14.2 The Canonical Particle Swarm
			9.14.3 Genetic Learning Particle Swarm Optimization
			9.14.4 Particle Swarm Optimization for Feature Selection
		9.15 Opposition-Based Evolutionary Computation
			9.15.1 Opposition-Based Learning
			9.15.2 Opposition-Based Differential Evolution
			9.15.3 Two Variants of Opposition-Based Learning
		Brief Summary of This Chapter
		References
Index



