Edition: 1
Author(s): Xian-Da Zhang
Series:
ISBN: 1108417418, 9781108417419
Publisher: Cambridge University Press
Publication year: 2017
Number of pages: 760
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 4 MB
If you need the book Matrix Analysis and Applications converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and the file will be converted to the requested format.
Please note that Matrix Analysis and Applications is the original-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.
This balanced and comprehensive study presents the theory, methods and applications of matrix analysis in a new theoretical framework, allowing readers to understand second-order and higher-order matrix analysis in a completely new light. Alongside the core subjects in matrix analysis, such as singular value analysis, the solution of matrix equations and eigenanalysis, the author introduces new applications and perspectives that are unique to this book. The very topical subjects of gradient analysis and optimization play a central role here. Also included are subspace analysis, projection analysis and tensor analysis, subjects which are often neglected in other books. Having provided a solid foundation to the subject, the author goes on to place particular emphasis on the many applications matrix analysis has in science and engineering, making this book suitable for scientists, engineers and graduate students alike.
Contents

Preface
Notation
Abbreviations
Algorithms

PART I MATRIX ALGEBRA

1 Introduction to Matrix Algebra
  1.1 Basic Concepts of Vectors and Matrices
    1.1.1 Vectors and Matrices
    1.1.2 Basic Vector Calculus
    1.1.3 Basic Matrix Calculus
    1.1.4 Linear Independence of Vectors
    1.1.5 Matrix Functions
  1.2 Elementary Row Operations and Applications
    1.2.1 Elementary Row Operations
    1.2.2 Gauss Elimination Methods
  1.3 Sets, Vector Subspaces and Linear Mapping
    1.3.1 Sets
    1.3.2 Fields and Vector Spaces
    1.3.3 Linear Mapping
  1.4 Inner Products and Vector Norms
    1.4.1 Inner Products of Vectors
    1.4.2 Norms of Vectors
    1.4.3 Similarity Comparison Between Vectors
    1.4.4 Banach Space, Euclidean Space, Hilbert Space
    1.4.5 Inner Products and Norms of Matrices
  1.5 Random Vectors
    1.5.1 Statistical Interpretation of Random Vectors
    1.5.2 Gaussian Random Vectors
  1.6 Performance Indexes of Matrices
    1.6.1 Quadratic Forms
    1.6.2 Determinants
    1.6.3 Matrix Eigenvalues
    1.6.4 Matrix Trace
    1.6.5 Matrix Rank
  1.7 Inverse Matrices and Pseudo-Inverse Matrices
    1.7.1 Definition and Properties of Inverse Matrices
    1.7.2 Matrix Inversion Lemma
    1.7.3 Inversion of Hermitian Matrices
    1.7.4 Left and Right Pseudo-Inverse Matrices
  1.8 Moore–Penrose Inverse Matrices
    1.8.1 Definition and Properties
    1.8.2 Computation of Moore–Penrose Inverse Matrix
  1.9 Direct Sum and Hadamard Product
    1.9.1 Direct Sum of Matrices
    1.9.2 Hadamard Product
  1.10 Kronecker Products and Khatri–Rao Product
    1.10.1 Kronecker Products
    1.10.2 Generalized Kronecker Products
    1.10.3 Khatri–Rao Product
  1.11 Vectorization and Matricization
    1.11.1 Vectorization and Commutation Matrix
    1.11.2 Matricization of a Vector
    1.11.3 Properties of Vectorization Operator
  1.12 Sparse Representations
    1.12.1 Sparse Vectors and Sparse Representations
    1.12.2 Sparse Representation of Face Recognition
  Exercises

2 Special Matrices
  2.1 Hermitian Matrices
  2.2 Idempotent Matrix
  2.3 Permutation Matrix
    2.3.1 Permutation Matrix and Exchange Matrix
    2.3.2 Generalized Permutation Matrix
  2.4 Orthogonal Matrix and Unitary Matrix
  2.5 Band Matrix and Triangular Matrix
    2.5.1 Band Matrix
    2.5.2 Triangular Matrix
  2.6 Summing Vector and Centering Matrix
    2.6.1 Summing Vector
    2.6.2 Centering Matrix
  2.7 Vandermonde Matrix and Fourier Matrix
    2.7.1 Vandermonde Matrix
    2.7.2 Fourier Matrix
    2.7.3 Index Vectors
    2.7.4 FFT Algorithm
  2.8 Hadamard Matrix
  2.9 Toeplitz Matrix
    2.9.1 Symmetric Toeplitz Matrix
    2.9.2 Discrete Cosine Transform of Toeplitz Matrix
  Exercises

3 Matrix Differential
  3.1 Jacobian Matrix and Gradient Matrix
    3.1.1 Jacobian Matrix
    3.1.2 Gradient Matrix
    3.1.3 Calculation of Partial Derivative and Gradient
  3.2 Real Matrix Differential
    3.2.1 Calculation of Real Matrix Differential
    3.2.2 Jacobian Matrix Identification
    3.2.3 Jacobian Matrix of Real Matrix Functions
  3.3 Real Hessian Matrix and Identification
    3.3.1 Real Hessian Matrix
    3.3.2 Real Hessian Matrix Identification
  3.4 Complex Gradient Matrices
    3.4.1 Holomorphic Function and Complex Partial Derivative
    3.4.2 Complex Matrix Differential
    3.4.3 Complex Gradient Matrix Identification
  3.5 Complex Hessian Matrices and Identification
    3.5.1 Complex Hessian Matrices
    3.5.2 Complex Hessian Matrix Identification
  Exercises

PART II MATRIX ANALYSIS

4 Gradient Analysis and Optimization
  4.1 Real Gradient Analysis
    4.1.1 Stationary Points and Extreme Points
    4.1.2 Real Gradient Analysis of f(x)
    4.1.3 Real Gradient Analysis of f(X)
  4.2 Gradient Analysis of Complex Variable Function
    4.2.1 Extreme Point of Complex Variable Function
    4.2.2 Complex Gradient Analysis
  4.3 Convex Sets and Convex Function Identification
    4.3.1 Standard Constrained Optimization Problems
    4.3.2 Convex Sets and Convex Functions
    4.3.3 Convex Function Identification
  4.4 Gradient Methods for Smooth Convex Optimization
    4.4.1 Gradient Method
    4.4.2 Conjugate Gradient Method
    4.4.3 Convergence Rates
  4.5 Nesterov Optimal Gradient Method
    4.5.1 Lipschitz Continuous Function
    4.5.2 Nesterov Optimal Gradient Algorithms
  4.6 Nonsmooth Convex Optimization
    4.6.1 Subgradient and Subdifferential
    4.6.2 Proximal Operator
    4.6.3 Proximal Gradient Method
  4.7 Constrained Convex Optimization
    4.7.1 Lagrange Multiplier Method
    4.7.2 Penalty Function Method
    4.7.3 Augmented Lagrange Multiplier Method
    4.7.4 Lagrangian Dual Method
    4.7.5 Karush–Kuhn–Tucker Conditions
    4.7.6 Alternating Direction Method of Multipliers
  4.8 Newton Methods
    4.8.1 Newton Method for Unconstrained Optimization
    4.8.2 Newton Method for Constrained Optimization
  4.9 Original–Dual Interior-Point Method
    4.9.1 Original–Dual Problems
    4.9.2 First-Order Original–Dual Interior-Point Method
    4.9.3 Second-Order Original–Dual Interior-Point Method
  Exercises

5 Singular Value Analysis
  5.1 Numerical Stability and Condition Number
  5.2 Singular Value Decomposition (SVD)
    5.2.1 Singular Value Decomposition
    5.2.2 Properties of Singular Values
    5.2.3 Rank-Deficient Least Squares Solutions
  5.3 Product Singular Value Decomposition (PSVD)
    5.3.1 PSVD Problem
    5.3.2 Accurate Calculation of PSVD
  5.4 Applications of Singular Value Decomposition
    5.4.1 Static Systems
    5.4.2 Image Compression
  5.5 Generalized Singular Value Decomposition (GSVD)
    5.5.1 Definition and Properties
    5.5.2 Algorithms for GSVD
    5.5.3 Two Application Examples of GSVD
  5.6 Low-Rank–Sparse Matrix Decomposition
    5.6.1 Matrix Decomposition Problems
    5.6.2 Singular Value Thresholding
    5.6.3 Robust Principal Component Analysis
  5.7 Matrix Completion
    5.7.1 Matrix Completion Problems
    5.7.2 Matrix Completion Model and Incoherence
    5.7.3 Singular Value Thresholding Algorithm
    5.7.4 Fast and Accurate Matrix Completion
  Exercises

6 Solving Matrix Equations
  6.1 Least Squares Method
    6.1.1 Ordinary Least Squares Methods
    6.1.2 Properties of Least Squares Solutions
    6.1.3 Data Least Squares
  6.2 Tikhonov Regularization and Gauss–Seidel Method
    6.2.1 Tikhonov Regularization
    6.2.2 Regularized Gauss–Seidel Method
  6.3 Total Least Squares (TLS) Methods
    6.3.1 TLS Problems
    6.3.2 TLS Solution
    6.3.3 Performances of TLS Solution
    6.3.4 Generalized Total Least Squares
    6.3.5 Total Least Squares Fitting
    6.3.6 Total Maximum Likelihood Method
  6.4 Constrained Total Least Squares
    6.4.1 Constrained Total Least Squares Method
    6.4.2 Harmonic Superresolution
    6.4.3 Image Restoration
  6.5 Subspace Method for Solving Blind Matrix Equations
  6.6 Nonnegative Matrix Factorization: Optimization Theory
    6.6.1 Nonnegative Matrices
    6.6.2 Nonnegativity and Sparsity Constraints
    6.6.3 Nonnegative Matrix Factorization Model
    6.6.4 Divergences and Deformed Logarithm
  6.7 Nonnegative Matrix Factorization: Optimization Algorithms
    6.7.1 Multiplication Algorithms
    6.7.2 Nesterov Optimal Gradient Algorithm
    6.7.3 Alternating Nonnegative Least Squares
    6.7.4 Quasi-Newton Method
    6.7.5 Sparse Nonnegative Matrix Factorization
  6.8 Sparse Matrix Equation Solving: Optimization Theory
    6.8.1 ℓ1-Norm Minimization
    6.8.2 Lasso and Robust Linear Regression
    6.8.3 Mutual Coherence and RIP Conditions
    6.8.4 Relation to Tikhonov Regularization
    6.8.5 Gradient Analysis of ℓ1-Norm Minimization
  6.9 Sparse Matrix Equation Solving: Optimization Algorithms
    6.9.1 Basis Pursuit Algorithms
    6.9.2 First-Order Augmented Lagrangian Algorithm
    6.9.3 Barzilai–Borwein Gradient Projection Algorithm
    6.9.4 ADMM Algorithms for Lasso Problems
    6.9.5 LARS Algorithms for Lasso Problems
    6.9.6 Covariance Graphical Lasso Method
    6.9.7 Homotopy Algorithm
    6.9.8 Bregman Iteration Algorithms
  Exercises

7 Eigenanalysis
  7.1 Eigenvalue Problem and Characteristic Equation
    7.1.1 Eigenvalue Problem
    7.1.2 Characteristic Polynomial
  7.2 Eigenvalues and Eigenvectors
    7.2.1 Eigenvalues
    7.2.2 Eigenvectors
  7.3 Similarity Reduction
    7.3.1 Similarity Transformation of Matrices
    7.3.2 Similarity Reduction of Matrices
    7.3.3 Similarity Reduction of Matrix Polynomials
  7.4 Polynomial Matrices and Balanced Reduction
    7.4.1 Smith Normal Forms
    7.4.2 Invariant Factor Method
    7.4.3 Conversion of Jordan Form and Smith Form
    7.4.4 Finding Smith Blocks from Jordan Blocks
    7.4.5 Finding Jordan Blocks from Smith Blocks
  7.5 Cayley–Hamilton Theorem with Applications
    7.5.1 Cayley–Hamilton Theorem
    7.5.2 Computation of Inverse Matrices
    7.5.3 Computation of Matrix Powers
    7.5.4 Calculation of Matrix Exponential Functions
  7.6 Application Examples of Eigenvalue Decomposition
    7.6.1 Pisarenko Harmonic Decomposition
    7.6.2 Discrete Karhunen–Loeve Transformation
    7.6.3 Principal Component Analysis
  7.7 Generalized Eigenvalue Decomposition (GEVD)
    7.7.1 Generalized Eigenvalue Decomposition
    7.7.2 Total Least Squares Method for GEVD
    7.7.3 Application of GEVD: ESPRIT
    7.7.4 Similarity Transformation in GEVD
  7.8 Rayleigh Quotient
    7.8.1 Definition and Properties of Rayleigh Quotient
    7.8.2 Rayleigh Quotient Iteration
    7.8.3 Algorithms for Rayleigh Quotient
  7.9 Generalized Rayleigh Quotient
    7.9.1 Definition and Properties
    7.9.2 Effectiveness of Class Discrimination
    7.9.3 Robust Beamforming
  7.10 Quadratic Eigenvalue Problems
    7.10.1 Description of Quadratic Eigenvalue Problems
    7.10.2 Solving Quadratic Eigenvalue Problems
    7.10.3 Application Examples
  7.11 Joint Diagonalization
    7.11.1 Joint Diagonalization Problems
    7.11.2 Orthogonal Approximate Joint Diagonalization
    7.11.3 Nonorthogonal Approximate Joint Diagonalization
  Exercises

8 Subspace Analysis and Tracking
  8.1 General Theory of Subspaces
    8.1.1 Bases of Subspaces
    8.1.2 Disjoint Subspaces and Orthogonal Complement
  8.2 Column Space, Row Space and Null Space
    8.2.1 Definitions and Properties
    8.2.2 Subspace Basis Construction
    8.2.3 SVD-Based Orthonormal Basis Construction
    8.2.4 Basis Construction of Subspaces Intersection
  8.3 Subspace Methods
    8.3.1 Signal Subspace and Noise Subspace
    8.3.2 Multiple Signal Classification (MUSIC)
    8.3.3 Subspace Whitening
  8.4 Grassmann Manifold and Stiefel Manifold
    8.4.1 Equivalent Subspaces
    8.4.2 Grassmann Manifold
    8.4.3 Stiefel Manifold
  8.5 Projection Approximation Subspace Tracking (PAST)
    8.5.1 Basic PAST Theory
    8.5.2 PAST Algorithms
  8.6 Fast Subspace Decomposition
    8.6.1 Rayleigh–Ritz Approximation
    8.6.2 Fast Subspace Decomposition Algorithm
  Exercises

9 Projection Analysis
  9.1 Projection and Orthogonal Projection
    9.1.1 Projection Theorem
    9.1.2 Mean Square Estimation
  9.2 Projectors and Projection Matrices
    9.2.1 Projector and Orthogonal Projector
    9.2.2 Projection Matrices
    9.2.3 Derivatives of Projection Matrix
  9.3 Updating of Projection Matrices
    9.3.1 Updating Formulas for Projection Matrices
    9.3.2 Prediction Filters
    9.3.3 Updating of Lattice Adaptive Filter
  9.4 Oblique Projector of Full Column Rank Matrix
    9.4.1 Definition and Properties of Oblique Projectors
    9.4.2 Geometric Interpretation of Oblique Projectors
    9.4.3 Recursion of Oblique Projectors
  9.5 Oblique Projector of Full Row Rank Matrices
    9.5.1 Definition and Properties
    9.5.2 Calculation of Oblique Projection
    9.5.3 Applications of Oblique Projectors
  Exercises

PART III HIGHER-ORDER MATRIX ANALYSIS

10 Tensor Analysis
  10.1 Tensors and their Presentation
    10.1.1 Tensors
    10.1.2 Tensor Representation
  10.2 Vectorization and Matricization of Tensors
    10.2.1 Vectorization and Horizontal Unfolding
    10.2.2 Longitudinal Unfolding of Tensors
  10.3 Basic Algebraic Operations of Tensors
    10.3.1 Inner Product, Norm and Outer Product
    10.3.2 Mode-n Product of Tensors
    10.3.3 Rank of Tensor
  10.4 Tucker Decomposition of Tensors
    10.4.1 Tucker Decomposition (Higher-Order SVD)
    10.4.2 Third-Order SVD
    10.4.3 Alternating Least Squares Algorithms
  10.5 Parallel Factor Decomposition of Tensors
    10.5.1 Bilinear Model
    10.5.2 Parallel Factor Analysis
    10.5.3 Uniqueness Condition
    10.5.4 Alternating Least Squares Algorithm
  10.6 Applications of Low-Rank Tensor Decomposition
    10.6.1 Multimodal Data Fusion
    10.6.2 Fusion of Multimodal Brain Images
    10.6.3 Process Monitoring
    10.6.4 Note on Other Applications
  10.7 Tensor Eigenvalue Decomposition
    10.7.1 Tensor–Vector Products
    10.7.2 Determinants and Eigenvalues of Tensors
    10.7.3 Generalized Tensor Eigenvalues Problems
    10.7.4 Orthogonal Decomposition of Symmetric Tensors
  10.8 Preprocessing and Postprocessing
    10.8.1 Centering and Scaling of Multi-Way Data
    10.8.2 Compression of Data Array
  10.9 Nonnegative Tensor Decomposition Algorithms
    10.9.1 Multiplication Algorithm
    10.9.2 ALS Algorithms
  10.10 Tensor Completion
    10.10.1 Simultaneous Tensor Decomposition and Completion
    10.10.2 Smooth PARAFAC Tensor Completion
  10.11 Software
  Exercises

References
Index