
You can contact us by phone call or SMS at the mobile numbers below:

09117307688
09117179751

If calls go unanswered, please contact support via SMS.


Download the book Tensors for Data Processing: Theory, Methods, and Applications

Book details

Tensors for data processing: theory, methods, and applications

Edition:
Authors:
Series:
ISBN: 9780323859653, 0323859658
Publisher: Elsevier Science & Technology
Publication year: 2021
Number of pages: 1 online resource [598]
Language: English
File format: PDF (can be converted to EPUB or AZW3 upon request)
File size: 18 MB

Book price (Toman): 40,000



Average rating for this book:
Number of raters: 7


If you would like the file of Tensors for data processing: theory, methods, and applications converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify the support team and they will convert the file for you.

Please note that Tensors for Data Processing: Theory, Methods, and Applications is the original-language (English) edition and is not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.


About the book Tensors for Data Processing: Theory, Methods, and Applications

Tensors for Data Processing (2021) [Liu] [9780128244470]



Table of contents

Front Cover
Tensors for Data Processing
Copyright
Contents
List of contributors
Preface
1 Tensor decompositions: computations, applications, and challenges
	1.1 Introduction
		1.1.1 What is a tensor?
		1.1.2 Why do we need tensors?
	1.2 Tensor operations
		1.2.1 Tensor notations
		1.2.2 Matrix operators
		1.2.3 Tensor transformations
		1.2.4 Tensor products
		1.2.5 Structural tensors
		1.2.6 Summary
	1.3 Tensor decompositions
		1.3.1 Tucker decomposition
		1.3.2 Canonical polyadic decomposition
		1.3.3 Block term decomposition
		1.3.4 Tensor singular value decomposition
		1.3.5 Tensor network
			1.3.5.1 Hierarchical Tucker decomposition
			1.3.5.2 Tensor train decomposition
			1.3.5.3 Tensor ring decomposition
			1.3.5.4 Other variants
	1.4 Tensor processing techniques
	1.5 Challenges
	References
2 Transform-based tensor singular value decomposition in multidimensional image recovery
	2.1 Introduction
	2.2 Recent advances of the tensor singular value decomposition
		2.2.1 Preliminaries and basic tensor notations
		2.2.2 The t-SVD framework
		2.2.3 Tensor nuclear norm and tensor recovery
		2.2.4 Extensions
			2.2.4.1 Nonconvex surrogates
			2.2.4.2 Additional prior knowledge
			2.2.4.3 Multiple directions and higher-order tensors
		2.2.5 Summary
	2.3 Transform-based t-SVD
		2.3.1 Linear invertible transform-based t-SVD
		2.3.2 Beyond invertibility and data adaptivity
	2.4 Numerical experiments
		2.4.1 Examples within the t-SVD framework
		2.4.2 Examples of the transform-based t-SVD
	2.5 Conclusions and new guidelines
	References
3 Partensor
	3.1 Introduction
		3.1.1 Related work
		3.1.2 Notation
	3.2 Tensor decomposition
		3.2.1 Matrix least-squares problems
			3.2.1.1 The unconstrained case
			3.2.1.2 The nonnegative case
			3.2.1.3 The orthogonal case
		3.2.2 Alternating optimization for tensor decomposition
	3.3 Tensor decomposition with missing elements
		3.3.1 Matrix least-squares with missing elements
			3.3.1.1 The unconstrained case
			3.3.1.2 The nonnegative case
		3.3.2 Tensor decomposition with missing elements: the unconstrained case
		3.3.3 Tensor decomposition with missing elements: the nonnegative case
		3.3.4 Alternating optimization for tensor decomposition with missing elements
	3.4 Distributed memory implementations
		3.4.1 Some MPI preliminaries
			3.4.1.1 Communication domains and topologies
			3.4.1.2 Synchronization among processes
			3.4.1.3 Point-to-point communication operations
			3.4.1.4 Collective communication operations
			3.4.1.5 Derived data types
		3.4.2 Variable partitioning and data allocation
			3.4.2.1 Communication domains
		3.4.3 Tensor decomposition
			3.4.3.1 The unconstrained and the nonnegative case
			3.4.3.2 The orthogonal case
			3.4.3.3 Factor normalization and acceleration
		3.4.4 Tensor decomposition with missing elements
			3.4.4.1 The unconstrained case
			3.4.4.2 The nonnegative case
		3.4.5 Some implementation details
	3.5 Numerical experiments
		3.5.1 Tensor decomposition
		3.5.2 Tensor decomposition with missing elements
	3.6 Conclusion
	Acknowledgment
	References
4 A Riemannian approach to low-rank tensor learning
	4.1 Introduction
	4.2 A brief introduction to Riemannian optimization
		4.2.1 Riemannian manifolds
			4.2.1.1 Riemannian gradient
			4.2.1.2 Retraction
		4.2.2 Riemannian quotient manifolds
			4.2.2.1 Riemannian gradient on quotient manifold
			4.2.2.2 Retraction on quotient manifold
	4.3 Riemannian Tucker manifold geometry
		4.3.1 Riemannian metric and quotient manifold structure
			4.3.1.1 The symmetry structure in Tucker decomposition
			4.3.1.2 A metric motivated by a particular cost function
			4.3.1.3 A novel Riemannian metric
		4.3.2 Characterization of the induced spaces
			4.3.2.1 Characterization of the normal space
			4.3.2.2 Decomposition of tangent space into vertical and horizontal spaces
		4.3.3 Linear projectors
			4.3.3.1 The tangent space projector
			4.3.3.2 The horizontal space projector
		4.3.4 Retraction
		4.3.5 Vector transport
		4.3.6 Computational cost
	4.4 Algorithms for tensor learning problems
		4.4.1 Tensor completion
		4.4.2 General tensor learning
	4.5 Experiments
		4.5.1 Choice of metric
		4.5.2 Low-rank tensor completion
			4.5.2.1 Small-scale instances
			4.5.2.2 Large-scale instances
			4.5.2.3 Low sampling instances
			4.5.2.4 Ill-conditioned and low sampling instances
			4.5.2.5 Noisy instances
			4.5.2.6 Skewed dimensional instances
			4.5.2.7 Ribeira dataset
			4.5.2.8 MovieLens 10M dataset
		4.5.3 Low-rank tensor regression
		4.5.4 Multilinear multitask learning
	4.6 Conclusion
	References
5 Generalized thresholding for low-rank tensor recovery: approaches based on model and learning
	5.1 Introduction
	5.2 Tensor singular value thresholding
		5.2.1 Proximity operator and generalized thresholding
		5.2.2 Tensor singular value decomposition
		5.2.3 Generalized matrix singular value thresholding
		5.2.4 Generalized tensor singular value thresholding
	5.3 Thresholding based low-rank tensor recovery
		5.3.1 Thresholding algorithms for low-rank tensor recovery
		5.3.2 Generalized thresholding algorithms for low-rank tensor recovery
	5.4 Generalized thresholding algorithms with learning
		5.4.1 Deep unrolling
		5.4.2 Deep plug-and-play
	5.5 Numerical examples
	5.6 Conclusion
	References
6 Tensor principal component analysis
	6.1 Introduction
	6.2 Notations and preliminaries
		6.2.1 Notations
		6.2.2 Discrete Fourier transform
		6.2.3 T-product
		6.2.4 Summary
	6.3 Tensor PCA for Gaussian-noisy data
		6.3.1 Tensor rank and tensor nuclear norm
		6.3.2 Analysis of tensor PCA on Gaussian-noisy data
		6.3.3 Summary
	6.4 Tensor PCA for sparsely corrupted data
		6.4.1 Robust tensor PCA
			6.4.1.1 Tensor incoherence conditions
			6.4.1.2 Exact recovery guarantee of R-TPCA
			6.4.1.3 Optimization algorithm
		6.4.2 Tensor low-rank representation
			6.4.2.1 Tensor linear representation
			6.4.2.2 TLRR for data clustering
			6.4.2.3 TLRR for exact data recovery
			6.4.2.4 Optimization technique
			6.4.2.5 Dictionary construction
		6.4.3 Applications
			6.4.3.1 Application to data recovery
			6.4.3.2 Application to data clustering
		6.4.4 Summary
	6.5 Tensor PCA for outlier-corrupted data
		6.5.1 Outlier robust tensor PCA
			6.5.1.1 Formulation of OR-TPCA
			6.5.1.2 Exact subspace recovery guarantees
			6.5.1.3 Optimization
		6.5.2 The fast OR-TPCA algorithm
			6.5.2.1 Sketch of fast OR-TPCA
			6.5.2.2 Guarantees for fast OR-TPCA
		6.5.3 Applications
			6.5.3.1 Evaluation on synthetic data
			6.5.3.2 Evaluation on real applications
			6.5.3.3 Outlier detection
			6.5.3.4 Unsupervised and semi-supervised learning
			6.5.3.5 Experiments on fast OR-TPCA
		6.5.4 Summary
	6.6 Other tensor PCA methods
	6.7 Future work
	6.8 Summary
	References
7 Tensors for deep learning theory
	7.1 Introduction
	7.2 Bounding a function's expressivity via tensorization
		7.2.1 A measure of capacity for modeling input dependencies
		7.2.2 Bounding correlations with tensor matricization ranks
	7.3 A case study: self-attention networks
		7.3.1 The self-attention mechanism
			7.3.1.1 The operation of a self-attention layer
			7.3.1.2 Partition invariance of the self-attention separation rank
		7.3.2 Self-attention architecture expressivity questions
			7.3.2.1 The depth-to-width interplay in self-attention
			7.3.2.2 The input embedding rank bottleneck in self-attention
			7.3.2.3 Mid-architecture rank bottlenecks in self-attention
		7.3.3 Results on the operation of self-attention
			7.3.3.1 The effect of depth in self-attention networks
			7.3.3.2 The effect of bottlenecks in self-attention networks
		7.3.4 Bounding the separation rank of self-attention
			7.3.4.1 An upper bound on the separation rank
			7.3.4.2 A lower bound on the separation rank
	7.4 Convolutional and recurrent networks
		7.4.1 The operation of convolutional and recurrent networks
		7.4.2 Addressed architecture expressivity questions
			7.4.2.1 Depth efficiency in convolutional and recurrent networks
			7.4.2.2 Further results on convolutional networks
	7.5 Conclusion
	References
8 Tensor network algorithms for image classification
	8.1 Introduction
	8.2 Background
		8.2.1 Tensor basics
		8.2.2 Tensor decompositions
			8.2.2.1 Rank-1 tensor decomposition
			8.2.2.2 Canonical polyadic decomposition
			8.2.2.3 Tucker decomposition
			8.2.2.4 Tensor train decomposition
		8.2.3 Support vector machines
		8.2.4 Logistic regression
	8.3 Tensorial extensions of support vector machine
		8.3.1 Supervised tensor learning
		8.3.2 Support tensor machines
			8.3.2.1 Methodology
			8.3.2.2 Examples
			8.3.2.3 Conclusion
		8.3.3 Higher-rank support tensor machines
			8.3.3.1 Methodology
			8.3.3.2 Complexity analysis
			8.3.3.3 Examples
			8.3.3.4 Conclusion
		8.3.4 Support Tucker machines
			8.3.4.1 Methodology
			8.3.4.2 Examples
		8.3.5 Support tensor train machines
			8.3.5.1 Methodology
			8.3.5.2 Complexity analysis
			8.3.5.3 Effect of TT ranks on STTM classification
			8.3.5.4 Updating in site-k-mixed-canonical form
			8.3.5.5 Examples
			8.3.5.6 Conclusion
		8.3.6 Kernelized support tensor train machines
			8.3.6.1 Methodology
			8.3.6.2 Kernel validity of K-STTM
			8.3.6.3 Complexity analysis
			8.3.6.4 Examples
			8.3.6.5 Conclusion
	8.4 Tensorial extension of logistic regression
		8.4.1 Rank-1 logistic regression
			8.4.1.1 Examples
		8.4.2 Logistic tensor regression
			8.4.2.1 Examples
	8.5 Conclusion
	References
9 High-performance tensor decompositions for compressing and accelerating deep neural networks
	9.1 Introduction and motivation
	9.2 Deep neural networks
		9.2.1 Notations
		9.2.2 Linear layer
		9.2.3 Fully connected neural networks
		9.2.4 Convolutional neural networks
		9.2.5 Backpropagation
	9.3 Tensor networks and their decompositions
		9.3.1 Tensor networks
		9.3.2 CP tensor decomposition
		9.3.3 Tucker decomposition
		9.3.4 Hierarchical Tucker decomposition
		9.3.5 Tensor train and tensor ring decomposition
		9.3.6 Transform-based tensor decomposition
	9.4 Compressing deep neural networks
		9.4.1 Compressing fully connected layers
		9.4.2 Compressing the convolutional layer via CP decomposition
		9.4.3 Compressing the convolutional layer via Tucker decomposition
		9.4.4 Compressing the convolutional layer via TT/TR decompositions
		9.4.5 Compressing neural networks via transform-based decomposition
	9.5 Experiments and future directions
		9.5.1 Performance evaluations using the MNIST dataset
		9.5.2 Performance evaluations using the CIFAR10 dataset
		9.5.3 Future research directions
	References
10 Coupled tensor decompositions for data fusion
	10.1 Introduction
	10.2 What is data fusion?
		10.2.1 Context and definition
		10.2.2 Challenges of data fusion
		10.2.3 Types of fusion and data fusion strategies
	10.3 Decompositions in data fusion
		10.3.1 Matrix decompositions and statistical models
		10.3.2 Tensor decompositions
		10.3.3 Coupled tensor decompositions
	10.4 Applications of tensor-based data fusion
		10.4.1 Biomedical applications
		10.4.2 Image fusion
	10.5 Fusion of EEG and fMRI: a case study
	10.6 Data fusion demos
		10.6.1 SDF demo – approximate coupling
	10.7 Conclusion and prospects
	Acknowledgments
	References
11 Tensor methods for low-level vision
	11.1 Low-level vision and signal reconstruction
		11.1.1 Observation models
		11.1.2 Inverse problems
	11.2 Methods using raw tensor structure
		11.2.1 Penalty-based tensor reconstruction
			11.2.1.1 Low-rank matrix completion
			11.2.1.2 Low-rank tensor completion
			11.2.1.3 Smooth tensor completion
			11.2.1.4 Smooth tensor completion: an ADMM algorithm
			11.2.1.5 Smooth tensor completion: a PDHG/PDS algorithm
			11.2.1.6 Tensor reconstruction via minimization of convex penalties
		11.2.2 Tensor decomposition and reconstruction
			11.2.2.1 Majorization-minimization algorithm for tensor decomposition with missing entries
			11.2.2.2 MM algorithm for other low-level vision tasks
			11.2.2.3 Low-rank CP decomposition for tensor completion
			11.2.2.4 Low-rank Tucker decomposition for tensor completion
			11.2.2.5 Parallel matrix factorization for tensor completion
			11.2.2.6 Tucker decomposition with rank increment
			11.2.2.7 Smooth CP decomposition for tensor completion
			11.2.2.8 FR-SPC with rank increment
	11.3 Methods using tensorization
		11.3.1 Higher-order tensorization
			11.3.1.1 Vector-to-tensor
			11.3.1.2 Benefits of the folding operation
			11.3.1.3 Tensor representation of images: TS type I
			11.3.1.4 Tensor representation of images: TS type II
		11.3.2 Delay embedding/Hankelization
			11.3.2.1 Delay embedding/Hankelization of time series signals
			11.3.2.2 Benefits of delay embedding/Hankelization
			11.3.2.3 Multiway delay embedding/Hankelization of tensors
	11.4 Examples of low-level vision applications
		11.4.1 Image inpainting with raw tensor structure
		11.4.2 Image inpainting using tensorization
		11.4.3 Denoising, deblurring, and superresolution
	11.5 Remarks
	Acknowledgments
	References
12 Tensors for neuroimaging
	12.1 Introduction
	12.2 Neuroimaging modalities
	12.3 Multidimensionality of the brain
	12.4 Tensor decomposition structures
		12.4.1 Product operations for tensors
		12.4.2 Canonical polyadic decomposition
		12.4.3 Tucker decomposition
		12.4.4 Block term decomposition
	12.5 Applications of tensors in neuroimaging
		12.5.1 Filling in missing data
		12.5.2 Denoising, artifact removal, and dimensionality reduction
		12.5.3 Segmentation
		12.5.4 Registration and longitudinal analysis
		12.5.5 Source separation
		12.5.6 Activity recognition and source localization
			12.5.6.1 Seizure localization
			12.5.6.2 Seizure recognition
		12.5.7 Connectivity analysis
			12.5.7.1 Structural connectivity
			12.5.7.2 Functional connectivity
			12.5.7.3 Effective connectivity
		12.5.8 Regression
		12.5.9 Feature extraction and classification
		12.5.10 Summary and practical considerations
	12.6 Future challenges
	12.7 Conclusion
	References
13 Tensor representation for remote sensing images
	13.1 Introduction
	13.2 Optical remote sensing: HSI and MSI fusion
		13.2.1 Tensor notations and preliminaries
		13.2.2 Nonlocal patch tensor sparse representation for HSI-MSI fusion
			13.2.2.1 Problem formulation
			13.2.2.2 Nonlocal patch extraction
			13.2.2.3 Tensor sparse representation for nonlocal patch tensors
			13.2.2.4 Experiments and results
			13.2.2.5 Conclusion
		13.2.3 High-order coupled tensor ring representation for HSI-MSI fusion
			13.2.3.1 Multiscale high-order tensorization
			13.2.3.2 High-order tensor ring representation for HSI-MSI fusion
			13.2.3.3 Spectral manifold regularization
			13.2.3.4 Results on synthetic datasets
			13.2.3.5 Results on a real dataset
			13.2.3.6 Conclusion
		13.2.4 Joint tensor factorization for HSI-MSI fusion
			13.2.4.1 Problem formulation
			13.2.4.2 The joint tensor decomposition method
			13.2.4.3 Selection of parameters
			13.2.4.4 Experimental results
			13.2.4.5 Experimental results of the noise
			13.2.4.6 Analysis of computational costs
	13.3 Polarimetric synthetic aperture radar: feature extraction
		13.3.1 Brief description of PolSAR data
		13.3.2 The tensorial embedding framework
		13.3.3 Experiment and analysis
			13.3.3.1 Experiment preparation
			13.3.3.2 Experiment and analysis
	References
14 Structured tensor train decomposition for speeding up kernel-based learning
	14.1 Introduction
	14.2 Notations and algebraic background
	14.3 Standard tensor decompositions
		14.3.1 Tucker decomposition
		14.3.2 HOSVD
		14.3.3 Tensor networks and TT decomposition
			14.3.3.1 Tensor networks and their graph-based illustrations
			14.3.3.2 TT decomposition
	14.4 Dimensionality reduction based on a train of low-order tensors
		14.4.1 TD-train model: equivalence between a high-order TD and a train of low-order TDs
	14.5 Tensor train algorithm
		14.5.1 Description of the TT-HSVD algorithm
		14.5.2 Comparison of the sequential and the hierarchical schemes
	14.6 Kernel-based classification of high-order tensors
		14.6.1 Formulation of SVMs
		14.6.2 Polynomial and Euclidean tensor-based kernel
		14.6.3 Kernel on a Grassmann manifold
		14.6.4 The fast kernel subspace estimation based on tensor train decomposition (FAKSETT) method
	14.7 Experiments
		14.7.1 Datasets
		14.7.2 Classification performance
	14.8 Conclusion
	References
Index
Back Cover



