
Download the book Compressed Sensing in Information Processing

Book Details

Edition:
Authors: , ,
Series: Applied and Numerical Harmonic Analysis
ISBN: 3031097440, 9783031097447
Publisher: Birkhäuser
Year: 2022
Pages: 548 [549]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 13 MB

Price (Toman): 32,000



Average rating:
Number of ratings: 7


If you need the book Compressed Sensing in Information Processing converted to PDF, EPUB, AZW3, MOBI, or DJVU format, contact support and they will convert the file for you.

Note that this is the original-language (English) edition of Compressed Sensing in Information Processing, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


Book Description


This contributed volume showcases the most significant results obtained from the DFG Priority Program on Compressed Sensing in Information Processing. Topics considered revolve around timely aspects of compressed sensing with a special focus on applications, including compressed sensing-like approaches to deep learning; bilinear compressed sensing - efficiency, structure, and robustness; structured compressive sensing via neural network learning; compressed sensing for massive MIMO; and security of future communication and compressive sensing.
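
As background for the topics listed above, the following minimal sketch (not taken from the book) illustrates the core compressed sensing problem the volume builds on: recovering a sparse vector x from underdetermined linear measurements y = Ax via iterative soft thresholding (ISTA), the classical algorithm behind the learned thresholding networks discussed in Chapter 8. The problem sizes, sparsity level, regularization weight, and iteration count are all illustrative assumptions.

```python
# Minimal compressed sensing sketch (illustrative only, not code from the book):
# recover a sparse vector x from underdetermined measurements y = A x
# via ISTA (iterative soft thresholding). Sizes, sparsity, lam, and the
# iteration count are arbitrary assumptions chosen for the demo.
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 256, 80, 8                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                 # noiseless measurements

def soft(z, tau):
    """Soft thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

lam = 0.01                                     # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size <= 1 / ||A||_2^2
x = np.zeros(n)
for _ in range(500):                           # ISTA iterations
    x = soft(x - step * A.T @ (A @ x - y), step * lam)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```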



Table of Contents

ANHA Series Preface
Preface
Acknowledgements
Contents
Contributors
1 Hierarchical Compressed Sensing
	1.1 Introduction
	1.2 Hierarchically Sparse Vectors
	1.3 Hierarchical Thresholding and Recovery Algorithms
	1.4 Hierarchically Restricted Isometric Measurements
		1.4.1 Gaussian Operators
		1.4.2 Coherence Measures
		1.4.3 Hierarchical Measurement Operators
	1.5 Sparse De-mixing of Low-Rank Matrices
	1.6 Selected Applications
		1.6.1 Channel Estimation in Mobile Communication
		1.6.2 Secure Massive Access
		1.6.3 Blind Quantum State Tomography
	1.7 Conclusion and Outlook
	References
2 Proof Methods for Robust Low-Rank Matrix Recovery
	2.1 Introduction
		2.1.1 Sample Applications
			2.1.1.1 Matrix Completion
			2.1.1.2 Blind Deconvolution
			2.1.1.3 Phase Retrieval
		2.1.2 This Work
	2.2 Recovery Guarantees via a Descent Cone Analysis
		2.2.1 Descent Cone Analysis
		2.2.2 Application 1: Generic Low-Rank Matrix Recovery
		2.2.3 Application 2: Phase Retrieval
		2.2.4 Limitations
	2.3 Recovery Guarantees via the Golfing Scheme
		2.3.1 Recovery Guarantees via Dual Certificates
		2.3.2 Golfing with Precision
		2.3.3 Application 3: Matrix Completion
		2.3.4 Application 4: Simultaneous Demixing and Blind Deconvolution
		2.3.5 Phase Retrieval with Incoherence
	2.4 More Refined Descent Cone Analysis
		2.4.1 Application 5: Blind Deconvolution
		2.4.2 Application 6: Phase Retrieval with Incoherence
	2.5 Conclusion
	Appendix: Descent Cone Elements Are Effectively Low Rank
	References
3 New Challenges in Covariance Estimation: Multiple Structures and Coarse Quantization
	3.1 Introduction
		3.1.1 Outline and Notation
	3.2 Motivation: Massive MIMO
	3.3 Estimation of Structured Covariance Matrices and Robustness Against Outliers
		3.3.1 Sparse Covariance Matrices
		3.3.2 Low-Rank Covariance Matrices
		3.3.3 Toeplitz Covariance Matrices and Combined Structures
	3.4 Estimation from Quantized Samples
		3.4.1 Sign Quantization
		3.4.2 Dithered Quantization
	3.5 The Underlying Structures of Massive MIMO Covariance Estimation
	3.6 Conclusion
	Appendix: Proof of Theorem 3.6
	References
4 Sparse Deterministic and Stochastic Channels: Identification of Spreading Functions and Covariances
	4.1 Motivation and Introduction
	4.2 Channel Identification and Estimation
		4.2.1 Time–Frequency Analysis in Finite Dimensions
		4.2.2 Deterministic Channels
		4.2.3 Stochastic Channels
	4.3 Results in Deterministic Setting
		4.3.1 Classical Results on Channel Identification
			4.3.1.1 Identification of SISO Channels
			4.3.1.2 Identification of MIMO Channels
		4.3.2 Linear Constraints
		4.3.3 Message Transmission Using Unidentified Channels
			4.3.3.1 Problem Formulation
			4.3.3.2 Message Transmission with Known Support
			4.3.3.3 Message Transmission with Unknown Support
	4.4 Results in Stochastic Setting
		4.4.1 Permissible and Defective Support Patterns
		4.4.2 Linear Constraints in Stochastic Setting
			4.4.2.1 WSSUS Pattern with Additional Off-diagonal Contributions
			4.4.2.2 Tensor Product Pattern with Additional Contributions
		4.4.3 Numerical Simulations
	Appendix
		Proof of Lemma 4.3
		Proof of Proposition 4.6
		Proof of Proposition 4.7
		Proof of Proposition 4.8
		Proof of Proposition 4.10
		Proof of Proposition 4.11
	References
5 Analysis of Sparse Recovery Algorithms via the Replica Method
	5.1 Introduction
	5.2 A Multi-terminal Setting for Compressive Sensing
		5.2.1 Characterization of the Recovery Performance
		5.2.2 Stochastic Model of System Components
		5.2.3 Stochastic Model for Jointly Sparse Signals
		5.2.4 Special Cases
			5.2.4.1 Classical Compressive Sensing
			5.2.4.2 Multiple Measurement Vectors
			5.2.4.3 Distributed Compressive Sensing
	5.3 Sparse Recovery via the Regularized Least-Squares Method
		5.3.1 Some Well-Known Forms
			5.3.1.1 p-Norm Minimization
			5.3.1.2 p,q-Norm Minimization
		5.3.2 Bayesian Interpretation
	5.4 Asymptotic Characterization
		5.4.1 Stieltjes and R-Transforms
	5.5 Building a Bridge to Statistical Mechanics
		5.5.1 Introduction to Statistical Mechanics
			5.5.1.1 Second Law of Thermodynamics
			5.5.1.2 Spin Glasses
			5.5.1.3 Thermodynamic Limit
			5.5.1.4 Averaging Trick
		5.5.2 Corresponding Spin Glass
			5.5.2.1 Asymptotic Distortion as a Macroscopic Parameter
		5.5.3 The Replica Method
	5.6 The Replica Analysis
		5.6.1 General Form of the Solution
		5.6.2 Constructing Parameterized Q_j
			5.6.2.1 Replica Symmetric Solution
			5.6.2.2 Replica Symmetry Breaking
	5.7 Applications and Numerical Results
		5.7.1 Performance Analysis of Sparse Recovery
		5.7.2 Tuning RLS-Based Algorithms
	5.8 Summary and Final Discussions
		5.8.1 Decoupling Principle
		5.8.2 Nonuniform Sparsity Patterns
		5.8.3 Extensions to Bayesian Estimation
	5.9 Bibliographical Notes
	References
6 Unbiasing in Iterative Reconstruction Algorithms for Discrete Compressed Sensing
	6.1 Introduction
		6.1.1 Compressed Sensing Problem and Reconstruction Algorithms
		6.1.2 Discrete Setting
		6.1.3 Outline of the Chapter
	6.2 Problem Statement and Iterative Algorithms
		6.2.1 Factorization and Message-Passing Approaches
			6.2.1.1 Message-Passing Approaches
			6.2.1.2 Partitioning of the Problem
		6.2.2 Exponential Families
	6.3 Expectation-Consistent Approximate Inference
		6.3.1 Derivation and Optimization Procedure
		6.3.2 Algorithms
			6.3.2.1 Optimization: ECopts, ECoptc and ECseqs, ECseqc
			6.3.2.2 Vector Approximate Message Passing: VAMP
			6.3.2.3 Discussion
		6.3.3 Alternative Partitioning of the Problem
	6.4 Unbiasing of MMSE Estimators
		6.4.1 Joint Linear Estimators
			6.4.1.1 Average Unbiasing
			6.4.1.2 Individual Unbiasing
		6.4.2 Scalar Non-linear Estimators
			6.4.2.1 Signal-Oriented Unbiasing
			6.4.2.2 Noise-Oriented Unbiasing
		6.4.3 Iterative Schemes with Individual and Average Variances
	6.5 Numerical Results and Discussion
		6.5.1 Average Variance
		6.5.2 Individual Variances
	References
7 Recovery Under Side Constraints
	7.1 Introduction
	7.2 Sparse Recovery in Sensor Arrays
		7.2.1 Compressive Data Model for Sensor Arrays
		7.2.2 Sparse Recovery Formulations for Sensor Arrays
	7.3 Recovery Guarantees Under Side Constraints
		7.3.1 Integrality Constraints
		7.3.2 General Framework for Arbitrary Side Constraints
	7.4 Recovery Algorithms Under Different Side Constraints for the Linear Measurement Model
		7.4.1 Constant-Modulus Constraints
		7.4.2 Row- and Rank-Sparsity
		7.4.3 Block-Sparse Tensors
		7.4.4 Non-circularity
	7.5 Mixing Matrix Design
		7.5.1 Sensing Matrix Design: P = M Case
		7.5.2 Sensing Matrix Design: The General Case
		7.5.3 Numerical Results
	7.6 Recovery Algorithms for the Nonlinear Measurement Model
		7.6.1 Sparse Phase Retrieval
		7.6.2 Phase Retrieval with Dictionary Learning
	7.7 Conclusions
	References
8 Compressive Sensing and Neural Networks from a Statistical Learning Perspective
	8.1 Introduction
		8.1.1 Notation
	8.2 Deep Learning and Inverse Problems
		8.2.1 Learned Iterative Soft Thresholding
		8.2.2 Variants of LISTA
	8.3 Generalization of Deep Neural Networks
		8.3.1 Rademacher Complexity Analysis
		8.3.2 Generalization Bounds for Deep Neural Networks
	8.4 Generalization of Deep Thresholding Networks
		8.4.1 Boundedness: Assumptions and Results
		8.4.2 Dudley's Inequality
		8.4.3 Bounding the Rademacher Complexity
		8.4.4 A Perturbation Result
		8.4.5 Covering number estimates
		8.4.6 Main result
	8.5 Thresholding Networks for Sparse Recovery
	8.6 Conclusion and Outlook
	References
9 Angular Scattering Function Estimation Using Deep Neural Networks
	9.1 Introduction
		9.1.1 Related Work
		9.1.2 Outline and Notation
	9.2 System Model
	9.3 The Parametric Form of ASF
		9.3.1 The Continuous ASF Component
		9.3.2 The Discrete ASF Component
	9.4 Pre-processing: Discrete AoA Estimation via MUSIC
	9.5 A Deep Learning Approach to ASF Estimation
		9.5.1 Training Phase
		9.5.2 Network Architecture
		9.5.3 Test Phase
	9.6 Simulation Results
		9.6.1 Metrics for Comparison
		9.6.2 Performance with Different SNRs
		9.6.3 Performance with Different Sample Numbers
	9.7 Conclusion
	References
10 Fast Radio Propagation Prediction with Deep Learning
	10.1 Introduction
		10.1.1 Applications of Radio Maps
		10.1.2 Radio Map Prediction
		10.1.3 Radio Map Prediction Using Deep Learning
	10.2 Introduction to Radio Map Prediction with RadioUNet
		10.2.1 RadioUNet Methods
		10.2.2 The Training Data
		10.2.3 Generalizing What Was Learned to Real-Life Scenarios
		10.2.4 Applications
	10.3 Background and Preliminaries
		10.3.1 Wireless Communication
		10.3.2 Deep Learning
			10.3.2.1 An Interpretation of Deep Learning
			10.3.2.2 Convolutional Neural Networks
			10.3.2.3 UNets
			10.3.2.4 Supervised Learning of UNets via Stochastic Gradient Descent
			10.3.2.5 Curriculum Learning
			10.3.2.6 Out-of-Domain Generalization
	10.4 The RadioMapSeer Dataset
		10.4.1 General Setting
			10.4.1.1 Maps and Transmitters
			10.4.1.2 Coarsely Simulated Radio Maps
			10.4.1.3 Higher-Accuracy Simulations
			10.4.1.4 Pathloss Scale
		10.4.2 System Parameters
		10.4.3 Gray-Level Conversion
	10.5 Estimating Radio Maps via RadioUNets
		10.5.1 Motivation for RadioUNet
		10.5.2 Different Settings in Radio Map Estimation
			10.5.2.1 Network Input Scenarios
			10.5.2.2 Learning Scenarios
		10.5.3 RadioUNet Architectures
			10.5.3.1 Retrospective Improvement
			10.5.3.2 Adaptation to Real Measurements
			10.5.3.3 Thresholder
		10.5.4 Training
		10.5.5 RadioUNet Performance
	10.6 Comparison of RadioUNet to State of the Art
		10.6.1 Comparison to Model-Based Simulation
		10.6.2 Comparison to Data-Driven Interpolation
		10.6.3 Comparison to Model-Based Data Fitting
		10.6.4 Comparison to Deep Learning Data Fitting
	10.7 Applications
		10.7.1 Coverage Classification
		10.7.2 Pathloss-Based Fingerprint Localization
	10.8 Conclusion
	References
11 Active Channel Sparsification: Realizing Frequency-Division Duplexing Massive MIMO with Minimal Overhead
	11.1 Introduction
	11.2 System Model
		11.2.1 Related Work
		11.2.2 Contribution
	11.3 Channel Model
	11.4 Active Channel Sparsification and DL Channel Training
		11.4.1 Necessity and Implication of Stable Channel Estimation
		11.4.2 Sparsifying Precoder Design
		11.4.3 Channel Estimation and Multiuser Precoding
	11.5 Simulation Results
		11.5.1 Channel Estimation Error and Sum Rate vs. Pilot Dimension
		11.5.2 The Effect of Channel Sparsity
	11.6 Beam-Space Design for Arbitrary Array Geometries
		11.6.1 Jointly Diagonalizable Covariances
		11.6.2 ML via Projected Gradient Descent
		11.6.3 Extension of ACS to Arbitrary Array Geometries
	References
12 Atmospheric Radar Imaging Improvements Using Compressed Sensing and MIMO
	12.1 Introduction
	12.2 System Model and Inversion Methods for Atmospheric Radar Imaging
		12.2.1 System Model for SIMO Atmospheric Radar Imaging
		12.2.2 System Model for MIMO Atmospheric Radar Imaging
		12.2.3 SIMO vs MIMO Arrays
		12.2.4 Inversion Methods
			12.2.4.1 The Capon Method
			12.2.4.2 Maximum Entropy Method
			12.2.4.3 Compressed Sensing
		12.2.5 MIMO Implementations
			12.2.5.1 Time Diversity
			12.2.5.2 Waveform Diversity
			12.2.5.3 Suboptimal Diversity
	12.3 Applications of MIMO in Atmospheric Radar Imaging
		12.3.1 MIMO in Atmospheric Radar Imaging for Ionospheric Studies
		12.3.2 Polar Mesospheric Summer Echoes Imaging
		12.3.3 MIMO in Specular Meteor Radars to Measure Mesospheric Winds
	12.4 Summary and Future Work
	Table of Mathematical Symbols
	References
13 Over-the-Air Computation for Distributed Machine Learning and Consensus in Large Wireless Networks
	13.1 Introduction
	13.2 Over-the-Air Computation
		13.2.1 Digital Over-the-Air Computation
		13.2.2 Analog Over-the-Air Computation
		13.2.3 Analog Over-the-Air Computation as a Compressed Sensing Problem
	13.3 Applications of Over-the-Air Computation
		13.3.1 Distributed Machine Learning
		13.3.2 Consensus Over Wireless Channels
		13.3.3 Compressed Sensing
	13.4 Distributed Function Approximation in Wireless Channels
		13.4.1 Class of Functions
		13.4.2 System and Channel Model
		13.4.3 The Case of Independent Fading and Noise
		13.4.4 The Case of Correlated Fading and Noise
	13.5 DFA Applications to VFL
	13.6 Security in OTA Computation
		13.6.1 Information-Theoretic Preliminaries
		13.6.2 Result and Discussion
	13.7 Open Research Questions
	References
14 Information Theory and Recovery Algorithms for Data Fusion in Earth Observation
	14.1 General Framework
	14.2 Identification of Neural Networks: From One Neuron to Deep Neural Networks
		14.2.1 From One Neuron to Deep Networks
		14.2.2 Data Interpolation and Identification of Neural Networks
		14.2.3 Shallow Feed-Forward Neural Networks
			14.2.3.1 The Approximation to W: Active Sampling
			14.2.3.2 Whitening
			14.2.3.3 The Recovery Strategy of the Weights w_i
		14.2.4 Deeper Networks
	14.3 Quantized Compressed Sensing with Message Passing Reconstruction
		14.3.1 Bayesian Compressed Sensing via Approximate Message Passing
		14.3.2 Two-Terminal Bayesian QCS
	14.4 Signal Processing in Earth Observation
		14.4.1 Multi-Sensor and Multi-Resolution Data Fusion
		14.4.2 Hyperspectral Unmixing Accounting for Spectral Variability
	References
15 Sparse Recovery of Sound Fields Using Measurements from Moving Microphones
	15.1 Problem Formulation and Signal Model
		15.1.1 General Problem
		15.1.2 Sparse Signal Structures
	15.2 Multidimensional Sampling and Reconstruction
		15.2.1 Temporal Sampling Model
		15.2.2 Spatial Sampling Model
		15.2.3 Spatio-Temporal Measurement Model
			15.2.3.1 Finite-Length Observations in Space
			15.2.3.2 Linear Equations for Parameter Recovery in Terms of Uniform Grids
	15.3 Sparse Sound-Field Recovery in Frequency Domain
		15.3.1 Basic Ideas for the Simplified Stationary Case
		15.3.2 From Static to Dynamic Sensing
		15.3.3 Sparse Recovery Along the Spectral Hypercone
		15.3.4 Perfect Excitation Sequences
		15.3.5 Algorithm for Sparse Recovery from Dynamic Measurements
			15.3.5.1 General Case
			15.3.5.2 Perfect-Excitation Case
	15.4 Coherence Analysis
		15.4.1 Influence of the Trajectory on the Sensing Matrix
		15.4.2 Coherence of Measurements
		15.4.3 Spectrally Flat Spatio-Temporal Excitation
	15.5 Trajectory Optimization
		15.5.1 Techniques for Measurement Matrix Optimization
		15.5.2 Fast Update Scheme for Trajectory Adjustments
	15.6 Summary
	References
16 Compressed Sensing in the Spherical Near-Field to Far-Field Transformation
	16.1 Spherical Near-Field Antenna Measurements
		16.1.1 Notation
	16.2 Compressed Sensing
	16.3 Definition and Backgrounds
		16.3.1 Wigner D-Functions and Spherical Harmonics
		16.3.2 Sparse Expansions of Band-Limited Functions
		16.3.3 Construction of the Sensing Matrix
			16.3.3.1 RIP Condition for Sensing Matrices
			16.3.3.2 Construction of Low-Coherence Sensing Matrices
		16.3.4 Numerical Evaluation
			16.3.4.1 Coherence Evaluation
			16.3.4.2 Phase Transition Diagram: Random vs. Deterministic
		16.3.5 Implementation in Spherical Near-Field Antenna Measurements
			16.3.5.1 Modifying the Scheme for Time Efficiency
			16.3.5.2 Extension to Arbitrary Surfaces
			16.3.5.3 Implementation Considerations: Basis Mismatch
	16.4 Phaseless Spherical Near-Field Antenna Measurements
		16.4.1 Phaseless Measurements
			16.4.1.1 Ambiguities in Phaseless Spherical Harmonics Expansion
		16.4.2 Numerical Evaluation
			16.4.2.1 Phase Transition Diagram
			16.4.2.2 Implementation in Spherical Near-Field Antenna Measurements
	16.5 Summary
	References
Applied and Numerical Harmonic Analysis
	Applied and Numerical Harmonic Analysis
		(104 volumes)



