

Download the book: Handbook of Medical Image Computing and Computer Assisted Intervention

Book details

Edition:
Authors:
Series:
ISBN: 9780128161760
Publisher: Elsevier
Year: 2020
Pages: 1054
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 18 MB

Price (Toman): 41,000





If you would like the book Handbook of Medical Image Computing and Computer Assisted Intervention converted to PDF, EPUB, AZW3, MOBI, or DJVU, contact support and they will convert the file for you.

Note that Handbook of Medical Image Computing and Computer Assisted Intervention is in its original language; it is not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.


About the book

Handbook of Medical Image Computing and Computer Assisted Intervention presents important advanced methods and state-of-the art research in medical image computing and computer assisted intervention, providing a comprehensive reference on current technical approaches and solutions, while also offering proven algorithms for a variety of essential medical imaging applications. This book is written primarily for university researchers, graduate students and professional practitioners (assuming an elementary level of linear algebra, probability and statistics, and signal processing) working on medical image computing and computer assisted intervention.



Table of contents

Cover
Handbook of Medical Image Computing and Computer Assisted Intervention
Copyright
Contents
Contributors
Acknowledgment
1 Image synthesis and superresolution in medical imaging
	1.1 Introduction
	1.2 Image synthesis
		1.2.1 Physics-based image synthesis
		1.2.2 Classification-based synthesis
		1.2.3 Registration-based synthesis
		1.2.4 Example-based synthesis
		1.2.5 Scan normalization in MRI
	1.3 Superresolution
		1.3.1 Superresolution reconstruction
		1.3.2 Single-image deconvolution
		1.3.3 Example-based superresolution
	1.4 Conclusion
	References
2 Machine learning for image reconstruction
	2.1 Inverse problems in imaging
	2.2 Unsupervised learning in image reconstruction
	2.3 Supervised learning in image reconstruction
		2.3.1 Learning an improved regularization function
			Nonconvex regularization
			Bi-level optimization
			Convolutional neural networks as regularization
		2.3.2 Learning an iterative reconstruction model
			Example: Single-coil MRI reconstruction (Schlemper 2018)
		2.3.3 Deep learning for image and data enhancement
		2.3.4 Learning a direct mapping
		2.3.5 Example: Comparison between learned iterative reconstruction and learned postprocessing
	2.4 Training data
		Transfer learning
	2.5 Loss functions and evaluation of image quality
	2.6 Discussion
	Acknowledgments
	References
3 Liver lesion detection in CT using deep learning techniques
	3.1 Introduction
		3.1.1 Prior work: segmentation vs. detection
		3.1.2 FCN for pixel-to-pixel transformations
	3.2 Fully convolutional network for liver lesion detection in CT examinations
		3.2.1 Lesion candidate detection via a fully convolutional network architecture
			3.2.1.1 FCN candidate generation results
		3.2.2 Superpixel sparse-based classification for false-positives reduction
		3.2.3 Experiments and results
			3.2.3.1 Data
			3.2.3.2 Comparative system performance
	3.3 Fully convolutional network for CT to PET synthesis to augment malignant liver lesion detection
		3.3.1 Related work
		3.3.2 Deep learning-based virtual-PET generation
			3.3.2.1 Training data preparation
			3.3.2.2 The networks
			3.3.2.3 SUV-adapted loss function
		3.3.3 Experiments and results
			3.3.3.1 Dataset
			3.3.3.2 Experimental setting
			3.3.3.3 Liver lesion detection using the virtual-PET
	3.4 Discussion and conclusions
	Acknowledgments
	References
4 CAD in lung
	4.1 Overview
	4.2 Origin of lung CAD
	4.3 Lung CAD systems
	4.4 Localized disease
		4.4.1 Lung nodule
			4.4.1.1 Nodule detection and segmentation
				Hessian-based approach
				Deep learning-based approach
		4.4.2 Ground Glass Opacity (GGO) nodule
		4.4.3 Enlarged lymph node
	4.5 Diffuse lung disease
		4.5.1 Emphysema
	4.6 Anatomical structure extraction
		4.6.1 Airway
		4.6.2 Blood vessel segmentation in the lung
		4.6.3 Lung area extraction
		4.6.4 Lung lobe segmentation
	References
5 Text mining and deep learning for disease classification
	5.1 Introduction
	5.2 Literature review
		5.2.1 Text mining
		5.2.2 Disease classification
	5.3 Case study 1: text mining in radiology reports and images
		5.3.1 Text mining radiology reports
			5.3.1.1 Architecture
				5.3.1.1.1 Medical findings recognition
				5.3.1.1.2 Universal dependency graph construction
				5.3.1.1.3 Negation and uncertainty detection
			5.3.1.2 Evaluation of NegBio
		5.3.2 ChestX-ray 14 construction
		5.3.3 Common thoracic disease detection and localization
			5.3.3.1 Architecture
				5.3.3.1.1 Unified DCNN framework
				5.3.3.1.2 Weakly-supervised pathology localization
			5.3.3.2 Evaluation
	5.4 Case study 2: text mining in pathology reports and images
		5.4.1 Image model
		5.4.2 Language model
		5.4.3 Dual-attention model
		5.4.4 Image prediction
		5.4.5 Evaluation
	5.5 Conclusion and future work
	Acknowledgments
	References
6 Multiatlas segmentation
	6.1 Introduction
	6.2 History of atlas-based segmentation
		6.2.1 Atlas generation
		6.2.2 Preprocessing
		6.2.3 Registration
			6.2.3.1 Linear
			6.2.3.2 Nonlinear
			6.2.3.3 Label propagation
		6.2.4 Atlas selection
		6.2.5 Label fusion
			6.2.5.1 Voting
			6.2.5.2 Rater modeling
			6.2.5.3 Bayesian / generative models
		6.2.6 Post hoc analysis
			6.2.6.1 Corrective learning
			6.2.6.2 EM-refinement
			6.2.6.3 Markov Random Field (MRF)
			6.2.6.4 Morphology correction
	6.3 Mathematical framework
		6.3.1 Problem definition
		6.3.2 Voting label fusion
		6.3.3 Statistical label fusion
		6.3.4 Spatially varying performance and nonlocal STAPLE
		6.3.5 Spatial STAPLE
		6.3.6 Nonlocal STAPLE
		6.3.7 Nonlocal spatial STAPLE
		6.3.8 E-step: estimation of the voxel-wise label probability
		6.3.9 M-step: estimation of the performance level parameters
	6.4 Connection between multiatlas segmentation and machine learning
	6.5 Multiatlas segmentation using machine learning
	6.6 Machine learning using multiatlas segmentation
	6.7 Integrating multiatlas segmentation and machine learning
	6.8 Challenges and applications
		6.8.1 Multiatlas labeling on cortical surfaces and sulcal landmarks
	6.9 Unsolved problems
	Glossary
	References
7 Segmentation using adversarial image-to-image networks
	7.1 Introduction
		7.1.1 Generative adversarial network
		7.1.2 Deep image-to-image network
	7.2 Segmentation using an adversarial image-to-image network
		7.2.1 Experiments
	7.3 Volumetric domain adaptation with intrinsic semantic cycle consistency
		7.3.1 Methodology
			7.3.1.1 3D dense U-Net for left atrium segmentation
			7.3.1.2 Volumetric domain adaptation with cycle consistency
		7.3.2 Experiments
		7.3.3 Conclusions
	References
8 Multimodal medical volumes translation and segmentation with generative adversarial network
	8.1 Introduction
	8.2 Literature review
		8.2.1 Medical image synthesis
		8.2.2 Image segmentation
	8.3 Preliminary
		8.3.1 CNN for segmentation
		8.3.2 Generative adversarial network
		8.3.3 Image-to-image translation for unpaired data
		8.3.4 Problems in unpaired volume-to-volume translation
	8.4 Method
		8.4.1 Volume-to-volume cycle consistency
		8.4.2 Volume-to-volume shape consistency
		8.4.3 Multimodal volume segmentation
		8.4.4 Method objective
	8.5 Network architecture and training details
		8.5.1 Architecture
		8.5.2 Training details
	8.6 Experimental results
		8.6.1 Dataset
		8.6.2 Cross-domain translation evaluation
		8.6.3 Segmentation evaluation
		8.6.4 Gap between synthetic and real data
		8.6.5 Is more synthetic data better?
	8.7 Conclusions
	References
9 Landmark detection and multiorgan segmentation: Representations and supervised approaches
	9.1 Introduction
	9.2 Landmark detection
		9.2.1 Landmark representation
			9.2.1.1 Point-based representation
			9.2.1.2 Relative offset representation
			9.2.1.3 Identity map representation
			9.2.1.4 Distance map representation
			9.2.1.5 Heat map representation
			9.2.1.6 Discrete action map representation
		9.2.2 Action classification for landmark detection
			9.2.2.1 Method
			9.2.2.2 Dataset & experimental setup
			9.2.2.3 Qualitative and quantitative results
	9.3 Multiorgan segmentation
		9.3.1 Shape representation
		9.3.2 Context integration for multiorgan segmentation
			9.3.2.1 Joint landmark detection using context integration
				Local context posterior
				Global context posterior
				MMSE estimate for landmark location
				Sparsity in global context
			9.3.2.2 Organ shape initialization and refinement
				Shape initialization using robust model alignment
				Discriminative boundary refinement
			9.3.2.3 Comparison with other methods
			9.3.2.4 Experimental results
	9.4 Conclusion
	References
10 Deep multilevel contextual networks for biomedical image segmentation
	10.1 Introduction
	10.2 Related work
		10.2.1 Electron microscopy image segmentation
		10.2.2 Nuclei segmentation
	10.3 Method
		10.3.1 Deep multilevel contextual network
		10.3.2 Regularization with auxiliary supervision
		10.3.3 Importance of receptive field
	10.4 Experiments and results
		10.4.1 Dataset and preprocessing
			10.4.1.1 2012 ISBI EM segmentation
			10.4.1.2 2015 MICCAI nuclei segmentation
		10.4.2 Details of training
		10.4.3 2012 ISBI neuronal structure segmentation challenge
			10.4.3.1 Qualitative evaluation
			10.4.3.2 Quantitative evaluation metrics
			10.4.3.3 Results comparison without postprocessing
			10.4.3.4 Results comparison with postprocessing
			10.4.3.5 Ablation studies of our method
		10.4.4 2015 MICCAI nuclei segmentation challenge
			10.4.4.1 Qualitative evaluation
			10.4.4.2 Quantitative evaluation metrics
			10.4.4.3 Quantitative results and comparison
		10.4.5 Computation time
	10.5 Discussion and conclusion
	Acknowledgment
	References
11 LOGISMOS-JEI: Segmentation using optimal graph search and just-enough interaction
	11.1 Introduction
	11.2 LOGISMOS
		11.2.1 Initial mesh
		11.2.2 Locations of graph nodes
		11.2.3 Cost function design
		11.2.4 Geometric constraints and priors
		11.2.5 Graph optimization
	11.3 Just-enough interaction
	11.4 Retinal OCT segmentation
	11.5 Coronary OCT segmentation
	11.6 Knee MR segmentation
	11.7 Modular application design
	11.8 Conclusion
	Acknowledgments
	References
12 Deformable models, sparsity and learning-based segmentation for cardiac MRI based analytics
	12.1 Introduction
		12.1.1 Deformable models for cardiac modeling
		12.1.2 Learning based cardiac segmentation
	12.2 Deep learning based segmentation of ventricles
		Network architecture
		Preprocessing and data augmentation
		Modified deep layer aggregation network
		Loss function
		Dataset and evaluation metrics
		Implementation details
		Results
	12.3 Shape refinement by sparse shape composition
	12.4 3D modeling
	12.5 Conclusion and future directions
	References
13 Image registration with sliding motion
	13.1 Challenges of motion discontinuities in medical imaging
	13.2 Sliding preserving regularization for Demons
		13.2.1 Direction-dependent and layerwise regularization
		13.2.2 Locally adaptive regularization
			Demons with bilateral filtering
			GIFTed Demons
			13.2.2.1 Graph-based regularization for demons
	13.3 Discrete optimization for displacements
		13.3.1 Energy terms for discrete registration
		13.3.2 Practical concerns and implementation details for 3D discrete registration
		13.3.3 Parameterization of nodes and displacements
			13.3.3.1 Efficient inference of regularization
	13.4 Image registration for cancer applications
	13.5 Conclusions
	References
14 Image registration using machine and deep learning
	14.1 Introduction
	14.2 Machine-learning-based registration
		14.2.1 Learning initialized deformation field
		14.2.2 Learning intermediate image
		14.2.3 Learning image appearance
	14.3 Machine-learning-based multimodal registration
		14.3.1 Learning similarity metric
		14.3.2 Learning common feature representation
		14.3.3 Learning appearance mapping
	14.4 Deep-learning-based registration
		14.4.1 Learning similarity metric
		14.4.2 Learning preliminary transformation parameters
		14.4.3 End-to-end learning for deformable registration
	References
15 Imaging biomarkers in Alzheimer's disease
	15.1 Introduction
	15.2 Range of imaging modalities and associated biomarkers
		15.2.1 Structural imaging
			15.2.1.1 Grey matter assessment
			15.2.1.2 White matter damage
			15.2.1.3 Microstructural imaging
		15.2.2 Functional and metabolite imaging
			15.2.2.1 Functional imaging
			15.2.2.2 Molecular imaging
	15.3 Biomarker extraction evolution
		15.3.1 Acquisition improvement
		15.3.2 Biomarkers extraction: from visual scales to automated processes
		15.3.3 Automated biomarker extraction: behind the scene
		15.3.4 Automated methodological development validation
	15.4 Biomarkers in practice
		15.4.1 Practical use
		15.4.2 Biomarkers' path to validation
		15.4.3 Current challenges
	15.5 Biomarkers' strategies: practical examples
		15.5.1 Global vs local
			15.5.1.1 Spatial patterns of abnormality - from global to local
			15.5.1.2 The case of the hippocampus
		15.5.2 Longitudinal vs cross-sectional
			15.5.2.1 Challenges in longitudinal analyses
			15.5.2.2 The case of the boundary shift integral (BSI)
	15.6 Future avenues of image analysis for biomarkers in Alzheimer's disease
		15.6.1 Community initiatives
			15.6.1.1 Interfield collaboration
			15.6.1.2 Standardization initiatives, challenges and open-source data
		15.6.2 Technical perspectives
			15.6.2.1 Combination of modalities and biomarkers - traditional approaches
			15.6.2.2 Ever-increasing potential of AI technologies: reproduction, combination, discovery
		15.6.3 Longitudinal prediction, simulation and ethical considerations
	References
16 Machine learning based imaging biomarkers in large scale population studies: A neuroimaging perspective
	16.1 Introduction
	16.2 Large scale population studies in neuroimage analysis: steps towards dimensional neuroimaging; harmonization challenges
		16.2.1 The ENIGMA project
		16.2.2 The iSTAGING project
		16.2.3 Harmonization of multisite neuroimaging data
	16.3 Unsupervised pattern learning for dimensionality reduction of neuroimaging data
		16.3.1 Finding imaging patterns of covariation
	16.4 Supervised classification based imaging biomarkers for disease diagnosis
		16.4.1 Automated classification of Alzheimer's disease patients
		16.4.2 Classification of schizophrenia patients in multisite large cohorts
	16.5 Multivariate pattern regression for brain age prediction
		16.5.1 Brain development index
		16.5.2 Imaging patterns of brain aging
	16.6 Deep learning in neuroimaging analysis
	16.7 Revealing heterogeneity of imaging patterns of brain diseases
	16.8 Conclusions
	References
17 Imaging biomarkers for cardiovascular diseases
	17.1 Introduction
	17.2 Cardiac imaging
	17.3 Cardiac shape and function
		17.3.1 Left ventricular mass
		17.3.2 Ejection fraction
		17.3.3 Remodeling
	17.4 Cardiac motion
		17.4.1 Wall motion analysis
		17.4.2 Myocardial strain
		17.4.3 Dyssynchrony
	17.5 Coronary and vascular function
		17.5.1 Coronary artery disease
		17.5.2 Myocardial perfusion
		17.5.3 Blood flow
		17.5.4 Vascular compliance
	17.6 Myocardial structure
		17.6.1 Tissue characterization
		17.6.2 Fiber architecture
	17.7 Population-based cardiac image biomarkers
	References
18 Radiomics
	18.1 Introduction
	18.2 Data acquisition & preparation
		18.2.1 Introduction
		18.2.2 Patient selection
		18.2.3 Imaging data collection
		18.2.4 Label data collection
		18.2.5 Conclusion
	18.3 Segmentation
		18.3.1 Introduction
		18.3.2 Segmentation methods
		18.3.3 Influence of segmentation on radiomics pipeline
		18.3.4 Conclusion
	18.4 Features
		18.4.1 Introduction
		18.4.2 Common features
			18.4.2.1 Morphological features
			18.4.2.2 First order features
			18.4.2.3 Higher order features
				Filter based
				Gray level matrix features
		18.4.3 Uncommon features
		18.4.4 Feature extraction
		18.4.5 Feature selection and dimensionality reduction
		18.4.6 Conclusion
	18.5 Data mining
		18.5.1 Introduction
		18.5.2 Correlation
		18.5.3 Machine learning
		18.5.4 Deep learning
		18.5.5 Conclusion
	18.6 Study design
		18.6.1 Introduction
		18.6.2 Training, validation and evaluation set
		18.6.3 Generating sets
			18.6.3.1 Cross-validation
			18.6.3.2 Separate evaluation set
		18.6.4 Evaluation metrics
			18.6.4.1 Confidence intervals
			18.6.4.2 Conclusion
	18.7 Infrastructure
		18.7.1 Introduction
		18.7.2 Data storage and sharing
		18.7.3 Feature toolboxes
		18.7.4 Learning toolboxes
		18.7.5 Pipeline standardization
		18.7.6 Conclusion
	18.8 Conclusion
	Acknowledgment
	References
19 Random forests in medical image computing
	19.1 A different way to use context
	19.2 Feature selection and ensembling
	19.3 Algorithm basics
		19.3.1 Inference
		19.3.2 Training
			Cost
			Optimization
			Stopping criteria
			Leaf predictions
			From trees to random forest
			Effect of model parameters
		19.3.3 Integrating context
	19.4 Applications
		19.4.1 Detection and localization
		19.4.2 Segmentation
		19.4.3 Image-based prediction
		19.4.4 Image synthesis
		19.4.5 Feature interpretation
		19.4.6 Algorithmic variations
	19.5 Conclusions
	References
20 Convolutional neural networks
	20.1 Introduction
	20.2 Neural networks
		20.2.1 Loss function
		20.2.2 Backpropagation
	20.3 Convolutional neural networks
		20.3.1 Convolutions
			Convolutions as infinitely strong priors
			Equivariance
		20.3.2 Nonlinearities
		20.3.3 Pooling layers
		20.3.4 Fully connected layers
	20.4 CNN architectures for classification
	20.5 Practical methodology
		20.5.1 Data standardization and augmentation
		20.5.2 Optimizers and learning rate
		20.5.3 Weight initialization and pretrained networks
		20.5.4 Regularization
	20.6 Future challenges
	References
21 Deep learning: RNNs and LSTM
	21.1 From feedforward to recurrent
		21.1.1 Simple motivating example
		21.1.2 Naive solution
		21.1.3 Simple RNNs
		21.1.4 Representation power of simple RNNs
		21.1.5 More general recurrent neural networks
	21.2 Modeling with RNNs
		21.2.1 Discriminative sequence models
		21.2.2 Generative sequence models
		21.2.3 RNN-based encoder-decoder models
	21.3 Training RNNs (and why simple RNNs aren't enough)
		21.3.1 The chain rule for ordered derivatives
		21.3.2 The vanishing gradient problem
		21.3.3 Truncated backpropagation through time
		21.3.4 Teacher forcing
	21.4 Long short-term memory and gated recurrent units
	21.5 Example applications of RNNs at MICCAI
	References
22 Deep multiple instance learning for digital histopathology
	22.1 Multiple instance learning
	22.2 Deep multiple instance learning
	22.3 Methodology
	22.4 MIL approaches
		22.4.1 Instance-based approach
		22.4.2 Embedding-based approach
		22.4.3 Bag-based approach
	22.5 MIL pooling functions
		22.5.1 Max
		22.5.2 Mean
		22.5.3 LSE
		22.5.4 (Leaky) Noisy-OR
		22.5.5 Attention mechanism
		22.5.6 Interpretability
		22.5.7 Flexibility
	22.6 Application to histopathology
		22.6.1 Data augmentation
			22.6.1.1 Cropping
			22.6.1.2 Rotating and flipping
			22.6.1.3 Blur
			22.6.1.4 Color
				Color decomposition
				Color normalization
			22.6.1.5 Elastic deformations
			22.6.1.6 Generative models
		22.6.2 Performance metrics
			22.6.2.1 Accuracy
			22.6.2.2 Precision, recall and F1-score
			22.6.2.3 Receiver Operating Characteristic Area Under Curve
		22.6.3 Evaluation of MIL models
			22.6.3.1 Experimental setup
			22.6.3.2 Colon cancer
			22.6.3.3 Breast cancer
	References
23 Deep learning: Generative adversarial networks and adversarial methods
	23.1 Introduction
	23.2 Generative adversarial networks
		23.2.1 Objective functions
		23.2.2 The latent space
		23.2.3 Conditional GANs
		23.2.4 GAN architectures
	23.3 Adversarial methods for image domain translation
		23.3.1 Training with paired images
		23.3.2 Training without paired images
	23.4 Domain adaptation via adversarial training
	23.5 Applications in biomedical image analysis
		23.5.1 Sample generation
		23.5.2 Image synthesis
		23.5.3 Image quality enhancement
		23.5.4 Image segmentation
		23.5.5 Domain adaptation
		23.5.6 Semisupervised learning
	23.6 Discussion and conclusion
	References
24 Linear statistical shape models and landmark location
	24.1 Introduction
	24.2 Shape models
		24.2.1 Representing structures with points
		24.2.2 Comparing two shapes
		24.2.3 Aligning two shapes
		24.2.4 Aligning a set of shapes
		24.2.5 Building linear shape models
			24.2.5.1 Choosing the number of modes
			24.2.5.2 Examples of shape models
			24.2.5.3 Matching a model to known points
		24.2.6 Analyzing shapes
		24.2.7 Constraining parameters
		24.2.8 Limitations of linear models
		24.2.9 Dealing with uncertain data
		24.2.10 Alternative shape models
			24.2.10.1 Level set representations
			24.2.10.2 Medial representations
			24.2.10.3 Models of deformations
		24.2.11 3D models
	24.3 Automated landmark location strategies
		24.3.1 Exhaustive methods: searching for individual points
			24.3.1.1 Template matching
			24.3.1.2 Generative approaches
			24.3.1.3 Discriminative approaches
			24.3.1.4 Regression-based approaches
			24.3.1.5 Estimating score maps with CNNs
		24.3.2 Alternating approaches
			24.3.2.1 Constrained local models
		24.3.3 Iterative update approaches
			24.3.3.1 Updating parameters
			24.3.3.2 Regression-based updates
			24.3.3.3 Locating landmarks with agents
	24.4 Discussion
	24.A
		24.A.1 Computing modes when fewer samples than ordinates
		24.A.2 Closest point on a plane
		24.A.3 Closest point on an ellipsoid
	References
25 Computer-integrated interventional medicine: A 30 year perspective
	25.1 Introduction: a three-way partnership between humans, technology, and information to improve patient care
	25.2 The information flow in computer-integrated interventional medicine
		25.2.1 Patient-specific information
		25.2.2 Patient-specific models
		25.2.3 Diagnosis
		25.2.4 Treatment planning
		25.2.5 Intervention
		25.2.6 Assessment and follow-up
		25.2.7 Multipatient information and statistical analysis
		25.2.8 Intensive care, rehabilitation, and other treatment venues
	25.3 Intraoperative systems for CIIM
		25.3.1 Intraoperative imaging systems
		25.3.2 Navigational trackers
		25.3.3 Robotic devices
		25.3.4 Human-machine interfaces
	25.4 Emerging research themes
	References
26 Technology and applications in interventional imaging: 2D X-ray radiography/fluoroscopy and 3D cone-beam CT
	26.1 The 2D imaging chain
		26.1.1 Production of X-rays for fluoroscopy and CBCT
		26.1.2 Large-area X-ray detectors for fluoroscopy and cone-beam CT
		26.1.3 Automatic exposure control (AEC) and automatic brightness control (ABC)
		26.1.4 2D image processing
			26.1.4.1 Detector corrections / image preprocessing
			26.1.4.2 Postprocessing
		26.1.5 Radiation dose (fluoroscopy)
			26.1.5.1 Measurement of fluoroscopic dose
			26.1.5.2 Reference dose levels
	26.2 The 3D imaging chain
		26.2.1 3D imaging prerequisites
			26.2.1.1 Geometrical calibration
			26.2.1.2 I0 calibration
			26.2.1.3 Other correction factors
		26.2.2 3D image reconstruction
			26.2.2.1 Filtered backprojection
			26.2.2.2 Emerging methods: optimization-based (iterative) image reconstruction (OBIR)
			26.2.2.3 Emerging methods: machine learning methods for cone-beam CT
		26.2.3 Radiation dose (CBCT)
			26.2.3.1 Measurement of dose in CBCT
			26.2.3.2 Reference dose levels
	26.3 System embodiments
		26.3.1 Mobile systems: C-arms, U-arms, and O-arms
		26.3.2 Fixed-room C-arm systems
		26.3.3 Interventional multi-detector CT (MDCT)
	26.4 Applications
		26.4.1 Interventional radiology
			26.4.1.1 Neurological interventions
			26.4.1.2 Body interventions (oncology and embolization)
		26.4.2 Interventional cardiology
		26.4.3 Surgery
	References
27 Interventional imaging: MR
	27.1 Motivation
	27.2 Technical background
		27.2.1 Design, operation, and safety of an interventional MRI suite
		27.2.2 MR conditional devices
			27.2.2.1 Needles and biopsy guns
			27.2.2.2 Ablation systems
		27.2.3 Visualization requirements
		27.2.4 Intraprocedural guidance
			27.2.4.1 Passive tracking
			27.2.4.2 Active tracking - radiofrequency coils
			27.2.4.3 Semiactive tracking - gradient-based tracking
			27.2.4.4 Gradient-based tracking
			27.2.4.5 Optical tracking
		27.2.5 MR thermometry
		27.2.6 MR elastography
	27.3 Clinical applications
		27.3.1 Applications in oncology
			27.3.1.1 Clinical setup
			27.3.1.2 Clinical workflow
			27.3.1.3 MR-guided biopsies
			27.3.1.4 MR-guided thermal ablations
		27.3.2 MR-guided functional neurosurgery
			27.3.2.1 Intraoperative MRI and deep brain stimulation
			27.3.2.2 Intraoperative MRI and laser interstitial thermal therapy
			27.3.2.3 Safety considerations
	References
28 Interventional imaging: Ultrasound
	28.1 Introduction: ultrasound imaging
	28.2 Ultrasound-guided cardiac interventions
		28.2.1 Cardiac ultrasound imaging technology
			28.2.1.1 Transthoracic echocardiography - TTE
			28.2.1.2 Transesophageal echocardiography - TEE
			28.2.1.3 Intracardiac echocardiography - ICE
		28.2.2 3D cardiac ultrasound imaging
			28.2.2.1 Reconstructed 3D imaging
			28.2.2.2 Real-time 3D imaging
	28.3 Ultrasound data manipulation and image fusion for cardiac applications
		28.3.1 Multimodal image registration and fusion
		28.3.2 Integration of ultrasound imaging with surgical tracking
		28.3.3 Fusion of ultrasound imaging via volume rendering
	28.4 Ultrasound imaging in orthopedics
		28.4.1 Bone segmentation from ultrasound images
			28.4.1.1 Segmentation methods using image intensity and phase information
			28.4.1.2 Machine learning-based segmentation
			28.4.1.3 Incorporation of bone shadow region information to improve segmentation
		28.4.2 Registration of orthopedic ultrasound images
	28.5 Image-guided therapeutic applications
		28.5.1 Fluoroscopy & TEE-guided aortic valve implantation
		28.5.2 US-guided robot-assisted mitral valve repair
		28.5.3 Model-enhanced US-guided intracardiac interventions
		28.5.4 ICE-guided ablation therapy
		28.5.5 Image-guided spine interventions
	28.6 Summary and future perspectives
	Acknowledgments
	References
29 Interventional imaging: Vision
	29.1 Vision-based interventional imaging modalities
		29.1.1 Endoscopy
			29.1.1.1 Endoscope types
			29.1.1.2 Advances in endoscopic imaging
		29.1.2 Microscopy
	29.2 Geometric scene analysis
		29.2.1 Calibration and preprocessing
			29.2.1.1 Preprocessing
		29.2.2 Reconstruction
			29.2.2.1 Stereo reconstruction
			29.2.2.2 Simultaneous Localization and Mapping
			29.2.2.3 Shape-from-X
			29.2.2.4 Active reconstruction
		29.2.3 Registration
			29.2.3.1 Point-based registration
			29.2.3.2 Surface-based registration
	29.3 Visual scene interpretation
		29.3.1 Detection
			29.3.1.1 Surgical tools
			29.3.1.2 Phase detection
		29.3.2 Tracking
	29.4 Clinical applications
		29.4.1 Intraoperative navigation
		29.4.2 Tissue characterization
		29.4.3 Skill assessment
		29.4.4 Surgical workflow analysis
	29.5 Discussion
	Acknowledgments
	References
30 Interventional imaging: Biophotonics
	30.1 A brief introduction to light-tissue interactions and white light imaging
	30.2 Summary of chapter structure
	30.3 Fluorescence imaging
	30.4 Multispectral imaging
	30.5 Microscopy techniques
	30.6 Optical coherence tomography
	30.7 Photoacoustic methods
	30.8 Optical perfusion imaging
	30.9 Macroscopic scanning of optical systems and visualization
	30.10 Summary
	References
31 External tracking devices and tracked tool calibration
	31.1 Introduction
	31.2 Target registration error estimation for paired measurements
	31.3 External spatial measurement devices
		31.3.1 Electromagnetic tracking system
		31.3.2 Optical tracking system
		31.3.3 Deployment consideration
	31.4 Stylus calibration
	31.5 Template-based calibration
	31.6 Ultrasound probe calibration
	31.7 Camera hand-eye calibration
	31.8 Conclusion and resources
	References
32 Image-based surgery planning
	32.1 Background and motivation
	32.2 General concepts
	32.3 Treatment planning for bone fracture in orthopaedic surgery
		32.3.1 Background
		32.3.2 System overview
		32.3.3 Planning workflow
		32.3.4 Planning system
		32.3.5 Evaluation and validation
		32.3.6 Perspectives
	32.4 Treatment planning for keyhole neurosurgery and percutaneous ablation
		32.4.1 Background
		32.4.2 Placement constraints
		32.4.3 Constraint solving
		32.4.4 Evaluation and validation
		32.4.5 Perspectives
	32.5 Future challenges
	References
33 Human-machine interfaces for medical imaging and clinical interventions
	33.1 HCI for medical imaging vs clinical interventions
		33.1.1 HCI for diagnostic queries (using medical imaging)
		33.1.2 HCI for planning, guiding, and executing imperative actions (computer-assisted interventions)
	33.2 Human-computer interfaces: design and evaluation
	33.3 What is an interface?
	33.4 Human outputs are computer inputs
	33.5 Position inputs (free-space pointing and navigation interactions)
	33.6 Direct manipulation vs proxy-based interactions (cursors)
	33.7 Control of viewpoint
	33.8 Selection (object-based interactions)
	33.9 Quantification (object-based position setting)
	33.10 User interactions: selection vs position, object-based vs free-space
	33.11 Text inputs (strings encoded/parsed as formal and informal language)
	33.12 Language-based control (text commands or spoken language)
	33.13 Image-based and workspace-based interactions: movement and selection events
	33.14 Task representations for image-based and intervention-based interfaces
	33.15 Design and evaluation guidelines for human-computer interfaces: human inputs are computer outputs - the system design must respect perceptual capacities and constraints
	33.16 Objective evaluation of performance on a task mediated by an interface
	References
34 Robotic interventions
	34.1 Introduction
	34.2 Precision positioning
	34.3 Master-slave system
	34.4 Image guided robotic tool guide
	34.5 Interactive manipulation
	34.6 Articulated access
	34.7 Untethered microrobots
	34.8 Soft robotics
	34.9 Summary
	References
35 System integration
	35.1 Introduction
	35.2 System design
		35.2.1 Programming language and platform
		35.2.2 Design approaches
	35.3 Frameworks and middleware
		35.3.1 Middleware
			35.3.1.1 Networking: UDP and TCP
			35.3.1.2 Data serialization
			35.3.1.3 Robot Operating System (ROS)
			35.3.1.4 OpenIGTLink
		35.3.2 Application frameworks
			35.3.2.1 Requirements
			35.3.2.2 Overview of existing application frameworks
	35.4 Development process
		35.4.1 Software configuration management
		35.4.2 Build systems
		35.4.3 Documentation
		35.4.4 Testing
	35.5 Example integrated systems
		35.5.1 Da Vinci Research Kit (dVRK)
			35.5.1.1 dVRK system architecture
			35.5.1.2 dVRK I/O layer
			35.5.1.3 dVRK real-time control layer
			35.5.1.4 dVRK ROS interface
			35.5.1.5 dVRK with image guidance
			35.5.1.6 dVRK with augmented reality HMD
		35.5.2 SlicerIGT based interventional and training systems
			35.5.2.1 3D Slicer module design
			35.5.2.2 Surgical navigation system for breast cancer resection
			35.5.2.3 Virtual/augmented reality applications
	35.6 Conclusions
	References
36 Clinical translation
	36.1 Introduction
	36.2 Definitions
	36.3 Useful researcher characteristics for clinical translation
		36.3.1 Comfort zone
		36.3.2 Team-based approach
		36.3.3 Embracing change
		36.3.4 Commercialization
		36.3.5 Selection of a clinical translatable idea
		36.3.6 Clinical trials
		36.3.7 Regulatory approval
	36.4 Example of clinical translation: 3D ultrasound-guided prostate biopsy
		36.4.1 Clinical need
		36.4.2 Clinical research partners and generation of the hypothesis
		36.4.3 Development of basic tools
		36.4.4 Applied research
		36.4.5 Clinical research
		36.4.6 Commercialization
		36.4.7 Actions based on lessons learned
	36.5 Conclusions
	References
37 Interventional procedures training
	37.1 Introduction
	37.2 Assessment
		37.2.1 Rating by expert reviewers
		37.2.2 Real-time spatial tracking
		37.2.3 Automatic video analysis
		37.2.4 Crowdsourcing
	37.3 Feedback
		37.3.1 Feedback in complex procedures
		37.3.2 Learning curves and performance benchmarks
	37.4 Simulated environments
		37.4.1 Animal models
		37.4.2 Synthetic models
		37.4.3 Box trainers
		37.4.4 Virtual reality
	37.5 Shared resources
	37.6 Summary
	References
38 Surgical data science
	38.1 Concept of surgical data science (SDS)
	38.2 Clinical context for SDS and its applications
		Automating intelligent surgical assistance
		Training and assessing providers
		Improving measurement of surgical outcomes
		Integrating data science into the surgical care pathway
	38.3 Technical approaches for SDS
		Data sources
		Creating labeled data and dealing with sparsely annotated data
		Ontologies and semantic models
		Inference and machine learning
	38.4 Future challenges for SDS
		Pervasive data capture
		Patient models
		Models of surgeon performance
		Surgical augmentation
		Efficient learning
		Causal analysis of interventional pathways
		Finding good use cases
	38.5 Conclusion
	Acknowledgments
	References
39 Computational biomechanics for medical image analysis
	39.1 Introduction
	39.2 Image analysis informs biomechanics: patient-specific computational biomechanics model from medical images
		39.2.1 Geometry extraction from medical images: segmentation
		39.2.2 Finite element mesh generation
		39.2.3 Image as a computational biomechanics model: meshless discretization
	39.3 Biomechanics informs image analysis: computational biomechanics model as image registration tool
		39.3.1 Biomechanics-based image registration: problem formulation
		39.3.2 Biomechanics-based image registration: examples
			39.3.2.1 Neuroimage registration
			39.3.2.2 Magnetic resonance (MR) image registration for intracranial electrode localization for epilepsy treatment
			39.3.2.3 Whole-body computed tomography (CT) image registration
	39.4 Discussion
	Acknowledgments
	References
40 Challenges in Computer Assisted Interventions
	40.1 Introduction to computer assisted interventions
		40.1.1 Requirements and definition
		40.1.2 Computer assistance
		40.1.3 Application domain for interventions
			40.1.3.1 General requirements for the design of computer assisted interventions
				Relevance
				Speed
				Flexibility
				Reproducibility
				Reliability
				Usability
				Safety
	40.2 Advanced technology in computer assisted interventions
		40.2.1 Robotics
		40.2.2 Augmented reality and advanced visualization/interaction concepts
		40.2.3 Artificial intelligence - data-driven decision support
	40.3 Translational challenge
		Clinical need
		Clinical trials
		Certification/regulatory affairs
		Reimbursement
		Service and education
		Financing
	40.4 Simulation
		Simulation within the healthcare innovation pathway
		Simulation-based assessment
		Assessment in healthcare innovation
		Prototyping
		Training
		Replacing old knowledge with new knowledge
		Engagement
		Intraoperative training and assistance
	40.5 Summary
	References
Index
Back Cover