

Download the book Deep Learning with PyTorch

Book details

Deep Learning with PyTorch

Category: Cybernetics: Artificial Intelligence
Edition: 1
Authors: Eli Stevens, Luca Antiga, Thomas Viehmann
ISBN: 1617295264, 9781617295263
Publisher: Manning Publications
Year published: 2020
Pages: 522
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 44 MB

Book price (toman): 51,000



Keywords for Deep Learning with PyTorch: machine learning, neural networks, deep learning, image processing, Python, convolutional neural networks, generative adversarial networks, web applications, image recognition, deployment, gradient descent, Flask, NumPy, PyTorch, HDF5, overfitting, time-series analysis, activations, AlexNet, ResNet, cancer, tensor calculus, pretrained networks, CycleGAN, Torch Hub





To have the file of Deep Learning with PyTorch converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the file for you.

Note that Deep Learning with PyTorch is the original-language (English) edition, not a Persian translation. The International Library website offers only original-language books and does not carry books translated into or written in Persian.


About the book Deep Learning with PyTorch

Every other day we hear about new ways to put deep learning to good use: improved medical imaging, accurate credit card fraud detection, long range weather forecasting, and more. PyTorch puts these superpowers in your hands, providing a comfortable Python experience that gets you started quickly and then grows with you as you, and your deep learning skills, become more sophisticated. Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch. This book takes you into a fascinating case study: building an algorithm capable of detecting malignant lung tumors using CT scans. As the authors guide you through this real example, you'll discover just how effective and fun PyTorch can be.
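To give a feel for the "comfortable Python experience" the description mentions, here is a minimal sketch, not taken from the book itself, of the kind of exercise its second chapter covers: classifying an image with a pretrained ResNet from torchvision. The model choice, the file name dog.jpg, and the normalization constants are illustrative assumptions, not the book's own code.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained ResNet-101 and switch it to inference mode.
resnet = models.resnet101(pretrained=True)
resnet.eval()

# Standard ImageNet preprocessing: resize, center-crop, tensorize, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("dog.jpg")            # any RGB image you supply
batch = preprocess(img).unsqueeze(0)   # add a batch dimension

with torch.no_grad():                  # gradients are not needed for inference
    scores = resnet(batch)

print(scores.argmax(dim=1).item())     # index of the top-scoring ImageNet class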



Table of contents

Deep Learning with PyTorch
contents
foreword
preface
acknowledgments
about this book
	Who should read this book
	How this book is organized: A roadmap
	About the code
	Hardware and software requirements
	liveBook discussion forum
	Other online resources
about the authors
about the cover illustration
Part 1: Core PyTorch
	Chapter 1: Introducing deep learning and the PyTorch Library
		1.1 The deep learning revolution
		1.2 PyTorch for deep learning
		1.3 Why PyTorch?
			1.3.1 The deep learning competitive landscape
		1.4 An overview of how PyTorch supports deep learning projects
		1.5 Hardware and software requirements
			1.5.1 Using Jupyter Notebooks
		1.6 Exercises
		1.7 Summary
	Chapter 2: Pretrained networks
		2.1 A pretrained network that recognizes the subject of an image
			2.1.1 Obtaining a pretrained network for image recognition
			2.1.2 AlexNet
			2.1.3 ResNet
			2.1.4 Ready, set, almost run
			2.1.5 Run!
		2.2 A pretrained model that fakes it until it makes it
			2.2.1 The GAN game
			2.2.2 CycleGAN
			2.2.3 A network that turns horses into zebras
		2.3 A pretrained network that describes scenes
			2.3.1 NeuralTalk2
		2.4 Torch Hub
		2.5 Conclusion
		2.6 Exercises
		2.7 Summary
	Chapter 3: It starts with a tensor
		3.1 The world as floating-point numbers
		3.2 Tensors: Multidimensional arrays
			3.2.1 From Python lists to PyTorch tensors
			3.2.2 Constructing our first tensors
			3.2.3 The essence of tensors
		3.3 Indexing tensors
		3.4 Named tensors
		3.5 Tensor element types
			3.5.1 Specifying the numeric type with dtype
			3.5.2 A dtype for every occasion
			3.5.3 Managing a tensor’s dtype attribute
		3.6 The tensor API
		3.7 Tensors: Scenic views of storage
			3.7.1 Indexing into storage
			3.7.2 Modifying stored values: In-place operations
		3.8 Tensor metadata: Size, offset, and stride
			3.8.1 Views of another tensor’s storage
			3.8.2 Transposing without copying
			3.8.3 Transposing in higher dimensions
			3.8.4 Contiguous tensors
		3.9 Moving tensors to the GPU
			3.9.1 Managing a tensor’s device attribute
		3.10 NumPy interoperability
		3.11 Generalized tensors are tensors, too
		3.12 Serializing tensors
			3.12.1 Serializing to HDF5 with h5py
		3.13 Conclusion
		3.14 Exercises
		3.15 Summary
	Chapter 4: Real-world data representation using tensors
		4.1 Working with images
			4.1.1 Adding color channels
			4.1.2 Loading an image file
			4.1.3 Changing the layout
			4.1.4 Normalizing the data
		4.2 3D images: Volumetric data
			4.2.1 Loading a specialized format
		4.3 Representing tabular data
			4.3.1 Using a real-world dataset
			4.3.2 Loading a wine data tensor
			4.3.3 Representing scores
			4.3.4 One-hot encoding
			4.3.5 When to categorize
			4.3.6 Finding thresholds
		4.4 Working with time series
			4.4.1 Adding a time dimension
			4.4.2 Shaping the data by time period
			4.4.3 Ready for training
		4.5 Representing text
			4.5.1 Converting text to numbers
			4.5.2 One-hot-encoding characters
			4.5.3 One-hot encoding whole words
			4.5.4 Text embeddings
			4.5.5 Text embeddings as a blueprint
		4.6 Conclusion
		4.7 Exercises
		4.8 Summary
	Chapter 5: The mechanics of learning
		5.1 A timeless lesson in modeling
		5.2 Learning is just parameter estimation
			5.2.1 A hot problem
			5.2.2 Gathering some data
			5.2.3 Visualizing the data
			5.2.4 Choosing a linear model as a first try
		5.3 Less loss is what we want
			5.3.1 From problem back to PyTorch
		5.4 Down along the gradient
			5.4.1 Decreasing loss
			5.4.2 Getting analytical
			5.4.3 Iterating to fit the model
			5.4.4 Normalizing inputs
			5.4.5 Visualizing (again)
		5.5 PyTorch’s autograd: Backpropagating all things
			5.5.1 Computing the gradient automatically
			5.5.2 Optimizers a la carte
			5.5.3 Training, validation, and overfitting
			5.5.4 Autograd nits and switching it off
		5.6 Conclusion
		5.7 Exercise
		5.8 Summary
	Chapter 6: Using a neural network to fit the data
		6.1 Artificial neurons
			6.1.1 Composing a multilayer network
			6.1.2 Understanding the error function
			6.1.3 All we need is activation
			6.1.4 More activation functions
			6.1.5 Choosing the best activation function
			6.1.6 What learning means for a neural network
		6.2 The PyTorch nn module
			6.2.1 Using __call__ rather than forward
			6.2.2 Returning to the linear model
		6.3 Finally a neural network
			6.3.1 Replacing the linear model
			6.3.2 Inspecting the parameters
			6.3.3 Comparing to the linear model
		6.4 Conclusion
		6.5 Exercises
		6.6 Summary
	Chapter 7: Telling birds from airplanes: Learning from images
		7.1 A dataset of tiny images
			7.1.1 Downloading CIFAR-10
			7.1.2 The Dataset class
			7.1.3 Dataset transforms
			7.1.4 Normalizing data
		7.2 Distinguishing birds from airplanes
			7.2.1 Building the dataset
			7.2.2 A fully connected model
			7.2.3 Output of a classifier
			7.2.4 Representing the output as probabilities
			7.2.5 A loss for classifying
			7.2.6 Training the classifier
			7.2.7 The limits of going fully connected
		7.3 Conclusion
		7.4 Exercises
		7.5 Summary
	Chapter 8: Using convolutions to generalize
		8.1 The case for convolutions
			8.1.1 What convolutions do
		8.2 Convolutions in action
			8.2.1 Padding the boundary
			8.2.2 Detecting features with convolutions
			8.2.3 Looking further with depth and pooling
			8.2.4 Putting it all together for our network
		8.3 Subclassing nn.Module
			8.3.1 Our network as an nn.Module
			8.3.2 How PyTorch keeps track of parameters and submodules
			8.3.3 The functional API
		8.4 Training our convnet
			8.4.1 Measuring accuracy
			8.4.2 Saving and loading our model
			8.4.3 Training on the GPU
		8.5 Model design
			8.5.1 Adding memory capacity: Width
			8.5.2 Helping our model to converge and generalize: Regularization
			8.5.3 Going deeper to learn more complex structures: Depth
			8.5.4 Comparing the designs from this section
			8.5.5 It’s already outdated
		8.6 Conclusion
		8.7 Exercises
		8.8 Summary
Part 2: Learning from images in the real world: Early detection of lung cancer
	Chapter 9: Using PyTorch to fight cancer
		9.1 Introduction to the use case
		9.2 Preparing for a large-scale project
		9.3 What is a CT scan, exactly?
		9.4 The project: An end-to-end detector for lung cancer
			9.4.1 Why can’t we just throw data at a neural network until it works?
			9.4.2 What is a nodule?
			9.4.3 Our data source: The LUNA Grand Challenge
			9.4.4 Downloading the LUNA data
		9.5 Conclusion
		9.6 Summary
	Chapter 10: Combining data sources into a unified dataset
		10.1 Raw CT data files
		10.2 Parsing LUNA’s annotation data
			10.2.1 Training and validation sets
			10.2.2 Unifying our annotation and candidate data
		10.3 Loading individual CT scans
			10.3.1 Hounsfield Units
		10.4 Locating a nodule using the patient coordinate system
			10.4.1 The patient coordinate system
			10.4.2 CT scan shape and voxel sizes
			10.4.3 Converting between millimeters and voxel addresses
			10.4.4 Extracting a nodule from a CT scan
		10.5 A straightforward dataset implementation
			10.5.1 Caching candidate arrays with the getCtRawCandidate function
			10.5.2 Constructing our dataset in LunaDataset.__init__
			10.5.3 A training/validation split
			10.5.4 Rendering the data
		10.6 Conclusion
		10.7 Exercises
		10.8 Summary
	Chapter 11: Training a classification model to detect suspected tumors
		11.1 A foundational model and training loop
		11.2 The main entry point for our application
		11.3 Pretraining setup and initialization
			11.3.1 Initializing the model and optimizer
			11.3.2 Care and feeding of data loaders
		11.4 Our first-pass neural network design
			11.4.1 The core convolutions
			11.4.2 The full model
		11.5 Training and validating the model
			11.5.1 The computeBatchLoss function
			11.5.2 The validation loop is similar
		11.6 Outputting performance metrics
			11.6.1 The logMetrics function
		11.7 Running the training script
			11.7.1 Needed data for training
			11.7.2 Interlude: The enumerateWithEstimate function
		11.8 Evaluating the model: Getting 99.7% correct means we’re done, right?
		11.9 Graphing training metrics with TensorBoard
			11.9.1 Running TensorBoard
			11.9.2 Adding TensorBoard support to the metrics logging function
		11.10 Why isn’t the model learning to detect nodules?
		11.11 Conclusion
		11.12 Exercises
		11.13 Summary
	Chapter 12: Improving training with metrics and augmentation
		12.1 High-level plan for improvement
		12.2 Good dogs vs. bad guys: False positives and false negatives
		12.3 Graphing the positives and negatives
			12.3.1 Recall is Roxie’s strength
			12.3.2 Precision is Preston’s forte
			12.3.3 Implementing precision and recall in logMetrics
			12.3.4 Our ultimate performance metric: The F1 score
			12.3.5 How does our model perform with our new metrics?
		12.4 What does an ideal dataset look like?
			12.4.1 Making the data look less like the actual and more like the “ideal”
			12.4.2 Contrasting training with a balanced LunaDataset to previous runs
			12.4.3 Recognizing the symptoms of overfitting
		12.5 Revisiting the problem of overfitting
			12.5.1 An overfit face-to-age prediction model
		12.6 Preventing overfitting with data augmentation
			12.6.1 Specific data augmentation techniques
			12.6.2 Seeing the improvement from data augmentation
		12.7 Conclusion
		12.8 Exercises
		12.9 Summary
	Chapter 13: Using segmentation to find suspected nodules
		13.1 Adding a second model to our project
		13.2 Various types of segmentation
		13.3 Semantic segmentation: Per-pixel classification
			13.3.1 The U-Net architecture
		13.4 Updating the model for segmentation
			13.4.1 Adapting an off-the-shelf model to our project
		13.5 Updating the dataset for segmentation
			13.5.1 U-Net has very specific input size requirements
			13.5.2 U-Net trade-offs for 3D vs. 2D data
			13.5.3 Building the ground truth data
			13.5.4 Implementing Luna2dSegmentationDataset
			13.5.5 Designing our training and validation data
			13.5.6 Implementing TrainingLuna2dSegmentationDataset
			13.5.7 Augmenting on the GPU
		13.6 Updating the training script for segmentation
			13.6.1 Initializing our segmentation and augmentation models
			13.6.2 Using the Adam optimizer
			13.6.3 Dice loss
			13.6.4 Getting images into TensorBoard
			13.6.5 Updating our metrics logging
			13.6.6 Saving our model
		13.7 Results
		13.8 Conclusion
		13.9 Exercises
		13.10 Summary
	Chapter 14: End-to-end nodule analysis, and where to go next
		14.1 Towards the finish line
		14.2 Independence of the validation set
		14.3 Bridging CT segmentation and nodule candidate classification
			14.3.1 Segmentation
			14.3.2 Grouping voxels into nodule candidates
			14.3.3 Did we find a nodule? Classification to reduce false positives
		14.4 Quantitative validation
		14.5 Predicting malignancy
			14.5.1 Getting malignancy information
			14.5.2 An area under the curve baseline: Classifying by diameter
			14.5.3 Reusing preexisting weights: Fine-tuning
			14.5.4 More output in TensorBoard
		14.6 What we see when we diagnose
			14.6.1 Training, validation, and test sets
		14.7 What next? Additional sources of inspiration (and data)
			14.7.1 Preventing overfitting: Better regularization
			14.7.2 Refined training data
			14.7.3 Competition results and research papers
		14.8 Conclusion
			14.8.1 Behind the curtain
		14.9 Exercises
		14.10 Summary
Part 3: Deployment
	Chapter 15: Deploying to production
		15.1 Serving PyTorch models
			15.1.1 Our model behind a Flask server
			15.1.2 What we want from deployment
			15.1.3 Request batching
		15.2 Exporting models
			15.2.1 Interoperability beyond PyTorch with ONNX
			15.2.2 PyTorch’s own export: Tracing
			15.2.3 Our server with a traced model
		15.3 Interacting with the PyTorch JIT
			15.3.1 What to expect from moving beyond classic Python/PyTorch
			15.3.2 The dual nature of PyTorch as interface and backend
			15.3.3 TorchScript
			15.3.4 Scripting the gaps of traceability
		15.4 LibTorch: PyTorch in C++
			15.4.1 Running JITed models from C++
			15.4.2 C++ from the start: The C++ API
		15.5 Going mobile
			15.5.1 Improving efficiency: Model design and quantization
		15.6 Emerging technology: Enterprise serving of PyTorch models
		15.7 Conclusion
		15.8 Exercises
		15.9 Summary
index



