
You can reach us by phone call or SMS at the following mobile numbers:


09117307688
09117179751

If your call is not answered, please contact support via SMS.

Unlimited access

For registered users

Money-back guarantee

If the description does not match the book

Support

Available from 7 AM to 10 PM

Download the book: MATLAB Parallel Computing Toolbox User's Guide

MATLAB Parallel Computing Toolbox User's Guide

Book details

MATLAB Parallel Computing Toolbox User's Guide

Edition:
Authors:
Series:

Publisher: The MathWorks, Inc.
Year of publication: 2021
Number of pages: 1068
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 on user request)
File size: 7 MB

Price of the book (Toman): 46,000



Rate this book

Average rating for this book:
Number of ratings: 13


If you would like the file of the book MATLAB Parallel Computing Toolbox User's Guide converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and the file will be converted for you.

Please note that MATLAB Parallel Computing Toolbox User's Guide is the original-language (English) edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.


Description of the book (in the original language)



Table of contents

Getting Started
	Parallel Computing Toolbox Product Description
	Parallel Computing Support in MathWorks Products
	Create and Use Distributed Arrays
		Creating Distributed Arrays
		Creating Codistributed Arrays
	Determine Product Installation and Versions
	Interactively Run a Loop in Parallel Using parfor
	Run Batch Parallel Jobs
		Run a Batch Job
		Run a Batch Job with a Parallel Pool
		Run Script as Batch Job from the Current Folder Browser
	Distribute Arrays and Run SPMD
		Distributed Arrays
		Single Program Multiple Data (spmd)
		Composites
	What Is Parallel Computing?
	Choose a Parallel Computing Solution
	Run MATLAB Functions with Automatic Parallel Support
		Find Automatic Parallel Support
	Run Non-Blocking Code in Parallel Using parfeval
	Evaluate Functions in the Background Using parfeval
	Use Parallel Computing Toolbox with Cloud Center Cluster in MATLAB Online
	Run MATLAB Functions on Thread Workers
		Check Thread Worker Supported Functions
	Functions Supported on Thread Workers
		Functions
		Methods
Parallel for-Loops (parfor)
	Decide When to Use parfor
		parfor-Loops in MATLAB
		Deciding When to Use parfor
		Example of parfor With Low Parallel Overhead
		Example of parfor With High Parallel Overhead
	Convert for-Loops Into parfor-Loops
	Ensure That parfor-Loop Iterations are Independent
	Nested parfor and for-Loops and Other parfor Requirements
		Nested parfor-Loops
		Convert Nested for-Loops to parfor-Loops
		Nested for-Loops: Requirements and Limitations
		parfor-Loop Limitations
	Scale Up parfor-Loops to Cluster and Cloud
	Use parfor-Loops for Reduction Assignments
	Use Objects and Handles in parfor-Loops
		Using Objects in parfor-Loops
		Handle Classes
		Sliced Variables Referencing Function Handles
	Troubleshoot Variables in parfor-Loops
		Ensure That parfor-Loop Variables Are Consecutive Increasing Integers
		Avoid Overflows in parfor-Loops
		Solve Variable Classification Issues in parfor-Loops
		Structure Arrays in parfor-Loops
		Converting the Body of a parfor-Loop into a Function
		Unambiguous Variable Names
		Transparent parfor-loops
		Global and Persistent Variables
	Loop Variables
	Sliced Variables
		Characteristics of a Sliced Variable
		Sliced Input and Output Variables
		Nested for-Loops with Sliced Variables
	Broadcast Variables
	Reduction Variables
		Notes About Required and Recommended Guidelines
		Basic Rules for Reduction Variables
		Requirements for Reduction Assignments
		Using a Custom Reduction Function
		Chaining Reduction Operators
	Temporary Variables
		Uninitialized Temporaries
		Temporary Variables Intended as Reduction Variables
		ans Variable
	Ensure Transparency in parfor-Loops or spmd Statements
		Parallel Simulink Simulations
	Improve parfor Performance
		Where to Create Arrays
		Profiling parfor-loops
		Slicing Arrays
		Optimizing on Local vs. Cluster Workers
	Run Code on Parallel Pools
		What Is a Parallel Pool?
		Automatically Start and Stop a Parallel Pool
		Alternative Ways to Start and Stop Pools
		Pool Size and Cluster Selection
	Choose Between Thread-Based and Process-Based Environments
		Select Parallel Environment
		Compare Process Workers and Thread Workers
		Solve Optimization Problem in Parallel on Process-Based and Thread-Based Pool
		What Are Thread-Based Environments?
		What Are Process-Based Environments?
		Check Support for Thread-Based Environment
	Repeat Random Numbers in parfor-Loops
	Recommended System Limits for Macintosh and Linux
Single Program Multiple Data (spmd)
	Run Single Programs on Multiple Data Sets
		Introduction
		When to Use spmd
		Define an spmd Statement
		Display Output
		MATLAB Path
		Error Handling
		spmd Limitations
	Access Worker Variables with Composites
		Introduction to Composites
		Create Composites in spmd Statements
		Variable Persistence and Sequences of spmd
		Create Composites Outside spmd Statements
	Distributing Arrays to Parallel Workers
		Using Distributed Arrays to Partition Data Across Workers
		Load Distributed Arrays in Parallel Using datastore
		Alternative Methods for Creating Distributed and Codistributed Arrays
	Choose Between spmd, parfor, and parfeval
		Communicating Parallel Code
		Compare Performance of Multithreading and ProcessPool
		Compare Performance of parfor, parfeval, and spmd
Math with Codistributed Arrays
	Nondistributed Versus Distributed Arrays
		Introduction
		Nondistributed Arrays
		Codistributed Arrays
	Working with Codistributed Arrays
		How MATLAB Software Distributes Arrays
		Creating a Codistributed Array
		Local Arrays
		Obtaining Information About the Array
		Changing the Dimension of Distribution
		Restoring the Full Array
		Indexing into a Codistributed Array
		2-Dimensional Distribution
	Looping Over a Distributed Range (for-drange)
		Parallelizing a for-Loop
		Codistributed Arrays in a for-drange Loop
	Run MATLAB Functions with Distributed Arrays
		Check Distributed Array Support in Functions
		Support for Sparse Distributed Arrays
Programming Overview
	How Parallel Computing Products Run a Job
		Overview
		Toolbox and Server Components
		Life Cycle of a Job
	Program a Job on a Local Cluster
	Specify Your Parallel Preferences
	Discover Clusters and Use Cluster Profiles
		Create and Manage Cluster Profiles
		Discover Clusters
		Create Cloud Cluster
		Add and Modify Cluster Profiles
		Import and Export Cluster Profiles
		Edit Number of Workers and Cluster Settings
		Use Your Cluster from MATLAB
	Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks
	Job Monitor
		Typical Use Cases
		Manage Jobs Using the Job Monitor
		Identify Task Errors Using the Job Monitor
	Programming Tips
		Program Development Guidelines
		Current Working Directory of a MATLAB Worker
		Writing to Files from Workers
		Saving or Sending Objects
		Using clear functions
		Running Tasks That Call Simulink Software
		Using the pause Function
		Transmitting Large Amounts of Data
		Interrupting a Job
		Speeding Up a Job
	Control Random Number Streams on Workers
		Client and Workers
		Different Workers
		Normally Distributed Random Numbers
	Profiling Parallel Code
		Profile Parallel Code
		Analyze Parallel Profile Data
	Troubleshooting and Debugging
		Attached Files Size Limitations
		File Access and Permissions
		No Results or Failed Job
		Connection Problems Between the Client and MATLAB Job Scheduler
		SFTP Error: Received Message Too Long
	Big Data Workflow Using Tall Arrays and Datastores
		Running Tall Arrays in Parallel
		Use mapreducer to Control Where Your Code Runs
	Use Tall Arrays on a Parallel Pool
	Use Tall Arrays on a Spark Enabled Hadoop Cluster
		Creating and Using Tall Tables
	Run mapreduce on a Parallel Pool
		Start Parallel Pool
		Compare Parallel mapreduce
	Run mapreduce on a Hadoop Cluster
		Cluster Preparation
		Output Format and Order
		Calculate Mean Delay
	Partition a Datastore in Parallel
	Set Environment Variables on Workers
		Set Environment Variables for Cluster Profile
		Set Environment Variables for a Job or Pool
Program Independent Jobs
	Program Independent Jobs
	Program Independent Jobs on a Local Cluster
		Create and Run Jobs with a Local Cluster
		Local Cluster Behavior
	Program Independent Jobs for a Supported Scheduler
		Create and Run Jobs
		Manage Objects in the Scheduler
	Share Code with the Workers
		Workers Access Files Directly
		Pass Data to and from Worker Sessions
		Pass MATLAB Code for Startup and Finish
	Plugin Scripts for Generic Schedulers
		Sample Plugin Scripts
		Writing Custom Plugin Scripts
		Adding User Customization
		Managing Jobs with Generic Scheduler
		Submitting from a Remote Host
		Submitting without a Shared File System
Program Communicating Jobs
	Program Communicating Jobs
	Program Communicating Jobs for a Supported Scheduler
		Schedulers and Conditions
		Code the Task Function
		Code in the Client
	Further Notes on Communicating Jobs
		Number of Tasks in a Communicating Job
		Avoid Deadlock and Other Dependency Errors
GPU Computing
	GPU Capabilities and Performance
		Capabilities
		Performance Benchmarking
	Establish Arrays on a GPU
		Create GPU Arrays from Existing Data
		Create GPU Arrays Directly
		Examine gpuArray Characteristics
		Save and Load gpuArrays
	Random Number Streams on a GPU
		Client CPU and GPU
		Worker CPU and GPU
		Normally Distributed Random Numbers
	Run MATLAB Functions on a GPU
		MATLAB Functions with gpuArray Arguments
		Check or Select a GPU
		Use MATLAB Functions with a GPU
		Sharpen an Image Using the GPU
		Compute the Mandelbrot Set using GPU-Enabled Functions
		Work with Sparse Arrays on a GPU
		Work with Complex Numbers on a GPU
		Special Conditions for gpuArray Inputs
		Acknowledgments
	Identify and Select a GPU Device
	Run CUDA or PTX Code on GPU
		Overview
		Create a CUDAKernel Object
		Run a CUDAKernel
		Complete Kernel Workflow
	Run MEX-Functions Containing CUDA Code
		Write a MEX-File Containing CUDA Code
		Run the Resulting MEX-Functions
		Comparison to a CUDA Kernel
		Access Complex Data
		Compile a GPU MEX-File
	Measure and Improve GPU Performance
		Getting Started with GPU Benchmarking
		Improve Performance Using Single Precision Calculations
		Basic Workflow for Improving Performance
		Advanced Tools for Improving Performance
		Best Practices for Improving Performance
		Measure Performance on the GPU
		Vectorize for Improved GPU Performance
		Troubleshooting GPUs
	GPU Support by Release
		Supported GPUs
		CUDA Toolkit
		Forward Compatibility for GPU Devices
		Increase the CUDA Cache Size
Parallel Computing Toolbox Examples
	Profile Parallel Code
	Train Network in Parallel with Custom Training Loop
	Solve Differential Equation Using Multigrid Preconditioner on Distributed Discretization
	Plot During Parameter Sweep with parfeval
	Perform Webcam Image Acquisition in Parallel with Postprocessing
	Perform Image Acquisition and Parallel Image Processing
	Run Script as Batch Job
	Run Batch Job and Access Files from Workers
	Benchmark Cluster Workers
	Benchmark Your Cluster with the HPC Challenge
	Train Deep Learning Networks in Parallel
	Train Network Using Automatic Multi-GPU Support
	Process Big Data in the Cloud
	Use parfeval to Train Multiple Deep Learning Networks
	Train Network in the Cloud Using Automatic Parallel Support
	Use parfor to Train Multiple Deep Learning Networks
	Upload Deep Learning Data to the Cloud
	Send Deep Learning Batch Job to Cluster
	Run MATLAB Functions on Multiple GPUs
	Scale Up from Desktop to Cluster
	Plot During Parameter Sweep with parfor
	Update User Interface Asynchronously Using afterEach and afterAll
	Simple Benchmarking of PARFOR Using Blackjack
	Use Distributed Arrays to Solve Systems of Linear Equations with Direct Methods
	Use Distributed Arrays to Solve Systems of Linear Equations with Iterative Methods
	Using GOP to Achieve MPI_Allreduce Functionality
	Resource Contention in Task Parallel Problems
	Benchmarking Independent Jobs on the Cluster
	Benchmarking A\b
	Benchmarking A\b on the GPU
	Using FFT2 on the GPU to Simulate Diffraction Patterns
	Improve Performance of Element-wise MATLAB® Functions on the GPU using ARRAYFUN
	Measuring GPU Performance
	Generating Random Numbers on a GPU
	Illustrating Three Approaches to GPU Computing: The Mandelbrot Set
	Using GPU ARRAYFUN for Monte-Carlo Simulations
	Stencil Operations on a GPU
	Accessing Advanced CUDA Features Using MEX
	Improve Performance of Small Matrix Problems on the GPU using PAGEFUN
	Profiling Explicit Parallel Communication
	Profiling Load Unbalanced Codistributed Arrays
	Sequential Blackjack
	Distributed Blackjack
	Parfeval Blackjack
	Numerical Estimation of Pi Using Message Passing
	Query and Cancel parfeval Futures
	Use parfor to Speed Up Monte-Carlo Code
Objects
	ClusterPool
	codistributed
	codistributor1d
	codistributor2dbc
	Composite
	CUDAKernel
	distributed
	Future
	gpuArray
	gpuDevice
	GPUDeviceManager
	mxGPUArray
	parallel.Cluster
	parallel.cluster.Hadoop
	parallel.gpu.RandStream
	parallel.Job
	parallel.Pool
	parallel.pool.DataQueue
	parallel.pool.PollableDataQueue
	parallel.Task
	parallel.Worker
	ProcessPool
	RemoteClusterAccess
	ThreadPool
Functions
	addAttachedFiles
	afterAll
	afterEach
	afterEach
	arrayfun
	batch
	bsxfun
	cancel
	cancel
	changePassword
	classUnderlying
	clear
	codistributed
	codistributed.build
	codistributed.cell
	codistributed.colon
	codistributed.spalloc
	codistributed.speye
	codistributed.sprand
	codistributed.sprandn
	codistributor
	codistributor1d
	codistributor1d.defaultPartition
	codistributor2dbc
	codistributor2dbc.defaultLabGrid
	Composite
	createCommunicatingJob
	createJob
	createTask
	delete
	delete
	demote
	diary
	distributed
	distributed.cell
	distributed.spalloc
	distributed.speye
	distributed.sprand
	distributed.sprandn
	dload
	dsave
	exist
	existsOnGPU
	eye
	false
	fetchNext
	fetchOutputs
	fetchOutputs
	feval
	findJob
	findTask
	for (drange)
	gather
	gcat
	gcp
	getAttachedFilesFolder
	getCodistributor
	getCurrentCluster
	getCurrentJob
	getCurrentTask
	getCurrentWorker
	getDebugLog
	getJobClusterData
	getJobFolder
	getJobFolderOnCluster
	getLocalPart
	getLogLocation
	getTaskSchedulerIDs
	globalIndices
	gop
	gplus
	gpuDeviceCount
	gpuDeviceTable
	gpurng
	gputimeit
	help
	Inf
	isaUnderlying
	iscodistributed
	isComplete
	isdistributed
	isequal
	isequal
	isgpuarray
	isreplicated
	jobStartup
	labBarrier
	labBroadcast
	labindex
	labProbe
	labReceive
	labSend
	labSendReceive
	length
	listAutoAttachedFiles
	load
	logout
	mapreducer
	methods
	mexcuda
	mpiLibConf
	mpiprofile
	mpiSettings
	mxGPUCopyFromMxArray (C)
	mxGPUCopyGPUArray (C)
	mxGPUCopyImag (C)
	mxGPUCopyReal (C)
	mxGPUCreateComplexGPUArray (C)
	mxGPUCreateFromMxArray (C)
	mxGPUCreateGPUArray (C)
	mxGPUCreateMxArrayOnCPU (C)
	mxGPUCreateMxArrayOnGPU (C)
	mxGPUDestroyGPUArray (C)
	mxGPUGetClassID (C)
	mxGPUGetComplexity (C)
	mxGPUGetData (C)
	mxGPUGetDataReadOnly (C)
	mxGPUGetDimensions (C)
	mxGPUGetNumberOfDimensions (C)
	mxGPUGetNumberOfElements (C)
	mxGPUIsSame (C)
	mxGPUIsSparse (C)
	mxGPUIsValidGPUData (C)
	mxGPUSetDimensions (C)
	mxInitGPU (C)
	mxIsGPUArray (C)
	NaN
	numlabs
	ones
	pagefun
	parallel.cluster.generic.awsbatch.deleteBatchJob
	parallel.cluster.generic.awsbatch.deleteJobFilesFromS3
	parallel.cluster.generic.awsbatch.downloadJobFilesFromS3
	parallel.cluster.generic.awsbatch.downloadJobLogFiles
	parallel.cluster.generic.awsbatch.getBatchJobInfo
	parallel.cluster.generic.awsbatch.submitBatchJob
	parallel.cluster.generic.awsbatch.uploadJobFilesToS3
	parallel.cluster.Hadoop
	parallel.clusterProfiles
	parallel.defaultClusterProfile
	parallel.exportProfile
	parallel.gpu.CUDAKernel
	parallel.gpu.enableCUDAForwardCompatibility
	parallel.gpu.RandStream.create
	parallel.gpu.RandStream.getGlobalStream
	parallel.gpu.RandStream.list
	parallel.gpu.RandStream.setGlobalStream
	parallel.importProfile
	parallel.pool.Constant
	parcluster
	parfeval
	parfevalOnAll
	parfor
	parforOptions
	parpool
	pause
	pctconfig
	pctRunDeployedCleanup
	pctRunOnAll
	pload
	pmode
	poll
	poolStartup
	promote
	psave
	rand
	randi
	randn
	recreate
	redistribute
	reset
	resume
	saveAsProfile
	saveProfile
	setConstantMemory
	setJobClusterData
	shutdown
	sparse
	spmd
	start
	submit
	subsasgn
	subsref
	taskFinish
	taskStartup
	send
	ticBytes
	tocBytes
	true
	updateAttachedFiles
	wait
	wait (cluster)
	wait
	wait (GPUDevice)
	write
	zeros




User comments