
Download: Parallel Computing Toolbox User's Guide

Book Details

Title: Parallel Computing Toolbox User's Guide

Edition:
Series:
Publisher: MathWorks
Year: 2023
Pages: [1154]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 on request)
File size: 7 MB

Price (Toman): 42,000





If you need Parallel Computing Toolbox User's Guide in PDF, EPUB, AZW3, MOBI, or DJVU format, contact support and the file will be converted for you.

Note that Parallel Computing Toolbox User's Guide is the original English-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide books translated into or written in Persian.


Book Description (original language)



Table of Contents

Getting Started
	Parallel Computing Toolbox Product Description
	Parallel Computing Support in MathWorks Products
	Create and Use Distributed Arrays
		Creating Distributed Arrays
		Creating Codistributed Arrays
	Determine Product Installation and Versions
	Interactively Run Loops in Parallel Using parfor
	Run Batch Parallel Jobs
		Run a Batch Job
		Run a Batch Job with a Parallel Pool
		Run Script as Batch Job from the Current Folder Browser
	Distribute Arrays and Run SPMD
		Distributed Arrays
		Single Program Multiple Data (spmd)
		Composites
	What Is Parallel Computing?
	Choose a Parallel Computing Solution
	Run MATLAB Functions with Automatic Parallel Support
		Find Automatic Parallel Support
	Run Non-Blocking Code in Parallel Using parfeval
	Evaluate Functions in the Background Using parfeval
	Use Parallel Computing Toolbox with Cloud Center Cluster in MATLAB Online
	Write Portable Parallel Code
		Run Parallel Code in Serial Without Parallel Computing Toolbox
		Automatically Scale Up with backgroundPool
		Write Custom Portable Parallel Code
	Parallel Language Decision Tables
		Choose Parallel Computing Language Feature
		Choose Workflow
Parallel for-Loops (parfor)
	Decide When to Use parfor
		parfor-Loops in MATLAB
		Deciding When to Use parfor
		Example of parfor With Low Parallel Overhead
		Example of parfor With High Parallel Overhead
	Convert for-Loops Into parfor-Loops
	Ensure That parfor-Loop Iterations are Independent
	Nested parfor and for-Loops and Other parfor Requirements
		Nested parfor-Loops
		Convert Nested for-Loops to parfor-Loops
		Nested for-Loops: Requirements and Limitations
		parfor-Loop Limitations
	Scale Up parfor-Loops to Cluster and Cloud
	Use parfor-Loops for Reduction Assignments
	Use Objects and Handles in parfor-Loops
		Objects
		Handle Classes
		Sliced Variables Referencing Function Handles
	Troubleshoot Variables in parfor-Loops
		Ensure That parfor-Loop Variables Are Consecutive Increasing Integers
		Avoid Overflows in parfor-Loops
		Solve Variable Classification Issues in parfor-Loops
		Structure Arrays in parfor-Loops
		Converting the Body of a parfor-Loop into a Function
		Unambiguous Variable Names
		Transparent parfor-loops
		Global and Persistent Variables
	Loop Variables
	Sliced Variables
		Characteristics of a Sliced Variable
		Sliced Input and Output Variables
		Nested for-Loops with Sliced Variables
		Data Type Limitations
	Broadcast Variables
		Performance Considerations
	Reduction Variables
		Notes About Required and Recommended Guidelines
		Basic Rules for Reduction Variables
		Requirements for Reduction Assignments
		Using a Custom Reduction Function
		Chaining Reduction Operators
	Temporary Variables
		Uninitialized Temporaries
		Temporary Variables Intended as Reduction Variables
		ans Variable
	Ensure Transparency in parfor-Loops or spmd Statements
		Parallel Simulink Simulations
	Improve parfor Performance
		Where to Create Arrays
		Profiling parfor-loops
		Slicing Arrays
		Optimizing on Local vs. Cluster Workers
	Run Code on Parallel Pools
		What Is a Parallel Pool?
		Automatically Start and Stop a Parallel Pool
		Alternative Ways to Start and Stop Pools
		Pool Size and Cluster Selection
	Choose Between Thread-Based and Process-Based Environments
		Select Parallel Environment
		Compare Process Workers and Thread Workers
		Solve Optimization Problem in Parallel on Process-Based and Thread-Based Pool
		What Are Thread-Based Environments?
		What Are Process-Based Environments?
		Check Support for Thread-Based Environment
	Repeat Random Numbers in parfor-Loops
	Recommended System Limits for Macintosh and Linux
Asynchronous Parallel Programming
	Use afterEach and afterAll to Run Callback Functions
		Call afterEach on parfeval Computations
		Call afterAll on parfeval Computations
		Combine afterEach and afterAll
		Update User Interface Asynchronously Using afterEach and afterAll
		Handle Errors in Future Variables
Single Program Multiple Data (spmd)
	Run Single Programs on Multiple Data Sets
		Introduction
		When to Use spmd
		Define an spmd Statement
		Display Output
		MATLAB Path
		Error Handling
		spmd Limitations
	Access Worker Variables with Composites
		Introduction to Composites
		Create Composites in spmd Statements
		Variable Persistence and Sequences of spmd
		Create Composites Outside spmd Statements
	Distributing Arrays to Parallel Workers
		Using Distributed Arrays to Partition Data Across Workers
		Load Distributed Arrays in Parallel Using datastore
		Alternative Methods for Creating Distributed and Codistributed Arrays
	Choose Between spmd, parfor, and parfeval
		Communicating Parallel Code
		Compare Performance of Multithreading and ProcessPool
		Compare Performance of parfor, parfeval, and spmd
Math with Codistributed Arrays
	Nondistributed Versus Distributed Arrays
		Introduction
		Nondistributed Arrays
		Codistributed Arrays
	Working with Codistributed Arrays
		How MATLAB Software Distributes Arrays
		Creating a Codistributed Array
		Local Arrays
		Obtaining Information About the Array
		Changing the Dimension of Distribution
		Restoring the Full Array
		Indexing into a Codistributed Array
		2-Dimensional Distribution
	Looping Over a Distributed Range (for-drange)
		Parallelizing a for-Loop
		Codistributed Arrays in a for-drange Loop
	Run MATLAB Functions with Distributed Arrays
		Check Distributed Array Support in Functions
		Support for Sparse Distributed Arrays
Programming Overview
	How Parallel Computing Software Runs a Job
		Overview
		Toolbox and Server Components
		Life Cycle of a Job
	Program a Job on a Local Cluster
	Specify Your Parallel Preferences
	Discover Clusters and Use Cluster Profiles
		Create and Manage Cluster Profiles
		Discover Clusters
		Create Cloud Cluster
		Add and Modify Cluster Profiles
		Import and Export Cluster Profiles
		Edit Number of Workers and Cluster Settings
		Use Your Cluster from MATLAB
	Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks
	Job Monitor
		Typical Use Cases
		Manage Jobs Using the Job Monitor
		Identify Task Errors Using the Job Monitor
	Programming Tips
		Program Development Guidelines
		Current Working Directory of a MATLAB Worker
		Writing to Files from Workers
		Saving or Sending Objects
		Using clear functions
		Running Tasks That Call Simulink Software
		Using the pause Function
		Transmitting Large Amounts of Data
		Interrupting a Job
		Speeding Up a Job
	Control Random Number Streams on Workers
		Client and Workers
		Different Workers
		Normally Distributed Random Numbers
	Profiling Parallel Code
		Profile Parallel Code
		Analyze Parallel Profile Data
	Troubleshooting and Debugging
		Attached Files Size Limitations
		File Access and Permissions
		No Results or Failed Job
		Connection Problems Between the Client and MATLAB Job Scheduler
		"One of your shell's init files contains a command that is writing to stdout..."
	Big Data Workflow Using Tall Arrays and Datastores
		Running Tall Arrays in Parallel
		Use mapreducer to Control Where Your Code Runs
	Use Tall Arrays on a Parallel Pool
	Use Tall Arrays on a Spark Cluster
		Set Up a Spark Cluster and a Spark Enabled Hadoop Cluster
		Creating and Using Tall Tables
	Run mapreduce on a Parallel Pool
		Start Parallel Pool
		Compare Parallel mapreduce
	Run mapreduce on a Hadoop Cluster
		Cluster Preparation
		Output Format and Order
		Calculate Mean Delay
	Partition a Datastore in Parallel
	Set Environment Variables on Workers
		Set Environment Variables for Cluster Profile
		Set Environment Variables for a Job or Pool
Program Independent Jobs
	Program Independent Jobs
	Program Independent Jobs on a Local Cluster
		Create and Run Jobs with a Local Cluster
		Local Cluster Behavior
	Program Independent Jobs for a Supported Scheduler
		Create and Run Jobs
		Manage Objects in the Scheduler
	Share Code with the Workers
		Workers Access Files Directly
		Pass Data to and from Worker Sessions
		Pass MATLAB Code for Startup and Finish
	Plugin Scripts for Generic Schedulers
		Sample Plugin Scripts
		Writing Custom Plugin Scripts
		Adding User Customization
		Managing Jobs with Generic Scheduler
		Submitting from a Remote Host
		Submitting without a Shared File System
	Choose Batch Processing Function
		Batch Parallel Job Types
		Select Batch Function
Program Communicating Jobs
	Program Communicating Jobs
	Program Communicating Jobs for a Supported Scheduler
		Schedulers and Conditions
		Code the Task Function
		Code in the Client
	Further Notes on Communicating Jobs
		Number of Tasks in a Communicating Job
		Avoid Deadlock and Other Dependency Errors
GPU Computing
	Establish Arrays on a GPU
		Create GPU Arrays from Existing Data
		Create GPU Arrays Directly
		Examine gpuArray Characteristics
		Save and Load gpuArray Objects
	Random Number Streams on a GPU
		Client CPU and GPU
		Worker CPU and GPU
		Normally Distributed Random Numbers
	Run MATLAB Functions on a GPU
		MATLAB Functions with gpuArray Arguments
		Check gpuArray-Supported Functions
		Deep Learning with GPUs
		Check or Select a GPU
		Use MATLAB Functions with the GPU
		Examples Using GPUs
		Acknowledgments
	Identify and Select a GPU Device
	Sharpen an Image Using the GPU
	Compute the Mandelbrot Set using GPU-Enabled Functions
	Run CUDA or PTX Code on GPU
		CUDAKernel Workflow Overview
		Create a CUDAKernel Object
		Run a CUDAKernel
		Complete Kernel Workflow
	Run MEX-Functions Containing CUDA Code
		Write a MEX-File Containing CUDA Code
		Run the Resulting MEX-Functions
		Comparison to a CUDA Kernel
		Access Complex Data
		Compile a GPU MEX-File
		Install the CUDA Toolkit (Optional)
	Measure and Improve GPU Performance
		Measure GPU Performance
		Improve GPU Performance
	GPU Computing Requirements
	Run MATLAB using GPUs in the Cloud
		MathWorks Cloud Center
		Microsoft Azure Marketplace
		Reference Architectures
		Containers
	Work with Complex Numbers on a GPU
		Conditions for Working With Complex Numbers on a GPU
		Functions That Return Complex Data
	Work with Sparse Arrays on a GPU
		Create Sparse GPU Arrays
		Functions That Support Sparse GPU Arrays
Parallel Computing Toolbox Examples
	Profile Parallel Code
	Solve Differential Equation Using Multigrid Preconditioner on Distributed Discretization
	Plot During Parameter Sweep with parfeval
	Perform Webcam Image Acquisition in Parallel with Postprocessing
	Perform Image Acquisition and Parallel Image Processing
	Run Script as Batch Job
	Run Batch Job and Access Files from Workers
	Benchmark Cluster Workers
	Benchmark Your Cluster with the HPC Challenge
	Process Big Data in the Cloud
	Run MATLAB Functions on Multiple GPUs
		Advanced Support for Fast Multi-Node GPU Communication
	Scale Up from Desktop to Cluster
	Plot During Parameter Sweep with parfor
	Update User Interface Asynchronously Using afterEach and afterAll
	Simple Benchmarking of PARFOR Using Blackjack
	Use Distributed Arrays to Solve Systems of Linear Equations with Direct Methods
	Use Distributed Arrays to Solve Systems of Linear Equations with Iterative Methods
	Use spmdReduce to Achieve MPI_Allreduce Functionality
	Resource Contention in Task Parallel Problems
	Benchmarking Independent Jobs on the Cluster
	Benchmarking A\b
	Benchmarking A\b on the GPU
	Using FFT2 on the GPU to Simulate Diffraction Patterns
	Improve Performance of Element-Wise MATLAB Functions on the GPU Using arrayfun
	Measure GPU Performance
	Improve Performance Using a GPU and Vectorized Calculations
	Generating Random Numbers on a GPU
	Illustrating Three Approaches to GPU Computing: The Mandelbrot Set
	Using GPU arrayfun for Monte-Carlo Simulations
	Stencil Operations on a GPU
	Accessing Advanced CUDA Features Using MEX
	Improve Performance of Small Matrix Problems on the GPU Using pagefun
	Profiling Explicit Parallel Communication
	Profiling Load Unbalanced Codistributed Arrays
	Sequential Blackjack
	Distributed Blackjack
	Parfeval Blackjack
	Numerical Estimation of Pi Using Message Passing
	Query and Cancel parfeval Futures
	Use parfor to Speed Up Monte-Carlo Code
	Monitor Monte Carlo Batch Jobs with ValueStore
	Monitor Batch Jobs with ValueStore
Objects
	ClusterPool
	codistributed
	codistributor1d
	codistributor2dbc
	Composite
	parallel.gpu.CUDAKernel
	distributed
	FileStore
	gpuArray
	gpuDevice
	GPUDeviceManager
	mxGPUArray
	parallel.Cluster
	parallel.cluster.Hadoop
	parallel.cluster.Spark
	parallel.gpu.RandStream
	parallel.Job
	parallel.Pool
	parallel.pool.Constant
	parallel.pool.DataQueue
	parallel.pool.PollableDataQueue
	parallel.Task
	parallel.Worker
	ProcessPool
	RemoteClusterAccess
	ThreadPool
	ValueStore
Functions
	addAttachedFiles
	afterEach
	arrayfun
	batch
	bsxfun
	cancel
	cancelAll
	changePassword
	classUnderlying
	clear
	codistributed.build
	codistributed.cell
	codistributed.colon
	codistributed.spalloc
	codistributed.speye
	codistributed.sprand
	codistributed.sprandn
	codistributor
	codistributor1d.defaultPartition
	codistributor2dbc.defaultWorkerGrid
	Composite
	copyFileFromStore
	copyFileToStore
	createCommunicatingJob
	createJob
	createTask
	delete
	delete
	demote
	diary
	distributed
	distributed.cell
	distributed.spalloc
	distributed.speye
	distributed.sprand
	distributed.sprandn
	dload
	dsave
	exist
	existsOnGPU
	eye
	false
	fetchOutputs
	feval
	findJob
	findTask
	for (drange)
	gather
	gcat
	gcp
	getAttachedFilesFolder
	get
	getCodistributor
	getCurrentCluster
	getCurrentJob
	getCurrentFileStore
	getCurrentTask
	getCurrentValueStore
	getCurrentWorker
	getDebugLog
	getJobClusterData
	getJobFolder
	getJobFolderOnCluster
	getLocalPart
	getLogLocation
	getTaskSchedulerIDs
	globalIndices
	gop
	gplus
	gpuDeviceCount
	gpuDeviceTable
	gpurng
	gputimeit
	help
	Inf
	isaUnderlying
	iscodistributed
	isComplete
	isdistributed
	isequal
	isgpuarray
	isKey
	isreplicated
	jobStartup
	labindex
	keys
	labBarrier
	labBroadcast
	labProbe
	labReceive
	labSend
	labSendReceive
	length
	listAutoAttachedFiles
	load
	logout
	mapreducer
	methods
	mexcuda
	mpiLibConf
	mpiprofile
	mpiSettings
	mxGPUCopyFromMxArray (C)
	mxGPUCopyGPUArray (C)
	mxGPUCopyImag (C)
	mxGPUCopyReal (C)
	mxGPUCreateComplexGPUArray (C)
	mxGPUCreateFromMxArray (C)
	mxGPUCreateGPUArray (C)
	mxGPUCreateMxArrayOnCPU (C)
	mxGPUCreateMxArrayOnGPU (C)
	mxGPUDestroyGPUArray (C)
	mxGPUGetClassID (C)
	mxGPUGetComplexity (C)
	mxGPUGetData (C)
	mxGPUGetDataReadOnly (C)
	mxGPUGetDimensions (C)
	mxGPUGetNumberOfDimensions (C)
	mxGPUGetNumberOfElements (C)
	mxGPUIsSame (C)
	mxGPUIsSparse (C)
	mxGPUIsValidGPUData (C)
	mxGPUSetDimensions (C)
	mxInitGPU (C)
	mxIsGPUArray (C)
	NaN
	numlabs
	ones
	pagefun
	parallel.cluster.generic.awsbatch.deleteBatchJob
	parallel.cluster.generic.awsbatch.deleteJobFilesFromS3
	parallel.cluster.generic.awsbatch.downloadJobFilesFromS3
	parallel.cluster.generic.awsbatch.downloadJobLogFiles
	parallel.cluster.generic.awsbatch.getBatchJobInfo
	parallel.cluster.generic.awsbatch.submitBatchJob
	parallel.cluster.generic.awsbatch.uploadJobFilesToS3
	parallel.cluster.Hadoop
	parallel.clusterProfiles
	parallel.listProfiles
	parallel.defaultClusterProfile
	parallel.defaultProfile
	parallel.exportProfile
	parallel.gpu.enableCUDAForwardCompatibility
	parallel.gpu.RandStream.create
	parallel.gpu.RandStream.getGlobalStream
	parallel.gpu.RandStream.list
	parallel.gpu.RandStream.setGlobalStream
	parallel.importProfile
	parcluster
	parfeval
	parfevalOnAll
	parfor
	parforOptions
	parpool
	pause
	pctconfig
	pctRunDeployedCleanup
	pctRunOnAll
	pload
	pmode
	poll
	poolStartup
	promote
	psave
	put
	rand
	randi
	randn
	recreate
	redistribute
	remove
	reset
	resume
	saveAsProfile
	saveProfile
	setConstantMemory
	setJobClusterData
	shutdown
	sparse
	spmd
	spmdBarrier
	spmdBroadcast
	spmdCat
	spmdIndex
	spmdPlus
	spmdProbe
	spmdReceive
	spmdReduce
	spmdSend
	spmdSendReceive
	spmdSize
	start
	submit
	subsasgn
	subsref
	taskFinish
	taskStartup
	send
	ticBytes
	tocBytes
	true
	updateAttachedFiles
	wait
	wait (cluster)
	wait
	write
	zeros



