Parallel Computing Toolbox™
User's Guide

R2020a
How to Contact MathWorks

Latest news: www.mathworks.com

Sales and services: www.mathworks.com/sales_and_services

User community: www.mathworks.com/matlabcentral

Technical support: www.mathworks.com/support/contact_us

Phone: 508-647-7000

The MathWorks, Inc.


1 Apple Hill Drive
Natick, MA 01760-2098
Parallel Computing Toolbox™ User's Guide
© COPYRIGHT 2004–2020 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied
only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form
without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through
the federal government of the United States. By accepting delivery of the Program or Documentation, the government
hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer
software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014.
Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain
to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and
Documentation by the federal government (or other entity acquiring for or through the federal government) and shall
supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is
inconsistent in any respect with federal procurement law, the government agrees to return the Program and
Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See
www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be
trademarks or registered trademarks of their respective holders.
Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for
more information.
Revision History
November 2004 Online only New for Version 1.0 (Release 14SP1+)
March 2005 Online only Revised for Version 1.0.1 (Release 14SP2)
September 2005 Online only Revised for Version 1.0.2 (Release 14SP3)
November 2005 Online only Revised for Version 2.0 (Release 14SP3+)
March 2006 Online only Revised for Version 2.0.1 (Release 2006a)
September 2006 Online only Revised for Version 3.0 (Release 2006b)
March 2007 Online only Revised for Version 3.1 (Release 2007a)
September 2007 Online only Revised for Version 3.2 (Release 2007b)
March 2008 Online only Revised for Version 3.3 (Release 2008a)
October 2008 Online only Revised for Version 4.0 (Release 2008b)
March 2009 Online only Revised for Version 4.1 (Release 2009a)
September 2009 Online only Revised for Version 4.2 (Release 2009b)
March 2010 Online only Revised for Version 4.3 (Release 2010a)
September 2010 Online only Revised for Version 5.0 (Release 2010b)
April 2011 Online only Revised for Version 5.1 (Release 2011a)
September 2011 Online only Revised for Version 5.2 (Release 2011b)
March 2012 Online only Revised for Version 6.0 (Release 2012a)
September 2012 Online only Revised for Version 6.1 (Release 2012b)
March 2013 Online only Revised for Version 6.2 (Release 2013a)
September 2013 Online only Revised for Version 6.3 (Release 2013b)
March 2014 Online only Revised for Version 6.4 (Release 2014a)
October 2014 Online only Revised for Version 6.5 (Release 2014b)
March 2015 Online only Revised for Version 6.6 (Release 2015a)
September 2015 Online only Revised for Version 6.7 (Release 2015b)
March 2016 Online only Revised for Version 6.8 (Release 2016a)
September 2016 Online only Revised for Version 6.9 (Release 2016b)
March 2017 Online only Revised for Version 6.10 (Release 2017a)
September 2017 Online only Revised for Version 6.11 (Release 2017b)
March 2018 Online only Revised for Version 6.12 (Release 2018a)
September 2018 Online only Revised for Version 6.13 (Release 2018b)
March 2019 Online only Revised for Version 7.0 (Release 2019a)
September 2019 Online only Revised for Version 7.1 (Release 2019b)
March 2020 Online only Revised for Version 7.2 (Release 2020a)
Contents

Getting Started
1
Parallel Computing Toolbox Product Description . . . . . . . . . . . . . . . . . . . . 1-2

Parallel Computing Support in MathWorks Products . . . . . . . . . . . . . . . . 1-3

Create and Use Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4


Creating Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Creating Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5

Determine Product Installation and Versions . . . . . . . . . . . . . . . . . . . . . . . 1-6

Interactively Run a Loop in Parallel Using parfor . . . . . . . . . . . . . . . . . . . 1-7

Run Batch Parallel Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9


Run a Batch Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Run a Batch Job with a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Run Script as Batch Job from the Current Folder Browser . . . . . . . . . . . . 1-11

Distribute Arrays and Run SPMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12


Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Single Program Multiple Data (spmd) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12

What Is Parallel Computing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14

Choose a Parallel Computing Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16

Run MATLAB Functions with Automatic Parallel Support . . . . . . . . . . . . 1-20


Find Automatic Parallel Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20

Run Non-Blocking Code in Parallel Using parfeval . . . . . . . . . . . . . . . . . 1-22

Evaluate Functions in the Background Using parfeval . . . . . . . . . . . . . . 1-23

Use Parallel Computing Toolbox with Cloud Center clusters in MATLAB Online
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24

Parallel for-Loops (parfor)
2
Decide When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
parfor-Loops in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Deciding When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Example of parfor With Low Parallel Overhead . . . . . . . . . . . . . . . . . . . . . 2-3
Example of parfor With High Parallel Overhead . . . . . . . . . . . . . . . . . . . . 2-4

Convert for-Loops Into parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7

Ensure That parfor-Loop Iterations are Independent . . . . . . . . . . . . . . . 2-10

Nested parfor and for-Loops and Other parfor Requirements . . . . . . . . 2-13


Nested parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Convert Nested for-Loops to parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . 2-14
Nested for-Loops: Requirements and Limitations . . . . . . . . . . . . . . . . . . 2-16
parfor-Loop Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17

Scale Up parfor-Loops to Cluster and Cloud . . . . . . . . . . . . . . . . . . . . . . . 2-21

Use parfor-Loops for Reduction Assignments . . . . . . . . . . . . . . . . . . . . . . 2-26

Use Objects and Handles in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . 2-27


Using Objects in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Handle Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Sliced Variables Referencing Function Handles . . . . . . . . . . . . . . . . . . . 2-27

Troubleshoot Variables in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29


Ensure That parfor-Loop Variables Are Consecutive Increasing Integers . . . 2-29
Avoid Overflows in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
Solve Variable Classification Issues in parfor-Loops . . . . . . . . . . . . . . . . 2-30
Structure Arrays in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Converting the Body of a parfor-Loop into a Function . . . . . . . . . . . . . . . 2-32
Unambiguous Variable Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
Transparent parfor-loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
Global and Persistent Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33

Loop Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35

Sliced Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37


Characteristics of a Sliced Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
Sliced Input and Output Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38
Nested for-Loops with Sliced Variables . . . . . . . . . . . . . . . . . . . . . . . . . . 2-39

Broadcast Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41

Reduction Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-42


Notes About Required and Recommended Guidelines . . . . . . . . . . . . . . . 2-43
Basic Rules for Reduction Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43
Requirements for Reduction Assignments . . . . . . . . . . . . . . . . . . . . . . . . 2-44
Using a Custom Reduction Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45
Chaining Reduction Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-46

Temporary Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Uninitialized Temporaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Temporary Variables Intended as Reduction Variables . . . . . . . . . . . . . . . 2-49
ans Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49

Ensure Transparency in parfor-Loops or spmd Statements . . . . . . . . . . . 2-50


Parallel Simulink Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51

Improve parfor Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52


Where to Create Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52
Profiling parfor-loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
Slicing Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-54
Optimizing on Local vs. Cluster Workers . . . . . . . . . . . . . . . . . . . . . . . . . 2-55

Run Code on Parallel Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-56


What Is a Parallel Pool? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-56
Automatically Start and Stop a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . 2-56
Alternative Ways to Start and Stop Pools . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
Pool Size and Cluster Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-59

Choose Between Thread-Based and Process-Based Environments . . . . . 2-61


Select Parallel Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-61
Compare Process Workers and Thread Workers . . . . . . . . . . . . . . . . . . . 2-64
Solve Optimization Problem in Parallel on Thread-Based Pool . . . . . . . . . 2-65
What Are Thread-Based Environments? . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
What Are Process-Based Environments? . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
Check Support for Thread-Based Environment . . . . . . . . . . . . . . . . . . . . 2-68

Repeat Random Numbers in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . 2-70

Recommended System Limits for Macintosh and Linux . . . . . . . . . . . . . 2-71

Single Program Multiple Data (spmd)


3
Run Single Programs on Multiple Data Sets . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
When to Use spmd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Define an spmd Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Display Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
MATLAB Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
spmd Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4

Access Worker Variables with Composites . . . . . . . . . . . . . . . . . . . . . . . . . 3-7


Introduction to Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Create Composites in spmd Statements . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Variable Persistence and Sequences of spmd . . . . . . . . . . . . . . . . . . . . . . 3-8
Create Composites Outside spmd Statements . . . . . . . . . . . . . . . . . . . . . . 3-9

Distributing Arrays to Parallel Workers . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10


Using Distributed Arrays to Partition Data Across Workers . . . . . . . . . . . 3-10

Load Distributed Arrays in Parallel Using datastore . . . . . . . . . . . . . . . . 3-10
Alternative Methods for Creating Distributed and Codistributed Arrays . 3-12

Math with Codistributed Arrays


4
Nondistributed Versus Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Nondistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3

Working with Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4


How MATLAB Software Distributes Arrays . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Creating a Codistributed Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Local Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Obtaining Information About the Array . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Changing the Dimension of Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Restoring the Full Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Indexing into a Codistributed Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
2-Dimensional Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12

Looping Over a Distributed Range (for-drange) . . . . . . . . . . . . . . . . . . . . 4-16


Parallelizing a for-Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Codistributed Arrays in a for-drange Loop . . . . . . . . . . . . . . . . . . . . . . . 4-17

Run MATLAB Functions with Distributed Arrays . . . . . . . . . . . . . . . . . . . 4-19


Check Distributed Array Support in Functions . . . . . . . . . . . . . . . . . . . . 4-19
Support for Sparse Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19

Programming Overview
5
How Parallel Computing Products Run a Job . . . . . . . . . . . . . . . . . . . . . . . 5-2
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Toolbox and Server Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Life Cycle of a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6

Program a Job on a Local Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8

Specify Your Parallel Preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9

Discover Clusters and Use Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . 5-11


Create and Manage Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Discover Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Create Cloud Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Add and Modify Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Import and Export Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Edit Number of Workers and Cluster Settings . . . . . . . . . . . . . . . . . . . . . 5-19
Use Your Cluster from MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19

Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks . . . . . . . . . . . 5-21

Job Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24


Typical Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Manage Jobs Using the Job Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Identify Task Errors Using the Job Monitor . . . . . . . . . . . . . . . . . . . . . . . 5-25

Programming Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26


Program Development Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Current Working Directory of a MATLAB Worker . . . . . . . . . . . . . . . . . . 5-27
Writing to Files from Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Saving or Sending Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Using clear functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Running Tasks That Call Simulink Software . . . . . . . . . . . . . . . . . . . . . . 5-28
Using the pause Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Transmitting Large Amounts of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Interrupting a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Speeding Up a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28

Control Random Number Streams on Workers . . . . . . . . . . . . . . . . . . . . . 5-29


Client and Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Different Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Normally Distributed Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . 5-31

Profiling Parallel Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32


Profile Parallel Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Analyze Parallel Profile Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34

Troubleshooting and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42


Attached Files Size Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
File Access and Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
No Results or Failed Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Connection Problems Between the Client and MATLAB Job Scheduler . . 5-44
SFTP Error: Received Message Too Long . . . . . . . . . . . . . . . . . . . . . . . . 5-44

Big Data Workflow Using Tall Arrays and Datastores . . . . . . . . . . . . . . . . 5-46


Running Tall Arrays in Parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
Use mapreducer to Control Where Your Code Runs . . . . . . . . . . . . . . . . . 5-47

Use Tall Arrays on a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48

Use Tall Arrays on a Spark Enabled Hadoop Cluster . . . . . . . . . . . . . . . . 5-51


Creating and Using Tall Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-51

Run mapreduce on a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54


Start Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54
Compare Parallel mapreduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54

Run mapreduce on a Hadoop Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57


Cluster Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Output Format and Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Calculate Mean Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57

Partition a Datastore in Parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-60

Program Independent Jobs
6
Program Independent Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2

Program Independent Jobs on a Local Cluster . . . . . . . . . . . . . . . . . . . . . . 6-3


Create and Run Jobs with a Local Cluster . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Local Cluster Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5

Program Independent Jobs for a Supported Scheduler . . . . . . . . . . . . . . . 6-7


Create and Run Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Manage Objects in the Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11

Share Code with the Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13


Workers Access Files Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Pass Data to and from Worker Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Pass MATLAB Code for Startup and Finish . . . . . . . . . . . . . . . . . . . . . . . 6-15

Plugin Scripts for Generic Schedulers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17


Sample Plugin Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Writing Custom Plugin Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Adding User Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24
Managing Jobs with Generic Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
Submitting from a Remote Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
Submitting without a Shared File System . . . . . . . . . . . . . . . . . . . . . . . . 6-27

Program Communicating Jobs


7
Program Communicating Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2

Program Communicating Jobs for a Supported Scheduler . . . . . . . . . . . . 7-3


Schedulers and Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Code the Task Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Code in the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4

Further Notes on Communicating Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6


Number of Tasks in a Communicating Job . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Avoid Deadlock and Other Dependency Errors . . . . . . . . . . . . . . . . . . . . . 7-6

GPU Computing
8
GPU Capabilities and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Performance Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2

Establish Arrays on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays from Existing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Examine gpuArray Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Save and Load gpuArrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5

Random Number Streams on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6


Client CPU and GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Worker CPU and GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Normally Distributed Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7

Run MATLAB Functions on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9


MATLAB Functions with gpuArray Arguments . . . . . . . . . . . . . . . . . . . . . 8-9
Check or Select a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Use MATLAB Functions with a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Sharpen an Image Using the GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-12
Compute the Mandelbrot Set using GPU-Enabled Functions . . . . . . . . . . 8-13
Work with Sparse Arrays on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Work with Complex Numbers on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Special Conditions for gpuArray Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-18

Identify and Select a GPU Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-19

Run CUDA or PTX Code on GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Create a CUDAKernel Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Run a CUDAKernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
Complete Kernel Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26

Run MEX-Functions Containing CUDA Code . . . . . . . . . . . . . . . . . . . . . . . 8-28


Write a MEX-File Containing CUDA Code . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Run the Resulting MEX-Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Comparison to a CUDA Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Access Complex Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Compile a GPU MEX-File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30

Measure and Improve GPU Performance . . . . . . . . . . . . . . . . . . . . . . . . . 8-31


Getting Started with GPU Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . 8-31
Improve Performance Using Single Precision Calculations . . . . . . . . . . . 8-31
Basic Workflow for Improving Performance . . . . . . . . . . . . . . . . . . . . . . . 8-31
Advanced Tools for Improving Performance . . . . . . . . . . . . . . . . . . . . . . 8-32
Best Practices for Improving Performance . . . . . . . . . . . . . . . . . . . . . . . 8-33
Measure Performance on the GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-34
Vectorize for Improved GPU Performance . . . . . . . . . . . . . . . . . . . . . . . . 8-35
Troubleshooting GPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-36

GPU Support by Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38


Supported GPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38
CUDA Toolkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39
Increase the CUDA Cache Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39

Objects
9

Functions
10

1

Getting Started

• “Parallel Computing Toolbox Product Description” on page 1-2


• “Parallel Computing Support in MathWorks Products” on page 1-3
• “Create and Use Distributed Arrays” on page 1-4
• “Determine Product Installation and Versions” on page 1-6
• “Interactively Run a Loop in Parallel Using parfor” on page 1-7
• “Run Batch Parallel Jobs” on page 1-9
• “Distribute Arrays and Run SPMD” on page 1-12
• “What Is Parallel Computing?” on page 1-14
• “Choose a Parallel Computing Solution” on page 1-16
• “Run MATLAB Functions with Automatic Parallel Support” on page 1-20
• “Run Non-Blocking Code in Parallel Using parfeval” on page 1-22
• “Evaluate Functions in the Background Using parfeval” on page 1-23
• “Use Parallel Computing Toolbox with Cloud Center clusters in MATLAB Online” on page 1-24

Parallel Computing Toolbox Product Description


Perform parallel computations on multicore computers, GPUs, and computer clusters

Parallel Computing Toolbox lets you solve computationally and data-intensive problems using
multicore processors, GPUs, and computer clusters. High-level constructs—parallel for-loops, special
array types, and parallelized numerical algorithms—enable you to parallelize MATLAB® applications
without CUDA or MPI programming. The toolbox lets you use parallel-enabled functions in MATLAB
and other toolboxes. You can use the toolbox with Simulink® to run multiple simulations of a model in
parallel. Programs and models can run in both interactive and batch modes.

The toolbox lets you use the full processing power of multicore desktops by executing applications on
workers (MATLAB computational engines) that run locally. Without changing the code, you can run
the same applications on clusters or clouds (using MATLAB Parallel Server™). You can also use the
toolbox with MATLAB Parallel Server to execute matrix calculations that are too large to fit into the
memory of a single machine.


Parallel Computing Support in MathWorks Products


Parallel Computing Toolbox provides you with tools for a local cluster of workers on your client
machine. MATLAB Parallel Server software allows you to run as many MATLAB workers on a remote
cluster of computers as your licensing allows.

Most MathWorks products enable you to run applications in parallel. For example, Simulink models
can run simultaneously in parallel, as described in “Run Multiple Simulations” (Simulink). MATLAB
Compiler™ and MATLAB Compiler SDK™ software let you build and deploy parallel applications; for
example, see the “Parallel Computing” section of MATLAB Compiler “Standalone Applications”
(MATLAB Compiler).

Several MathWorks products now offer built-in support for the parallel computing products, without
requiring extra coding. For the current list of these products and their parallel functionality, see:
https://www.mathworks.com/products/parallel-computing/parallel-support.html


Create and Use Distributed Arrays

In this section...
“Creating Distributed Arrays” on page 1-4
“Creating Codistributed Arrays” on page 1-5

If your data is currently in the memory of your local machine, you can use the distributed function
to distribute an existing array from the client workspace to the workers of a parallel pool.
Distributed arrays use the combined memory of multiple workers in a parallel pool to store the
elements of an array. For alternative ways of partitioning data, see “Distributing Arrays to Parallel
Workers” on page 3-10. You can use distributed arrays to scale up your big data computation.
Consider distributed arrays when you have access to a cluster, as you can combine the memory of
multiple machines in your cluster.

A distributed array is a single variable, split over multiple workers in your parallel pool. You can
work with this variable as one single entity, without having to worry about its distributed nature.
Explore the functionalities available for distributed arrays in the Parallel Computing Toolbox:
“Run MATLAB Functions with Distributed Arrays” on page 4-19.

When you create a distributed array, you cannot control the details of the distribution. On the
other hand, codistributed arrays allow you to control all aspects of distribution, including
dimensions and partitions. In the following, you learn how to create both distributed and
codistributed arrays.

Creating Distributed Arrays


You can create a distributed array in different ways:

• Use the distributed function to distribute an existing array from the client workspace to the
workers of a parallel pool.
• You can directly construct a distributed array on the workers. You do not need to first create the
array in the client, so that client workspace memory requirements are reduced. The functions
available include eye(___,'distributed'), rand(___,'distributed'), etc. For a full list,
see the distributed object reference page.
• Create a codistributed array inside an spmd statement, see “Single Program Multiple Data
(spmd)” on page 1-12. Then access it as a distributed array outside the spmd statement. This
lets you use distribution schemes other than the default.
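As a quick sketch of the second approach, you can construct an array directly on the workers; the size used here is arbitrary:

```matlab
% Construct distributed arrays directly on the pool workers, without
% first allocating them in the client workspace.
D = rand(1000,'distributed');  % 1000-by-1000 random array, spread over workers
E = ones(1000,'distributed');  % distributed array of ones
F = 2*D + E;                   % arithmetic executes on the workers
```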

In this example, you create an array in the client workspace, then turn it into a distributed array:

parpool('local',4) % Create pool


A = magic(4); % Create magic 4-by-4 matrix
B = distributed(A); % Distribute to the workers
B % View results in client.
whos % B is a distributed array here.
delete(gcp) % Stop pool

You have created B as a distributed array, split over the workers in your parallel pool. This is
shown in the figure.


Creating Codistributed Arrays


Unlike distributed arrays, codistributed arrays allow you to control all aspects of distribution,
including dimensions and partitions. You can create a codistributed array in different ways:

• “Partitioning a Larger Array” on page 4-6 — Start with a large array that is replicated on all
workers, and partition it so that the pieces are distributed across the workers. This is most useful
when you have sufficient memory to store the initial replicated array.
• “Building from Smaller Arrays” on page 4-6 — Start with smaller replicated arrays stored on
each worker, and combine them so that each array becomes a segment of a larger codistributed
array. This method reduces memory requirements as it lets you build a codistributed array from
smaller pieces.
• “Using MATLAB Constructor Functions” on page 4-7 — Use any of the MATLAB constructor
functions like rand or zeros with a codistributor object argument. These functions offer a quick
means of constructing a codistributed array of any size in just one step.

In this example, you create a codistributed array inside an spmd statement, using a nondefault
distribution scheme. First, define 1-D distribution along the third dimension, with 4 parts on worker
1, and 12 parts on worker 2. Then create a 3-by-3-by-16 array of zeros.

parpool('local',2) % Create pool


spmd
codist = codistributor1d(3,[4,12]);
Z = zeros(3,3,16,codist);
Z = Z + labindex;
end
Z % View results in client.
whos % Z is a distributed array here.
delete(gcp) % Stop pool

For more details on codistributed arrays, see “Working with Codistributed Arrays” on page 4-4.

See Also

Related Examples
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Big Data Workflow Using Tall Arrays and Datastores” on page 5-46
• “Single Program Multiple Data (spmd)” on page 1-12


Determine Product Installation and Versions


To determine if Parallel Computing Toolbox software is installed on your system, type this command
at the MATLAB prompt.

ver

When you enter this command, MATLAB displays information about the version of MATLAB you are
running, including a list of all toolboxes installed on your system and their version numbers.

If you want to run your applications on a cluster, see your system administrator to verify that the
version of Parallel Computing Toolbox you are using is the same as the version of MATLAB Parallel
Server installed on your cluster.


Interactively Run a Loop in Parallel Using parfor


In this example, you start with a slow for-loop, and you speed up the calculation using a parfor-
loop instead. parfor splits the execution of for-loop iterations over the workers in a parallel pool.

This example calculates the spectral radius of a matrix and converts a for-loop into a parfor-loop.
Find out how to measure the resulting speedup.
1 In the MATLAB Editor, enter the following for-loop. Add tic and toc to measure the time
elapsed.
tic
n = 200;
A = 500;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
2 Run the script, and note the elapsed time.
Elapsed time is 31.935373 seconds.
3 In the script, replace the for-loop with a parfor-loop.
tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
4 Run the new script, and run it again. Note that the first run is slower than the second run,
because the parallel pool takes some time to start and make the code available to the workers.
Note the elapsed time for the second run.

By default, MATLAB automatically opens a parallel pool of workers on your local machine.
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
...
Elapsed time is 10.760068 seconds.


The parfor run on four workers is about three times faster than the corresponding for-loop
run. The speed-up is smaller than the ideal speed-up of a factor of four on four workers. This is
due to parallel overhead, including the time required to transfer data from the client to the
workers and back. This example shows a good speed-up with relatively small parallel overhead,
and benefits from conversion into a parfor-loop. Not all for-loop iterations can be turned into
faster parfor-loops. To learn more, see “Decide When to Use parfor” on page 2-2.

One key requirement for using parfor-loops is that the individual iterations must be independent.
Independent problems suitable for parfor processing include Monte Carlo simulations and
parameter sweeps. For next steps, see “Convert for-Loops Into parfor-Loops” on page 2-7.
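As an illustration of such an independent problem, here is a hypothetical parameter sweep; the equation and parameter range are invented for this sketch:

```matlab
% Hypothetical parameter sweep: each iteration solves a damped
% oscillator with a different damping value, independently of the rest.
bVals = linspace(0.1,5,100);       % damping coefficients to try
peakVals = zeros(size(bVals));
parfor k = 1:numel(bVals)
    odefun = @(t,y) [y(2); -bVals(k)*y(2) - y(1)];
    [~,y] = ode45(odefun,[0 10],[1 0]);
    peakVals(k) = max(abs(y(:,1)));  % record the peak displacement
end
```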

In this example, you managed to speed up the calculation by converting the for-loop into a parfor-
loop on four workers. You might reduce the elapsed time further by increasing the number of workers
in your parallel pool, see “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21.

You can modify your cluster profiles to control how many workers run your loops, and whether the
workers are local or on a cluster. For more information on profiles, see “Discover Clusters and Use
Cluster Profiles” on page 5-11.

Modify your parallel preferences to control whether a parallel pool is created automatically, and how
long it remains available before timing out. For more information on preferences, see “Specify Your
Parallel Preferences” on page 5-9.

You can run Simulink models in parallel with the parsim command instead of using parfor-loops.
For more information and examples of using Simulink in parallel, see “Run Multiple Simulations”
(Simulink).

See Also
parfor | parpool | tic | toc

More About
• “Decide When to Use parfor” on page 2-2
• “Convert for-Loops Into parfor-Loops” on page 2-7
• “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21


Run Batch Parallel Jobs


Run a Batch Job
To offload work from your MATLAB session to run in the background in another session, you can use
the batch command inside a script.
1 To create the script, type:
edit mywave
2 In the MATLAB Editor, create a for-loop:
for i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Use the batch command in the MATLAB Command Window to run your script on a separate
MATLAB worker:
job = batch('mywave')

5 batch does not block MATLAB, and you can continue working while computations take place. If
you need to block MATLAB until the job finishes, use the wait function on the job object.
wait(job)
6 After the job finishes, you can retrieve and view its results. The load command transfers
variables created on the worker to the client workspace, where you can view the results:
load(job,'A')
plot(A)
7 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job

batch runs your code on a local worker or a cluster worker, but does not require a parallel pool.

You can use batch to run either scripts or functions. For more details, see the batch reference page.
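When you run a function rather than a script, you retrieve results with fetchOutputs instead of load. A minimal sketch:

```matlab
% Run a function as a batch job and collect its output.
job = batch(@magic,1,{4});   % one output argument, input argument 4
wait(job)                    % block until the job finishes
out = fetchOutputs(job);     % out{1} is the 4-by-4 magic square
delete(job)                  % permanently delete the job data
```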

Run a Batch Job with a Parallel Pool


You can combine the abilities to offload a job and run a loop in a parallel pool. This example combines
the two to create a simple batch parfor-loop.
1 To create a script, type:
edit mywave


2 In the MATLAB Editor, create a parfor-loop:

parfor i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Run the script in MATLAB with the batch command. Indicate that the script should use a
parallel pool for the loop:

job = batch('mywave','Pool',3)

This command specifies that three workers (in addition to the one running the batch script) are
to evaluate the loop iterations. Therefore, this example uses a total of four local workers,
including the one worker running the batch script. Altogether, there are five MATLAB sessions
involved, as shown in the following diagram.

5 To view the results:

wait(job)
load(job,'A')
plot(A)

The results look the same as before; however, there are two important differences in execution:

• The work of defining the parfor-loop and accumulating its results are offloaded to another
MATLAB session by batch.
• The loop iterations are distributed from one MATLAB worker to another set of workers
running simultaneously ('Pool' and parfor), so the loop might run faster than having only
one worker execute it.
6 When the job is complete, permanently delete its data and remove its reference from the
workspace:

delete(job)
clear job


Run Script as Batch Job from the Current Folder Browser


From the Current Folder browser, you can run a MATLAB script as a batch job by browsing to the
file’s folder, right-clicking the file, and selecting Run Script as Batch Job. The batch job runs on the
cluster identified by the default cluster profile. The following figure shows the menu option to run the
script file script1.m:

Running a script as a batch from the browser uses only one worker from the cluster. So even if the
script contains a parfor loop or spmd block, it does not open an additional pool of workers on the
cluster. These code blocks execute on the single worker used for the batch job. If your batch script
requires opening an additional pool of workers, you can run it from the command line, as described in
“Run a Batch Job with a Parallel Pool” on page 1-9.

When you run a batch job from the browser, this also opens the Job Monitor. The Job Monitor is a tool
that lets you track your job in the scheduler queue. For more information about the Job Monitor and
its capabilities, see “Job Monitor” on page 5-24.

See Also
batch

Related Examples
• “Run Batch Job and Access Files from Workers”


Distribute Arrays and Run SPMD


Distributed Arrays
The workers in a parallel pool communicate with each other, so you can distribute an array among
the workers. Each worker contains part of the array, and all the workers are aware of which portion
of the array each worker has.

Use the distributed function to distribute an array among the workers:


M = magic(4) % a 4-by-4 magic square in the client workspace
MM = distributed(M)

Now MM is a distributed array, equivalent to M, and you can manipulate or access its elements in the
same way as any other array.
M2 = 2*MM; % M2 is also distributed, calculation performed on workers
x = M2(1,1) % x on the client is set to first element of M2
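When you need the full result back in the client workspace, the gather function reverses the distribution; for example:

```matlab
% Bring a distributed array back to the client as an ordinary array.
M3 = gather(M2);   % collects the pieces from all the workers
class(M3)          % a regular in-client 'double' array
```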

Single Program Multiple Data (spmd)


The single program multiple data (spmd) construct lets you define a block of code that runs in parallel
on all the workers in a parallel pool. The spmd block can run on some or all the workers in the pool.
spmd % By default creates pool and uses all workers
R = rand(4);
end

This code creates an individual 4-by-4 matrix, R, of random numbers on each worker in the pool.

Composites
Following an spmd statement, in the client context, the values from the block are accessible, even
though the data is actually stored on the workers. On the client, these variables are called Composite
objects. Each element of a Composite is a symbol referencing the value (data) on a worker in the pool.
Note that because a variable might not be defined on every worker, a Composite might have
undefined elements.

Continuing with the example from above, on the client, the Composite R has one element for each
worker:
X = R{3}; % Set X to the value of R from worker 3.

The line above retrieves the data from worker 3 to assign the value of X. The following code sends
data to worker 3:
X = X + 2;
R{3} = X; % Send the value of X from the client to worker 3.

If the parallel pool remains open between spmd statements and the same workers are used, the data
on each worker persists from one spmd statement to another.
spmd
R = R + labindex % Use values of R from previous spmd.
end


A typical use for spmd is to run the same code on a number of workers, each of which accesses a
different set of data. For example:

spmd
INP = load(['somedatafile' num2str(labindex) '.mat']);
RES = somefun(INP)
end

Then the values of RES on the workers are accessible from the client as RES{1} from worker 1,
RES{2} from worker 2, etc.

There are two forms of indexing a Composite, comparable to indexing a cell array:

• AA{n} returns the values of AA from worker n.


• AA(n) returns a cell array of the content of AA from worker n.
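The difference between the two forms can be sketched as follows, assuming a parallel pool is open:

```matlab
spmd
    AA = labindex * 10;  % a different scalar on each worker
end
v = AA{2};           % the numeric value from worker 2, that is, 20
c = AA(2);           % a 1-by-1 cell array wrapping that same value
isequal(v,c{1})      % true
```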

Although data persists on the workers from one spmd block to another as long as the parallel pool
remains open, data does not persist from one instance of a parallel pool to another. That is, if the pool
is deleted and a new one created, all data from the first pool is lost.

For more information about using distributed arrays, spmd, and Composites, see “Distributed
Arrays”.


What Is Parallel Computing?


Parallel computing allows you to carry out many calculations simultaneously. Large problems can
often be split into smaller ones, which are then solved at the same time.

The main reasons to consider parallel computing are to

• Save time by distributing tasks and executing these simultaneously


• Solve big data problems by distributing data
• Take advantage of your desktop computer resources and scale up to clusters and cloud computing

With Parallel Computing Toolbox, you can

• Accelerate your code using interactive parallel computing tools, such as parfor and parfeval
• Scale up your computation using interactive Big Data processing tools, such as distributed,
tall, datastore, and mapreduce
• Use gpuArray to speed up your calculation on the GPU of your computer
• Use batch to offload your calculation to computer clusters or cloud computing facilities

Here are some useful Parallel Computing concepts:

• Node: standalone computer, containing one or more CPUs / GPUs. Nodes are networked to form a
cluster or supercomputer
• Thread: smallest set of instructions that can be managed independently by a scheduler. On a GPU,
multiprocessor or multicore system, multiple threads can be executed simultaneously (multi-
threading)
• Batch: off-load execution of a functional script to run in the background
• Scalability: increase in parallel speedup with the addition of more resources

What tools do MATLAB and Parallel Computing Toolbox offer?

• MATLAB workers: MATLAB computational engines that run in the background without a graphical
desktop. You use functions in the Parallel Computing Toolbox to automatically divide tasks and
assign them to these workers to execute the computations in parallel. You can run local workers to
take advantage of all the cores in your multicore desktop computer. You can also scale up to run
your workers on a cluster of machines, using the MATLAB Parallel Server. The MATLAB session
you interact with is known as the MATLAB client. The client instructs the workers with parallel
language functions.
• Parallel pool: a parallel pool of MATLAB workers created using parpool or functions with
automatic parallel support. By default, parallel language functions automatically create a parallel
pool for you when necessary. To learn more, see “Run Code on Parallel Pools” on page 2-56.

For the default local profile, the default number of workers is one per physical CPU core using a
single computational thread. This is because even though each physical core can have several
virtual cores, the virtual cores share some resources, typically including a shared floating point
unit (FPU). Most MATLAB computations use this unit because they are double-precision floating
point. Restricting to one worker per physical core ensures that each worker has exclusive access
to a floating point unit, which generally optimizes performance of computational code. If your
code is not computationally intensive, for example, it is input/output (I/O) intensive, then consider
using up to two workers per physical core. Running too many workers on too few resources may
impact performance and stability of your machine.
• Speed up: Accelerate your code by running on multiple MATLAB workers or GPUs, for example,
using parfor, parfeval, or gpuArray.
• Scale up your data: Partition your big data across multiple MATLAB workers, using tall arrays and
distributed arrays. To learn more, see “Big Data Processing”.
• Asynchronous processing: Use parfeval to execute a computing task in the background without
waiting for it to complete.
• Scale up to clusters and clouds: If your computing task is too big or too slow for your local
computer, you can offload your calculation to a cluster onsite or in the cloud using MATLAB
Parallel Server. For more information, see “Clusters and Clouds”.
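As a minimal sketch of the asynchronous pattern mentioned above:

```matlab
% Evaluate a function in the background with parfeval, then collect
% the result only when it is needed.
f = parfeval(@magic,1,3);  % request one output: magic(3)
% ... the client can keep doing other work here ...
M = fetchOutputs(f);       % blocks only until this result is ready
```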

See Also

Related Examples
• “Choose a Parallel Computing Solution” on page 1-16
• “Identify and Select a GPU Device” on page 8-19
• “Decide When to Use parfor” on page 2-2
• “Run Single Programs on Multiple Data Sets” on page 3-2
• “Evaluate Functions in the Background Using parfeval” on page 1-23
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Run Batch Parallel Jobs” on page 1-9


Choose a Parallel Computing Solution


Process your data faster or scale up your big data computation using the capabilities of MATLAB,
Parallel Computing Toolbox and MATLAB Parallel Server.

| Problem | Solutions | Required Products | More Information |
| --- | --- | --- | --- |
| Do you want to process your data faster? | Profile your code. | MATLAB | “Profile Your Code to Improve Performance” (MATLAB) |
| | Vectorize your code. | MATLAB | “Vectorization” (MATLAB) |
| | Use automatic parallel computing support in MathWorks products. | MATLAB, Parallel Computing Toolbox | “Run MATLAB Functions with Automatic Parallel Support” on page 1-20 |
| | If you have a GPU, try gpuArray. | MATLAB, Parallel Computing Toolbox | “Run MATLAB Functions on a GPU” on page 8-9 |
| | Use parfor. | MATLAB, Parallel Computing Toolbox | “Interactively Run a Loop in Parallel Using parfor” on page 1-7 |
| Are you looking for other ways to speed up your processing? | Try parfeval. | MATLAB, Parallel Computing Toolbox | “Evaluate Functions in the Background Using parfeval” on page 1-23 |
| | Try spmd. | MATLAB, Parallel Computing Toolbox | “Run Single Programs on Multiple Data Sets” on page 3-2 |
| Do you want to scale up your big data calculation? | To work with out-of-memory data with any number of rows, use tall arrays. This workflow is well suited to data analytics and machine learning. | MATLAB | “Big Data Workflow Using Tall Arrays and Datastores” on page 5-46 |
| | Use tall arrays in parallel on your local machine. | MATLAB, Parallel Computing Toolbox | “Use Tall Arrays on a Parallel Pool” on page 5-48 |
| | Use tall arrays in parallel on your cluster. | MATLAB, Parallel Computing Toolbox, MATLAB Parallel Server | “Use Tall Arrays on a Spark Enabled Hadoop Cluster” on page 5-51 |
| | If your data is large in multiple dimensions, use distributed instead. This workflow is well suited to linear algebra problems. | MATLAB, Parallel Computing Toolbox, MATLAB Parallel Server | “Run MATLAB Functions with Distributed Arrays” on page 4-19 |
| Do you want to offload to a cluster? | Use batch to run your code on clusters and clouds. | MATLAB Parallel Server | “Run Batch Parallel Jobs” on page 1-9 |

1-17
Another Random Scribd Document
with Unrelated Content
inspiration; a quintet by Ernst von Dohnányi. Sgambati has written a
quintet without distinction. Mr. Dunhill tells us in his book[83] on
chamber music that there is an excellent quintet by a young British
composer, James Friskin. Moreover the sextet for piano and strings
by Joseph Holbrooke, in which a double bass is added to the
quartet, deserves mention. And among American composers Arthur
Foote and George Chadwick should be mentioned, the one for his
quintet in A minor, opus 38, the other for his quintet in E-flat major,
without opus number.

Only a few piano quartets have been written since those of Brahms
and Dvořák which are significant of any development or even of a
freshness of life. Those of Fauré have already been mentioned as
being perfect in style, but on the whole they seem less original and
less interesting than the quintet by the same composer. Saint-Saëns’
quartet, opus 41, is remarkable for the brilliant treatment of the
pianoforte, and the fine sense of instrumental style which it reveals,
but is on the whole uninteresting and is certainly insignificant
compared with the quartets of Fauré or those of d’Indy and
Chausson. D’Indy’s quartet, opus 7, in A minor is no longer a new
work, nor does it show in any striking way those qualities in French
music which have more recently come to splendid blooming. But it is
carefully wrought and the three movements are moderately
interesting. The second is perhaps the best music, the third is
certainly the most spirited. There is more of the manner though
perhaps less of the spirit of César Franck in Chausson’s quartet in A
major, opus 30.

In the North we come across an early work by Richard Strauss, opus


13, in the form of a pianoforte quartet, which is exceedingly long, but
interesting to the student who wishes to trace the development of
Strauss’ art of self-expression. The pianoforte is not given undue
prominence and the scoring is worthier of more interesting material.
Still farther north one meets with Christian Sinding’s quartet in E
minor, which is chiefly a tour de force for the pianist.
Excepting sonatas for pianoforte and various other instruments, the
great amount of chamber music into which the piano enters consists
of trios, pianoforte quartets and pianoforte quintets. Mention must
not be omitted, however, of Schubert’s quintet for piano and strings
in which the cello is replaced by double bass. The employment of the
air of one of his songs (Die Forelle) as the subject for the variations
in the slow movement has given the work the name Forellen Quintet.
The treatment of the piano in the variations is exceedingly effective.

III
As to sonatas, those for violin and piano are treated elsewhere.
There are too many to be discussed in this chapter. There are fewer
for the cello and the best of these may here be mentioned. Skill in
playing the violoncello was slower to develop than that in playing the
violin. This was probably because the viola da gamba with its six
strings was easier to play and was more in favor as a solo
instrument. The baryton was a kind of viola da gamba with
sympathetic strings stretched under the fingerboard, and even as
late as the maturity of Haydn this instrument was in general favor.
But the tone of the viola da gamba was lighter than that of the
violoncello, and so by the beginning of the eighteenth century the
cello was preferred to the gamba for the bass parts of works like
Corelli’s in concerted style. Little by little it rose into prominence from
this humble position. Meanwhile the immortal suites for the
violoncello alone by Bach had been written. Bach was probably
advised in the handling of the instrument by Abel, who was a famous
gamba player; so that it seems likely that these suites were
conceived for the gamba as much as for the cello.[84] The last of
them, however, was written especially for the viola pomposa, an
instrument which Bach invented himself. This was a small cello with
an extra string tuned to E, a fifth above the A of the cello.

Among composers who wrote expressly for the cello were Giorgio
Antoniotti, who lived in Milan about 1740, and Lanzetti, who was
'cellist to the king of Sardinia between 1730 and 1750. Later the
Italians A. Canavasso and Carlo Ferrari (b. 1730) became famous as
players, and Boccherini also was a brilliant cellist.

However, the cello sprang into its present importance as a solo


instrument largely through the Frenchman Jean Louis Duport (1749-
1819), whose understanding of the instrument led him to a discovery
of those principles of fingering and bowing which have made modern
virtuosity possible. His Essai sur le doigter du violoncelle et la
conduite de l’archet was truly an epoch-making work. That a new
edition was issued as recently as 1902 proves the lasting worth and
stability of his theories.

Frederick William II, King of Prussia, to whom Mozart dedicated


three of his string quartets, was a pupil of Duport’s. Mozart’s
quartets, written with an eye to pleasing the monarch, give special
prominence to the cello. Hence through Duport we approach the
great masters and their works for the cello.

Beethoven wrote five sonatas for cello and piano. The first two, opus
5, were written in 1796, while Beethoven was staying in Berlin,
evidently with the intention of dedicating them to Frederick William II,
and for his own appearance in public with Duport. They are
noticeably finer, or more expressive works, than the early sonatas for
violin, opus 12; perhaps because the cello does not suggest a style
which, empty of meaning, is yet beautiful and effective by reason of
sheer brilliance. The violin sonatas, all of them except the last, are
largely virtuoso music. The cello sonatas are more serious and on
the whole more sober. This may be laid to thoroughly practical
reasons. The cello has not the variety of technical possibilities that
the violin has, nor even in such rapid passages as can be played
upon it can it give a brilliant or carrying tone. By reason of its low
register it can be all too easily overpowered by the piano. Only the
high notes on the A string can make themselves heard above a solid
or resonant accompaniment. Hence if the composer desires to write
a brilliant, showy sonata for pianoforte and cello, he can do so only
by sacrificing all but the topmost registers of the cello. Even at that
the piano is more than likely to put the cello wholly in the shade.

To write effectively for the combination, therefore, and in such a way
as to bring out the variety of resources of the cello, limited as they
may be, one must not write brilliantly, but clearly, in a transparent
and careful style. Of such a style these early sonatas of Beethoven
offer an excellent example, though the music itself sounds today old-
fashioned and formal.

The best of the first sonata, which consists of a long slow
introduction, an allegro, and an allegro vivace, all in F major, is the
last movement. This is in mood a little scherzo, in form a rondo.
Particularly the chief subject is delightfully scored for the two
instruments at the very opening. The second sonata, in G minor,
begins like the first with a long slow introduction, in which the piano
has some elaborate figuration. There follows an allegro molto, rather
a presto, in 3/4 time, the opening theme of which has almost the
spontaneous melodiousness of Schubert. The pianoforte has a great
deal of work in triplets, which are high on the keyboard when the
cello is playing in its lower registers, and only low when the cello is
high enough to escape being overpowered. This constant movement
in triplets will remind one of the first pianoforte sonata. The final
rondo is on the whole less effective than the rondo of the first sonata.
Toward the end, however, there is considerable animation in which
one finds cello and piano taking equal share. The piano has for
many measures groups of rapid accompaniment figures against
which the cello has saucy little phrases in staccato notes. Then the
cello takes up the rolling figures with great effect and the piano has a
capricious and brilliant melody in high registers.

The next sonata, opus 69, in A major, was not written until twelve
years later. A different Beethoven speaks in it. The first theme,
announced at once by the cello alone, gives the key to the spirit of
the work. It is gentle (dolce) in character, but full of a quiet and
moving strength. After giving the first phrase of it alone the cello
holds a long low E, over which the piano lightly completes it. There is
a cadenza for piano, and then, after the piano has given the whole
theme once again, there is a short cadenza for cello, leading to a
short transition at the end of which one finds the singing second
theme. This is first given out by the piano over smooth scales by the
cello, and then the cello takes it up and the piano plays the scales.
Nothing could be more exquisite than the combination of these two
instruments in this altogether lovely sonata, which without effort
permits each in turn or together to reveal its most musical qualities.
Sometimes the cello is low and impressive, strong and independent,
while the piano is lively and sparkling, as in the closing parts of the
first section of the first movement. Again the cello has vigorous
rolling figures that bring out the fullest sonority the instrument is
capable of, while the piano adds the theme against such a vibrant
background, with no fear of drowning the cello, as in the first portions
of the development section.

The scherzo is the second movement, and here again each
instrument is allowed a full expression of its musical powers. The
style is light, the rhythm syncopated. There is fascinating play at
imitations. And in the trio the cello plays in rich double-stops. There
is but a short adagio before the final allegro, only a brief but telling
expression of seriousness, and then the allegro brings to full flower
the quiet, concealed, so to speak, and tranquil happiness of the first
movement.

Finally there are two sonatas, opus 102, which are in every way
representative of the Beethoven of the last pianoforte sonatas and
even the last quartets. The first of these—in C major—Beethoven
himself entitled a ‘free sonata,’ and the form is indeed free, recalling
the form of the A major pianoforte sonata, opus 101, upon which
Beethoven was working at the same time. In spirit, too, it is very like
the A major sonata, but lacks the more obvious melodic charm. The
sonata begins with an andante, in that singing yet mystical style
which characterizes so much of Beethoven’s last work, and the
andante does not end but seems to lose itself, to become absorbed
in a mist of trills, out of which there springs a vigorous allegro vivace,
in the dotted march rhythm which one finds in the later pianoforte
sonatas. After this, a short rhapsodical adagio brings us back to a bit
of the opening andante, which once more trills itself away, seems to
be snuffed out, as it were, by a sudden little phrase which, all
unexpected, announces the beginning of the final rondo.

The second of the two, in D major, is more regular in structure. There
is an allegro con brio in clear form, an adagio, and a final fugue,
following the adagio without pause. In both these sonatas every
trace of the virtuoso has disappeared. Both are fantasies, or poems
of hidden meaning. Because of this mysteriousness, and also
because the lack of all virtuoso elements seems to leave the
combination a little dry, the sonatas are not quite so satisfactory as
the opus 69.

Besides the sonatas Beethoven wrote three sets of variations for
cello and piano, only one of which—on the air Ein Mädchen oder
Weibchen from Mozart’s ‘Magic Flute’—has an opus number. These
are early works and are without special interest or value.

It is remarkable how little chamber music has been written for
pianoforte and cello by subsequent composers. By Schumann there
is only a set of five short pieces, in Volkston, opus 102. Some of
these are charming, but all are, of course, slight. Schumann uses the
cello in very high registers, notably in the first, third, and fourth. In
the second part of the third he even writes sixths for the cello in such
high registers. The low registers are rather neglected, so that the set
is monotonous in color.

Mendelssohn wrote some Variations concertantes, opus 17, for
piano and cello, and two sonatas, opus 45 in B-flat, and opus 58 in
D. The piano predominates in the variations. The second and fourth
are hardly more than piano solos; but in others the cello is effectively
handled. The third; the fifth, with its pizzicato (which, by the way,
Mendelssohn stood in a fair way to overwhelm entirely with a noisy
piano); and the eighth, with its long held note, later its wide rolling
figures and powerful sixths, account in a measure for the wide
popularity which this work once enjoyed among cellists. But the life
has gone out of it. Of the sonatas little can be said but that they are
generally well scored, and that they display the qualities of the cello
in its various registers. The piano is less well treated, for
Mendelssohn had, after all, little instinct for a variety of pianoforte
effects. The theme in the last movement of the first sonata has
something of a vigorous swing. The chief theme of the first
movement of the second sonata, too, though it will irritate those to
whom Mendelssohn’s mannerisms have become distressing, has a
breadth of line, and rises up quite manfully to its high point. But the
second theme rather proves that there can be too much of a good
thing. The allegretto is not dangerously fascinating, but it has a sort
of charm. Mendelssohn’s treatment of the cello is generally suited to
the salon. He brings out many of its qualities, but in a way which
seems to accentuate the shortcomings of the instrument. In his
hands the cello is a sentimental singer with a small voice.

With Brahms the cello is more an instrument of mystery and gloom.
His fondness for low notes here causes him to write constantly for
the two lower strings, and his sonatas may suffer in the opinion of
some by the lack of a more vehement expression which is in some
measure possible to the upper strings. The first sonata, opus 38, is
in E minor and is more acceptable to the unfamiliar ear than the later
one in F major, opus 99. But the tone of the great part of the E minor
sonata is gloomy, though the second theme of the first movement
has warmth and the allegretto quasi menuetto a certain light
movement. The F major sonata was probably written with the playing
of Robert Hausmann (b. 1852) in mind. Mr. Fuller-Maitland finds in it
a ‘mood of wild energy such as is not frequent in Brahms’ later
works.’ For all the gloominess of the first and the sternness of the
second of these sonatas there is a splendid dignity in both which
must ever give them a firm place in the literature for the violoncello. It
may be that they lose in grace because Brahms has so carefully
shunned any brilliant display; but on the other hand what they lose in
grace is more than made up by what they gain in virility. The
sentimental qualities in the cello have been so much emphasized
that without these sonatas of Brahms, and those of Beethoven, one
might well believe that it had none other than a sugary voice.
[Illustration: Great Violoncellists: Jean Gerardi, David Popper, Pablo Casals.]

Among more modern sonatas only two stand out with any
prominence. One of these is by Grieg. It is in A minor, full of passion
and swing. No doubt it owes its prominence to the charm of the
Norwegian material out of which Grieg has made it. There are
incisive rhythms that make one aware of the strength of the cello.
The piano is a little too prominent in certain parts. Grieg has favored
its brilliance. But nevertheless the sonata is a manly and refreshing
work.

A sonata for cello and piano in F major, opus 6, by Richard Strauss
has been gratefully adopted by cellists. Musically it is neither
profound nor interesting, though there is no lack of technical skill, as
in the fugal parts of the first movement, and though there are some
passages of great beauty. The second theme of the first movement
is what one might call luscious; there is a glorious theme in the last
movement contrasting with the light motives which generally
predominate; and the climax of the slow movement is passionate.
The pianoforte is not well handled, and there is a sameness in
rhythms; but the balance between the two instruments is remarkably
well kept. In the development of second theme material in the first
movement there are passages in which the cello is made boldly and
passionately to sing, and the use of its very low notes in the climax
of the slow movement, as well as the light figures in the last, leave
no doubt as to the variety which is, in spite of all, possible to it.

There remains only to mention the sonata by Max Reger, opus 78,
two sonatas by Emanuel Moór, one by Guy Ropartz in G minor, two
by Camille Saint-Saëns, opus 32 and opus 123, as among those
which make a partial success of the extremely difficult combination.

If excellent music for cello and piano is so rare, music for the viola
and piano is almost entirely wanting. The two instruments do not go
well together. Practically the only example of the combination in the
works of the great masters is furnished by Schumann’s
Märchenbilder, which are but indifferent music. York Bowen, an
English composer, has considered it worthy of the sonata, and has
written two for it, one in C minor and one in F major. Mr. Benjamin
Dale has also written some agreeable pieces, including a suite and a
fantasy.

IV

There are relatively few works also in which the piano has been
combined with wind instruments. The wind instruments which have
been most employed in chamber music are the flute, oboe, clarinet,
and bassoon. Occasionally there is a short bit for horn, or for English
horn, and rarely something for trumpet or saxophone. No special
combination of these instruments either by themselves or with the
piano has obtained signal favor, and we may therefore confine
ourselves to mentioning with brief notice the various works of the
great masters in turn. We will include likewise here their chamber
works for wind instruments without pianoforte.

Of Haydn’s works we will only mention the two trios for flute and
violin and the octet for two oboes, two clarinets, two horns and two
bassoons. Most of Mozart’s works for wind instruments bear the
mark of some occasion. There are a great many Serenades and
Divertimenti, which can hardly be called representative of his best
and can hardly be distinguished from each other. Among the
interesting works are the concerto for flute and harp (K 299), the trio
for clarinet, viola and piano (K 498), the quintet for pianoforte, oboe,
clarinet, horn and bassoon (K 452), and the quintet for clarinet and
strings (K 581). The trio was composed in Vienna in August, 1786,
and is conspicuous for a fine handling of the viola. The clarinet is not
used at all in the lower registers, lest it interfere with the viola.
Mozart considered the quintet for piano and wind instruments at the
time he wrote it the best thing he had written. It was composed in
March, 1784, for a public concert and was received with great
applause. Jahn wrote of it that from beginning to end it was a true
triumph in the art of recognizing and adapting the peculiar
euphonious quality of each instrument. Doubtless it served as a
model for Beethoven’s composition in the same form.

Mozart was the first among composers to recognize the beauty of
the clarinet. Among his warmest friends was Anton Stadler, an
excellent clarinet player, and the great clarinet quintet was
composed for Stadler and is known as the Stadler quintet. The
clarinet, owing to its peculiar penetrating quality, is of necessity
treated as a solo instrument; but the background
supplied by the strings is no mere accompaniment. The whole work
shows the finest care and may well rank with the string quintets
among Mozart’s greatest and most pleasing works.

Beethoven’s works for wind instruments in chamber music are not
numerous. In the expression of his forceful and passionate ideas he
demanded a medium of far greater technical ability than he could
ask of the wind players of that day. There is an early trio for piano,
flute and bassoon, written before he left Bonn; an octet in E-flat for
two oboes, two clarinets, two bassoons, and two horns, written in
1792, but published as opus 103; and a few other early works
without value; a sextet for two violins, viola, cello, and two horns,
written in 1795 and not published till 1819, then as opus 81; another
early sextet, opus 71, for two clarinets, two bassoons, and two
horns; and finally the most considerable of his compositions for an
ensemble of wind instruments, the quintet in E-flat major, opus 16,
for piano, oboe, clarinet, horn, and bassoon, the septet in E-flat,
opus 20, for clarinet, horn, bassoon, violin, viola, cello, and double-
bass. The sonata in F, opus 17, for horn and piano was written in a
night, according to a well-known story, for the horn player Punto—
originally Stich—and can hardly be considered as more than a bit of
pot-boiling.

Most of these early works were written for an occasion. Prince
Maximilian Franz, in whose service Beethoven was for a time
employed before he left Bonn and came to Vienna, was especially
fond of wind instruments. His ‘Table-music’ was generally of this kind
and he had in his employ two oboists, two clarinetists, two horn
players, and two players of the bassoon. Beethoven’s early works
therefore may be considered to have been written with these players
in mind. He was sure of having them performed. In later years he
looked with no little scorn upon many of them. Even of the septet,
opus 20, he is reported to have said that there was some natural
feeling in it but little art. And of the early sextet which was published
in 1809 as opus 71 he wrote to his publishers that it was one of his
early pieces and was, moreover, written in a night, that there was
little further to say about it except that it was written by a composer
who had at least produced some better works—though many men
might still consider this the best. Yet it is to be observed that in nearly
all of them Beethoven made the best of the possibilities open to him,
possibilities which were greatly restricted by the general lack of
technical skill in playing wind instruments, and that all show at least
a clear and logical form.

The octet, opus 103, the sextet, opus 81, the sextet, opus 71, and
the quintet, opus 16, are all in the key of E-flat major, a key which is
favorable to all wood-wind instruments. The octet was written, as we
have said, in 1792. Beethoven rearranged it as a string quintet and
in that form it was published in 1796 as opus 4. In its original form
the chief rôle is taken by the oboe, especially in the slow second
movement, which has the touch of a pastoral idyl. The last
movement in rondo form offers the clarinets an opportunity in the first
episode. A Rondino for the same combination of instruments written
about the same time seems to forecast parts of Fidelio. The sextet
for two horns and string quartet is little more than a duet for the
horns with a string accompaniment.

We may pass over the trio for two oboes and English horn, published
as opus 87, and the flute duet written for his friend Degenhart on the
night of August 23, 1792. The sextet, opus 71, which Beethoven said
was written in a night, is none the less written with great care. The
prelude-like introduction and the cheerful style suggest some happy sort
of serenade music. The melody (bassoon) in the adagio is of great
beauty. There are, among its movements, a minuet and a lively
rondo in march rhythm.

The quintet, opus 16, in which the piano is joined with four
instruments may well have been suggested by Mozart’s quintet in
the same form; though Beethoven was a great pianist and had
already in an earlier trio and a sonata experimented in combining the
pianoforte with wind instruments. The wind instruments are here
treated as an independent group and the part for the piano is
brilliant. There is a richness of ideas throughout which raises the
work above the earlier compositions for wind.

The septet in E-flat, opus 20, for clarinet, horn, bassoon, violin, viola,
cello and double-bass, is undoubtedly the finest of Beethoven’s
works for combinations of wind instruments. It was written just before
1800 and was so full of joy and humor that those who had heard
Beethoven’s other works with a hostile ear were quite won over for
the time being by this. Technically it may be considered the result of
all his previous experiments. It is rather in the manner of a suite.
There is a slow prelude, an allegro con brio, an adagio cantabile, a
tempo di menuetto, which he later arranged for pianoforte and
incorporated in the little sonata, opus 49, No. 1, a theme and
variations, a scherzo, and a final presto, which is preceded by an
introductory andante of great beauty and of more seriousness than is
characteristic of the work as a whole. The success of the work is due
first to the freshness of the ideas, then to the skill with which they are
arranged for the difficult combination of instruments. For Beethoven
has made something of charm out of the very shortcomings of the
wind instruments. The short phrases, the straightforward character of
all the themes and motives, and the general simplicity all show these
necessarily restricted instruments at their very best.

Schubert’s octet for two violins, viola, cello, double-bass, clarinet,
horn, and bassoon is among the most beautiful pieces of chamber
music for the wind instruments. It is the first of Schubert’s
contributions to chamber music which fully reveals his genius.
Mention may also be made of the variations for flute and piano on
the melody of one of his songs, Trockene Blumen.

[Illustration: Arnold Schönberg. After a photo from life (1913).]

None of the great composers was more appreciative of the clarinet
than Weber. It is made to sound beautifully in all his overtures,
notably in that to ‘Oberon.’ He wrote two concertos for clarinet and
orchestra, and a big sonata in concerto style, opus 48, for clarinet
and piano. Besides these there is an Air and Variations, opus 33, for
clarinet and piano, and a quintet, opus 34, for clarinet and strings.
Weber also wrote a charming trio, opus 63, for flute, cello, and piano.

Spohr, too, showed a special favor towards the clarinet and he, like
Weber, wrote two concertos for it. Three of Spohr’s works which
were broadly famous in their day and much beloved are the nonet for
strings, flute, oboe, clarinet, horn, and bassoon, opus 31; the octet
for violin, two violas, cello, double-bass, clarinet, and two horns,
opus 32; and the quintet for flute, clarinet, horn, bassoon, and piano.
The two former are delicately scored, but the latter is marred by the
piano. Some idea of the fervor with which Spohr’s music was loved
may be gained from the fact that Chopin, the most selective and
fastidiously critical of all composers, conceived Spohr’s nonet to be
one of the greatest works of music. Doubtless the perfection of style
delighted him, a virtue for which he was willing to forgive many a
weakness. At present Spohr’s music is in danger of being totally
neglected.

Mendelssohn contributed nothing to this branch of chamber music,
and Schumann’s contributions were slight enough. There is a set of
Märchenerzählungen, opus 132, for clarinet, viola, and pianoforte,
which have some romantic charm but no distinction, and three
Romances for oboe. Brahms’ trio for clarinet, violoncello, and piano
has already been mentioned. Besides these he wrote two excellent
sonatas for clarinet and piano, and a quintet for clarinet and strings.
These works are almost unique among Brahms’ compositions for an
unveiled tenderness and sweetness. All three were probably in a
measure inspired by the playing of his friend Professor Mühlfeld,
who even from the orchestra made an impression with his clarinet
upon the memories of those who gathered at the epoch-making
performances at Bayreuth. The quintet, opus 115, is one of the most
poetic and moving of all Brahms’ compositions. The two clarinet
sonatas, one in F minor and one in E-flat major, were published
together in 1896 as opus 120. In these there is the same unusual
tenderness which appeals so directly to the heart in the quintet.

Since the time of Brahms most composers have written something in
small forms for the wind instruments with or without piano or strings.
Most of these have a charm, yet perhaps none is to be distinguished.
One of the most pleasing is Pierné’s Pastorale variée, for flute, oboe,
clarinet, trombone, horn, and two bassoons. But here we have in
truth a small wind orchestra. D’Indy’s Chanson et Danses, opus 50,
two short pieces for flute, two clarinets, horn, and two bassoons,
Fauré’s Nocturne, opus 33, for flute, two oboes, two clarinets, two
horns and two bassoons, and some of the smaller pieces of a
composer little known, J. Mouquet, are representative of the best
that the modern French composers have done in this kind of
chamber music. Debussy’s Rhapsodie, for clarinet and piano, is
evidently a pièce d’occasion. It was written for the Concours at the
Conservatoire. Max Reger’s sonata in A-flat, opus 49, No. 1, for
clarinet and piano, and a concerto for Waldhorn and piano by
Richard Strauss stand out conspicuously among the works of the
Germans. In this country Mr. Charles Martin Loeffler is to be
recognized as one with an unusually keen instinct for the effects of
wind instruments in chamber music. His two Rhapsodies for oboe,
viola, and piano show a delicacy of style that cannot be matched in
work for a similar combination by other composers.
FOOTNOTES:
[82] A few measures after L in the edition published by J. Hamelle, Paris.

[83] ‘Chamber Music, a Treatise for Students,’ by Thomas F. Dunhill. London, 1913.

[84] See Spitta: ‘Johann Sebastian Bach.’

