Parallel Computing Toolbox™
User's Guide
R2020a
How to Contact MathWorks
Phone: 508-647-7000
Getting Started
1
Parallel Computing Toolbox Product Description . . . . . . . . . . . . . . . . . . . . 1-2
Parallel for-Loops (parfor)
2
Decide When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
parfor-Loops in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Deciding When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Example of parfor With Low Parallel Overhead . . . . . . . . . . . . . . . . . . . . . 2-3
Example of parfor With High Parallel Overhead . . . . . . . . . . . . . . . . . . . . 2-4
Temporary Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Uninitialized Temporaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Temporary Variables Intended as Reduction Variables . . . . . . . . . . . . . . . 2-49
ans Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49
Load Distributed Arrays in Parallel Using datastore . . . . . . . . . . . . . . . . 3-10
Alternative Methods for Creating Distributed and Codistributed Arrays . 3-12
Programming Overview
5
How Parallel Computing Products Run a Job . . . . . . . . . . . . . . . . . . . . . . . 5-2
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Toolbox and Server Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Life Cycle of a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks . . . . . . . . . . . 5-21
Program Independent Jobs
6
Program Independent Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
GPU Computing
8
GPU Capabilities and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Performance Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Establish Arrays on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays from Existing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Examine gpuArray Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Save and Load gpuArrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Objects
9
Functions
10
1
Getting Started
Parallel Computing Toolbox lets you solve computationally and data-intensive problems using
multicore processors, GPUs, and computer clusters. High-level constructs—parallel for-loops, special
array types, and parallelized numerical algorithms—enable you to parallelize MATLAB® applications
without CUDA or MPI programming. The toolbox lets you use parallel-enabled functions in MATLAB
and other toolboxes. You can use the toolbox with Simulink® to run multiple simulations of a model in
parallel. Programs and models can run in both interactive and batch modes.
The toolbox lets you use the full processing power of multicore desktops by executing applications on
workers (MATLAB computational engines) that run locally. Without changing the code, you can run
the same applications on clusters or clouds (using MATLAB Parallel Server™). You can also use the
toolbox with MATLAB Parallel Server to execute matrix calculations that are too large to fit into the
memory of a single machine.
Parallel Computing Support in MathWorks Products
Most MathWorks products enable you to run applications in parallel. For example, Simulink models
can run simultaneously in parallel, as described in “Run Multiple Simulations” (Simulink). MATLAB
Compiler™ and MATLAB Compiler SDK™ software let you build and deploy parallel applications; for
example, see the “Parallel Computing” section of MATLAB Compiler “Standalone Applications”
(MATLAB Compiler).
Several MathWorks products now offer built-in support for the parallel computing products, without
requiring extra coding. For the current list of these products and their parallel functionality, see:
https://www.mathworks.com/products/parallel-computing/parallel-support.html
In this section...
“Creating Distributed Arrays” on page 1-4
“Creating Codistributed Arrays” on page 1-5
If your data is currently in the memory of your local machine, you can use the distributed function
to distribute an existing array from the client workspace to the workers of a parallel pool.
Distributed arrays use the combined memory of multiple workers in a parallel pool to store the
elements of an array. For alternative ways of partitioning data, see “Distributing Arrays to Parallel
Workers” on page 3-10. You can use distributed arrays to scale up your big data computation.
Consider distributed arrays when you have access to a cluster, as you can combine the memory of
multiple machines in your cluster.
A distributed array is a single variable, split over multiple workers in your parallel pool. You can
work with this variable as one single entity, without having to worry about its distributed nature.
Explore the functionalities available for distributed arrays in the Parallel Computing Toolbox:
“Run MATLAB Functions with Distributed Arrays” on page 4-19.
When you create a distributed array, you cannot control the details of the distribution. On the
other hand, codistributed arrays allow you to control all aspects of distribution, including
dimensions and partitions. In the following, you learn how to create both distributed and
codistributed arrays.
• Use the distributed function to distribute an existing array from the client workspace to the
workers of a parallel pool.
• You can construct a distributed array directly on the workers. You do not need to create the
array in the client first, which reduces client workspace memory requirements. The functions
available include eye(___,'distributed'), rand(___,'distributed'), etc. For a full list,
see the distributed object reference page.
• Create a codistributed array inside an spmd statement, see “Single Program Multiple Data
(spmd)” on page 1-12. Then access it as a distributed array outside the spmd statement. This
lets you use distribution schemes other than the default.
In this example, you create an array in the client workspace, then turn it into a distributed array:
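The code for this step appears to have been lost in extraction. A minimal sketch consistent with the surrounding text (the variable name A and the use of magic are assumptions; B is the name the next sentence refers to):

```matlab
% Create an ordinary array in the client workspace.
A = magic(4);

% Distribute it across the workers of the current parallel pool.
B = distributed(A);
```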
You have created B as a distributed array, split over the workers in your parallel pool. This is
shown in the figure.
Create and Use Distributed Arrays
• “Partitioning a Larger Array” on page 4-6 — Start with a large array that is replicated on all
workers, and partition it so that the pieces are distributed across the workers. This is most useful
when you have sufficient memory to store the initial replicated array.
• “Building from Smaller Arrays” on page 4-6 — Start with smaller replicated arrays stored on
each worker, and combine them so that each array becomes a segment of a larger codistributed
array. This method reduces memory requirements as it lets you build a codistributed array from
smaller pieces.
• “Using MATLAB Constructor Functions” on page 4-7 — Use any of the MATLAB constructor
functions like rand or zeros with a codistributor object argument. These functions offer a quick
means of constructing a codistributed array of any size in just one step.
In this example, you create a codistributed array inside an spmd statement, using a nondefault
distribution scheme. First, define 1-D distribution along the third dimension, with 4 parts on worker
1, and 12 parts on worker 2. Then create a 3-by-3-by-16 array of zeros.
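The corresponding code appears to have dropped out in extraction. A minimal sketch matching the description (1-D distribution along dimension 3, with 4 parts on worker 1 and 12 parts on worker 2; the variable names codist and Z are assumptions):

```matlab
spmd
    % 1-D codistributor along the third dimension: 4 planes on
    % worker 1 and 12 planes on worker 2, for a 3-by-3-by-16 array.
    codist = codistributor1d(3, [4, 12], [3 3 16]);
    Z = zeros(3, 3, 16, codist);
end
```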
For more details on codistributed arrays, see “Working with Codistributed Arrays” on page 4-4.
See Also
Related Examples
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Big Data Workflow Using Tall Arrays and Datastores” on page 5-46
• “Single Program Multiple Data (spmd)” on page 1-12
ver
When you enter this command, MATLAB displays information about the version of MATLAB you are
running, including a list of all toolboxes installed on your system and their version numbers.
If you want to run your applications on a cluster, see your system administrator to verify that the
version of Parallel Computing Toolbox you are using is the same as the version of MATLAB Parallel
Server installed on your cluster.
Interactively Run a Loop in Parallel Using parfor
This example calculates the spectral radius of a matrix and converts a for-loop into a parfor-loop.
Find out how to measure the resulting speedup.
1 In the MATLAB Editor, enter the following for-loop. Add tic and toc to measure the time
elapsed.
tic
n = 200;
A = 500;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
2 Run the script, and note the elapsed time.
Elapsed time is 31.935373 seconds.
3 In the script, replace the for-loop with a parfor-loop.
tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
4 Run the new script, and run it again. Note that the first run is slower than the second run,
because the parallel pool takes some time to start and make the code available to the workers.
Note the elapsed time for the second run.
By default, MATLAB automatically opens a parallel pool of workers on your local machine.
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
...
Elapsed time is 10.760068 seconds.
The parfor run on four workers is about three times faster than the corresponding for-loop
run. The speed-up is smaller than the ideal speed-up of a factor of four on four workers. This is
due to parallel overhead, including the time required to transfer data from the client to the
workers and back. This example shows a good speed-up with relatively small parallel overhead,
and benefits from conversion into a parfor-loop. Not all for-loop iterations can be turned into
faster parfor-loops. To learn more, see “Decide When to Use parfor” on page 2-2.
One key requirement for using parfor-loops is that the individual iterations must be independent.
Independent problems suitable for parfor processing include Monte Carlo simulations and
parameter sweeps. For next steps, see “Convert for-Loops Into parfor-Loops” on page 2-7.
In this example, you managed to speed up the calculation by converting the for-loop into a parfor-
loop on four workers. You might reduce the elapsed time further by increasing the number of workers
in your parallel pool; see “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21.
You can modify your cluster profiles to control how many workers run your loops, and whether the
workers are local or on a cluster. For more information on profiles, see “Discover Clusters and Use
Cluster Profiles” on page 5-11.
Modify your parallel preferences to control whether a parallel pool is created automatically, and how
long it remains available before timing out. For more information on preferences, see “Specify Your
Parallel Preferences” on page 5-9.
You can run Simulink models in parallel with the parsim command instead of using parfor-loops.
For more information and examples of using Simulink in parallel, see “Run Multiple Simulations”
(Simulink).
See Also
parfor | parpool | tic | toc
More About
• “Decide When to Use parfor” on page 2-2
• “Convert for-Loops Into parfor-Loops” on page 2-7
• “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21
Run Batch Parallel Jobs
5 batch does not block MATLAB and you can continue working while computations take place. If
you need to block MATLAB until the job finishes, use the wait function on the job object.
wait(job)
6 After the job finishes, you can retrieve and view its results. The load command transfers
variables created on the worker to the client workspace, where you can view the results:
load(job,'A')
plot(A)
7 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job
batch runs your code on a local worker or a cluster worker, but does not require a parallel pool.
You can use batch to run either scripts or functions. For more details, see the batch reference page.
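A minimal end-to-end sketch of the workflow in the numbered steps above, assuming a script file mywave.m exists on the path and assigns a variable A:

```matlab
job = batch('mywave');   % submit the script to a worker; does not block MATLAB
wait(job);               % block until the job finishes
load(job, 'A');          % transfer the variable A to the client workspace
plot(A)
delete(job);             % permanently delete the job's data
clear job                % remove the job reference from the workspace
```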
parfor i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Run the script in MATLAB with the batch command. Indicate that the script should use a
parallel pool for the loop:
job = batch('mywave','Pool',3)
This command specifies that three workers (in addition to the one running the batch script) are
to evaluate the loop iterations. Therefore, this example uses a total of four local workers,
including the one worker running the batch script. Altogether, there are five MATLAB sessions
involved, as shown in the following diagram.
wait(job)
load(job,'A')
plot(A)
The results look the same as before; however, there are two important differences in execution:
• The work of defining the parfor-loop and accumulating its results are offloaded to another
MATLAB session by batch.
• The loop iterations are distributed from one MATLAB worker to another set of workers
running simultaneously ('Pool' and parfor), so the loop might run faster than having only
one worker execute it.
6 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job
Running a script as a batch from the browser uses only one worker from the cluster. So even if the
script contains a parfor loop or spmd block, it does not open an additional pool of workers on the
cluster. These code blocks execute on the single worker used for the batch job. If your batch script
requires opening an additional pool of workers, you can run it from the command line, as described in
“Run a Batch Job with a Parallel Pool” on page 1-9.
When you run a batch job from the browser, this also opens the Job Monitor. The Job Monitor is a tool
that lets you track your job in the scheduler queue. For more information about the Job Monitor and
its capabilities, see “Job Monitor” on page 5-24.
See Also
batch
Related Examples
• “Run Batch Job and Access Files from Workers”
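The lines that create M and MM appear to have been lost in extraction. A minimal reconstruction consistent with the text that follows (the use of magic here is an assumption):

```matlab
M = magic(4);          % ordinary array in the client workspace
MM = distributed(M);   % distributed array, split over the pool's workers
```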
Now MM is a distributed array, equivalent to M, and you can manipulate or access its elements in the
same way as any other array.
M2 = 2*MM; % M2 is also distributed, calculation performed on workers
x = M2(1,1) % x on the client is set to first element of M2
This code creates an individual 4-by-4 matrix, R, of random numbers on each worker in the pool.
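The spmd block described here appears to be missing from the extracted text; a minimal sketch, assuming a parallel pool is open:

```matlab
spmd
    R = rand(4);   % each worker creates its own 4-by-4 random matrix
end
```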
Composites
Following an spmd statement, in the client context, the values from the block are accessible, even
though the data is actually stored on the workers. On the client, these variables are called Composite
objects. Each element of a composite is a symbol referencing the value (data) on a worker in the pool.
Note that because a variable might not be defined on every worker, a Composite might have
undefined elements.
Continuing with the example from above, on the client, the Composite R has one element for each
worker:
X = R{3}; % Set X to the value of R from worker 3.
The line above retrieves the data from worker 3 to assign the value of X. The following code sends
data to worker 3:
X = X + 2;
R{3} = X; % Send the value of X from the client to worker 3.
If the parallel pool remains open between spmd statements and the same workers are used, the data
on each worker persists from one spmd statement to another.
spmd
R = R + labindex % Use values of R from previous spmd.
end
Distribute Arrays and Run SPMD
A typical use for spmd is to run the same code on a number of workers, each of which accesses a
different set of data. For example:
spmd
INP = load(['somedatafile' num2str(labindex) '.mat']);
RES = somefun(INP)
end
Then the values of RES on the workers are accessible from the client as RES{1} from worker 1,
RES{2} from worker 2, etc.
There are two forms of indexing a Composite, comparable to indexing a cell array: braces, as in R{3}, return the value stored on the indexed worker, while parentheses, as in R(3), return another Composite referencing that worker's data.
Although data persists on the workers from one spmd block to another as long as the parallel pool
remains open, data does not persist from one instance of a parallel pool to another. That is, if the pool
is deleted and a new one created, all data from the first pool is lost.
For more information about using distributed arrays, spmd, and Composites, see “Distributed
Arrays”.
• Accelerate your code using interactive parallel computing tools, such as parfor and parfeval
• Scale up your computation using interactive Big Data processing tools, such as distributed,
tall, datastore, and mapreduce
• Use gpuArray to speed up your calculation on the GPU of your computer
• Use batch to offload your calculation to computer clusters or cloud computing facilities
• Node: standalone computer, containing one or more CPUs / GPUs. Nodes are networked to form a
cluster or supercomputer
• Thread: smallest set of instructions that can be managed independently by a scheduler. On a GPU,
multiprocessor or multicore system, multiple threads can be executed simultaneously (multithreading)
• Batch: off-load execution of a functional script to run in the background
• Scalability: increase in parallel speedup with the addition of more resources
• MATLAB workers: MATLAB computational engines that run in the background without a graphical
desktop. You use functions in the Parallel Computing Toolbox to automatically divide tasks and
assign them to these workers to execute the computations in parallel. You can run local workers to
take advantage of all the cores in your multicore desktop computer. You can also scale up to run
your workers on a cluster of machines, using the MATLAB Parallel Server. The MATLAB session
you interact with is known as the MATLAB client. The client instructs the workers with parallel
language functions.
• Parallel pool: a parallel pool of MATLAB workers created using parpool or functions with
automatic parallel support. By default, parallel language functions automatically create a parallel
pool for you when necessary. To learn more, see “Run Code on Parallel Pools” on page 2-56.
For the default local profile, the default number of workers is one per physical CPU core using a
single computational thread. This is because even though each physical core can have several
virtual cores, the virtual cores share some resources, typically including a shared floating point
unit (FPU). Most MATLAB computations use this unit because they are double-precision floating
point. Restricting to one worker per physical core ensures that each worker has exclusive access
to a floating point unit, which generally optimizes performance of computational code. If your
code is not computationally intensive, for example, it is input/output (I/O) intensive, then consider
using up to two workers per physical core. Running too many workers on too few resources may
impact performance and stability of your machine.
• Speed up: Accelerate your code by running on multiple MATLAB workers or GPUs, for example,
using parfor, parfeval, or gpuArray.
• Scale up your data: Partition your big data across multiple MATLAB workers, using tall arrays and
distributed arrays. To learn more, see “Big Data Processing”.
• Asynchronous processing: Use parfeval to execute a computing task in the background without
waiting for it to complete.
• Scale up to clusters and clouds: If your computing task is too big or too slow for your local
computer, you can offload your calculation to a cluster onsite or in the cloud using MATLAB
Parallel Server. For more information, see “Clusters and Clouds”.
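As a sketch of the asynchronous pattern mentioned above (the use of magic as the background task is purely illustrative):

```matlab
% Request that magic(3) be evaluated on a pool worker; parfeval
% returns immediately with a Future object instead of blocking.
f = parfeval(@magic, 1, 3);

% ... do other work on the client while the task runs ...

% Retrieve the result (blocks only if the task is not yet done).
M = fetchOutputs(f);
```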
See Also
Related Examples
• “Choose a Parallel Computing Solution” on page 1-16
• “Identify and Select a GPU Device” on page 8-19
• “Decide When to Use parfor” on page 2-2
• “Run Single Programs on Multiple Data Sets” on page 3-2
• “Evaluate Functions in the Background Using parfeval” on page 1-23
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Run Batch Parallel Jobs” on page 1-9
Choose a Parallel Computing Solution
inspiration; a quintet by Ernst von Dohnányi. Sgambati has written a
quintet without distinction. Mr. Dunhill tells us in his book[83] on
chamber music that there is an excellent quintet by a young British
composer, James Friskin. Moreover the sextet for piano and strings
by Joseph Holbrooke, in which a double bass is added to the
quartet, deserves mention. And among American composers Arthur
Foote and George Chadwick should be mentioned, the one for his
quintet in A minor, opus 38, the other for his quintet in E-flat major,
without opus number.
Only a few piano quartets have been written since those of Brahms
and Dvořák which are significant of any development or even of a
freshness of life. Those of Fauré have already been mentioned as
being perfect in style, but on the whole they seem less original and
less interesting than the quintet by the same composer. Saint-Saëns’
quartet, opus 41, is remarkable for the brilliant treatment of the
pianoforte, and the fine sense of instrumental style which it reveals,
but is on the whole uninteresting and is certainly insignificant
compared with the quartets of Fauré or those of d’Indy and
Chausson. D’Indy’s quartet, opus 7, in A minor is no longer a new
work, nor does it show in any striking way those qualities in French
music which have more recently come to splendid blooming. But it is
carefully wrought and the three movements are moderately
interesting. The second is perhaps the best music, the third is
certainly the most spirited. There is more of the manner though
perhaps less of the spirit of César Franck in Chausson’s quartet in A
major, opus 30.
III
As to sonatas, those for violin and piano are treated elsewhere.
There are too many to be discussed in this chapter. There are fewer
for the cello and the best of these may here be mentioned. Skill in
playing the violoncello was slower to develop than that in playing the
violin. This was probably because the viola da gamba with its six
strings was easier to play and was more in favor as a solo
instrument. The baryton was a kind of viola da gamba with
sympathetic strings stretched under the fingerboard, and even as
late as the maturity of Haydn this instrument was in general favor.
But the tone of the viola da gamba was lighter than that of the
violoncello, and so by the beginning of the eighteenth century the
cello was preferred to the gamba for the bass parts of works like
Corelli’s in concerted style. Little by little it rose into prominence from
this humble position. Meanwhile the immortal suites for the
violoncello alone by Bach had been written. Bach was probably
advised in the handling of the instrument by Abel, who was a famous
gamba player; so that it seems likely that these suites were
conceived for the gamba as much as for the cello.[84] The last of
them, however, was written especially for the viola pomposa, an
instrument which Bach invented himself. This was a small cello with
an extra string tuned to E, a fifth above the A of the cello.
Among composers who wrote expressly for the cello were Giorgio
Antoniotti, who lived in Milan about 1740, and Lanzetti, who was
'cellist to the king of Sardinia between 1730 and 1750. Later the
Italians A. Canavasso and Carlo Ferrari (b. 1730) became famous as
players, and Boccherini also was a brilliant cellist.
Beethoven wrote five sonatas for cello and piano. The first two, opus
5, were written in 1796, while Beethoven was staying in Berlin,
evidently with the intention of dedicating them to Frederick William II,
and for his own appearance in public with Duport. They are
noticeably finer, or more expressive works, than the early sonatas for
violin, opus 12; perhaps because the cello does not suggest a style
which, empty of meaning, is yet beautiful and effective by reason of
sheer brilliance. The violin sonatas, all of them except the last, are
largely virtuoso music. The cello sonatas are more serious and on
the whole more sober. This may be laid to thoroughly practical
reasons. The cello has not the variety of technical possibilities that
the violin has, nor even in such rapid passages as can be played
upon it can it give a brilliant or carrying tone. By reason of its low
register it can be all too easily overpowered by the piano. Only the
high notes on the A string can make themselves heard above a solid
or resonant accompaniment. Hence if the composer desires to write
a brilliant, showy sonata for pianoforte and cello, he can do so only
by sacrificing all but the topmost registers of the cello. Even at that
the piano is more than likely to put the cello wholly in the shade.
The next sonata, opus 69, in A major, was not written until twelve
years later. A different Beethoven speaks in it. The first theme,
announced at once by the cello alone, gives the key to the spirit of
the work. It is gentle (dolce) in character, but full of a quiet and
moving strength. After giving the first phrase of it alone the cello
holds a long low E, over which the piano lightly completes it. There is
a cadenza for piano, and then, after the piano has given the whole
theme once again, there is a short cadenza for cello, leading to a
short transition at the end of which one finds the singing second
theme. This is first given out by the piano over smooth scales by the
cello, and then the cello takes it up and the piano plays the scales.
Nothing could be more exquisite than the combination of these two
instruments in this altogether lovely sonata, which without effort
permits each in turn or together to reveal its most musical qualities.
Sometimes the cello is low and impressive, strong and independent,
while the piano is lively and sparkling, as in the closing parts of the
first section of the first movement. Again the cello has vigorous
rolling figures that bring out the fullest sonority the instrument is
capable of, while the piano adds the theme against such a vibrant
background, with no fear of drowning the cello, as in the first portions
of the development section.
Finally there are two sonatas, opus 102, which are in every way
representative of the Beethoven of the last pianoforte sonatas and
even the last quartets. The first of these—in C major—Beethoven
himself entitled a ‘free sonata,’ and the form is indeed free, recalling
the form of the A major pianoforte sonata, opus 101, upon which
Beethoven was working at the same time. In spirit, too, it is very like
the A major sonata, but lacks the more obvious melodic charm. The
sonata begins with an andante, in that singing yet mystical style
which characterizes so much of Beethoven’s last work, and the
andante does not end but seems to lose itself, to become absorbed
in a mist of trills, out of which there springs a vigorous allegro vivace,
in the dotted march rhythm which one finds in the later pianoforte
sonatas. After this, a short rhapsodical adagio brings us back to a bit
of the opening andante, which once more trills itself away, seems to
be snuffed out, as it were, by a sudden little phrase which, all
unexpected, announces the beginning of the final rondo.
There remains only to mention the sonata by Max Reger, opus 78,
two sonatas by Emanuel Moór, one by Guy Ropartz in G minor, two
by Camille Saint-Saëns, opus 32 and opus 123, as among those
which make a partial success of the extremely difficult combination.
If excellent music for cello and piano is so rare, music for the viola
and piano is almost entirely wanting. The two instruments do not go
well together. Practically the only example of the combination in the
works of the great masters is furnished by Schumann’s
Märchenbilder, which are but indifferent music. York Bowen, an
English composer, has considered it worthy of the sonata, and has
written two for it, one in C minor and one in F major. Mr. Benjamin
Dale has also written some agreeable pieces, including a suite and a
fantasy.
IV
There are relatively few works also in which the piano has been
combined with wind instruments. The wind instruments which have
been most employed in chamber music are the flute, oboe, clarinet,
and bassoon. Occasionally there is a short bit for horn, or for English
horn, and rarely something for trumpet or saxophone. No special
combination of these instruments either by themselves or with the
piano has obtained signal favor, and we may therefore confine
ourselves to mentioning with brief notice the various works of the
great masters in turn. We will include likewise here their chamber
works for wind instruments without pianoforte.
Of Haydn’s works we will only mention the two trios for flute and
violin and the octet for two oboes, two clarinets, two horns and two
bassoons. Most of Mozart’s works for wind instruments bear the
mark of some occasion. There are a great many Serenades and
Divertimenti, which can hardly be called representative of his best
and can hardly be distinguished from each other. Among the
interesting works are the concerto for flute and harp (K 299), the trio
for clarinet, viola and piano (K 498), the quintet for pianoforte, oboe,
clarinet, horn and bassoon (K 452), and the quintet for clarinet and
strings (K 581). The trio was composed in Vienna in August, 1786,
and is conspicuous for a fine handling of the viola. The clarinet is not
used at all in the lower registers, lest it interfere with the viola.
Mozart considered the quintet for piano and wind instruments at the
time he wrote it the best thing he had written. It was composed in
March, 1784, for a public concert and was received with great
applause. Jahn wrote of it that from beginning to end it was a true
triumph in the art of recognizing and adapting the peculiar
euphonious quality of each instrument. Doubtless it served as a
model for Beethoven’s composition in the same form.
The octet, opus 103, the sextet, opus 81, the sextet, opus 71, and
the quintet, opus 16, are all in the key of E-flat major, a key which is
favorable to all wood-wind instruments. The octet was written, as we
have said, in 1792. Beethoven rearranged it as a string quintet and
in that form it was published in 1796 as opus 4. In its original form
the chief rôle is taken by the oboe, especially in the slow second
movement, which has the touch of a pastoral idyl. The last
movement in rondo form offers the clarinets an opportunity in the first
episode. A Rondino for the same combination of instruments written
about the same time seems to forecast parts of Fidelio. The sextet
for two horns and string quartet is little more than a duet for the
horns with a string accompaniment.
We may pass over the trio for two oboes and English horn, published
as opus 87, and the flute duet written for his friend Degenhart on the
night of August 23, 1792. The sextet, opus 71, which Beethoven said
was written in a night, is none the less written with great care. The
prelude introduction and the cheerful style suggest some happy sort
of serenade music. The melody (bassoon) in the adagio is of great
beauty. There are, among its movements, a minuet and a lively
rondo in march rhythm.
The quintet, opus 16, in which the piano is joined with four
instruments may well have been suggested by Mozart’s quintet in
the same form; though Beethoven was a great pianist and had
already in an earlier trio and a sonata experimented in combining the
pianoforte with wind instruments. The wind instruments are here
treated as an independent group and the part for the piano is
brilliant. There is a richness of ideas throughout which raises the
work above the earlier compositions for wind.
The septet in E-flat, opus 20, for clarinet, horn, bassoon, violin, viola,
cello and double-bass, is undoubtedly the finest of Beethoven’s
works for combinations of wind instruments. It was written just before
1800 and was so full of joy and humor that those who had heard
Beethoven’s other works with a hostile ear were quite won over for
the time being by this. Technically it may be considered the result of
all his previous experiments. It is rather in the manner of a suite.
There is a slow prelude, an allegro con brio, an adagio cantabile, a
tempo di menuetto, which he later arranged for pianoforte and
incorporated in the little sonata, opus 49, No. 1, a theme and
variations, a scherzo, and a final presto, which is preceded by an
introductory andante of great beauty and of more seriousness than is
characteristic of the work as a whole. The success of the work is due
first to the freshness of the ideas, then to the skill with which they are
arranged for the difficult combination of instruments. For Beethoven
has made something of charm out of the very shortcomings of the
wind instruments. The short phrases, the straightforward character of
all the themes and motives, and the general simplicity all show these
necessarily restricted instruments at their very best.
Spohr, too, showed a special favor towards the clarinet and he, like
Weber, wrote two concertos for it. Three of Spohr’s works which
were broadly famous in their day and much beloved are the nonet for
strings, flute, oboe, clarinet, horn, and bassoon, opus 31; the octet
for violin, two violas, cello, double-bass, clarinet, and two horns,
opus 32; and the quintet for flute, clarinet, horn, bassoon, and piano.
The two former are delicately scored, but the latter is marred by the
piano. Some idea of the fervor with which Spohr’s music was loved
may be gained from the fact that Chopin, the most selective and
fastidiously critical of all composers, conceived Spohr’s nonet to be
one of the greatest works of music. Doubtless the perfection of style
delighted him, a virtue for which he was willing to forgive many a
weakness. At present Spohr’s music is in danger of being totally
neglected.