Solutions to Homework Problems in Chapter 6
Hwang, Fox and Dongarra: Distributed and Cloud Computing,
Morgan Kaufmann Publishers, copyrighted 2012
Note: The solutions to the Chapter 6 problems were prepared with the assistance of graduate
students from Indiana University under the supervision of Dr. Judy Qiu.
Problem 6.1:
Get the source code from: http://dl.dropbox.com/u/12951553/bookanswers/answer6.1.zip
(a). We implemented a demo system that is quite simple in its functionality: there is a search
box used to find contacts, and once a contact has been found, we list the recent emails and
attachments associated with that contact. To do this, the application offers three URLs that are
called by the JavaScript running in the browser to obtain the data: search.json,
messages.json, and files.json.
The system responds to a request for the message history of a given contact through
/messages.json, which accepts an email address as a GET parameter.
Note that this functionality requires an authentication step not shown here. The code behind
that call is as follows:
class MessagesHandler(webapp.RequestHandler):
    def get(self):
        current_user = users.get_current_user()
        current_email = current_user.email()
        emailAddr = self.request.get('email')
        contextIO = ContextIO(api_key=settings.CONTEXTIO_OAUTH_KEY,
                              api_secret=settings.CONTEXTIO_OAUTH_SECRET,
                              api_url=settings.CONTEXTIO_API_URL)
        response = contextIO.contactmessages(emailAddr, account=current_email)
        self.response.out.write(simplejson.dumps(response.get_data()))
The code simply uses the contactmessages.json API call of Context.IO and returns all the
messages, including the subject, other recipients, thread ID, and even attachments, in JSON
format. The complete code for this demo application has been made available by the Context.IO
team on their GitHub account (https://github.com/contextio/AppEngineDemo).
This answer is based on the Google App Engine Blog Post at
http://googleappengine.blogspot.com/2011/05/accessing-gmail-accounts-from-app.html.
(b). The dashboard of Google App Engine provides measurements of many useful aspects of the
deployed application: for example, execution logs, version control, quota details, a datastore
viewer, and administration tools. It also provides detailed resource usage information.
Critical measurements can be easily retrieved from this powerful dashboard.
(c). Automatic scaling is built into App Engine, and it is not visible to users.
http://code.google.com/appengine/whyappengine.html#scale
Problem 6.2:
Get the source code: http://dl.dropbox.com/u/12951553/bookanswers/answer6.2.zip
Here we design a very simple data storage system using the Blobstore service to
illustrate how Google App Engine handles data. The Blobstore API allows your application to
serve data objects, called blobs, that are much larger than the size allowed for objects in the
Datastore service. Blobs are useful for serving large files, such as video or image files, and for
allowing users to upload large data files. Blobs are created by uploading a file through an HTTP
request.
Typically, your applications will do this by presenting a form with a file upload field to the
user. When the form is submitted, the Blobstore creates a blob from the file's contents and
returns an opaque reference to the blob, called a blob key, which you can later use to serve the
blob. The application can serve the complete blob value in response to a user request, or it can
read the value directly using a streaming file-like interface. This system includes the following
functions: user login, data listing, data upload/download. Gzip compression is used when
possible to decrease the cost.
User login: This function is implemented using the User Service provided in GAE. If the user
is already signed in to the application, get_current_user() returns the User object for that
user; otherwise, it returns None. If the user has signed in, we display a personalized message
using the nickname associated with the user's account. If the user has not signed in, we tell
webapp to redirect the user's browser to the Google account sign-in screen. The redirect
includes the URL of this page (self.request.uri), so the Google account sign-in mechanism will
send the user back here after the user has signed in or registered for a new account.
user = users.get_current_user()
if user:
    self.response.headers['Content-Encoding'] = 'gzip'
    self.response.headers['Content-Type'] = 'text/plain'
    self.response.out.write('Hello, ' + user.nickname())
    self.response.out.write('<a href="' + users.create_logout_url("/") + '">sign out</a><br/>')
else:
    self.redirect(users.create_login_url(self.request.uri))
The content is gzip compressed when sent back from the server. Also, a log out link is provided.
Data listing: To list the data uploaded by a specific user, GQL is used to guarantee that users
can only see and access data that belongs to them.
class Blob(db.Model):
    """Models a data entry with a user, content, name, size, and date."""
    user = db.UserProperty()
    name = db.StringProperty(multiline=True)
    content = blobstore.BlobReferenceProperty()  # reference to the blob's BlobKey
    date = db.DateTimeProperty(auto_now_add=True)
    size = db.IntegerProperty()
This defines a data Blob class with five properties: user, whose value is a User object; name,
whose value is a string; content, whose value is a BlobKey reference to this blob; date, whose
value is a datetime.datetime; and size, whose value is an integer. GQL, a SQL-like query
language, provides access to the App Engine datastore query engine's features using a familiar
syntax. The query happens here:
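A minimal sketch of such a per-user listing query (the exact query string in the downloadable
archive may differ):

blobs = db.GqlQuery("SELECT * FROM Blob WHERE user = :1 ORDER BY date DESC",
                    users.get_current_user())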
Data upload: To receive uploads, the application first asks the Blobstore service to generate
an upload URL:

upload_url = blobstore.create_upload_url('/upload')

There is an asynchronous version, create_upload_url_async(), which allows your application
code to continue running while Blobstore generates the upload URL.
The form must include a file upload field, and the form's enctype must be set to
multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API,
which creates the blob. The API also creates an info record for the blob, stores the record in
the datastore, and passes the rewritten request to your application on a given path with a
blob key:
self.response.out.write('<html><body>')
self.response.out.write('<form action="%s" method="POST" enctype="multipart/form-data">'
                        % upload_url)
self.response.out.write("""Upload File: <input type="file" name="file"><br>
<input type="submit" name="submit" value="Submit"> </form></body></html>""")
In this handler, you can store the blob key with the rest of your application's data model.
The blob key itself remains accessible from the blob info entity in the datastore. Note that
after the user submits the form and your handler is called, the blob has already been saved
and the blob info added to the datastore. If your application doesn't want to keep the blob,
you should delete the blob immediately to prevent it from becoming orphaned:
class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        try:
            upload_files = self.get_uploads('file')  # 'file' is the upload field in the form
            blob_info = upload_files[0]
            myblob = Blob()
            myblob.name = blob_info.filename
            myblob.size = blob_info.size
            myblob.user = users.get_current_user()
            myblob.content = blob_info.key()
            myblob.put()
            self.redirect('/')
        except:
            self.redirect('/')
Data download: To serve blobs, you must include a blob download handler as a path in your
application. The application serves a blob by setting a header on the outgoing response. The
following sample uses the webapp framework. When using webapp, the handler should pass
the blob key for the desired blob to self.send_blob(). In this example, the blob key is passed to
the download handler as part of the URL. The download handler can get the blob key by any
means you choose, such as through another method or user action.
class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, resource):
        resource = str(urllib.unquote(resource))
        blob_info = blobstore.BlobInfo.get(resource)
        self.send_blob(blob_info)
The webapp framework provides the download handler class
blobstore_handlers.BlobstoreDownloadHandler to help you parse the form data. For more
information, see the reference for BlobstoreDownloadHandler. Blobs can be served from any
application URL. To serve a blob in your application, you put a special header in the response
containing the blob key, and App Engine replaces the body of the response with the content of
the blob.
Problem 6.3:
Source code: http://dl.dropbox.com/u/12951553/bookanswers/answer6.3.zip
For this question, we provide a Java SimpleDB application with all the critical functions:
domain creation, data insertion, data editing, data deletion, and domain deletion. These
functions demonstrate how to make basic requests to Amazon SimpleDB using the AWS SDK for
Java. The reader can easily scale this application up to meet the requirements of the
question.
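As a minimal sketch of the same critical functions, the following Python snippet uses the boto
library rather than the AWS SDK for Java used by the linked archive; the domain name, item
names, and attributes are illustrative assumptions:

import boto

# Connect to SimpleDB with your AWS credentials (placeholders here).
conn = boto.connect_sdb('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

# Domain creation.
domain = conn.create_domain('customers')

# Data insertion: an item is a named set of attribute/value pairs.
domain.put_attributes('item001', {'name': 'Alice', 'city': 'Bloomington'})

# Data editing: replace an existing attribute value.
domain.put_attributes('item001', {'city': 'Indianapolis'}, replace=True)

# Query the domain with a SELECT expression.
for item in domain.select("select * from customers where city = 'Indianapolis'"):
    print item

# Data deletion, then domain deletion.
domain.delete_attributes('item001')
conn.delete_domain('customers')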
Prerequisites: You must have a valid Amazon Web Services developer account, and be signed
up to use Amazon SimpleDB. For more information on Amazon SimpleDB, please refer to
http://aws.amazon.com/simpledb
http://aws.amazon.com/security-credentials
Problem 6.4:
Now, design and request an EC2 configuration on the AWS platform for the parallel
multiplication of two very large matrices with an order exceeding 50,000.
The parallel matrix multiplication is implemented using Hadoop 0.20.205, and the experiments
are performed on the Amazon EC2 platform with sample matrices of order between 20,000 and
50,000. The steps to implement parallel matrix multiplication using Hadoop are as follows:
1) Split matrix A and matrix B into two grids of n*n blocks. There will be 2*n*n Map tasks,
and n*n Reduce tasks.
2) Each Map task holds one block, either A[p][q] or B[p][q], and sends it to the n Reduce
tasks r[p][j] (1 <= j <= n) or r[i][q] (1 <= i <= n), respectively.
3) Each Reduce task r[p][q] receives 2*n sub-matrices, namely A[p][k] and B[k][q] for
1 <= k <= n; it multiplies each pair A[p][k] and B[k][q] and sums the products (see the
sketch below).
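The following Python sketch illustrates the blocked map and reduce logic described above. It
is a hypothetical illustration, not the Hadoop Java code in the linked archive; blocks are
assumed to be NumPy arrays, and N is the grid dimension n:

import numpy as np

N = 4  # n: grid dimension, chosen so one block fits in a small instance's memory

def map_block(matrix, p, q, block):
    """Map task for one block: emit it to the n Reduce tasks that need it."""
    if matrix == 'A':                 # A[p][q] contributes to C[p][j] for every j
        for j in range(N):
            yield (p, j), ('A', q, block)
    else:                             # B[p][q] contributes to C[i][q] for every i
        for i in range(N):
            yield (i, q), ('B', p, block)

def reduce_block(key, values):
    """Reduce task r[p][q]: compute C[p][q] = sum_k A[p][k] * B[k][q]."""
    a = {k: blk for tag, k, blk in values if tag == 'A'}
    b = {k: blk for tag, k, blk in values if tag == 'B'}
    yield key, sum(np.dot(a[k], b[k]) for k in range(N))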
The advantages of this algorithm are: 1) splitting the large matrices into small sub-matrices
means the working set of each task fits in the memory of a small EC2 instance; 2) the many
small tasks increase the application's parallelism. The disadvantages include the parallel
overhead, in terms of scheduling, communication, and sorting, caused by having so many tasks.
EC2 configuration
In the experiments, we use the EMR M1.small instance type: 1.7 GB memory, 1 core per node. We
created instance groups with 1, 2, 4, 8, and 16 nodes respectively. One should note that the
Hadoop jobtracker and namenode take one dedicated node in the 2-, 4-, 8-, and 16-node cases.
Steps:
a. ./elastic-mapreduce --create --instance-count 16 --alive (allocate resources)
b. ./elastic-mapreduce --jobflow j-22ZM5UUKIK69O --ssh (ssh to the master node)
c. ./s3cmd get s3://wc-jar/matrix-multiply-hadoop.jar (download the program jar file)
d. ./s3cmd get s3://wc-input/matrix-50k-5k ./50k-5k (download the input data)
e. hadoop dfs -put 50k-5k/* 50k-5k (upload the data to HDFS)
f. hadoop jar matrix-multiply-hadoop.jar 50k-5k output 50000 5000 10 (run the program)
Analysis
Figures 1-4 show that our parallel matrix multiply implementation scales well in EC2,
especially for large matrices. For example, the relative speed-ups for processing the 20k,
30k, 40k, and 50k data sets are 4.43, 7.75, 9.67, and 11.58 respectively when using 16 nodes.
The larger the matrix size, the better the application's parallel efficiency. (The reason
performance with two nodes is only a little better than in the one-node case is that the
jobtracker and tasktracker were run on separate nodes.)
Other issues in the experiments:
Storage utilization: the data sizes are 16 GB + 36 GB + 64 GB + 100 GB for the 20k, 30k, 40k,
and 50k data sets respectively, or 216 GB in total. The total costs for the experiments are:
input data transfer in, $0.1/GB * 216 GB = $21.60; EC2 instances (M1.small),
290 hours * $0.08/hour = $23.20.
System metrics, such as resource utilization, can be monitored using "CloudWatch" in the AWS
Management Console.
For fault tolerance, see the answer to problem 4.10.
Experimental results
Figure 1: Parallel Matrix Multiply for 20K    Figure 2: Parallel Matrix Multiply for 30K
Figure 3: Parallel Matrix Multiply for 40K    Figure 4: Parallel Matrix Multiply for 50K
Problem 6.5:
We implemented the parallel matrix multiply application using EMR and S3 on the AWS platform.
The basic algorithm and configuration are the same as in problem 6.4; the only difference is
that here Hadoop retrieves the input data from S3 rather than from HDFS as in problem 6.4.
Analysis
Figures 1-4 show that the parallel matrix multiply can scale well in the EMR/S3 environment,
especially for large matrices. The relative speed-ups for processing the 20k, 30k, 40k, and
50k data sets are 7.24, 12.3, 16.4, and 19.39 respectively when using 16 nodes. The
super-linear speedup results were mainly caused by serious network contention when using a
single node to retrieve input data from S3. Compared to the results using HDFS in problem 6.4,
the 20k, 30k, 40k, and 50k runs using S3 on 16 nodes are 1.3, 1.65, 1.67, and 1.66 times
slower in job turnaround time respectively. The results using fewer nodes are even slower; for
example, the 50k run using S3 on 2 nodes is 2.19 times slower than the HDFS case. These
results indicate the large overhead incurred when Hadoop retrieves input data from S3. Figure
5 shows that the average speed of transferring data from S3 to an EC2 instance is 8.54 MB/sec.
For the detailed algorithm, configuration, and analysis of other issues such as speedup and
cost-efficiency, see the answers to problem 6.4.
Performance Results:
Figure 1: Parallel Matrix Multiply for 20K    Figure 2: Parallel Matrix Multiply for 30K
Figure 3: Parallel Matrix Multiply for 40K    Figure 4: Parallel Matrix Multiply for 50K
Problem 6.6:
Outline of Eli Lilly cloud usage
Eli Lilly uses cloud computing in the research arm of the company. In silico analyses are a
large part of the research process in the pharmaceutical industry, and Eli Lilly is no
exception. Cloud computing gives Lilly the ability to burst beyond its internal compute
environment when that environment is fully utilized. Additionally, Eli Lilly relies on cloud
computing for analyses of public datasets, where there is little to no concern about
intellectual property or security. By running these analyses outside its primary data centers,
the company can free up internal resources for the high performance computing and high
throughput computing workflows that either may not fit well in the cloud or involve analyses
considered more proprietary or regulated.
As of 2009, Eli Lilly was mainly using the Amazon Web Services cloud, but it had plans to use
many more cloud vendors in the future, requiring an orchestration layer between Eli Lilly and
the various cloud services. According to Eli Lilly, a new server in AWS can be up and running
in three minutes, compared to the seven and a half weeks it takes to deploy a server
internally. A 64-node AWS Linux cluster can be online in five minutes, compared with the three
months it takes to set up such a cluster internally.
One of the main drivers for Lilly to use the cloud is to move development efforts through the
drug pipeline more quickly. If analyses can be done in a fraction of the time because of the
scale of the cloud, then thousands of dollars spent on utility computing to speed up the
pipeline can generate millions of dollars of revenue in a quicker timeframe.
Sources:
http://www.informationweek.com/news/hardware/data_centers/228200755
http://www.informationweek.com/news/healthcare/clinical-systems/227400374
http://www.informationweek.com/cloud-computing/blog/archives/2009/01/whats_next_in_t.html
Problem 6.7:
The source code of this application can be obtained from the following link:
http://dl.dropbox.com/u/27392330/forCloudBook/AzureTableDemo-gaoxm.zip
Using the Azure SDK for Microsoft Visual Studio, we developed a simple web application, as
shown in the following figure. This application is extended from the Azure Table demo made by
Nancy Strickland (http://www.itmentors.com/code/2011/03/AzureUpdates/Tables.zip). It can be
used to demonstrate the use of Windows Azure Table and to run some simple performance tests of
Windows Azure Table. A Web role is created for this application, which accesses the Windows
Azure Table service from the Web server side. When the "Add Customer" button is clicked, a new
entity is created and inserted into an Azure table. When the "Query Customer" button is
clicked, the table is queried with the customer code, and the customer's name is shown after
"Name". When proper values are set in the "number of rows", "batch size", and "start rowkey"
boxes, users can click the different "test" buttons to run different performance tests of
Windows Azure Table.
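As a rough sketch of what the two button handlers do, the following snippet uses the legacy
azure-storage Python SDK; the actual demo is C# code in a Web role using the .NET storage
client, so the account, table, and customer values below are illustrative assumptions:

from azure.storage.table import TableService

# Placeholder credentials for the storage account.
table_service = TableService(account_name='myaccount', account_key='MY_KEY')
table_service.create_table('Customers')

# "Add Customer": insert a new entity keyed by the customer code.
table_service.insert_entity('Customers', {
    'PartitionKey': 'customer',
    'RowKey': 'C001',        # the customer code
    'Name': 'Alice',
})

# "Query Customer": look up the entity by customer code and read the name.
entity = table_service.get_entity('Customers', 'customer', 'C001')
print entity.Name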
Besides the local version, we also deployed the application on a virtual machine in the Azure
cloud. Some lessons we learned from writing and deploying this application are:
1. The concepts of and separation between "Web role", "VM role", and "Worker role" are not
straightforward to understand during development, and it takes some time to learn how to
develop Azure applications.
2. Users cannot remotely log in to VMs by default; it requires some special configuration.
Besides, the security restrictions on the VMs make them hard to operate. For example, almost
all websites are marked as "untrusted" by IE on the VMs, which makes it very hard even to
download something using the browser.
3. The SDK for Microsoft Visual Studio is powerful. The integration of the debugging and
deployment stages in Visual Studio is very convenient and easy to use. However, the deployment
process takes a long time, and it is hard to diagnose what went wrong if the deployment fails.
4. Overall, we think the Amazon EC2 models and Amazon Web Services are easier to understand
and closer to developers' current experience.
Problem 6.8:
In the MapReduce programming model, there is a special case that implements only the map
phase, also known as the "map-only" pattern. It can give an existing application or binary
high throughput by running many instances of it in parallel; in other words, it helps a
standalone program utilize large-scale computing capability. The goal of this exercise is to
write a Hadoop "map-only" program with the bioinformatics application BLAST (NCBI BLAST+:
ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/2.2.23/) under a Linux/Unix environment.
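As a sketch of one common way to do this, the following Hadoop Streaming mapper written in
Python runs one BLAST search per input line. The binary and database paths are assumptions,
and this is not the code from the archive below; the job is "map-only" because no reducer is
configured.

#!/usr/bin/env python
# Hypothetical Hadoop Streaming mapper for a map-only BLAST job.
# Assumes the BLAST+ binary and database are pre-staged on every node,
# and that each input line names one FASTA query file to process.
import subprocess
import sys

BLAST = '/opt/blast/bin/blastn'   # assumed install path of the BLAST+ binary
DB = '/opt/blast/db/nt'           # assumed database path

for line in sys.stdin:
    query = line.strip()
    if not query:
        continue
    out = query + '.out'
    # Run one standalone BLAST search; parallelism comes from Hadoop
    # running many mapper instances at once.
    subprocess.check_call([BLAST, '-db', DB, '-query', query, '-out', out])
    sys.stdout.write('%s\t%s\n' % (query, out))

Such a mapper would be launched through the Hadoop Streaming jar with the number of reduce
tasks set to 0 (the exact command line depends on the Hadoop installation).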
Source code: http://dl.dropbox.com/u/12951553/bookanswers/feiteng_blast.zip
Problem 6.9:
This problem is research-oriented. Visit the Manjrasoft Aneka software web site for details
and example solutions.
Problem 6.10:
Repeat the applications in Problems 6.1 to 6.7 using the academic/open-source packages
described in Section 6.6, namely Eucalyptus, Nimbus, OpenStack, OpenNebula, and Sector/Sphere.
This software is all available on FutureGrid (http://www.futureGrid.org) with a number of
tutorials. The answer to question 6.15 also provides an overview of using Hadoop on FutureGrid
cloud environments.
Problem 6.11:
Test run the large-scale matrix multiplication program on two or three cloud platforms (GAE,
AWS, and Azure). You can also choose another data-intensive application, such as large-scale
search or a business processing application involving the general public. Implement the
application on at least two, or all three, cloud platforms, separately. The major objective is
to minimize the execution time of the application; the minor objective is to minimize the user
service costs.
(a) Run the service on the Google GAE platform.
(d) Compare your compute and storage costs, design experiences, and experimental results on
all three cloud platforms. Report their relative performance and the QoS results measured.
Implementations:
The implementations of the large-scale matrix multiplication program on AWS and Azure using
Hadoop and MPI are given in this chapter. The solution using Hadoop on the Amazon AWS platform
was discussed in problems 6.4 and 6.5. Here we discuss the solution using MPI on the Azure HPC
scheduler. A parallel matrix multiply algorithm, known as Fox's algorithm, was implemented
using MS-MPI. We then created the hosted service and deployed the Windows HPC cluster on Azure
using the Azure HPC Scheduler SDK tools. After that, we log on to the HPC cluster head node
and submit the large-scale matrix multiplication job there.
Source code : http://156.56.93.128/PBMS/doc/answer6.14.zip
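For reference, here is a compact sketch of the Fox algorithm's step structure using mpi4py and
NumPy. It is a hypothetical illustration of the algorithm, not the MS-MPI code in the linked
archive, and the block size is an assumed placeholder:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
q = int(np.sqrt(comm.Get_size()))        # processes form a q x q grid
row, col = divmod(rank, q)
row_comm = comm.Split(row, col)          # my process row (rank within it == col)
col_comm = comm.Split(col, row)          # my process column (rank within it == row)

nb = 1000                                # assumed block size = matrix order / q
A = np.random.rand(nb, nb)               # this process owns block A[row][col]
B = np.random.rand(nb, nb)               # ... and block B[row][col]
C = np.zeros((nb, nb))

for step in range(q):
    k = (row + step) % q
    # Broadcast A[row][k] across the process row.
    buf = A.copy() if col == k else np.empty((nb, nb))
    row_comm.Bcast(buf, root=k)
    # After `step` upward shifts, B holds B[(row + step) % q][col] = B[k][col].
    C += np.dot(buf, B)
    # Shift B blocks one step up the process column.
    B = col_comm.sendrecv(B, dest=(row - 1) % q, source=(row + 1) % q)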
Steps:
1) Setup Azure HPC SDK environment:
http://msdn.microsoft.com/en-us/library/windowsazure/hh545593.aspx
2) Configure and deploy HPC Cluster on Azure.
http://msdn.microsoft.com/en-us/library/hh560239(v=vs.85).aspx
3) Log on to the head node of the HPC cluster and copy the executable binary onto it.
4) Set up the execution environment and configure the firewall exception:
clusrun /nodegroup:computenode xcopy /E /Y \\HEADNODE1\approot\*.* F:\approot\
clusrun /nodegroup:computenode hpcfwutil register FoxMatrix.exe F:\approot\FoxMatrix.exe
http://msdn.microsoft.com/en-us/library/hh560242(v=vs.85).aspx.
5) Submit MPI job to HPC scheduler:
job submit /nodegroup:computenodes /numnodes:16 mpiexec -n 16 -wdir F:\approot\
F:\approot\FoxMatrix.exe 16000
Comparison:
Both platforms provide a graphical interface for users to deploy Hadoop or HPC clusters
respectively. Developers can submit HPC jobs and Hadoop jobs to the dynamically deployed
cluster either on the head node or from a client PC through the job submission API. In regard
to performance, applications run on both Azure and EC2 show performance fluctuation. Figures 1
and 2 show that the maximum performance fluctuations of Hadoop using S3, Hadoop using HDFS,
MPIAzure, and MPICluster are 8.1%, 1.9%, 5.3%, and 1.2% respectively. Network bandwidth
fluctuation is the main cause of the performance fluctuation of the Hadoop S3 implementation.
The performance fluctuation of the MPIAzure implementation is due to the aggregated delay of
MPI communication primitives caused by system noise in the guest OS in the cloud environment.
Performance analysis:
The performance analysis of the parallel matrix multiplication on Amazon EC2 was discussed in
problem 6.4, so this section analyzes only the performance of the MPIAzure implementation.
Figure 3 shows that the speedup of the MPICluster implementation is 8.6%, 37.1%, and 19.3%
higher than that of the MPIAzure implementation when using 4, 9, and 16 nodes respectively.
Again, the performance degradation of the MPIAzure implementation is due to the poor network
performance in the cloud environment.
Figure 4 shows the performance of the Fox algorithm for the three implementations using 16
compute nodes. As expected, MPIAzure is slower than MPICluster but faster than DryadCluster.
Figures 5 and 6 show the parallel overhead versus 1/Sqrt(n), where n refers to the number of
matrix elements per node. In Figure 6, the parallel overhead for the 5x5, 4x4, and 3x3 node
cases is linear in 1/Sqrt(n), which indicates that the Fox MS-MPI implementation scales well
on our HPC cluster with its InfiniBand network. In Figure 5, the parallel overhead for the 3x3
and 4x4 node cases does not converge to the X axis for large matrix sizes; the reason is the
serious network contention that occurs in the cloud environment when running with large
matrices.
Figure 3: Speedup vs. number of nodes for MPIAzure and MPICluster    Figure 4: Job time of
different runtimes on Azure and the HPC cluster for different problem sizes
Figure 5: Parallel overhead vs. 1/Sqrt(n) for Fox/MPIAzure/MKL on 3x3 and 4x4 nodes    Figure
6: Parallel overhead vs. 1/Sqrt(n) for Fox/MPICluster/MKL on 3x3 and 4x4 nodes
Problem 6.12:
                            Google                      Apache Hadoop               Microsoft
Programming environment     MapReduce                   MapReduce                   Dryad
Coding language and
programming model used
Mechanisms for data         GFS (Google File System)   HDFS (Hadoop Distributed    Shared directories and
handling                                                File System)                local disks
Failure-handling methods    Re-execution of failed      Re-execution of failed      Re-execution of failed
                            tasks; duplicated           tasks; duplicate            tasks; duplicate
                            execution of slow tasks     execution of slow tasks     execution of slow tasks
High-level language for     Sawzall                     Pig Latin, Hive             DryadLINQ
data analysis
OS and cluster              Linux clusters              Linux clusters, Amazon      Windows HPCS cluster
environment                                             Elastic MapReduce on EC2
Intermediate data           By file transfer or using   By file transfer or using   File, TCP pipes,
transfer method             the HTTP links              the HTTP links              shared-memory FIFOs
Problem 6.13:
The following program illustrates a sample application for image filtering using Aneka's
MapReduce programming model. Note that the actual image filtering depends on the problem
domain, and you may use any algorithm you see fit.
class Program
{
    /// Reference to the configuration object.
    static Configuration configuration = null;

    /// Location of the configuration file.
    static string configurationFileLocation = "conf.xml";

    /// Processes the arguments given to the application and, according
    /// to the parameters read, runs the application or shows the help.
    /// <param name="args">program arguments</param>
    static void Main(string[] args)
    {
        try
        {
            // Process the arguments.
            Program.ProcessArgs(args);
            Program.SetupWorkspace();

            // Configure the MapReduceApplication.
            MapReduceApplication<ImageFilterMapper, ImageFilterReducer> application =
                new MapReduceApplication<ImageFilterMapper, ImageFilterReducer>(
                    "ImageFilter", configuration);

            // Invoke and wait for the result.
            application.InvokeAndWait(
                new EventHandler<Aneka.Entity.ApplicationEventArgs>(OnApplicationFinished));
        }
        catch (Exception ex)
        {
            Console.WriteLine(" Message: {0}", ex.Message);
            Console.WriteLine("Application terminated unexpectedly.");
        }
    }

    /// Hooks the ApplicationFinished event and processes the results
    /// if the application has been successful.
    /// <param name="sender">event source</param>
    /// <param name="e">event information</param>
    static void OnApplicationFinished(object sender, Aneka.Entity.ApplicationEventArgs e)
    {
        if (e.Exception != null)
        {
            Console.WriteLine(e.Exception.Message);
        }
        // The remainder of this handler (result processing), along with the
        // ProcessArgs and SetupWorkspace helpers, is truncated in this excerpt.
    }
}