Data-Driven Reservoir Modeling

Top-Down Modeling (TDM)

A Paradigm Shift in Reservoir Modeling


The Art and Science of Building Reservoir Models
Based on Field Measurements

Shahab D. Mohaghegh
Intelligent Solutions Incorporated and West Virginia
University
As of this printing:
http://intelligentsolutionsinc.com/Products/IMagine.shtml
2017

Society of Petroleum Engineers


© Copyright 2017 Society of Petroleum Engineers

All rights reserved. No portion of this book may be reproduced in any


form or by any means, including electronic storage and retrieval
systems, except by explicit, prior written permission of the publisher
except for brief passages excerpted for review and critical purposes.

Printed in the United States of America.

Disclaimer

This book was prepared by members of the Society of Petroleum


Engineers and their well-qualified colleagues from material published
in the recognized technical literature and from their own individual
experience and expertise. While the material presented is believed
to be based on sound technical knowledge, neither the Society of
Petroleum Engineers nor any of the authors or editors herein provide
a warranty either expressed or implied in its application.
Correspondingly, the discussion of materials, methods, or techniques
that may be covered by patents implies no freedom to use such
materials, methods, or techniques without permission through
appropriate licensing. Nothing described within this book should be
construed to lessen the need to apply sound engineering judgment
nor to carefully apply accepted engineering practices in the design,
implementation, or application of the techniques described herein.
Learn more about SPE events and volunteer opportunities at:
www.spe.org
Order books using your SPE member discount at: www.spe.org/store

ISBN 978-1-61399-560-0

First Printing 2017

Society of Petroleum Engineers


222 Palisades Creek Drive
Richardson, TX 75080-2040 USA

http://www.spe.org/store
[email protected]
1.972.952.9393
Dedication

This book is dedicated to Turgay Ertekin. He was, is, and will always
be my mentor and role model. I consider myself fortunate to have
known him and to have worked under his supervision as a graduate
student. What I have learned from him has guided me not only in
reservoir engineering, but in life.
Acknowledgments

I would like to acknowledge and thank my colleagues at Intelligent


Solutions Incorporated, Razi Gaskari and Mohammad Maysami, for
their invaluable contributions throughout the many years that we
have worked together.
I would like to acknowledge and thank all my graduate students
throughout the past 25 years. They have enriched my life, and many
have become life-long friends.
I am thankful to Narges for her unwavering support and for putting
up with my tough schedule, and I am thankful to Dorna for giving my
life meaning.
Foreword

Data-Driven Reservoir Modeling is intended to introduce a


technology that is relatively new to petroleum engineers and
geoscientists whose day-to-day job responsibilities always bring
them to junctures where critical technical decisions need to be made
and strategies need to be established. The technology covered in
this book adds another decision-making tool to the arsenal of
upstream technologists of the petroleum industry. This book should
also be useful to petroleum engineering and geosciences
undergraduate students in their junior or senior year, as well as to
graduate students with some degree of exposure to the principles of
petroleum engineering field operations, petroleum geology, and
petroleum geophysics.
The aim of this book is to present a methodology that is rather
new to the petroleum engineering community and is particularly
suited to the application of data analytics to physical problems of
reservoir engineering for tracking the state of dynamics with the goal
of strengthening the decision-making process. With the help of the
pragmatic approach provided in this book, data-driven modeling can
be effectively used in field planning and development studies.
In today’s field practices, zillions of bytes of information are
generated daily. Every piece of data carries key signatures about the
physical properties of the system being studied and about the
ongoing physical and thermodynamic processes. The collected data
can be so massive that they overwhelm manpower, while the
available computational power may not permit conducting a
comprehensive analysis. In addition to these issues, if we are
dealing with data that are generated by relationships that are not
understood or, at best, vaguely understood (i.e., all the physics and
thermodynamics of the ongoing processes are not well known),
reservoir analysis becomes even more challenging. In times like
these, machine-learning-based algorithmic protocols (intelligent
systems) come to the rescue. These are knowledge-based intelligent
systems that emulate human intelligence, including its reasoning and
decision-making aspects. In
our daily operations, we are always confronted with uncertain data;
here, the strategy is to exploit imprecision and uncertainty to achieve
tractable, robust, and low-cost solutions. This is why capturing
associations and discovering existing regularities in a big data set
can become a reality even if the diversity of the data is large and the
relationships between independent and dependent variables are
understood only dimly.
The book assumes some degree of familiarity with the upstream
petroleum industry vocabulary and the physics of flow in porous
media. In order to maximize the benefits from the book, one also
needs to be knowledgeable about transport processes and the
thermodynamics of oilfield fluids. With that knowledge bank in place,
it will be possible to critically analyze the results generated. Being
conversant with computers on various platforms will be helpful if the
reader is interested not only in using this class of solutions but also
in developing such a catalogue of solutions.
The author of the book, Shahab Mohaghegh, is a leading
authority in the application of data analytics in petroleum
engineering. His writing style is extremely lucid and informative.
Each chapter of the book is well structured and possesses logical
continuity, clarity, and thoroughness. In my view, the book brings a
wonderful opportunity to explore new modeling frontiers that are
applicable to hydrocarbon reservoirs. No matter what type of
reservoir or production engineering problem you are working on, Big
Data analytics, when applied properly, has the potential to guide and
streamline the solution work flow. Here, I present a summary of how
the author covers topical areas of data-driven reservoir modeling.
Chapter 1 briefly reviews the reservoir models that are currently
used in studies related to reservoir management and discusses the
challenges of history matching, which is a critically important and
notoriously difficult application for reservoir characterization. The
chapter continues with a discussion of top-down modeling (TDM)
and discusses how physical processes and geological
characteristics collectively play an important role in generating the
response function from the field (typically, pressure-transient and/or
rate-transient data).
Chapter 2 provides a succinct review of the theory of data-driven
problem-solving methodology. This chapter stresses the importance
of powerful domain expertise in applying machine learning and
pattern recognition to the class of problems studied in the book.
Chapter 3 outlines the historical progression of reservoir modeling
and discusses the critical juncture where a decision needs to be
made on the levels of accuracy and computational overhead that are
faced during a full-fledged simulation study. What makes the
decision even more challenging is that engineers who are
conducting reservoir simulation studies almost always find
themselves in the middle of a sea of uncertainties.
Chapter 4 concisely covers Big Data analytics methodologies,
including data mining, artificial intelligence, artificial neural networks,
and fuzzy logic, with greater emphasis on the last two. This chapter
should be especially useful for readers who do not have great
familiarity with data analytics practices.
Chapter 5 accentuates the importance of good understanding of
the conventional reservoir-modeling techniques and principles of
machine learning to appreciate the advantages and disadvantages
of each of these two broadly dissimilar approaches.
Chapter 6 reminds the reader that there are empirical models that
use the data collected in analyzing the performance of a
hydrocarbon reservoir (e.g., decline curve analysis). This chapter
also discusses some of the weaknesses that we face in the
application of such empirical methodologies.
Chapter 7 introduces the concept of TDM as a new work flow in
data-driven reservoir-modeling applications. The principal strength of
TDM is recognized in terms of its encompassing approach, such that
all available field measurements can be integrated in a seamless
manner. What is more striking here is that even if we do not have a
complete understanding of the dynamics of the ongoing physical
phenomena, the TDM protocol is capable of generating a
representative comprehensive model that incorporates all of the
available data.
Chapter 8 discusses the spatio-temporal nature of the database
that is inevitably faced in reservoir engineering studies. The
nonlinear nature of the process dynamics and parameters that are
involved in the processes undoubtedly makes reservoir modeling
even more challenging. The matters that need to be addressed at
this stage typically include further simplification of the model.
However, most of the assumptions that are used in such
simplifications may not be compatible with the nature of the data
collected, because these data internally carry many critical
implications of the nonlinearities. Along these lines, the impact of
static and dynamic parameters on the response functions is
discussed.
Chapter 9 addresses the nonunique nature of inverse solutions (in
this case, history-matching protocols). Like any other inverse model,
a model that matches the history successfully cannot guarantee the
accuracy of the predictions; however, a good-quality history match
increases the level of confidence about the performance of the
model. In a history-matching application, spatio-temporal properties
are accommodated with the help of artificial neural networks. In this
process, a critical step is the correct prioritization of the relevance of
such parameters. This enhances the overall optimization of the
process that is being studied.
Chapter 10 covers the importance of conducting a critical analysis
of the results of the TDM. Such an analysis will provide an
opportunity to optimize the process that is being modeled and at the
same time will establish realistic bounds to the results that are being
generated. These limitations can be accommodated by keeping
realistic and practical bounds on the operational parameters. In view
of the high computational speeds that can be achieved by the TDM
approach, it will be possible to conduct an expansive Monte Carlo
simulation study using a large number of scenarios. In this chapter,
the dynamic nature of the TDM is reiterated, just to ensure that the
user does not forget to update the overall structure of the TDM
whenever necessary.
Chapter 11 presents a compendium of three case studies
involving mature oil fields from different corners of the world.
Chapter 12 discusses the limitations of data-driven reservoir
modeling, indicating the importance of the representative nature of
the data available in developing the model. This becomes especially
important because the data used also carry vital information about
the physical processes within the system.
Chapter 13 provides a discussion based on the author’s extensive
experience about what one might expect to see in the future
concerning the use of data-driven reservoir modeling. One potential
area where this type of modeling will be handy is in the analysis of
fiber-optic data-collection systems that are finding applications in
long horizontal wellbores. For example, it is believed that even small
temperature variations captured in the horizontal well will provide
critically important information about the identification of the
producing and nonproducing zones.
Finally, extensive references are provided for any reader who is
interested in learning more about data-driven reservoir modeling.
We hope you now have a good idea of what this volume is all
about and what it can do for the problems that you are working on. I
am confident that this book will equip you with what you need to
know in order to develop realistic solutions for problems you may
have thought you would not be able to solve. Here is your
opportunity.

Turgay Ertekin
Professor of Petroleum and Natural Gas Engineering
Pennsylvania State University
University Park, Pennsylvania, USA
26 November 2016
Table of Contents

Dedication
Acknowledgments
Foreword
1 Introduction
1.1 Reservoir Models for Reservoir Management
1.2 What Is Top-Down Modeling?
1.2.1 Role of Physics and Geology
1.2.2 Formulation and Computational Footprint
1.2.3 Expected Outcome of a Top-Down Model
1.2.4 Limitations of TDM
1.2.5 Software Tool for the Development of TDM
1.3 Paradigm Shift
1.3.1 Drilling Operation
1.3.2 Mature Fields
1.3.3 Smart Completions, Smart Wells, and Smart Fields
1.3.4 Production From Shale Assets
1.3.5 Reservoir Simulation Models
2 Data-Driven Problem Solving
2.1 Misunderstanding Data-Driven Reservoir Modeling
3 Reservoir Modeling
4 Data-Driven Technologies
4.1 Data Mining
4.2 Artificial Intelligence
4.3 Artificial Neural Networks
4.3.1 Structure of a Neural Network
4.3.2 Mechanics of Neural Network Operation
4.4 Fuzzy Logic
4.4.1 Fuzzy Set Theory
4.4.2 Approximate Reasoning
4.4.3 Fuzzy Inference
5 Pitfalls of Using Machine Learning in Reservoir Modeling
6 Fact-Based Reservoir Management
6.1 Empirical Models in the E&P Industry
6.1.1 Decline Curve Analysis
6.1.2 Capacitance/Resistance Modeling
7 Top-Down Modeling
7.1 Components of a Top-Down Model
7.2 Formulation and Computational Footprint of TDM
7.3 Curse of Dimensionality
7.4 Correlation Is Not the Same as Causation
7.5 Quality Control and Quality Assurance of the Data
7.5.1 Inspecting the Quality of the Data
7.5.2 QC of the Production Data
8 The Spatio-Temporal Database
8.1 Static Data
8.2 Dynamic Data
8.3 Well Trajectory and Completion Data
8.3.1 Two-Dimensional vs. Three-Dimensional Reservoir
Modeling
8.4 Resolution in Time and Space
8.4.1 Resolution in Space
8.4.2 Resolution in Time
8.5 Role of Offset Wells
8.6 Structure of the Spatio-Temporal Database
8.7 Required Quantity and Quality of Data
9 History Matching the Top-Down Model
9.1 Practical Considerations During the Training of a Neural
Network
9.1.1 Selection of Input Parameters
9.1.2 Partitioning the Data Set
9.1.3 Structure and Topology
9.1.4 The Training Process
9.1.5 Convergence
9.2 History-Matching Schemes in TDM
9.2.1 Sequential History Matching
9.2.2 Random History Matching
9.2.3 Mixed History Matching
9.3 Validation of the Top-Down Model
9.3.1 Material Balance Check
10 Post-Modeling Analysis of the Top-Down Model
10.1 Forecasting Oil Production, GOR, and WC
10.2 Production Optimization
10.2.1 Choke-Setting Optimization
10.2.2 Artificial-Lift Optimization
10.2.3 Water-Injection Optimization
10.3 Reservoir Characterization
10.4 Determination of Infill Locations
10.5 Recovery Optimization
10.6 Type Curves
10.7 Uncertainty Analysis
10.8 Updating the Top-Down Model
11 Examples and Case Studies
11.1 Case Study No. 1: A Mature Onshore Field in Central
America
11.2 Case Study No. 2: Mature Offshore Field in the North Sea
11.3 Case Study No. 3: Mature Onshore Field in the Middle East
11.3.1 Data Used During the Top-Down-Model
Development
11.3.2 Top-Down-Model Training and History Matching
11.3.3 Post-Modeling Analysis
11.3.4 Performing a “Stress Test” on the Top-Down Model
12 Limitations of Data-Driven Reservoir Modeling
13 The Future of Data-Driven Reservoir Modeling
References
Index
Chapter 1

Introduction

We are living in an interesting time. Let us put the speed at which


technology is changing our world in perspective. Take the example of
the printing press vs. electronic mail or email. The printing press was
invented in the fifteenth century and changed how people
communicated. It was the most common mode of communication for
more than 25 generations.1 It still is an integral part of our lives. But
no one doubts that many of its functions are now performed by new
technologies such as email, the Internet, eBooks, eNews, and so on.
Email, by contrast, became popular only in the 1990s. It quickly
grew to become the most widely used mode of communication among
people of all persuasions. Now, however, less than one generation
later, it is losing its prominent position in human communication
to newer modes. The
text messages and social network communications. Email, such a
step-change in our communications, is ready to relinquish its
prominence in less than one generation. This is the speed at which
the technology is moving forward.
The modern oil industry is a bit more than one hundred years old.2
Most technologies that are currently in use by petroleum engineers
and geoscientists can be traced back to their development for the oil
industry during the mid-twentieth century.3 Some of our newer
technologies (e.g., horizontal wells, seismic survey, measurement
while drilling) are less than a few decades old. This book presents an
example of a new and quite recent technology that is making its way
into the oil and gas industry. This book is dedicated to the application of
artificial intelligence and data mining in the upstream oil and gas
industry, and specifically to their use in building comprehensive
reservoir models.

1.1 Reservoir Models for Reservoir Management


It has been said that all models are wrong, but some models are
useful (Box 1976). One of the objectives of reservoir engineers is to
build reliable reservoir models to be used by reservoir managers in
order to make decisions. The uniqueness and complexity of each
hydrocarbon reservoir make the accomplishment of this objective
quite challenging. For the purposes of this book, we define the
problem as follows:
Building full-field models that have practical utility for managing
complex reservoirs. In order to fit this mold, the full-field reservoir
model must be accurate and have a small computational
footprint.
Complementary stipulations for such a model include
incorporation of all available information about the reservoir,
including detailed information about all wells. The two attributes that
are emphasized in the above definition are accuracy and small
computational footprint. The model has to be accurate so that it can
honor the past (history matched) before any comments can be made
regarding its predictive capabilities. Furthermore, to be considered a
viable reservoir management tool, the model’s computational
footprint should be small enough to warrant sensitivity analyses,
quantification of uncertainties, and exploration of large solution
spaces for field development planning.
The importance of accuracy of a full-field reservoir model should
be obvious. Nevertheless, it must be emphasized that by history
matching the past performance of the reservoir, which is usually
preserved in the form of production history, one develops confidence
in the utility of the model for further analyses. As the number of
dynamic measurements in the field increases, so too does the
complexity of the history-matching process. For example, when
flowing bottomhole pressure or production rates are the only
measured dynamic properties that are to be history matched (usually
one of the two is used as the constraint, while the other one is
calculated), the reservoir modeler has a much easier task to
accomplish (and a better chance of success) as compared to the
cases where other dynamic data such as Gas Oil Ratio (GOR),
Water Cut (WC), time-lapsed saturation and static reservoir pressure
(as a function of time) have been measured (and are present) in the
history and need to be simultaneously history matched.4
The importance of the computational footprint cannot be
overemphasized. It strikes at the heart of the problem: the model
utility. This is a practical issue. No matter how accurate a full-field
reservoir model is, a large computational footprint can make it
useless in practice.5 Reducing the computational footprint of
numerical models is the main reason companies have moved toward
supercomputers and clusters of parallel central processing units. The
never-ending race between larger models (high resolution in space
and time) and faster computers continues to preoccupy many of our
colleagues in the larger companies.
Tasks such as sensitivity analysis and quantification of
uncertainties associated with the geological (static) model (the
backbone of any reservoir model) are complex and time-consuming
undertakings. As the run time (time for execution after the
development is completed) of a model starts increasing to more than
a few hours, performing such analyses becomes impractical. The
solution is either to cut corners or to come up with tricks to perform
fewer runs and make the most out of the results. This last exercise
(reducing the number of runs) starts introducing limitations in the
analyses. We pay a price by reducing the number of runs.
Sometimes unjustifiable assumptions of linearity must be made in
order to make certain approaches work. There is very little we can
do. It is simply a conundrum that we cannot simplify our way out of.
As long as we live by the laws of the current paradigm,6 there is only
so much that we can do.
Equally complex and puzzling are field development and planning
problems. As the number of wells (both producers and injectors) and
the type of constraints increase, so too does the solution space that
needs to be searched for optimal (or near-optimal) solutions.
Solution space in this context is defined as the number of possible
combinations of wells and their associated constraints that can form
a solution (combinatorial explosion). Any optimization routine, no
matter how smart it may be, requires examining a large number of
solutions in order to find the optimal or the near-optimal solutions.
Each solution in the case of a development plan means at least a
single run of the reservoir model. It is easy to see that as the
execution time (computational footprint) of a reservoir model
increases, its utility as an objective function for planning is
compromised. Again, many reservoir engineers continue their quest
to build proxy models that are simplified versions of the more-
complex models, so that they can be used for such purposes, but
there is always a price to pay. Sometimes the price we end up
paying is so severe that it undermines the original efforts of building
such complex and detailed numerical reservoir models from the very
start.
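To give a concrete sense of this combinatorial explosion, the following back-of-the-envelope arithmetic is a minimal sketch; the well count, number of settings per well, and run time are hypothetical values chosen purely for illustration, not figures from any actual study.

# Hypothetical illustration of the combinatorial explosion described above.
# All numbers are assumed values for the sake of the example.
n_wells = 20            # wells being scheduled or constrained
settings_per_well = 5   # discrete choke/constraint settings per well
run_time_hours = 2.0    # assumed run time of one full-field simulation

scenarios = settings_per_well ** n_wells      # size of the solution space
total_hours = scenarios * run_time_hours
print(f"{scenarios:.2e} scenarios, about {total_hours / 8760:.2e} years of run time")
# Roughly 9.5e13 scenarios, on the order of 2e10 years at 2 hours per run;
# even an optimizer that samples only a tiny fraction of this space needs a
# model with a far smaller computational footprint.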
Are there solutions for problems such as those mentioned above?
Yes and no. For as long as we are sticking to the traditional
paradigm of building reservoir models, there are no solutions. Here,
by the traditional paradigm, we are referring to the well-known
sequence of building the geological (static) model and then using the
principles of fluid flow in porous media to develop a dynamic model
based on the numerical solutions of the partial-differential equation
that governs fluid flow in porous media, the so-called numerical
reservoir simulation and modeling, and finally modifying the static
model in order to history match the dynamic model.
As long as we adhere to these core principles, we are simply
pushing the envelope. There are, and continue to be, successes and
failures. Incremental gains are made here and there. But if we want
to remove this serious practical shortcoming altogether, we need to
move beyond the traditional paradigm of building reservoir models.
The solution requires a paradigm shift. This paradigm shift and its
manifestation in reservoir modeling are the subjects of this book.
This is not a general book that discusses the virtues and
capabilities of data-driven analytics and indicates areas of the
upstream oil and gas industry that can benefit from machine learning
and data mining. This book is very specific. It has identified one
specific area in our industry, namely reservoir modeling, and
provides ample details on how this new and exciting technology
(artificial intelligence and data mining) can be used to build new
reservoir models. From that point of view, it is the only book of its
kind in the oil and gas industry that provides step-by-step details in
how to build a data-driven reservoir model, also known as a Top-
Down Model (TDM). The following section is a summary of what can
be expected in this book.

1.2 What Is Top-Down Modeling?


To efficiently develop and operate a petroleum reservoir, it is
important to have a model. Currently, numerical reservoir simulation
is the accepted and widely used technology for this purpose. Data-
driven reservoir modeling (also known as top-down modeling or
TDM7) is an alternative (or a complement) to numerical reservoir
simulation. TDM uses a Big Data solution (machine learning and
pattern recognition) to develop (train, calibrate, and validate) full-field
reservoir models based on field measurements (facts) rather than
mathematical formulations of our current understanding of the
physics of the fluid flow through porous media.
Unlike other empirical technologies, which only use production
data as a tool to forecast production,8 or only use
production/injection data for its analysis,9 TDM integrates all
available field measurements in order to forecast production from
every single well in a field with multiple wells. The field
measurements that are used by TDM to build a full-field reservoir
model include well locations and trajectories, completions and
stimulations, well logs, core data, well tests, seismic, and
production/injection history (including wellhead pressure and choke
setting). TDM combines all the information from the sources
mentioned above into a cohesive, comprehensive, full-field reservoir
model using artificial intelligence technologies. A top-down model is
defined as a full-field model within which production [including gas/oil
ratio (GOR) and water cut (WC)] is conditioned to all the measured
reservoir characteristics and operational constraints. TDM matches
the historical production (and is validated through blind history
matching) and is capable of forecasting a field’s future behavior on a
well-by-well basis. Imagine a decline curve analysis technique that
covers the entire field, well by well, and incorporates reservoir
characteristics and operational constraints and accounts for
interaction between wells whether they are only producers or a
combination of producers and injectors. TDM is such a tool.
The novelty of data-driven reservoir modeling stems from the fact
that it is a complete departure from traditional approaches to
reservoir modeling. Fact-based, data-driven reservoir modeling
manifests a paradigm shift in how reservoir engineers and
geoscientists model fluid flow through porous media. In this new
paradigm, current understanding of physics and geology in a given
reservoir is replaced by facts (data/field measurements) as the
foundation of the model. This characteristic of TDM makes it a viable
modeling technology for unconventional (shale) assets where the
physics of the hydrocarbon production (in the presence of massive
hydraulic fractures) is not yet well understood.

1.2.1 Role of Physics and Geology. Although it does not start from
the first principles of physics, a top-down model is very much a
physics-based reservoir model. The incorporation of physics in TDM
is quite non-traditional. Reservoir characteristics and geological
aspects are incorporated in the model insofar as they are measured.
Although interpretations are intentionally left out during the model
development, reservoir engineering knowledge plays a vital role in
the construction of the top-down model. Furthermore, expert
knowledge and interpretation are extensively used during the
analysis of model results. Although fluid flow through porous media
is not explicitly (mathematically) formulated during the development
of data-driven reservoir models, successful development of such
models requires a solid understanding and experience in reservoir
engineering and geosciences. Physics and geology are the
foundation and the framework for the assimilation of the data set that
is used to develop the top-down model. The diffusivity equation has
inspired the invention of this technology and was the blueprint upon
which TDM was developed.

1.2.2 Formulation and Computational Footprint. The top-down


model is built by correlating10 flow rate at each well and at each
timestep11 to a set of measured static and dynamic variables. The
static variables include reservoir characteristics such as well logs
(e.g., gamma ray, sonic, density, resistivity), porosity, formation tops
and thickness, and others at the following locations:

1. At and around each well


2. The average from the drainage area of each well
3. The average from the drainage area of the offset producers
4. The average from the drainage area of the offset injectors

The dynamic variables include operational constraints and


production/injection characteristics at the appropriate timestep when
production is being calculated (estimated), such as

1. Wellhead or bottomhole pressure, or choke size, at timestep t
2. Completion modification (e.g., operation of inflow-control
valve, squeeze off) at timestep t
3. Number of days of production at timestep t
4. GOR, WC, and oil production volume at timestep t−1
5. Water and/or gas injection at timestep t
6. Well stimulation details
7. Production characteristics of the offset producers at timestep
t−1

The data (enumerated above) that are incorporated into the top-
down model show its differences from other empirically formulated
models. Once the development of the top-down model is completed,
its deployment in forecast mode is computationally efficient. A single
run of the top-down model is usually measured in seconds or in
some cases in minutes. Size of a top-down model is determined by
the number of producer and injector wells. The small computational
footprint makes TDM an ideal tool for reservoir management,
uncertainty quantification, and field development planning.
Development and deployment costs of TDM are a small fraction of
that for numerical reservoir simulation.
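As a minimal sketch of how the inputs listed in this section can be organized, the following hypothetical record pairs the static and dynamic variables for one well at one timestep with the production value the model is trained to estimate. The attribute names, units, and values are illustrative assumptions, not the actual variable set or data structure of any particular TDM implementation.

# Minimal sketch of one spatio-temporal training record for a single well at
# timestep t. All attribute names and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class TDMRecord:
    # Static variables: at/around the well and averaged over drainage areas
    porosity_at_well: float
    net_thickness_at_well_ft: float
    avg_porosity_drainage_area: float
    avg_porosity_offset_producers: float
    avg_porosity_offset_injectors: float
    # Dynamic variables at timestep t, plus lagged values from timestep t-1
    wellhead_pressure_psi_t: float
    days_on_production_t: int
    water_injection_offset_bbl_t: float
    oil_rate_bbl_t_minus_1: float
    gor_t_minus_1: float
    water_cut_t_minus_1: float
    # Target that the model is trained (history matched) to estimate
    oil_rate_bbl_t: float

example = TDMRecord(
    porosity_at_well=0.18, net_thickness_at_well_ft=45.0,
    avg_porosity_drainage_area=0.16, avg_porosity_offset_producers=0.15,
    avg_porosity_offset_injectors=0.17,
    wellhead_pressure_psi_t=950.0, days_on_production_t=28,
    water_injection_offset_bbl_t=12000.0,
    oil_rate_bbl_t_minus_1=430.0, gor_t_minus_1=780.0, water_cut_t_minus_1=0.22,
    oil_rate_bbl_t=415.0,
)

One such record would be generated per well per timestep, and the full set of records forms the kind of spatio-temporal database discussed in Chapter 8.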
1.2.3 Expected Outcome of a Top-Down Model. Data-driven
reservoir modeling can accurately model a mature hydrocarbon field
and successfully forecast its future production behavior under a large
variety of operational scenarios. Outcomes of TDM are forecast for
oil production, GOR, and WC of existing wells as well as field
development planning and infill drilling. When TDM is used to identify
the communication between wells, it generates a map of reservoir
conductivity that is defined as a composite variable that includes
multiple geologic features and rock characteristics contributing to
fluid flow in the reservoir. This is accomplished by deconvolving the
impact of operational issues from reservoir characteristics on
production.

1.2.4 Limitations of TDM. Data-driven reservoir modeling is


applicable only to fields with a certain amount of production history;
for this reason, TDM is not applicable to greenfields and fields with a
small number of wells and short production history. Another limitation
of TDM is that it is not valid once the physics of the fluid flow in a
field goes through a complete and dramatic change. For example,
once a top-down model is developed for a field under primary
recovery, it cannot be applied to enhanced-recovery phases of the
same field.

1.2.5 Software Tool for the Development of TDM. Top-Down


Modeling was pioneered by Intelligent Solutions Incorporated (ISI).
ISI has recently released a software product for the development
and deployment of top-down models, called IMagine™. At the time of
publication of this book, no other company has announced a similar
product for the development of top-down models.

1.3 Paradigm Shift


Paradigm shift, a term first coined by Thomas Kuhn (1996),
constitutes a change in basic assumptions within the ruling theory of
science. According to Kuhn, “A paradigm is what members of a
scientific community, and they alone, share” (Kuhn 1977). Jim Gray,
the American computer scientist who received the Turing Award for
his seminal contribution to computer science, once said, “Originally,
there was just experimental science, and then there was theoretical
science, with Kepler’s Law, Newton’s Law of Motion, Maxwell’s
Equations, and so on. Then for many problems, the theoretical
models grew too complicated to solve analytically, and people had to
start simulating. These simulations have carried us through much of
the last half of the last millennium. At this point, these simulations are
generating a whole lot of data, along with a huge increase in data
from the experimental sciences. People now do not actually look
through telescopes. Instead, they are ‘looking’ through large scale,
complex instruments which relay data to datacenters, and only then
look at the information on their computers.… The new model is for
data to be captured by instruments or generated by simulations
before being processed by software and for the resulting information
or knowledge to be stored in computers.… The techniques and
technologies for such data-intensive science are so different that it is
worth distinguishing data intensive science from computational
science as a new, fourth paradigm for scientific exploration” (Bell et
al. 2009).
So how does this paradigm shift constitute itself in the exploration
and production industry? Do we collect and/or generate (either from
our simulators or from instruments) enough data to benefit from such
a paradigm shift? In this book, we examine the reservoir-modeling
discipline in the exploration and production industry in order to
address these questions.
What is the current paradigm of building models that attempts to
explore and explain fluid flow in porous media? We use analytical as
well as numerical approaches both at the well level and at the
reservoir level. This is the current paradigm. In our attempts at
analytical solutions, where well testing is a good example, we
approximate the problem in order to come to an exact analytical
solution. Assumptions such as reservoir homogeneity, well-defined
reservoir boundaries, and single-phase flow are among the basic
assumptions that are required in order for the analytical solutions to
be applicable. On the other hand, when we try to define the problem
more realistically, by incorporating reservoir heterogeneity, irregular
reservoir boundaries, multiple wells, and multiphase flow, then we
are forced to discretize the problem in time and space and generate
a large number of linear equations that can be solved numerically.
Then, we use numerical solutions to solve the system of linear
equations that was generated. Numerical solutions to the partial-
differential equations are only approximate solutions. In other words,
in attempts to minimize assumptions in the problem, we approximate
the solutions. This paradigm has served our industry for decades
and has resulted in many scientific and practical advances. We are
not advocating that this be shelved. Data-driven reservoir modeling
provides an alternative to (and in many cases a complement to) the
traditional analytical and numerical approaches to modeling fluid flow
in porous media. It is the paradigm shift as it is applied to reservoir
modeling and reservoir management.
When it comes to reservoir modeling, we generate or collect
massive amounts of data throughout the life of a hydrocarbon-
producing field. Data-driven reservoir modeling proposes the use of
this massive amount of data that is collected in the form of drilling
characteristics, well construction and trajectories, well logs of all
different natures, core data, well tests, seismic surveys, and finally
production and injection histories along with pressure
measurements, in order to build a reservoir model that is entirely
based on these field measurements and minimizes the incorporation
of our interpretations and biases into the resulting model. Let us
examine examples where large amounts of data are collected during
oil and gas exploration and production.

1.3.1 Drilling Operation. The modern drilling operation generates


hundreds of gigabytes of data on a daily basis. Measurement while
drilling (MWD) and logging while drilling (LWD), which have been
around for years, generate considerable amounts of data in real time
while the drilling operation is ongoing. Complementing these data
with seismic surveys and geological models that are developed for a
given field provides incredible amounts of information about the field
that can help increase drilling efficiencies and eventually move the
industry toward completely autonomous drilling operation.
1.3.2 Mature Fields. Mature fields around the world are sources of
vast amounts of data and information that have been collected over
decades. Mature fields usually include a large number of wells that
have been drilled throughout the field's history. The fact that the wells have
been drilled at different time periods provides valuable insight into
the fluid flow as well as pressure and saturation distribution
throughout the field. This includes large amounts of historical
production and injection data usually with the associated wellhead
pressure or choke settings. Most of the wells, if not all of them, have
the basic set of well logs. Several wells will have been cored, and
therefore some core analyses are also available. Usually, well tests
are available, and sometimes seismic surveys have been performed
(sometimes more than once).
In many cases, the size of the mature field determines the amount
of data that can be expected to be available. More-prolific fields
usually are blessed with larger amounts of data and more-diverse
types of data. The amount and the variety of the data available on
some prolific mature fields can be staggering, so much so that it
overwhelms reservoir engineers and reservoir managers. Many
times in such cases large amounts of data will go unused and
unanalyzed.

1.3.3 Smart Completions, Smart Wells, and Smart Fields. Smart


fields have two major characteristics. First, they include smart
completions with controls and measurements taking place at
different locations along the completion, and, second, installation of
permanent downhole gauges provides high-resolution data streams.
Even haphazardly designed smart fields have generated massive
amounts of data that hardly ever are looked at, even offline.
Smart completions let engineers intervene in the details of well
operations from a distance. Smart wells transmit nearly continuous
(real-time) data streams (e.g., pressure, flow rate) to the remote
office providing immediate feedback on the consequences of
recently made decisions and actions taken. Smart fields include
multiple smart wells providing the possibility of managing the entire
reservoir remotely and in real time. Smart fields generate terabytes12
and petabytes13 of data that are good examples of Big Data in the
upstream oil and gas industry.

1.3.4 Production From Shale Assets. We have been witnessing an


incredible increase in hydrocarbon production from source rocks,
such as shale, in recent years. Production from shale has been
made possible by drilling long lateral wells and then stimulating them
using massive, multiple stages of hydraulic fractures. During this
process, operators are now collecting large amounts of data that
include well-construction data, reservoir characteristics in the form of
well logs, completion data including much detail about each cluster
of hydraulic-fracturing procedures, and finally detailed production
data. This is a massive amount of data, especially when we consider
that the well count in shale assets is in the hundreds.
Furthermore, the introduction of distributed temperature sensing
and distributed acoustic sensing systems is adding a whole new
dimension to the important data that can be collected during oil and
gas production in shale wells. Once the distributed temperature
sensing and distributed acoustic sensing data can be coupled with
microseismic and other reservoir and production-related data that
have been collected from the shale wells, engineers and
geoscientists will have a realistic shot at understanding all the
complexities that are associated with hydrocarbon production from
shale in the presence of coupled induced and natural fractures.
There should be no doubt in anyone’s mind that the only technology
that has a realistic shot at making all this possible is advanced data-
driven analytics, also known as “Shale Analytics.”14

1.3.5 Reservoir Simulation Models. It is a well-known fact that


reservoir simulation models generate massive amounts of data.
Actually the amount of data that is generated by reservoir simulation
models is so large that in order just to look at them for analysis,
special visualization tools are required. Imagine that a large reservoir
of tens or even hundreds of square miles that includes multiple
distinct geological layers is being modeled. Furthermore, imagine
that a large number of wells has been modeled in this reservoir.
Having reservoir simulation models with tens or even hundreds
of millions of gridblocks is becoming standard in today’s oil and gas
industry.
Now imagine generating oil, gas, and water production from the
wells. Such a model generates gigabytes or terabytes of data for
each of its timesteps, which include not only the static characteristics
of the reservoir that has been modeled but also the resulting
pressure and saturation distributions throughout the time and space
for each gridblock. If the simulation is compositional, the amount of
generated data increases exponentially.

1Each century can be counted as four to five generations.


2Actual oil production for commercial use goes back to the mid-1820s in
Imperial Russia.
3Darcy’s law goes back to 1856, and Arps’ decline curves were introduced in
1945.
4Needless to say, just because a model can history match the past does not
mean that its predictive capabilities are guaranteed. Those familiar with the
history-matching process are well aware that history matching is an art as much as
a science. Unfortunately, performing unreasonable tuning and modification of
parameters to reach a history match is not an uncommon practice in our industry.
A history-matched model is a nonunique model, by definition.
5Especially in the eyes of reservoir managers, who most of the time are the
primary users of reservoir simulation models as a tool.
6The current paradigm, known as the computational paradigm, states that
building models includes development of governing equations using first-principles
physics and then using discrete mathematics to numerically solve the governing
equations.
7Throughout this book “data-driven reservoir modeling” and “top-down
modeling” are used interchangeably.
8Decline curve analysis
9Capacitance/resistance model
10Correlation that is conditioned to causation will be discussed in more detail in
subsequent sections of this book.
11The length of timesteps in TDM is a function of the production characteristics
and the age of the reservoir as well as the availability of data. It is usually either
daily, monthly, or annual.
12A terabyte of data is 10^12 bytes of data.
13A petabyte of data is 10^15 bytes of data.
14A book by this name (Shale Analytics) has recently been published by
Springer-Verlag.
Chapter 2

Data-Driven Problem Solving

The definition that will be provided for data mining will underline the
main reason behind referring to the set of activities that are
presented in this book as data-driven modeling rather than data
mining. It is true that we are using data mining (among other tools) to
perform the analyses (building reservoir models) that are presented
here, but, as will become more and more clear, our objective is not
merely to identify the utility of yet another newly popularized tool in
the oil industry (a task that by itself may have merit and may be
treated as an academic exercise), but rather to offer a solution and a
tool that can be used by professionals in our industry today. Ever
since its introduction as a discipline in the mid-1990s, Data Science
has been used as a synonym for applied statistics. Today, Data
Science is used in multiple disciplines and is enjoying immense
popularity. Application of Data Science to physics-based (such as the
oil and gas industry) vs. nonphysics-based disciplines has been the
cause of much confusion. Such distinctions surface once Data
Science is applied to serious industrial applications rather than to
simple academic problems.
When Data Science is applied to nonphysics-based problems,
such as social networks and social media, consumer relations,
demographics, politics, and medical and/or pharmaceutical sciences,
it is merely applied statistics. This is because there are no sets of
governing partial-differential (or other mathematical) equations that
have been developed to model human behavior such as the
response of human biology to drugs. In such cases (nonphysics-
based areas), the relationship between correlation and causation
cannot be resolved using physical experiments and the results of the
data mining analyses are usually justified or explained by scientists
and statisticians, using psychological, sociological, or biological
arguments.
Applying Data Science to physics-based problems such as
reservoir modeling is a completely different story. The interaction
between parameters that are of interest to physics-based problems,
despite their complex nature, has been understood and modeled by
scientists and engineers for decades. Therefore, treating the data
that are measured throughout the life of a hydrocarbon reservoir as
just numbers that need to be processed in order to learn their
interactions is a gross mistreatment and oversimplification of the
problem, and hardly ever generates useful results. That is why many
such attempts have resulted in poor outcomes, so that many
engineers (and scientists) have concluded that Data Science has
few serious applications in our industry.
Given the excitement that has been generated by the application
of data analytics and data mining in other industries, such as retail
and social media, a number of professionals have contemplated the
usefulness of these technologies in the oil and gas industries. The
results have been a good number of articles in several oil- and gas-
related conferences, journals, and magazines. However, some of
these authors lack domain expertise in the upstream oil and gas
industry, while others lack a reasonable and fundamental
understanding of data-driven technologies such as machine learning
and pattern recognition. As such, they do not usually offer impressive
studies that can be used in the industry by professionals to address
their day-to-day issues and problems. Many professionals in our
industry, who have listened to the presentations on the general
theme of artificial intelligence and data mining in the oil industry,
have expressed a common complaint. They claim that they have
hardly ever heard presentations (especially from the larger
operators) that have anything substantial to offer. The main theme of
these presentations can be summarized in the following statements:

1. Artificial intelligence and data mining are great tools that


have been used in other industries and have helped them
tremendously.
2. We in the oil industry should be using this technology, too.
3. My company has been using this technology in the past
several months/years and has benefited from it.

In other words, these presentations lack substance, content, and


solutions. They are always too general, and many times the larger
companies hide behind issues such as confidentiality of the data in
order not to be specific. Such presentations have generated a notion
that because most of the presenters are not domain experts, they
are not comfortable making technically specific presentations in
front of domain experts. Contrary to the type of presentation that has
just been mentioned and the associated publications that usually
include vague statements, in this book we have a specific mission
and have specific problems to solve, and therefore we present very
specific solutions.
We are addressing an old problem in our industry using a
completely new solution method. The problem we are planning to
address is the development of all-inclusive, full-field reservoir models
—models that can tell a comprehensive and yet cohesive story about
how fluid flows in hydrocarbon reservoirs, models that can
incorporate all the measurements in their machinery. Because
measurements made in the field represent a large variety of
characteristics at different scales, this task is very challenging. It is
so challenging that many petroleum professionals who have been
dealing with this problem for years (but using conventional and
traditional techniques) are very skeptical that it can be solved.
For example, well logs look at the reservoir characteristics at a
scale that is only inches away from the wellbore. Cores are taken at
the scale of the wellbore and are then sampled at an even smaller
scale so that they are looking at a very small portion of the rock (one
or two inches in size) to examine fluid flow in a laboratory scale. On
the other hand, seismic surveys measure the rock characteristics at
a scale that is measured in tens of feet. Reconciling all these
measurements into a seamless, single model is a giant task.
We have already come up with wonderful tools to address these
problems (analytical and numerical modeling of fluid flow in the
reservoir) that have provided our industry with valuable insight for
decades. Nevertheless, the author cannot think of anyone
(professionals, engineers, or geoscientists) who understands the
depth of the problem being addressed in the day-to-day operations
in our industry and is still under the illusion that we have completely,
once and for all, solved the problem and therefore do not need any
new inventions.
Therefore, for the purpose of this book, we define data-driven
modeling as the process of applying artificial intelligence and data
mining methods such as machine learning and pattern recognition in
order to uncover hidden patterns in large data sets that represent
fluid flow in porous media. Then, we include the patterns we have
discovered into a cohesive and comprehensive full-field reservoir
model with verifiable forecasting abilities that can be used to manage
fields and their production.
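As a toy illustration of this definition, the sketch below fits a small neural network to synthetic well records. It stands in for the pattern-recognition step only, using assumed, randomly generated data; it is a generic machine-learning example, not the TDM workflow or software described in Chapter 1.

# Toy sketch: fit a neural network to synthetic records (features X -> next-
# timestep oil rate y). The data are randomly generated stand-ins; a real
# data-driven model would use the field measurements described in this book.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_records, n_features = 500, 11              # e.g., one row per well per timestep
X = rng.normal(size=(n_records, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_records)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out records:", round(model.score(X_test, y_test), 3))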

2.1 Misunderstanding Data-Driven Reservoir


Modeling
As machine learning and data-driven analytics become a prominent
force in today’s science and technology, it becomes difficult to
deny their inevitable role in the upstream oil and gas industry.
This fact makes it hard for the “traditionalists” to completely dismiss
the impact of such technologies in modern petroleum engineering.
Therefore, either intentionally or (in some cases) unintentionally, the
resistance towards the incorporation of this new technology in the oil
and gas industry presents itself in a different manner.
In such cases, which the author has seen on several occasions,
the outright rejection of artificial intelligence and data mining
technology and its application to the upstream oil and gas industry
stems from claiming that they are “nothing new.” Obviously, this is
the result of misunderstanding this new and game-changing
technology. Such reactions to the application of artificial intelligence
and data mining usually surface as a claim that it is not a “big deal,”
and is therefore not important. This is a technique that is used in
order not to lose credibility or to be called a “traditionalist,” which
means being resistant to change and to new ideas.
This has given rise to a new group of traditional petroleum
engineers (or earth scientists) who have become overnight experts
in machine learning and data-driven analytics. They may (or may not) have read a paper or a book on the algorithms that form the foundation of neural networks, and may have understood the mathematical structure of such techniques (as will be discussed in subsequent chapters, the mathematics behind machine learning is not complex and is easy to understand). They now feel compelled to join discussions on the topic (and sometimes even feel qualified to review related articles for journals and, of course, reject them) and to dismiss the methods as a new way of doing what the oil industry has been practicing for decades; some even go so far as to cite decline curve analysis and claim that data-driven reservoir modeling is not much more than an extension of it.
This class of traditional engineers and geoscientists, a large
number of whom are found among our own reservoir engineers,
reservoir modelers, and other engineers who are involved in
numerical modeling of complex dynamic phenomena such as
computational fluid dynamics, fails to understand that the incorporation of data analytics is a paradigm shift in modeling, not merely the use of another technique.
This is not the difference between using finite element vs. finite
difference in order to discretize the medium in which the fluid is
flowing. It is not the difference between solving the system of
equations implicitly or explicitly to arrive at a solution. It is not even
the difference between solving the equations analytically vs.
numerically. This difference can be traced back to how one views reality. It is the philosophical difference between the Aristotelian and the Platonic views of the world. That is why even some smart people in
science and engineering still have trouble understanding the major
differences between these technologies.
Chapter 3

Reservoir Modeling

Development of numerical reservoir simulation was a necessary response to an ever-growing understanding of the complexities
associated with oil and gas production from hydrocarbon reservoirs.
Although it had a modest beginning in terms of size and detail, it
enjoyed reasonably fast growth. The computational requirements of
numerical reservoir simulation made it a good candidate for
mainframe computers and later for UNIX15-based systems. It was not
until the mid-to-late 1990s that a considerable increase in the computational power of personal computers (PCs) opened new
doors in the numerical reservoir simulation industry. Major vendors of
reservoir simulators started deploying their software applications on
PCs, and since that time use of mainframes and UNIX-based
systems has been all but abandoned, and almost all the
commercially available simulators are now PC-based.
As the computational power of personal desktop computers has
increased in accordance with Moore’s law,16 so too has the level of
detail that reservoir engineers and geoscientists have included in
their numerical reservoir simulation analysis. Given the fact that
discrete calculus is the foundation of numerical reservoir simulation
and modeling,17 the reservoir rock is divided into a large number of
small cells, and the collection of all these small cells forms the
geological (geocellular) model. Fluid flow through the interconnected
small cells with the inclusion of wellbores as inner-boundary
conditions and other rocks (with their fluid content, if any)
surrounding the hydrocarbon reservoir as the outer-boundary
condition forms the essence of what we know today as numerical
reservoir simulation. Therefore, as the size of the reservoir
increases, and the size of the cells being used to model the reservoir
decreases (to include more details on reservoir and flow
characteristics), the computational footprint of the reservoir
simulation increases dramatically.
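To make the idea of a geocellular model concrete, the following is a minimal, hypothetical sketch (in Python, not taken from this book) of single-phase pressure depletion on a one-dimensional row of cells, with a wellbore held at constant pressure as the inner-boundary condition and a sealed outer boundary. All property values are arbitrary placeholders; a real simulator works in three dimensions, handles multiple phases, and carries far more physics.

    import numpy as np

    nx, dx, dt, nsteps = 50, 10.0, 0.1, 1000   # number of cells, cell size, time step, steps
    eta = 5.0                                  # lumped diffusivity k/(phi*mu*ct); arbitrary units
    p = np.full(nx, 3000.0)                    # initial pressure in every grid cell
    p_well = 1000.0                            # producing wellbore pressure (inner boundary)

    for _ in range(nsteps):
        p_new = p.copy()
        # explicit finite-difference form of the pressure-diffusion equation
        p_new[1:-1] = p[1:-1] + eta * dt / dx**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
        p_new[0] = p_well                      # wellbore as inner-boundary condition
        p_new[-1] = p_new[-2]                  # no-flow outer boundary
        p = p_new

    print(p.round(1))                          # depleted pressure profile along the row of cells

Shrinking the cell size (more, smaller cells) improves resolution but multiplies the number of cells and the work per time step, which is exactly the trade-off described above.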
There has been a race between the computational power that is
now offered through parallel processing of multiple central
processing units (CPUs) and the number of gridblocks (cells) that is
used to model complex reservoirs. Most of the numerical simulation
models that are developed for real reservoirs include considerably
more than a million cells. The numerical reservoir simulation model
that was the subject of the author’s most recent projects included an
original geological model with more than 21 million gridblocks that
had been upscaled to 2.5 million gridblocks for the dynamic model.
Even in a computational environment that includes tens of parallel
CPUs, a single run of such a simulation model takes several hours.
In some cases, when unconventional hydrocarbon reservoirs are
being modeled [steam-assisted-gravity-drainage models (SAGD), or
naturally fractured reservoirs that include a large number of induced
fractures such as shale], the computational time of numerical models
is measured in days and weeks. In other words, to achieve the
accuracy and the precision that numerical reservoir simulation
models provide, one must sacrifice speed and timely response to
queries.
Given this computational footprint, it is quite understandable that many reservoir management teams do not have a favorable view of numerical reservoir simulation models as reservoir management tools. Reservoir managers need to make decisions in a timely manner, and they need tools that are both accurate and fast.

1. The reservoir management tool must be accurate. It goes without saying that the quality of the decisions made by a reservoir manager is a function of the accuracy of the tool that is used to model the reservoir. Accuracy here means the degree of correctness of the responses generated by the model (tool) as a function of the details (field measurements, i.e., reservoir characteristics and operational conditions) that are involved in the analysis. For example, if decline curve analysis is the tool that is being used to make decisions, it is obvious that no reservoir characteristics and no operational constraints are used during the decision-making process (see the sketch following this list). Therefore, the quality of the decisions is seriously compromised because the tools being used only approximate reality in order to achieve the required speed.
2. The reservoir management tool must be fast. Reservoir
managers are required to make decisions for short-term,
medium-term, and long-term planning of a reservoir.
Furthermore, reservoir managers need to make decisions
that take into account (a) multiple scenarios, and (b)
uncertainties associated with many parameters involved in
hydrocarbon production. Therefore, reservoir management
tools need to have a small computational footprint so that
they can accommodate thousands of model executions
(scenarios). The large number of scenarios is then assessed
in a short time period (minutes or hours, not days and weeks)
to form the basis of reservoir management decisions. Large
numbers of model runs (executions) are required to search
large solution spaces and to quantify uncertainties.
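As a point of reference for item 1 above, the following hypothetical sketch shows what a decline-curve-analysis workflow typically boils down to: fitting an Arps-type decline to a rate history. Note that the only input is rate versus time; no porosity, permeability, pressure, or operating constraint appears anywhere. The production history is synthetic and the code is only an illustration, not a recommended workflow.

    import numpy as np
    from scipy.optimize import curve_fit

    def arps(t, qi, di, b):
        """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
        return qi / (1.0 + b * di * t) ** (1.0 / b)

    # synthetic monthly rate history standing in for field measurements
    rng = np.random.default_rng(0)
    t_hist = np.arange(36, dtype=float)
    q_hist = arps(t_hist, 1200.0, 0.08, 0.6) * (1.0 + 0.03 * rng.standard_normal(36))

    # the fit uses nothing but the rate-vs-time record
    (qi, di, b), _ = curve_fit(arps, t_hist, q_hist, p0=[1000.0, 0.05, 0.5],
                               bounds=([1.0, 1e-4, 1e-3], [1e5, 1.0, 2.0]))
    forecast = arps(np.arange(36, 121, dtype=float), qi, di, b)
    print(f"fitted qi={qi:.0f}, di={di:.3f}, b={b:.2f}; rate at month 120 ~ {forecast[-1]:.0f}")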

Although they fulfill one of these two requirements (accuracy), numerical simulation models fail to accommodate the second
requirement (speed) that must be met if they are to be an effective
reservoir management tool. Furthermore, numerical reservoir
simulation requires significant investment in time and manpower,
which translates into serious budgetary considerations. The
manpower must include teams of geologists, petrophysicists,
geophysicists, and reservoir engineers to develop detailed static and
dynamic models. Since the size of the numerical models is usually
substantial, serious computational resources (clusters of parallel
CPUs) are required for the execution of these models.
Once the dynamic model is developed, it almost never matches
the production measurements from the field. Therefore, a long and
tedious process is undertaken in order to condition the numerical
simulator to match the observations (measurements) from the field.
This is called history matching. The art and science of history
matching should be the subject of a book, because there are very
few books that specifically guide reservoir engineers on how to
correctly perform history matching. Although it is easy to find a few
chapters in some reservoir simulation books that explain the
technical step-by-step mechanism of the history-matching process, it
is hard to find books and writings that guide engineers on what to do
and what not to do (which parameters are okay to change and which
ones should be avoided and under what conditions, along with the
consequences of some of the changes that one may make) while
performing history matching. That is why reservoir managers
sometimes are quite skeptical about numerical reservoir models,
since not much is usually mentioned (explicitly) on how a history
match is accomplished. The reason for this skepticism is not hard to understand: the fact that a model has matched the observed production history does not necessarily mean that the match has been achieved legitimately.
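The conditioning loop described above can be caricatured in a few lines. In the sketch below, "simulator" is only a stand-in function for a full numerical model, and the single uncertain parameter is a global permeability multiplier applied to the static model; the loop simply searches for the multiplier that minimizes the mismatch with the observed production history. This illustrates the idea, not any particular commercial history-matching tool.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulator(perm_multiplier, t):
        # stand-in for a multi-hour numerical simulation run
        return 900.0 * perm_multiplier * np.exp(-0.05 * perm_multiplier * t)

    t = np.arange(60, dtype=float)
    observed = simulator(1.35, t) + 15.0 * rng.standard_normal(t.size)   # "field" history

    best_x, best_misfit = None, np.inf
    for x in np.linspace(0.5, 2.0, 151):          # candidate adjustments to the static model
        misfit = float(np.sum((simulator(x, t) - observed) ** 2))
        if misfit < best_misfit:
            best_x, best_misfit = x, misfit

    print(f"history-matched multiplier: {best_x:.2f} (value used to generate the 'field' data: 1.35)")

Even this toy search hints at why a match is rarely unique: once noise is present, several neighboring parameter values can give nearly the same misfit.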
The general process of history matching a numerical reservoir
simulation model carries within itself a major, though implicit,
assumption. This assumption is so deep-seated that many reservoir
engineers may not even accept it as an assumption and deal with it
as a fact rather than an assumption. Nevertheless, the author
believes that what is mentioned in the next paragraph is more an
assumption than it is a fact. The assumption is as follows: The governing equations for fluid flow in porous media, as formulated in numerical reservoir simulation software applications, are unchangeable facts and the ground truth applicable to the hydrocarbon reservoir being modeled;
therefore, if our dynamic model does not match the field
measurements (observed and measured production from wells),
then the fault must be with the static (geological) model. The
consequences of this assumption are that the geological (static)
model must be modified until a match of the observed production
can be achieved. This position is further justified by noting that the
geological (static) model includes every cubic foot of the reservoir
while our actual measurements have been performed only at
discrete locations (wells). Furthermore, because the grid cells have
been populated with reservoir parameters between wells by means
of interpretations and/or geostatistics, they are susceptible to
mistakes and need to be modified during the history-matching
process. In other words, when we do not get a history match using our dynamic model (the inner workings of which are assumed to be the truth, 100% of the time), the fault must lie with those who have developed and delivered the static model and, therefore, it is the static model that needs to be changed until we get a history match.
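As a concrete (and deliberately simplistic) illustration of how cells between wells are populated, the short sketch below uses inverse-distance weighting as a stand-in for the interpretation and geostatistical methods mentioned above; the well locations and porosity values are invented.

    import numpy as np

    wells_xy = np.array([[100.0, 200.0], [750.0, 300.0], [400.0, 820.0]])   # measured well locations (m)
    well_phi = np.array([0.12, 0.21, 0.17])                                 # porosity measured at the wells

    def idw(cell_xy, power=2.0):
        """Inverse-distance-weighted estimate of porosity at one grid cell."""
        d = np.linalg.norm(wells_xy - cell_xy, axis=1)
        if np.any(d < 1e-9):                     # cell coincides with a well: keep the measurement
            return float(well_phi[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * well_phi) / np.sum(w))

    # populate one coarse 10 x 10 layer of the geocellular model over a 1 km x 1 km area
    xs = np.linspace(0.0, 1000.0, 10)
    ys = np.linspace(0.0, 1000.0, 10)
    layer = np.array([[idw(np.array([x, y])) for x in xs] for y in ys])
    print(layer.round(3))

Everything between the wells is an estimate, which is why modifying it during history matching is considered defensible, and why modifying the measured values at the wells themselves should not be.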
One should expect that the above assumption must have
consequences on how we history match the numerical reservoir
simulation model. One of these consequences should be that the
actual measured values at the well locations should not be the
subject of modification in order to get a history match. Nevertheless,
changing even measured reservoir parameters at the well locations,
in order to get a history match, is not an uncommon practice. That is
why there is a notion in the industry that states: “The time required to
get a history match for a numerical reservoir simulation model equals
the deadline imposed by management.” That is why it is hard to find
“do’s” and “don’ts” that are widely accepted among the practitioners.
Even if many of the practitioners have their own “do’s” and “don’ts,” these rules remain applicable only as long as they help in getting a history match. If getting a history match proves to be a hard task and
the deadline imposed by the management starts to get too close for
comfort, then, maybe, practitioners start compromising on their own
“do’s” and “don’ts.”
Perhaps one effective way to address these history-matching
issues is for the team of reservoir engineers and the team of
geoscientists who have developed the static model to meet regularly
and discuss the difficulties faced during the history-matching process
(this is not common because it can get very long and expensive).
Then, the geoscientists should deal with some of the issues faced by
the reservoir engineers and reservoir modelers during the history-
matching process and should come up with new interpretations. This
would require an extended amount of time and resources and
flexible deadlines that are usually hard to come by, but it still does
not provide any guarantees that it would solve the major and
fundamental issues that are faced during the history-matching
process.
Even when a history match is achieved, it is a well-known fact that
it is not unique. Therefore, quantification of uncertainties associated
with the geological (static) model is always an important issue to
address. This requires large amounts of resources, and, in many
cases, it is not performed comprehensively. Traditional proxy models
that are currently used in the industry (simplified physics-based—
reduced-order—models or statistics-based response surfaces)
provide speed but fail to fulfill the accuracy requirements.
Smart proxies such as surrogate reservoir models (SRMs)18 are data-driven, highly accurate (without sacrificing the physics that has been modeled into the simulator or reducing the space and time resolution), and very fast. A single run of a smart proxy such as an SRM takes only a few seconds, while its accuracy in replicating the response of the numerical simulation model is in the high ninety-percent range. Therefore, the SRM, as a smart proxy, fulfills both
requirements (speed and accuracy) of a reservoir management tool.
However, development of a smart proxy is conditioned on having a
functioning numerical reservoir simulation model.
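The general idea behind a smart proxy (though not the specific SRM methodology, whose details are beyond this sketch) can be illustrated as follows: run the slow simulator a limited number of times over a designed set of inputs, train a fast machine-learning regressor on those runs, and then let the proxy answer new queries in a fraction of a second. The "slow_simulator" function and its inputs here are invented placeholders.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)

    def slow_simulator(x):
        # placeholder for a run that would take hours on a real model:
        # inputs = [permeability multiplier, skin factor, drawdown], output = cumulative production
        k, s, dp = x
        return 1.0e4 * k * dp / (1.0 + 0.3 * s)

    lo, hi = [0.5, 0.0, 50.0], [2.0, 10.0, 500.0]
    X_train = rng.uniform(lo, hi, size=(200, 3))            # designed set of simulation runs
    y_train = np.array([slow_simulator(x) for x in X_train])

    proxy = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    X_new = rng.uniform(lo, hi, size=(5, 3))                # new scenarios answered by the proxy
    comparison = np.c_[[slow_simulator(x) for x in X_new], proxy.predict(X_new)]
    print(comparison.round(0))                              # column 1: simulator, column 2: proxy

In a real study the training runs come from the numerical simulator itself, which is why a functioning simulation model is a prerequisite for building the proxy.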
The top-down model, on the other hand, provides a complete
alternative to the numerical reservoir simulation model and can serve
as an appropriate tool for reservoir management. The top-down
model is a comprehensive, full-field, empirical reservoir model that
does not modify the collection of the measured reservoir
characteristics in order to history match multiple independent
production variables. The remainder of this book is dedicated to the
details of top-down modeling and how it achieves its objectives.

15UNIX is a family of multitasking, multiuser computer operating systems that derive from the original AT&T UNIX, developed in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
16The observation was made in 1965 by Gordon Moore, cofounder of Intel, that
the number of transistors per square inch on integrated circuits had doubled every
year ever since the integrated circuit was invented. Moore predicted that this trend
would continue for the foreseeable future.
17Discrete calculus is used to turn the nonlinear, second-order, partial-
differential equation that governs the fluid flow in porous media into a set of linear
algebraic equations that are easily solved with well-known numerical techniques.
18Smart proxy modeling will be the subject of a separate book that will apply data-driven reservoir modeling to the development of proxy models for numerical reservoir simulation models.
Chapter 4

Data-Driven Technologies

Data-driven technologies are a set of new techniques that rely on data, rather than our current understanding of the physical phenomena, in order to build models, solve problems, make recommendations, and help us make decisions. In the context of
reservoir engineering and reservoir modeling, data are also referred
to as facts. This is based on the assumption that the measurements
made in the field actually represent facts about the reservoir and the
state of the fluid flow in it. It is well understood that measurements
include noise, and that noise, as an integral part of the collected data, can be handled.

4.1 Data Mining


Wikipedia (https://en.wikipedia.org/wiki/Data_mining) defines data
mining as “the computational process of discovering patterns in large
data sets involving methods at the intersection of artificial
intelligence, machine learning, statistics, and database systems…
The overall goal of the data mining process is to extract information
from a data set and transform it into an understandable structure for
further use.” This definition explains the reason that the terms “data
mining” and “knowledge discovery” are used interchangeably.
Wikipedia further explains: “The actual data mining task is the
automatic or semi-automatic analysis of large quantities of data to
extract previously unknown, interesting patterns such as groups of
data records (cluster analysis), unusual records (anomaly detection),
and dependencies (association rule mining…).”
Before data-driven algorithms can be used, a target data set must
be assembled. In data-driven reservoir modeling, we call this target
data set the “spatio-temporal database.” Because data-driven
modeling can only uncover patterns that are actually present in the
data, the spatio-temporal database must be large enough to contain
such patterns while remaining concise enough to be mined within
a reasonable time. Preprocessing of the spatio-temporal database
includes data cleaning (cleansing) to remove the records that contain
erroneous data [erroneous data should be distinguished from simple
noise that is present in any data that are handled (collected) by
people] and records with missing data. Removing records with
missing data can be a bit tricky and must be carried out with much
care. This topic will be covered in more detail in the later sections of
this book.
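A minimal, hypothetical example of this kind of preprocessing is sketched below with pandas; the column names, the plausibility limits, and the decision of which rows to drop are all illustrative choices, not rules.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "well":     ["W1", "W1", "W2", "W2", "W3"],
        "month":    [1, 2, 1, 2, 1],
        "porosity": [0.14, 0.14, 1.80, 0.09, np.nan],      # 1.80 is physically impossible (erroneous)
        "oil_rate": [850.0, 790.0, 430.0, np.nan, 610.0],  # one missing measurement
    })

    # cleansing: drop records whose porosity is erroneous (outside a plausible range)
    clean = df[df["porosity"].isna() | df["porosity"].between(0.0, 0.4)]

    # missing data: dropping rows is only one option and must be done with care;
    # here only rows missing the production value are removed
    clean = clean.dropna(subset=["oil_rate"])
    print(clean)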
Data-driven reservoir-modeling development and analysis
includes three phases. The first phase of data-driven reservoir modeling, which is exploratory in nature, relates to data mining. The second phase of data-driven reservoir modeling
that is mainly concerned with the development (training, history
matching, and validation) of a predictive reservoir model for the
entire field is accomplished through artificial intelligence. The final
phase of the data-driven reservoir modeling, which is the post-model
analysis, includes a combination of both data mining and artificial
intelligence. Since data mining is exploratory in nature, it mostly
includes unsupervised algorithms. However, in the case of its
application to reservoir engineering and reservoir modeling, some of
the traditional unsupervised algorithms have been modified in order
to incorporate reservoir engineering and geoscience domain
expertise in the analysis.
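As an example of the exploratory flavor of this first phase, the sketch below clusters a set of wells by a few static and dynamic attributes using plain k-means. The attribute names and numbers are synthetic, and this is standard, off-the-shelf clustering rather than any of the modified, domain-aware variants referred to above.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # columns: net pay (ft), average porosity (fraction), first-year cumulative oil (MSTB)
    wells = np.column_stack([
        rng.normal(80.0, 25.0, 60),
        rng.normal(0.15, 0.04, 60),
        rng.normal(120.0, 50.0, 60),
    ])

    X = StandardScaler().fit_transform(wells)     # put the attributes on a common scale
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    for c in range(3):
        members = wells[labels == c]
        print(f"cluster {c}: {len(members)} wells, mean first-year oil = {members[:, 2].mean():.0f} MSTB")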

4.2 Artificial Intelligence


Artificial intelligence has been called by different names. It has been
referred to as “virtual intelligence,” “computational intelligence,” and
“soft computing.” Until recently there was not a uniformly acceptable
name for this collection of analytic tools among the researchers and
practitioners of the technology.19 Today the term “artificial
intelligence” is used most commonly as an umbrella term. This was
not the case until recently, because “artificial intelligence” has
historically referred to rule-based expert systems and was used
synonymously with expert systems. Expert systems made many
promises of delivering intelligent computers and programs, but these
promises never materialized. Many believe that the term “soft
computing” is the most appropriate term and that “virtual intelligence”
is a subset of “soft computing.” Although there is merit to this
argument, we will continue using the term “artificial intelligence”
throughout the book.
Artificial intelligence may be defined as a collection of new
analytic tools that attempts to imitate life (Zurada et al. 1994).
Artificial intelligence techniques exhibit an ability to learn and deal
with new situations. Artificial neural networks, evolutionary
programming, and fuzzy logic are among the main technologies that
are classified as artificial intelligence. These techniques possess one
or more attributes of “reason,” such as generalization, discovery,
association, and abstraction (Eberhart et al. 1996). In the last
decade, artificial intelligence has matured to become a set of analytic
tools that facilitate solving problems that were previously difficult or
impossible to solve. The current effective use of artificial intelligence
includes the integration of these tools, augmented by more-
conventional tools, such as statistical analysis, to build sophisticated
systems that can solve challenging problems.
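To give a feel for the kind of learning these tools perform, the following is a deliberately trivial, hypothetical example: a small neural network is shown input/output pairs generated by a hidden formula and learns to reproduce the relationship from the examples alone, never being given the formula itself.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(-2.0, 2.0, size=(500, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2          # the hidden relationship to be learned

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    net.fit(X[:400], y[:400])                         # train on 400 examples

    mse = float(np.mean((net.predict(X[400:]) - y[400:]) ** 2))
    print(f"mean squared error on 100 unseen examples: {mse:.4f}")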
These tools are now used in many different disciplines and have
found their way into commercial products. Artificial intelligence is
used in areas such as medical diagnosis, credit card fraud detection,
bank loan approval, smart household appliances, subway systems,
automatic transmissions, financial portfolio management, robot
navigation systems, driverless cars, and many more. In the oil and
gas industry, these tools have been used to solve problems related
to pressure transient analysis, well log interpretation, reservoir
characterization, and candidate well selection for stimulation, among
other things.
Their use in building complete and comprehensive, full-field
reservoir models is new and has been around for only a few years.
This technology has tremendous potential because it can generate
theory which had been preserved; yet the Greek scales were still in
use, though misnamed by the theorists, and composers for the
church still conformed to them. But about the beginning of the ninth
century a new element appeared in music for the church which the
Greeks had left practically untouched and which was probably the
contribution of the barbarian peoples of northern and western
Europe, either the Germans or the Celts, namely, part-singing. To
the single plain-song melodies of the ritual composers added
another accompanying melody or part. The resultant progression of
concords and discords was incipient harmony, the practice of so
weaving two and later three and four melodies together was the
beginning of the science or art of polyphony.

I
Polyphony was practically foreign to the music of the Greeks. They
had observed, it is true, that a chorus of men and boys produced a
different quality of sound from that of a chorus made up of all men
or all boys, and they had analyzed the difference and found the
cause of it to be that boys’ voices were an octave higher than men’s;
and that boys and men singing together did not sing the same
notes. This effect, which they also imitated with voices and certain
instruments they called Antiphony, and they considered it more
pleasing than the effect of voices or instruments in the same pitch
which they called Homophony. The practice of making music in
octaves was called magadizing, from the name of a large harp-like
instrument, the magadis, upon which it was possible. But
magadizing cannot be considered the forerunner of polyphony, for,
though melodies an octave apart may be considered not strictly the
same, still they pursue the same course and are in no way
independent of each other; and the effect of a melody sung in
octaves differs from the effect of one sung in unison only in quality,
not at all in kind.
The allegiance of theorists to Greek culture all through the Middle
Ages and the Renaissance has tended to conceal the actual origin of
polyphony, but as early as 1767 J. J. Rousseau wrote in his
Dictionnaire de musique, ‘It is hard not to suspect that all our
harmony is an invention of the Goths or the Barbarians.’ And later:
‘It was reserved to the people of the North to make this great
discovery and to bequeath it as the foundation of all the rules of the
art of music.’
The kernel from which the complicated science of polyphony sprang
is simple to understand. One voice sang a melody, another voice or
an instrument, starting with it, wove a counter-melody about it,
elaborated by the flourishes and melismas which are still dear to the
people of the Orient. Some such sort of primitive improvisation
seems to have been practised by the people of northern Europe, and
to have been taken over by the church singers. The later art of
déchant sur le livre or improvised descant was essentially no
different and seems to have been of very ancient origin. The early
theorists naturally took it upon themselves to regulate and
systematize the popular practice, and thereupon polyphony first
comes to our notice through their works in a very stiff and ugly form
of music called organum, which in its strictest form is hardly more to
be considered polyphony than the magadizing of the Greeks.
The works of many of the ninth century theorists such as Aurelian of
Réomé, and Remy of Auxerre, suggest that some form of part-
singing was practised in their day, though they leave us in confusion
owing to the ambiguity of their language. The famous scholar Scotus
Erigena (880) mentions organum, but in a passage that is difficult
and obscure. Regino, abbot of Prum in 892, is the first to define
consonance and dissonance in such a way as to leave no doubt that
he considers them from the point of view of polyphony, that is to
say, as sounds that are the result of two different notes sung
simultaneously. In the works of Hucbald of St. Amand in Flanders,
quite at the end of the century, if not well into the tenth (Hucbald
died in 930 or 932, over ninety years of age), there is at last a
definite and clear description of organum. The word organum is an
adaptation of the name of the instrument on which the art could be
imitated, or, perhaps, from which it partly originated, the organ; just
as the Greeks coined a word from magadis.
Of Hucbald’s life little is known save that he was born about 840,
that he was a monk, a poet, and a musician, a disciple of St. Remy
of Auxerre and a friend of St. Odo of Cluny. Up to within recent
years several important works on music were attributed to him, of
which only one seems now to be actually his—the tract, De
Harmonica Institutione, of which several copies are in existence. This
and the Musica Enchiriadis of his friend St. Odo are responsible for
the widespread belief that polyphony actually sprang from a hideous
progression of empty fourths and fifths. Both theorists, in their
efforts to confine the current form of extemporized descant in the
strict bounds of theory, reduced it thus: to a given melody taken
from the plain-song of the church the descanter or organizer added
another at the interval of a fifth or fourth below, which followed the
first melody or cantus firmus note by note in strictly parallel
movement. The fourth seems to have been regarded as the
pleasanter of the intervals, though, as we shall see, it led composers
into difficulties, to overcome which Hucbald himself proposed a
relaxation of the stiff parallel movement between the parts. In the
strict organum or diaphony the movement was thus:

Either or both of the parts might be doubled at the octave, in which


case the diaphony was called composite.
Just why the intervals of the fifth and fourth should have been
chosen for this parallel music, which is excruciating to our modern
ears, is not positively known. The simple obvious answer to the
riddle is that Hucbald and his contemporaries based their theories on
the theories of the Greeks, who regarded the fifth and fourth as
consonances nearest the perfect consonance of the octave and
unison. But in that case we have to ask ourselves why Hucbald and
his followers regarded the diaphony of the fourth as pleasanter than
that of the fifth which they none the less acknowledged was more
nearly perfect. Dr. Hugo Riemann has suggested a solution to this
difficulty which is in substance that organum was an attempt to
assimilate elements of an ancient art of singing practised by the
Welsh and other Celtic singers. The Welsh scale is a pentatonic
scale, that is, a scale of five steps in which half steps are skipped. In
terms of the keyboard, it can be represented by a scale starting
upon E-flat and proceeding to the E-flat above or below only by way
of the black keys between or by a similar progression between any
other two black keys an octave apart. In such a scale parallel fourths
are impossible, as indeed they are in the Greek scales of eight notes
upon which the church music was based; but whereas the
progression of the fourths in the Greek scales is broken by the
imperfect and very unpleasant interval of the tritone, in the
pentatonic scale it is interrupted by the pleasing major third. Such a
progression of fourths and thirds seems to spring almost naturally
from the pentatonic scales and was very likely much practised by the
ancient Welsh singers.[65] A comparison of two examples will make
the difference obvious.

The presence in the octatonic scale of the disagreeable tritone,


marked with a star in the example, forced even Hucbald and Odo to
make some provision for avoiding it. This consisted in limiting the
movement of the ‘organizing’ voice. It was not allowed to descend
below a certain point in the scale. In those cases, therefore, in which
the cantus firmus began in such a way that the organizing voice
could not accompany it at the start without sinking below its
prescribed limit the organizing voice must start with the same note
as the cantus firmus and hold that note until the cantus firmus had
risen so that it was possible for the organizing voice to follow it at
the interval of the fourth. In the same way the parts were forced to
close at the unison if the movement of the cantus firmus did not
permit the organizing voice to follow it at the interval of a fourth
without going below its limit. The following example will make this
clear:

In this case it will be noted that the movement of the parts is no


longer continuously parallel, but that there are passages in which it
is oblique. Indeed it is hardly conceivable that strict parallel
movement was ever adhered to in anything but theory. It is
interesting to observe how even in theory it had to give way, and
how by the presence of the tritone in the scale the theorists were
practically forced into a genuine polyphonic style. The strict style, as
we have already remarked, was hardly more polyphonic than the
magadizing of the Greeks; for, though the voice parts are actually
different, still each is closely bound to the other and has no
independent movement of its own; but in the freer style there is a
difference if not an independence of movement.
In connection with this example it is also well to note that through
the oblique movement the parts are made to sound other intervals
than the fourth or fifth or unison, which with the octave were
regarded for centuries as the only consonances. At the first star they
are singing the harsh interval of a second; immediately after they
sing a major third. By the earliest theorists these dissonances were
disregarded or accepted as necessary evils, the unavoidable results
of the restrictions under which the organizing voice was laid. But if
the free diaphony was practised at all it was to lead musicians
inevitably to a recognition of these intervals, and of the effect of
contrasting one kind with another. In the works of Hucbald and Odo
and their contemporaries, however, the ideal is theoretically the
parallel progression of the only consonances they would admit, the
fourth, fifth, and octave. Oblique movement was first of all a way to
escape the tritone, and the unnamed dissonances were haphazard.
Thus we find only the mere germ of the science of polyphony. The
dry stiffness of the music and the inadequacy of the cumbersome
rules must lead one to believe that learned men, true to their time,
were doing what they could to define a popular free practice within
the limits of theory. The sudden untraceable advent of a new free
style some hundred years or more later goes to prove that the free
descant of a genuinely musical people was never actually
suppressed or discontinued by the influence of the theorists.

II
However, before considering the new diaphony, we have still to trace
the further progress of the organum of Hucbald and Odo. The next
theorist of importance was Guido of Arezzo. To Guido have been
attributed at various times most of the important inventions and
reforms of early polyphonic music, among them descant, organum
and diaphony, the hexachordal system, the staff for notation, and
even the spinet; but the wealth of tradition which clothed him so
gloriously has, as in the case of many others, been gradually
stripped from him, till we find him disclosed as a brilliantly learned
monk and a famous teacher, author of but few of the works which
possibly his teaching inspired. He has recently been identified with a
French monk of the Benedictine monastery of St. Maur des Fosses.
[66] He was born at or near Arezzo about 990, and in due time
became a Benedictine monk. He must have had remarkable talent
for music, for about 1022 Pope Benedict VIII, hearing that he had
invented a new method for teaching singing, invited him to Rome to
question him about it. He visited Rome again a few years later on
the express invitation of Pope John XIX, and this time brought with
him a copy of the Antiphonarium, written according to his own
method of notation. The story goes that the pope was so impressed
by the new method that he refused to allow Guido to leave the
audience chamber until he had himself learned to sing from it. After
this he tried to persuade Guido to remain in Rome, but Guido, on the
plea of ill-health, left Rome, promising to return the following year.
However, he accepted an invitation from the abbot of a monastery
near Ferrara to go there and teach singing to the monks and choir-
boys; and he stayed there several years, during which he wrote one
of the most important of his works, the Micrologus, dedicated to the
bishop of Arezzo. Later he became abbot of the Monastery of Santa
Croce near Arezzo, and he died there about the year 1050. During
the time of his second visit to Rome he wrote the famous letter to
Michael, a monk at Pomposa, which has led historians to believe that
he was actually the inventor of a new division of the scales into
groups of six notes, called hexachorda, and a new system of
teaching based on this division.
The case of Guido is typical of the period in which he lived. Very
evidently an unusually gifted teacher, as Hucbald was a hundred
years before him, his influence was strong over the communities
with which he came into contact, and spread abroad after his death,
so that many innovations which were probably the results of slow
growth were attributed to his inventiveness. The Micrologus contains
many rules for the construction of organum below a cantus firmus,
which are not very much advanced beyond those of Hucbald and
Odo. The old strict diaphony is still held by him in respect, though
the free is much preferred. To those intervals which result from the
‘free’ treatment of the organizing voice, however, he gives names,
and he is conscious of their effect; so that, where Hucbald and Odo
confined themselves to giving rules for the movement of the
organizing voice in such a way as to avoid the harsh tritone even at
the cost of other dissonances, Guido gives rules to direct singers in
the use of these dissonances for themselves, which, as we have
seen, in the earlier treatises were considered accidental. This marks
a real advance. But there is in Guido’s works the same attempt
merely to make rules, to harness music to logical theory, that we
found in Hucbald’s and Odo’s; and it is again hard to believe that his
method of organizing was in common practice, or that it represents
the style of church singing of his day. From the accounts of the early
Christians, from the elaborate ornamentation of the plain-song in
mediæval manuscripts in which it is first found written down, and
from later accounts of the ‘descanters’ we are influenced to believe
that music was sung in the church with a warmth of feeling,
sometimes exalted, sometimes hysterical even to the point of
stamping with the feet and gesticulating, from which the
standardized bald ornamentation of Guido is far removed.
Furthermore, the next important treatises after Guido’s, one by
Johannes Cotto, and an anonymous one called Ad Organum
Faciendum, deal with the subject of organum in a wholly new way
and show an advance which can hardly be explained unless we
admit that a freer kind of organum was much in use in Guido’s day
than that which he describes and for which he makes his rules.
But before proceeding with the development of the early polyphony
after the time of Guido, we have to consider two inventions in music
which have been for centuries placed to his credit. In the first place
he is supposed to have divided the scale, which, it will be
remembered, had always been considered as consisting of groups of
four notes called tetrachords placed one above the other, into
overlapping groups of six notes called hexachords. The first began
on G, the second on C, the third on F, and the others were
reduplications of these at the octave. The superiority of this system
over the system of tetrachords, inherited from the Greeks, was that
in each hexachord the halftone occupies the same position, that is,
between the third and fourth steps.[67] It is not certain whether
Guido was the first so to divide the scale, but he evidently did much
to perfect the new system.
There has long been a tradition that he was the first to give those
names to the notes of the hexachord which are in use even at the
present day. Having noticed that the successive lines of a hymn to
St. John the Baptist began on successive notes of the scale, the first
on G, the second on A, the third on B, etc., up to the sixth note,
namely, E, he is supposed to have associated the first syllable of
each line with the note to which it was sung. The hymn reads as
follows:

Ut queant laxis
Resonari fibris
Mira gestorum
Famuli tuorum
Solve polluti
Labii reatum
Sancte Joannes.

Hence G was called ut; A, re; B, mi; C, fa; D, sol; and E, la. These
are the notes of the first hexachord, and these names are given to
the notes of every hexachord. The half-step therefore was always
mi-fa. Since the hexachords overlapped, several tones acquired two
or even three names. For instance, the second hexachord began on
C, which was also the fourth note of the first hexachord, and in the
complete system this C was C-fa-ut. The fourth hexachord began on
G an octave above the first. This G was not only the lowest note of
the fourth hexachord but the second of the third and the fourth of
the second. Therefore, its complete name was G-sol-re-ut. The
lowest G, which Guido is said to have added to perfect the system,
was called gamma. It was always gamma-ut, from which our word
gamut. The process of giving each note its proper series of names
was called solmisation.
The system seems to us clumsy and inadequate. We cannot but ask
ourselves why Guido did not choose the natural limit of the octave
for his groups instead of the sixth. However, it was a great
improvement over the yet clumsier system of the tetrachords, and
was of great service to musicians down to comparatively recent
times. One may find no end of examples of its use in the works of
the great polyphonic writers. As a help to students in learning it, the
system of the Guidonian Hand was invented, whereby the various
tones and syllables of the hexachords were assigned to the joints of
the hand and could be counted off on the hand much as children are
taught in kindergarten to count on their fingers. That Guido himself
invented this elementary system is doubtful, though his name has
become associated with it.

THE GUIDONIAN HAND.


Guido must also be credited with valuable improvements in the art of
notation. In his day two systems were in use. One employed the
letters of the alphabet, capitals for the lowest octave, small letters
for the next and double letters for the highest. This was exact,
though difficult and clumsy. The other employed neumes (see Chap.
V) superimposed over the words (of the text to be sung) at
distances varying according to the pitch of the sound. This, though
essentially graphic, was inaccurate. Composers were already
accustomed to draw two lines over the text, each of which stood for
a definite pitch, one for F, colored red, and one for C, a fifth above,
colored yellow, but the pitch of notes between or below or above
these lines was, of course, still only indefinitely indicated by the
distance of the neumes from them. Guido therefore added another
line between these two, representing A, and one above representing
E, both colored black. Thus the four-line staff was perfected. It has
remained the orthodox staff for plain-song down to the present day.
This improvement of notation, in addition to the hexachordal system
and the invention of solmisation, have all had a lasting influence
upon music, and through his close connection with them Guido of
Arezzo stands out as one of the most brilliant figures in the early
history of music.

III
Hardly a trace has survived of the development of music during the
fifty years after the death of Guido, about 1050. The next works
which cast light upon music were written about 1100. One is the
Musica of Johannes Cotto, the other the anonymous Ad organum
faciendum mentioned above. In both works a wholly new style of
organum makes its appearance. In the first place, the organizing
voice now sings normally above the cantus firmus, though the whole
style is so relatively free that the parts frequently cross each other,
sometimes coming to end with the organizing voice below. In the
second place, contrary movement in the voice parts is preferred to
parallel or oblique movement; that is, if the melody ascends, the
accompanying voice, if possible, descends, and vice versa. Thus the
two melodies have each an individual free movement and the
science of polyphony is really under way. Moreover, they proceed
now through a series of consonances. There are no haphazard
dissonances as in the earlier free organum of both Hucbald and
Guido. The organizing voice is no longer directed only in such a way
as is easiest to avoid the hated tritone, but is planned to sing always
in consonance with the cantus firmus. The following example
illustrates the movement of the parts in this new system:
Cotto is rather indifferent and, of course, dry about the whole
subject of organum. It occupied but a chapter in his rather long
treatise. But the ‘Anonymus’ is full of enthusiasm and loud in his
praises of this method of part-singing and bold in his declaration of
its superiority over the unaccompanied plain-song. Such enthusiasm
smacks a little of the layman, and is but another indication of the
real origin of organum in the improvised descant of the people, quite
out of the despotism of theory. The Anonymus gives a great many
rules for the conduct of the organizing or improvising voice. He has
divided the system into two modes, determined by the interval at
which the voices start out. For instance, rules of the first mode state
how the organizing voice must proceed when it starts in unison with
the cantus firmus, or at the octave. If it starts at the fourth or fifth it
is controlled by the rules of the second mode. There are three other
modes which are determined by the various progressions of the
parts in the middle of the piece. The division into modes and the
rules are of little importance, for it is obvious that only the first few
notes of a piece are definitely influenced by the position at which the
parts start and that after this influence ceases to make itself felt the
modes dissolve into each other. Thus, though the enthusiasm of the
Anonymus points to the popularity of the current practice of
organizing, whatever it may have been, his rules are but another
example of the inability of theory to cope with it. Still this theoretical
composition continued to claim the respect of teachers and
composers late into the second half of the twelfth century.
A treatise by Guy, Abbot of Chalis, about this time, is concerned with
essentially the same problems and presents no really new point of
view. He is practically the last of the theorizing organizers. Organum
gave way to a new kind of music. In the course of over two hundred
years it had run perfectly within the narrow limits to which it had
been inevitably confined, and the science of it was briefly this: to
devise over any given melody a counter-melody which accompanied
it note by note, moving, as far as possible, in contrary motion,
sinking to meet the melody when it rose, rising away from it when it
fell, and, with few exceptions, in strictest concord of octaves, fifth,
fourths, and unison. Rules had been formulated to cover practically
all combinations which could occur in the narrow scheme. The
restricted, cramped art then crumbled into dust and disappeared.
Again and again this process is repeated in the history of music. The
essence of music, and, indeed, of any art, cannot be caught by rules
and theories. The stricter the rules the more surely will music rebel
and seek expression in new and natural forms. We cannot believe
that music in the Middle Ages was not a means of expression, that it
was not warm with life; and therefore we cannot believe that this
dry organum of Hucbald and Odo, of Guido of Arezzo, of Guy of
Chalis, which was still-born of scholastic theory, is representative of
the actual practice of music, either in the church or among the
people. On the other hand, these excellent old monks were pioneers
in the science of polyphonic writing. Inadequate and confusing as
their rules and theories may be, they are none the less the first rules
and theories in the field, the first attempts to give to polyphony the
dignity and regularity of Art.
Meanwhile, long before Guy of Chalis had written what may be taken
as the final word on organum, the new art which was destined to
supplant it was developing both in England and in France. Two little
pieces, one Ut tuo propitiatus, the other Mira lege, miro modo, have
survived from the first part of the twelfth century. Both are written in
a freely moving style in which the use of concords and discords
appears quite unrestricted. Moreover, the second of them is distinctly
metrical, and in lively rhythm. It is noted with neumes on a staff and
the rhythm is evident only through the words, for the neumes gave
no indication of the length or shortness of the notes which they
represented, but only their pitch. Now in both these little pieces
there are places where the organizing voice sings more than one
note to a note of the cantus firmus or vice versa. So long as
composers set only metrical texts to music the rhythm of the verse
easily determined the rhythm in which the shorter notes were to be
sung over the longer; but the text of the mass was in unmetrical
prose, and if composers, in setting this to music in more than one
part, wished one part to sing several notes to the other’s one, they
had no means of indicating the rhythm or measure in which these
notes were to be sung. Hence it became necessary for them to
invent a standard metrical measure and a system of notation
whereby it could be indicated. Their efforts in this direction
inaugurated the second period in the history of polyphonic music,
which is known as the period of measured music, and which extends
roughly from the first half of the twelfth century to the first quarter
of the fourteenth, approximately from 1150 to 1325.

IV
Our information regarding the development of the new art of
measured music comes mainly from treatises which appeared in the
course of these two centuries. Among them the most important are
the two earliest, Discantus positio vulgaris and De musica libellus,
both anonymous and both belonging to the second half of the
twelfth century; the De musica mensurabili positio of Jean de
Garlandia, written about 1245; and at last the great Ars cantus
mensurabilis, commonly attributed to Franco of Cologne, about
whose identity there is little certainty, and the work of Walter
Odington, the English mathematician, written about 1280, De
speculatione musices. As the earlier theorists succeeded in
compressing a certain kind of music within the strict limits of
mathematical theory, so the mensuralists finally bound up music in
an exact arbitrary system from which it was again to break free in
the so-called Ars nova. But the field of their efforts was much larger
than that of the organum and the results of their work consequently
of more lasting importance.
The first attempts were toward the perfecting of a system of
measuring music in time, and the outcome was the Perfect System,
a thoroughly arbitrary and unnatural scheme of triple values. That
the natural division of a musical note is into two halves scarcely
needs an explanation. We therefore divide our whole notes into half
notes, the halves into quarters, the quarters into eighths, and so
forth. But the mensuralists divided the whole note into three parts or
two unequal parts, and each of these into three more. The standard
note was the longa. It was theoretically held to contain in itself the
triple value of the perfect measure. Hence it was called the longa
perfecta. The first subdivision of the longa in the perfect system was
into three breves and of the breve into three semi-breves. But in
those cases in which the longa was divided into two unequal parts
one of these parts was still called a longa. This longa, however, was
considered imperfect, and its imperfection was made up by a breve.
So, too, the perfect breve could be divided into an imperfect and a
semi-breve.
Let us now consider the signs by which these values were
expressed. The sign for the longa, or long, as we shall henceforth
call it, was a modification of one of the old neumes called a virga,
written thus ; that for the brevis or breve came from the punctum,
written thus . The new signs were long and breve . The semi-
breve was a lozenge-shaped alteration of the breve, . This seems
simple enough until we come across the distressful circumstances
that the same sign represented both the perfect and imperfect long,
and that the perfect and imperfect breve, too, shared the same
figure. The following table illustrates the early mensural notes and
their equivalents in modern notation.

In our age of utilitarian inspiration the imperfections of such a


system of notation in which the two most frequent signs had a
twofold significance would be remedied by the invention of other
signs; but the theorists of that day found it easier and more natural
to supplement the system with numbers of rules whereby the exact
values of the notes could be determined. For example, a long
followed by another long was perfect; a long followed by a breve
was imperfect and to be valued as two beats. But a long followed by
two breves was perfect, for the two breves in themselves made up a
second perfect three, since one was considered as recta and the
other as altera. A long followed by three breves was obviously
perfect, since the three breves could not but make up a perfect
measure. Similar rules governed the valuation of the breve. Three
breves between two longs were not to be altered, four breves
between two longs also remained unaltered, since one of them
counted to make up the imperfection of the preceding long. But five
breves required alteration, the first three counting as one perfect
measure, the last two attaining perfection by the alteration of the
second of them. Semi-breves were also subject to the laws of
perfection and alteration and were governed by much the same laws
as governed the breves. One who had mastered all these laws was
able to read music with more or less certainty, though it must have
been necessary for him to look ahead constantly, in order to
estimate the value of the note actually before him.
Later theorists did not fail to associate the mysteries of the perfect
system of triple values with the Trinity, and thus sprang up the belief
that the earlier mensuralists had had the perfection of the Trinity in
mind when they allotted to the perfect longa its measure of three
values. Yet, clumsy as the system of triple values was, it was
founded upon perfectly rational principles. It was the best
compromise in music between several poetic metres, some of which,
like the Iambic and Trochaic, are essentially triple; others, like the
Dactylic and Anapæstic, essentially double. Music, during all the
years while the mensuralists were supreme, was profoundly
influenced by poetic metres. All these had been reduced by means
of the triple proportion to six formulas or modes, and every piece of
music was theoretically in one or another of these modes. Such a
definite classification of various rhythms, besides being eminently
gratifying to the learned theorists, was of considerable assistance to
the singer in his way through the maze of mensural notation, who,
knowing the mode in which he was to sing, had but to fit the notes
before him into the persistent, generally unvarying, rhythm proper to
that mode. Composers were well aware of the monotony of one
rhythm long continued. They therefore interrupted the beats by
pauses, and occasionally shifted in the midst of a piece from one
mode to another. The pauses were represented by vertical lines
across the staff, and the length of the pause was determined by the
length of the line—the perfect pause of three beats being
represented by a line drawn up through three spaces, the imperfect
pause of two beats by one crossing two spaces and the others in
proportion. The end was marked by a line drawn across the entire
staff.
So far the complexities of the mensural system of notation are not
too difficult to follow with comparative ease. But the longs, the
breves and the semi-breves were employed only in the notation of
syllabic music; that is, of music in which each note corresponds to a
syllable of the text. In those cases where one syllable was extended
through several notes, another form of notation was employed. The
several notes so sung were bound together in one complex sign
called a ligature. The ligatures, like the longs and the breves, were
adaptations of old neumatic signs. In the old plain-song the
flourishes or melismas on single syllables were sung in a free
rhythm; but the mensuralists were determined to reduce every
phrase of music to exact rhythmical proportions, and these easy,
graceful, soaring ornaments were crushed with the rest in the iron
grip of their system. Hence the ligatures were interpreted according
to the strictest rules. A few examples will serve to show the
extraordinary complexity of the system. Among the old neumatic
signs which stood for a series of notes two were of especially
frequent occurrence. These were the podatus, , and the clivis,
. Of these the first represented an ascending series, the second
—which seems to have developed from the circumflex accent—a
descending series. It will be noticed that the clivis begins with an
upward stroke to the first note, which is represented by the heavy
part of the line at the top of the curve. The podatus has no such
stroke. Several other signs were derived from these two, and those
derived from the clivis began always with this upward stroke, and
those from the podatus were without it. Thus all ascending
ornaments were represented by a neume which had no preliminary
stroke, all descending ornaments by one with the preliminary stroke.
This characteristic peculiarity was maintained by the mensuralists in
their ligatures. The podatus became , the clivis . In so
far as the mensural system of notation was graphic, in that the
position of the notes in the scale presented accurately the direction
of the changing pitch of the sounds they stood for, there was no
need of preserving in the ligatures such peculiarities of the neumatic
signs. But, on the other hand, these peculiarities were needed to
represent the mensural value of the notes in the ligatures, the more
so because the mensuralists were determined to allow no freedom in
the rendering of those ornaments in ligature, but rather to reduce
each one to an exact numerical value. Hence we find two kinds of
ligatures: those which preserved the traits inherited from their
neumatic ancestors, and those in which such marks were lacking.
The first were very properly called cum proprietate, the others sine
proprietate; and the rule was that in every ligature cum proprietate
the first note was a breve, while in every ligature sine proprietate it
was a long. If the ligature represented a series of breves and semi-
breves, the preliminary stroke was upward from the note, not to it,
thus: .
Further than this we need not go in our explanation of notation
according to the mensural system. The mensuralists had their way
and reduced all music to a purely arbitrary system of triple
proportion, and their notation, though bewildering and complex, was
practically without flaw. The reaction from it will be treated in the
next chapter. Meanwhile we have to consider what forms of music
developed under this new method.

V
Regarding the relations of the voice parts, one is struck by the new
attitude toward consonance and dissonance of which they give
proof. In the old and in the free organum only four intervals were
admitted as consonant—the unison, the fourth, the fifth, and the
octave. The third and the sixth, which add so much color to our
harmony, were appreciated and considered pleasant only just before
the final unison or octave. The mensuralists admitted them as
consonant, though they qualified them as imperfect. For, true to the
time in which they lived, they divided the consonants theoretically
into classes—the octave and unison being defined as perfect, the
fourth and the fifth as intermediate, the third and later the sixth as
imperfect. So far did the love of system carry them that, feeling the
need of a balancing theory of dissonances, these were divided into
three classes similarly defined as perfect, intermediate, and
imperfect. We should, indeed, be hard put to-day to discriminate
between a perfect and an imperfect discord. Of the imperfect
consonances the thirds were first to be recognized, the minor third
being preferred, as less imperfect, to the major. The major sixth
came next and the last to be consecrated was the minor sixth,
which, for some years after the major had been admitted among the
tolerably pleasant concords, was held to be intolerably dissonant.
The fact that these concords, now held to be the richest and most
satisfying in music, were then called imperfect is striking proof of the
perseverance of the old classical ideas of concord and discord
inherited from the Greeks. Again, one must suspect that theory and
practice do not walk hand in hand through the history of music in
the Middle Ages.
The admission of thirds and sixths even grudgingly among the
consonant intervals is proof that through some common or popular
practice of singing they had become familiar and pleasant to the
ears of men. We have already mentioned the possible origin of
organum in the practice of improvising counter-melodies which
seems to have existed among the Celts and Germans of Europe at a
very early age. There is some reason to believe that in this practice
thirds and sixths played an important rôle; in fact, that there were
two kinds of organizing or descant, one of which, called gymel,
consisted wholly of thirds, the other, called faux-bourdon, of thirds
and sixths. These kinds of organizing, it is true, are not mentioned
by name until nearly the close of the fourteenth century, but there is
evidence that they were of ancient origin. Whether or not these
were the popular practices which brought the agreeable nature of
thirds and sixths to the attention of the mensuralists has not yet
been definitely determined. The reader is referred to Dr. Riemann’s
Geschichte der Musiktheorie im IX.-XIX. Jahrhundert (Leipzig, 1898),
and the ‘Oxford History of Music,’ Vol. I, by H. E. Wooldridge (Part I,
p. 160), for discussions on both sides of the question. The word
gymel was derived from the Latin gemellus, meaning twin, and the
cantus gemellus, or organizing in thirds, in fact, consists of twin
melodies. Faux-bourdon means false burden, or bass. The term was
applied to the practice of singers who sang the lowest part of a
piece of music an octave higher than it was actually written. If the
chord C-E-G is so sung then it becomes E-G-C, and whereas in the
original chord as written the intervals are the third, from C to E, and
the fifth, from C to G, in the transposed form the intervals are the
third, from E to G, and the sixth, from E to C, of which intervals
faux-bourdon consisted. The origin of this ‘false singing’ offered by
Mr. Wooldridge,[68] though properly belonging in a later period, may
be summarized here.
By the first quarter of the fourteenth century the methods of descant
had become thoroughly obnoxious to the ecclesiastical authorities
and the Pope, John XXII, issued a decree in 1322 for the restriction
of descant and for the reëstablishing of plain-song. The old parallel
organum of the fifth and fourth was still allowed. Singers, chafing
under the severe restraint, added a third part between the cantus
firmus and the fifth which on the written page looked innocent
enough to escape detection, and further enriched the effect of their
singing by transposing their plain-song to the octave above, which,
as we have seen, then moved in the pleasant relation of the sixth to
the written middle part. Thus, though the written parts looked in the
book sufficiently like the old parallel organum, the effect of the
singing was totally different. However, this explanation of the origin
of the term faux-bourdon leaves us still unenlightened as to how the
sixth had come to sound so agreeably to the ears of these rebellious
singers.
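The interval arithmetic behind faux-bourdon is easily verified. The following minimal sketch in Python (the note names, the inclusive diatonic counting, and the helper functions are purely illustrative conventions, not anything drawn from the mediæval theorists) computes the intervals above the lowest note of the chord C-E-G as written, and of the same chord with its bass carried up an octave, confirming that the written third and fifth become the third and sixth of which faux-bourdon consisted.

# Illustrative sketch of the faux-bourdon interval arithmetic.
# Diatonic degrees are numbered 1..7 for C, D, E, F, G, A, B.
DEGREES = {"C": 1, "D": 2, "E": 3, "F": 4, "G": 5, "A": 6, "B": 7}

def interval(lower, upper):
    # Inclusive diatonic interval from `lower` up to `upper`:
    # 1 = unison, 3 = third, 5 = fifth, 6 = sixth.
    return (DEGREES[upper] - DEGREES[lower]) % 7 + 1

def intervals_above_bass(chord):
    # Intervals of each upper note above the lowest note of the chord.
    bass, *upper = chord
    return [interval(bass, note) for note in upper]

print(intervals_above_bass(["C", "E", "G"]))  # [3, 5] -> third and fifth, as written
print(intervals_above_bass(["E", "G", "C"]))  # [3, 6] -> third and sixth, faux-bourdon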
Having perfected a system of notation, and having admitted among the allowed consonances the intervals pleasantest to our ears, having thus broadly widened their technique and the
possibilities of music, we might well expect pleasing results from the
mensuralists. But their music is, as a matter of fact, for the most
part rigid and harsh. Several new forms of composition had been
invented and had been perfected, notably by the two great organists
of Notre Dame in Paris, Leo or Leonin, and his successor, Perotin. It
is customary to group these compositions under three headings,
namely, compositions in which all parts have the same words,
compositions in which not all parts have words, and compositions in
which the parts have different words. Among the first the cantilena
(chanson), the rondel and rota are best understood, though the
distinction between the cantilena and the rondel is not evident. The
rondel was a piece in which each voice sang a part of the same
melody in turn, all singing together; but, whereas in the rota one
voice began alone and the others entered each after the other with
the same melody at stated intervals, until all were singing together,
in the rondel all voices began together, each singing its own melody,
which was, in turn, exchanged for that of the others. Among the
compositions of the second class (in which not all parts have words),
the conductus and the organum purum were most in favor. Both are
but vaguely understood. The organum purum, evidently the survival
of the old free descant, was written for two, three, or even four
voices. The tenor sang the tones of a plain-song melody in very long
notes, while the other voices sang florid melodies above it, merely to
vocalizing syllables. The conductus differed from this mainly in that
such passages of florid descant over extended syllables of the plain-
song were interspersed with passages in which the plain-song
moved naturally in metrical rhythm, and in which the descant
accompanied it note for note. In the conductus composers made use
of all the devices of imitation and sequence which were at their
command. Finally, the third class of compositions named above is
represented by the Motet.
The Motet is by far the most remarkable of all forms invented by the
mensuralists. In the first place, a melody, usually some bit of plain-
song, was written down in a definite rhythmical formula. There were
several of these formulæ, called ordines, at the service of the
composers. The tenor part was made up of the repetition of this
short formal phrase. Over this two descanting parts were set, which
might be original with the composer, but which later were almost
invariably two songs, preferably secular songs. These two songs
were simply forced into rhythmical conformity to the tenor. They
were slightly modified so as to come into consonance with each
other and with the tenor at the beginning and end of the lines. Apart
from this they were in no way related, either to each other or to the
tenor. So came about the remarkable series of compositions in which
three distinct songs, never intended to go together, are bound fast
to each other by the rules of measured music, in which the tenor
drones a nonsense syllable, while the descant and the treble may be
singing, the one the praises of the Virgin, the other the praises of
good wine in Paris. This is surely the triumphant non plus ultra of
the mensuralists. Here, indeed, the rules of measured music preside
in iron sway. Not only have the old free ornaments of the early
church music been rigorously cramped to a formula and all the kinds
of metre reduced to a stiff rule of triple perfection, but the quaint old
hymns of the church have been crushed with the gay, mad songs of
Paris down hard upon a droning, inexorable tenor which, like a
fettered convict, works its slow way along. A reaction was inevitable
and it was swift to follow.
L. H.
FOOTNOTES:
[65] See Riemann, Handbuch der Musikgeschichte, I², p. 144 ff.
[66] See article by Dom Germain in Revue de l’art chrétien, 1888.
[67] Strict ‘imitation’ would be extremely difficult in the tetrachordal system. A
subject given in one tetrachord could not be imitated exactly in another, because
the tetrachords varied from each other by the position of the half-step within
them. Compare, for instance, the modern major and minor modes. The answer
given in minor to a subject announced in major is not a strict imitation. If, on the
other hand, the answer to a subject in a certain hexachord was given in another
hexachord, it would necessarily be a strict imitation, since in all hexachords the
half-step came between the third and fourth tones, between mi and fa.
[68] Op. cit., Part II.
CHAPTER VII
SECULAR MUSIC OF THE MIDDLE AGES

Popular music; fusion of secular and ecclesiastic spirit; Paganism and Christianity; the epic—Folksong; early types in France, complainte, narrative song, dance song; Germany and the North; occupational songs—Vagrant musicians; jongleurs, minstrels; the love song—Troubadours and Trouvères; Adam de la Halle—The Minnesinger; the Meistersinger; influence on Reformation and Renaissance.

However slim the records of early church music, they still suffice to
give some clews to the origin and nature of the first religious songs.
But, when we turn to the question of secular song at the beginning
of our era, we are baffled by an utter lack of tangible material. For
the same monks to whom we are indebted for the early examples of
sacred music were religious fanatics who looked with hostile eyes
upon the profane creations of their lay contemporaries. Yet we may
be confident of the continued and uninterrupted existence not only
of some sort of folk music, but also of the germs at least of an art
music, however crude, throughout that period of confusion incident
to, and following, the crumbling of the Roman empire.
We need but point to our discourse upon the music of primitive
peoples (Chap. I), the traces of musical culture left by the ancients
(Chap. II), and especially the high achievements of the Greeks
(Chap. IV), as evidence that, whatever the stage of a people’s
intellectual development, music is a prime factor of individual and
racial expression. Furthermore, at almost every period there is
recognizable the distinction between folk music proper—the
spontaneous collective expression of racial sentiment—and the more
sophisticated creations which we may designate as art. Thus the
music transmitted by the Greeks to the Romans, if added to ever so
slightly, no doubt was continued with the other forms of Greek
culture. The symposia, scolia, and lyrics of Hellas had their progeny
in the odes of Horace and Catullus; the bards, the aœds, and
rhapsodists had their counterpart—degenerate, if you will—in the
histriones, the gladiators, and performers in the arena of declining
Rome. Turning to the ‘Barbarians’ who caused the empire’s fall, we
learn that Tacitus had already recorded the activities of the German bardi who intoned war songs before their chiefs and inspired them
to new victories; while Athenæus and Diodorus Siculus both tell of
the Celtic bards who had an organization in the earliest Middle Ages
and were regularly educated for their profession.

I
Because our earliest musical records are ecclesiastical, the impression might prevail that modern music had
its origin in the Christian church. But, although almost completely
subjected to it as its guardian mother, and almost wholly occupied in
its service, the beginnings of Christian music antedate the church
itself. Pagan rites had their music no less than Christian. Just as we
find elements of Greek philosophy in the teaching of Christianity, so
the church reconciled Pagan festivals with its own holidays, and with
them adapted elements of Pagan music. Thus our Easter was a
continuation of the Pagan May-day festivals, and in the old Easter
hymn O filii et filiæ we find again the old Celtic May-day songs, the
chansons de quête which still survive in France. We here reproduce
one above the other:
[Melody of O filii et filiæ not reproduced.]

O fi-li-i et fi-li-æ
Rex cœ-les-tis rex glo-ri-æ.

[Melody of the chanson de quête not reproduced.]

En re-ve-nant de-dans les champs,
En re-ve-nant de-dans les champs.

The midwinter festival, merged into our Christmas, and the midsummer festival, corresponding to the feast of St. John the
Baptist, both became connected with masses and songs common to
both beliefs; the Tonus Peregrinus, sung to the psalm ‘When Israel
came out of Egypt,’ already an old melody in the ninth century, is
almost identical with old French secular songs, and we have already
observed the adoption of vulgar melodies into ‘sequences’ and
motets.
It must be remembered that for a considerable period Christianity
and Paganism coexisted as tolerant companions. The former could
not totally blot out the traditions, customs, conventions, ideas, and
myths of classic Paganism which were rooted in the popular
consciousness. ‘All through the Middle Ages,’ says Symonds, ‘uneasy
and imperfect memories of Greece and Rome had haunted Europe.
Alexander, the great conqueror; Hector, the noble knight and lover;
Helen, who set Troy town on fire; Virgil, the magician; Dame Venus,
lingering about the hill of Hörsel—these phantoms, whereof the
positive historic truth was lost, remained to sway the soul and
stimulate desire in myth and saga.’[69]
Associated with these myths were the traditions native to the Celtic
and Germanic peoples. The very bards of whom we spoke are
known to have entered the service of the church in great number,
though this did not prevent their travelling from castle to castle to
sing before the princes ballads in praise of their heroic ancestors. Of
these epics, hero tales, strange stories of conquest and adventure
the nations of central Europe possessed a rich treasure, and we hear
that about A. D. 800 Charlemagne, the sovereign patron of liberal
arts, ordered a collection of them to be made.
Tolerant though he was of the traditions of his people, the profane
songs of love and satire, sometimes indecent, which were sung
about the churches, became subjects of his censure; and no doubt
the trouble they caused was but one indication of the growing
antagonism between Christian and non-Christian, the intolerance of
the later Middle Ages. Already Charles’ son, Ludwig the Pious, looked
with disfavor upon the heathen epics. As time went on and clerical
influence broadened, the personalities of Pagan tradition became
associated with the spirit of evil; Dame Venus had now become the
she-devil, the seductress of pious knights.[70] This again gave rise to
new ideas, traditions, and superstitions; the mystic and the
supernatural caught hold of the people’s fancy and were reflected in
their poetry and song.
Among the earliest epics, of which the verses are extant, are
fragments such as the song on the victory of Clothar II over the
Saxons in 622 A. D. Helgaire, a historian of the ninth century, tells us
that, ‘thanks to its rustic character, it ran from lip to lip; when it was
sung the women provided the chorus by clapping their hands.’ Its
Latin text is said to be merely a translation of a popular version,
which would antedate the earliest known vernacular song by over
two centuries. Of a more advanced type is the Song of Roland, that
famous chronicle of the death of the Count of Brittany in the Pass of
Roncesvalles, during Charlemagne’s return from the conquest of the
Spanish march. Its musical notation was lost, but it was sung as late
as 1356 at the battle of Poitiers. Though this great epic consists of
no less than four thousand verses, Tiersot points out that its hero
had long been celebrated in innumerable short lyrics, easy to
remember, which all the people sang. Many were the epics
describing the valiant deeds of Charlemagne himself, and posterity
deified him as the hero of heroes in numerous strains that are lost to
us. But one of which the music has been deciphered, though with
varying results, is the Planctus Karoli, a complainte on the death of
the great emperor (814 A. D.).[71] Then there is the quaint
vernacular song in praise of King Ludwig III, celebrating his victory
over the Normans (881 A. D.):

‘Einen Kuning weiz ich
Heisset Herr Ludwig
Der gerne Gott dienet
Weil er ihms lohnet,’ etc.

(‘A king I know,
named Lord Ludwig,
who serves God gladly,
for he rewards him,’ etc.)

But with isolated exceptions like this one all the early epics were
written in Latin; even the early songs of the first crusaders (eleventh
century) are still in that language. Their origin may in many
instances have been ecclesiastical; written by some monk secluded
within his monastery walls, they may never have been sung by the
people; their melodies, akin to the plain chant of the church, may
never have entered into the popular consciousness. Yet it is in the
popular consciousness that we must look for the true origin of
mediæval secular music. In folk song itself we must seek the germs
of the art which bore such rich blossoms as the Troubadour and
Minnesinger lyrics and which in turn refreshed by its influence the
music of the church itself.
II
As folk songs we are wont to designate those lyrics of simple
character which, handed down from generation to generation, are
the common property of all the people. Every nation, regardless of
the degree of its musical intelligence, possesses a stock of such
songs, so natural in their simple ingenuity as to disarm the criticism
of art, whose rules they follow unconsciously and with perfect
concealment of means. Their origin is often lost in the obscurity of
tradition and we accept them generally and without question as part
and parcel of our racial inheritance. Yet, while in a sense
spontaneous, every folk song did originate in the consciousness of
some one person. The fact that we do not know its author’s name
argues simply that the song has outlived the memory of him who
created it. He was a man of the people, more gifted than his fellows,
who saw the world through a poet’s eye, but who spoke the same
language, was reared in the same traditions, and swayed by the
same passions and sentiments as they who were unable to express
such things in memorable form. This fellow, whose natural language
is music, becomes their spokesman; their heartbeats are the accents
of his song. His talent is independent of culture. A natural facility, an
introspective faculty and a certain routine suffice to give his song the
coherence and definiteness of pattern which fasten it upon the
memory. Language is the only requisite for the transmission of his
art. Once language is fixed and has become the common property of
the people, this song, vibrating the heart-strings of its makers’
countrymen, will be repeated by another who perchance will fashion
others like it; his son, if he be gifted like himself, will do likewise and
so the inexhaustible well of popular genius will flow unceasingly from
age to age.
In the sentiments and thoughts common to all, then, we will find the
impulses of the songs which we shall now discuss. Considering the
different shades of our temperament, sadness, contentment,
gladness, and exuberance, we find that each gives rise to a species
of song, of which the second is naturally the least distinctive, the
two extremes calling for the most decisive expression. Now sadness
and melancholy have their concrete causes, and it is in the narration
of these causes that the heart vents its sorrow. Hence the narrative
form, the complainte, whose very name would confirm our
reasoning, is the earliest form of folk song in the vulgar tongue. In a
warlike people this would naturally dwell upon warlike heroic
themes, and we have already pointed out the early origin of the
epic. The musical form of epic was perhaps the simplest of all,
taking for its sole rhythm the accent of the words, one or two short
phrases, chanted much in the manner of the plain-song, sufficing for
innumerable verses. It is notable, too, that the church, adroitly
seizing upon popular music as a power of influence, adapted this
form to another genus, the légende, which, though developed by
clericals, struck as deep a root in the people’s imagination. Thus we
see in the ninth century the ‘Chant of St. Eulalia,’ and in the tenth
the ‘Life of St. Leger,’ which already shows great advance in form,
being composed in couplets of two, four, and six verses, alternating.
Possessed of better means of perpetuation, this religious epic flourished more vigorously and survived longer than the heroic complainte.
Still another genus was what we might call the popular complaintes,
the chansons narratives, which dealt with the people’s own
characters, with the common causes of woe; the common soldier
and the peasant; the death of a husband or a son. Such a one is the
Chanson de Renaud, which is considered the classic type of popular
song. It is sung in every part of France, and its traces are found in
Spain, Italy, Sweden, and Norway. It is unquestionably of great age,
though its date cannot be fixed.

[Melody of the Chanson de Renaud not reproduced.]