Seismic Data Processing
with Seismic Un*x
Downloaded 06/03/14 to 134.153.184.170. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/
SEG Course Notes Series publications are published without the normal SEG peer
reviews. This volume in the series is reproduced here as provided by the authors.
Published in 2005
Reprinted in 2007, 2008
Preface
This book can serve either of two purposes. (1) It can be used, as it is in our courses
at Michigan Technological University, as an aid to teaching seismic reflection data
processing. (2) It can be used as a primer to Seismic Un*x (SU) by those who may or
may not already be familiar with seismic processing using other software packages. SU is
provided by the Center for Wave Phenomena at Colorado School of Mines and is
available from their web site www.cwp.mines.edu/cwpcodes.
There are details of SU that are important to the processing specialist, but are not
essential to the student who is being introduced to seismic processing. Where these
details appear in the text, we placed a gray bar in the margin to indicate that the material
can be skipped by the student interested only in learning about processing. Of course,
these details might be among the most-interesting material from the viewpoint of
someone who wants to learn programming techniques that apply to SU.
A beginning course in reflection seismology processing might complete the entire
book in one semester. A course that covers fundamental processing and some
interpretation might skip the material with a gray bar and use only one of the real (field,
not synthetic) data sets that we provide.
We have found that one of the biggest hurdles to developing a new course is the
scarcity of real data. With this book, we provide two real data sets. The first real data set,
from the Nankai trough near Japan, is unusual in that it was acquired over very deep
water. This presents some advantages (mediocre velocity analyses will still produce good
results), but it is not realistic for typical exploration purposes. It is, however, a site of
exciting geologic features, and the rugged seafloor topography dramatically demonstrates
the benefit of migration. It is a data set that any student can appreciate. The second real
data set, from offshore Taiwan, presents a number of processing challenges; it is much
more difficult to process to satisfaction.
The compact disks (CDs) included with this book have copies of the scripts and the
seismic data sets (including some data sets that are generated by the scripts). The book is
printed in black-and-white to keep its price low. Because some figures are best viewed in
color, we put copies of all color figures, in uncompressed TIFF format, on the CDs.
Appendix E has a complete list of the contents of the CDs.
We at Michigan Tech will maintain a list of errata for the book. The errata list can be
found by searching on the web for Michigan Tech, errata, and Seismic Un*x. Please
report any errors to the address identified on that web site. Reports of SU bugs,
suggestions for SU improvements, or proposals for new SU scripts should be submitted
to the CSM Center for Wave Phenomena via methods suggested on their web site.
Table of Computer Notes
Computer Notes are guides to help you better understand and use the Unix system.
Chapter 2 is not listed here, but it can be considered a chapter of Computer Notes.
Warning 1: We developed this Seismic Un*x Primer while using the csh shell. The
Computer Notes were written from this perspective. If you are not working under the csh
shell, our Computer Notes might not be appropriate for you.
Warning 2: The previous warning does not apply to the scripts. The first line of every
script is “#! /bin/sh”, a command that makes scripts run under the Bourne shell.
Section Page
1.4 Computer Note 1: Enter ............................................................................. 1-2
1.5 Computer Note 2: Your Shell ..................................................................... 1-3
1.6 Computer Note 3: Changing Directories .................................................... 1-3
1.7 Computer Note 4: Cursor Prompt .............................................................. 1-3
1.9 Computer Note 5: Background (&) and Foreground Processes ................. 1-5
1.9.1 Background Processes .......................................................................... 1-5
1.9.2 Foreground Processes ........................................................................... 1-6
1.13.1 Computer Note 6: No Spaces around the Equal Sign .......................... 1-9
5.5 Computer Note 7: Pause and Resume Screen Print ................................... 5-6
5.8 Computer Note 8: Use, Modify a Command from the History List ........... 5-8
5.8.1 Use a Command from the History List ................................................ 5-8
5.8.2 Modify a Command from the History List ........................................... 5-8
6.6 Computer Note 9: Data Files Not in Script Directory ................................ 6-14
6.6.1 Data Directory below Script Directory — Full Path Name ................. 6-14
6.6.2 Data Directory below Script Directory — Short Path Name ............... 6-15
6.6.3 Data Directory at the same Level as Script Directory .......................... 6-16
7.4 Computer Note 10: If Blocks ..................................................................... 7-4
7.6.5 Computer Note 11: Interactive Scripts ................................................. 7-13
10.2 Computer Note 12: Symbolic Link ............................................................ 10-4
Throughout this Primer, we use a vertical bar, as shown to the right, to mark passages
that a first-time reader can safely ignore. Generally, the bar marks two kinds of text.
Details of model building (large portions of Chapters 4, 5, and 6) can be ignored because
Model 4 is on one of the CDs. The bar also marks parts of scripts and explanations of
those parts that are of more interest to shell script programmers than to script users.
Table of Scripts
Scripts are listed when introduced. The page number is the page of the first line of the
script. When a script “fragment” is listed, the original script location is also cited.
Line commands are not referenced, with the exception of our only use of sumute.
Section Page
2.4 myplot.sh ...................................................................................................... 2-5
4.2 model1.sh ..................................................................................................... 4-1
4.3 model2.sh ..................................................................................................... 4-5
4.4 model3.sh ..................................................................................................... 4-7
4.6 psmerge1a.sh ................................................................................................ 4-10
4.7 psmerge1b.sh ............................................................................................... 4-12
4.8 psmerge2a.sh ................................................................................................ 4-13
4.9 psmerge2b.sh ............................................................................................... 4-15
4.10 psmerge3.sh ................................................................................................. 4-17
5.2 acq1.sh ......................................................................................................... 5-1
5.3 acq2.sh (fragment, modification of acq1.sh, 5.2) ........................................ 5-5
5.4 acq3.sh (fragment, modification of acq1.sh, 5.2) ........................................ 5-5
5.6 showshot.sh .................................................................................................. 5-6
6.2 model4.sh ..................................................................................................... 6-1
6.3 acq4.sh ......................................................................................................... 6-5
6.5 showshot.sh (repeated from 5.6) .................................................................. 6-10
6.6.1 shotrwd1.sh ............................................................................................... 6-14
6.6.2 shotrwd2.sh ............................................................................................... 6-15
6.6.2 shotrwd3.sh ............................................................................................... 6-16
6.6.3 shotrwd4.sh ............................................................................................... 6-16
6.8 acq4shot.sh (fragment, modification of acq4.sh, 6.3) .................................. 6-18
6.8 showshotB.sh ............................................................................................... 6-18
7.2 sort2cmp.sh .................................................................................................. 7-1
7.3 showcmp.sh .................................................................................................. 7-4
7.6.1 oz14prep.sh ............................................................................................... 7-6
7.6.2 oz14velan.sh .............................................................................................. 7-9
7.6.5 iva.scr ........................................................................................................ 7-13
7.6.6.1 iva.scr (repeated from 7.6.5) .................................................................. 7-14
7.6.6.3 iva.sh ...................................................................................................... 7-21
8.2.1 tvQC.sh ..................................................................................................... 8-1
8.2.2.1 velanQC.scr ............................................................................................ 8-5
1. Introduction
1.1 The Goal of this Primer
Our objective is to introduce you to the fundamentals of seismic data processing with
a learn-by-doing approach. We do this with Seismic Un*x (SU), a free software package
maintained and distributed by the Center for Wave Phenomena (CWP) at the Colorado
School of Mines (CSM). At the outset, we want to express our gratitude to John
Stockwell of the CWP for his expert counsel.
SU runs on several operating systems, including Unix, Microsoft Windows, and
Apple Macintosh. However, we discuss SU only on Unix.
Detailed discussion of wave propagation, convolution, cross- and auto-correlation,
Fourier transforms, semblance, and migration are too advanced for this Primer. Instead,
we suggest you refer to other publications of the Society of Exploration Geophysicists,
such as “Digital Processing of Geophysical Data - A Review” by Roy O. Lindseth and
one of the two books by Ozdogan Yilmaz: “Seismic Data Processing,” 1987 and
“Seismic Data Analysis,” 2001.
Our goal is to give you the experience and tools to continue exploring the concepts of
seismic data processing on your own.
Chapters 12-13 and 15-16 start with a real 2-D line of shot gathers (Taiwan) and
process it through migration.
Chapter 14 uses real shot gathers from the Oz Yilmaz collection to demonstrate f-k
filtering and deconvolution.
/home/forel>
Figure 1.2: My new prompt shows the current directory. I also changed “%” to “>”.
/home/forel/seismicx/demos/Filtering/Sufilter>
You can regain the Unix prompt in your SeismicX window by pressing the return key
(if you supplied & at the end of the command). You can close the graphics window by
using the Unix Close command (the top left button).
Let’s discuss the details of the above command:
Figure 1.4: The output of your first SU command. The upper-left corner shows
(0.200719,8.05814) because we clicked the middle mouse button at approximately 0.200 s
(the vertical axis is time) on the eighth trace from the left (the horizontal axis is trace
number).
While a process is running in the background, you might need to cancel (kill) it. You
can do this using a Unix utility called top.
1. In an x-term window, enter top
2. The x-term window now shows your most active processes (Figure 1.5) and the
cursor is now in the upper left of the top display (shown as the red box in Figure
1.5).
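The top utility is interactive. In a shell script, the same job can be done with kill and the process ID that the shell stores in $! when a command is put in the background. Here is a minimal sketch (nothing SU-specific is assumed; sleep stands in for a long-running SU process):

```shell
# Start a long-running job in the background; $! holds its PID.
sleep 100 &
pid=$!

# Cancel (kill) it, then wait to reap it.
kill "$pid"
wait "$pid" 2>/dev/null

echo "process $pid terminated"
```

You would use the same kill command on a PID reported by top.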
Figure 1.6: Zoom an xwigb window by dragging a box with the left mouse button. The
left image shows the mouse dragging the box. The right image shows the zoomed image.
To return to the original view, click the left mouse button in the window.
SU interprets this to mean you want information about suplane, so SU prints the program
help (selfdoc) file to the screen. Below are the first several lines of the suplane selfdoc
file.
SUPLANE - create common offset data file with up to 3 planes
Optional Parameters:
npl=3 number of planes
nt=64 number of time samples
ntr=32 number of traces
taper=0 no end-of-plane taper
= 1 taper planes to zero at the end
offset=400 offset
dt=0.004 time sample interval in seconds
You see that the name stands for “create common offset data file with up to 3 planes.”
(Many SU program names start with “su.”) You also see the default values (values used
when the user does not supply a value) of various parameters, such as the number of
planes (npl=3), the number of time samples per trace (nt=64) and the time sample
interval (dt=4 ms). Using default values, each trace is 256 ms long.
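The trace-length arithmetic can be checked in the shell itself; this is plain shell arithmetic, not an SU command:

```shell
# Trace length = number of samples * sample interval.
# suplane defaults: nt=64 samples, dt=0.004 s (4 ms per sample).
nt=64
dt_ms=4
echo "trace length: $((nt * dt_ms)) ms"
```

Running it prints “trace length: 256 ms”, matching the selfdoc defaults.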
Figure 1.7: The suplane command with two defaults overwritten: number of planes = 2,
and time interval = 0.008 seconds.
Various help facilities are described in The New SU User’s Manual by John W.
Stockwell, Jr. & Jack K. Cohen; Version 3.2: August 2002. This Manual is available at
the CWP Seismic Un*x web site. In addition to the self documentation described in
Section 1.11, other help facilities include:
• suhelp lists all the available programs.
• suname lists all programs and libraries with a short description of them.
• sudoc followed by the program name gives documentation of the program. This
can work even when there is no selfdoc for the program.
• sufind followed by a string searches in all self documentation for this string.
• sukeyword lists keys used for headers.
• demos is a directory in the SU installation that contains useful scripts.
2. Unix Commands and Concepts
This chapter explains fundamental Unix commands that are necessary for
understanding later scripts. It will be helpful if you know elementary Unix commands.
Books titled “Teach Yourself Unix” have excellent, simple, early chapters that give the
basics. Also, by surfing the web, you can find universities that have good tutorial sites.
2.3.3 Pipe: |
We saw the pipe in Section 1.8. We mention it here because it is an advanced
concept. A pipe allows data to flow from one process to another. Below (as before),
suplane creates data, then the pipe sends the data to the imaging program, suxwigb, to be
seen on the screen. In other words, data flows through the pipe from left to right.
suplane | suxwigb
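The same left-to-right flow can be demonstrated with standard Unix tools if SU is not at hand; for example:

```shell
# printf generates three lines, sort orders them, head keeps
# the first; data flows through each pipe from left to right.
printf '3\n1\n2\n' | sort | head -1
```

This prints 1: the smallest of the three input lines survives the pipeline.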
set -x
5 # Set messages on
6 ##set -x
7
8 # Define variable
9 signaltonoise=10
10
11 # Create seismic data. By default, suplane generates
12 # 32 traces with signals from three reflectors.
13 suplane | suaddnoise sn=$signaltonoise > myplot.su
14
15 # Send file myplot.su to user-selected image program
16 # or make Postscript file.
17 case $1 in
18
19 wiggle)
20 suxwigb < myplot.su title="Wiggle plot"
21 ;;
22
23 image)
24 suximage < myplot.su title="Bitmap plot"
25 ;;
26
27 pswiggle)
28 supswigp < myplot.su > myplot1.eps title="Postscript Wiggle"
29 echo " "
30 echo " Wiggle file myplot1.eps has been created."
31 echo " "
32 ;;
33
34 psimage)
35 supsimage < myplot.su > myplot2.eps title="Postscript Bitmap"
36 echo " "
37 echo " Bitmap file myplot2.eps has been created."
38 echo " "
39 ;;
40
41 *)
42 echo " "
43 echo " Use: myplot.sh [wiggle, image, pswiggle, psimage]"
44 echo " "
45 ;;
46
47 esac
48
49 # Exit politely from shell
50 exit
51
file myplot.su. By the end of line 13, you have a new file of seismic data on your
computer. File myplot.su is the input data to the “case” logic that follows.
Lines 17-47 are the “case” logic.
Line 50 exits the shell.
To run this script, first create it in an editor, save it under any name, and make it
executable. The script is then called by the name of the file, followed by the case
selection (see Table 2.2). For example, if the script file was saved under the name
myplot.sh and made executable with the command
chmod +x myplot.sh
the command
myplot.sh wiggle
runs the script and displays wiggle traces on the screen. In fact, there are four ways to run
this script:
Table 2.2: Possible cases and output of myplot.sh
Command Output
myplot.sh wiggle wiggle trace screen image
myplot.sh image bitmap screen image
myplot.sh pswiggle wiggle trace Postscript file – myplot1.eps
myplot.sh psimage bitmap Postscript file – myplot2.eps
The first two cases display seismic data directly on the screen. The latter two cases
save the created image as a Postscript file. When you run the script selecting one of the
Postscript cases, you must use one of the commands below to see the contents of the
Postscript files (if you have the Ghostscript program):
ghostview -bg white myplot1.eps &
ghostview -bg white myplot2.eps &
Your output should look like one of the four images in Figures 2.1 and 2.2 (below).
Option “-bg white” makes the ghostview background white. On many systems, the
default background is grey.
The echo command writes to the screen (unless the output of echo is redirected).
Lines 29 and 31 put blank lines above and below the output of line 30 to make line 30's
output easier to read. The same is true for lines 36 and 38 (around line 37) and for lines
42 and 44 (around line 43).
Last, notice line 41. This case option is used if you do not select one of the other cases
or if you mistype your selection. Case option “*)” is placed after all other cases and is
selected whenever none of the previous cases match. It is used here to print a reminder
of the acceptable cases to the screen.
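The dispatch pattern of myplot.sh, stripped of the SU plotting commands, looks like this (a hypothetical choose.sh, not one of the Primer's scripts; the echo lines stand in for the plotting programs):

```shell
#! /bin/sh
# Minimal "case" dispatch modeled on myplot.sh.
case $1 in

  wiggle)
    echo "would run suxwigb here"
    ;;

  image)
    echo "would run suximage here"
    ;;

  *)
    echo " Use: choose.sh [wiggle, image]"
    ;;

esac
exit
```

As in myplot.sh, an unrecognized (or missing) argument falls through to the “*)” option.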
3. Trace Headers and Windowing Data
When seismic traces are in SEG-Y format, the SU trace format, and
many other formats, the beginning of every trace, the trace header, has
information about the trace. You can think of these as slots of
information above the data part of the trace. The data part is the time-
amplitude series that we see in a seismic display.
Trace header information might include the trace number and the
offset of the trace (for shot or CMP gathers). In an SU seismic data set,
the number of trace header slots is the same for every trace, to ensure
that every trace in a data set has the same length (in terms of bytes of
storage).
SU doesn't call them headers; it calls them keys. The following table
lists some SU keys.
Table 3.1: Some SU trace “headers” or trace “keys”
Key Definition
dt sample interval in microseconds
ns number of samples in this trace
ntr number of traces
offset offset
tracf trace number within field record (gather)
tracl trace sequence number within line
tracr trace sequence number within reel (entire data set)
delrt delay recording time in milliseconds
The sample interval is 4 ms (dt = 4000 microseconds) and every trace has 64 samples
(ns = 64). So, the traces must be 256 ms long. We can see this is true in Figure 1.4.
Let’s use another trace header analysis program, sugethw, a program that gets header
words and writes them to the screen. The command below sends seismic data file
myplot.su to program sugethw. Keys tracl, tracr, offset, ns, and dt are read from the file.
However, instead of writing the key values to the screen, the key values are directed to a
new file, test.txt. We can see the contents of test.txt by using the cat command.
sugethw < myplot.su key=tracl,tracr,offset,ns,dt > test.txt
cat test.txt
Below are the contents of file test.txt. (Blank lines were removed from the file.)
tracl=1 tracr=1 offset=400 ns=64 dt=4000
tracl=2 tracr=2 offset=400 ns=64 dt=4000
tracl=3 tracr=3 offset=400 ns=64 dt=4000
tracl=4 tracr=4 offset=400 ns=64 dt=4000
tracl=5 tracr=5 offset=400 ns=64 dt=4000
tracl=6 tracr=6 offset=400 ns=64 dt=4000
tracl=7 tracr=7 offset=400 ns=64 dt=4000
tracl=8 tracr=8 offset=400 ns=64 dt=4000
tracl=9 tracr=9 offset=400 ns=64 dt=4000
tracl=10 tracr=10 offset=400 ns=64 dt=4000
tracl=11 tracr=11 offset=400 ns=64 dt=4000
tracl=12 tracr=12 offset=400 ns=64 dt=4000
tracl=13 tracr=13 offset=400 ns=64 dt=4000
tracl=14 tracr=14 offset=400 ns=64 dt=4000
tracl=15 tracr=15 offset=400 ns=64 dt=4000
tracl=16 tracr=16 offset=400 ns=64 dt=4000
tracl=17 tracr=17 offset=400 ns=64 dt=4000
tracl=18 tracr=18 offset=400 ns=64 dt=4000
tracl=19 tracr=19 offset=400 ns=64 dt=4000
tracl=20 tracr=20 offset=400 ns=64 dt=4000
tracl=21 tracr=21 offset=400 ns=64 dt=4000
tracl=22 tracr=22 offset=400 ns=64 dt=4000
tracl=23 tracr=23 offset=400 ns=64 dt=4000
tracl=24 tracr=24 offset=400 ns=64 dt=4000
tracl=25 tracr=25 offset=400 ns=64 dt=4000
tracl=26 tracr=26 offset=400 ns=64 dt=4000
tracl=27 tracr=27 offset=400 ns=64 dt=4000
tracl=28 tracr=28 offset=400 ns=64 dt=4000
tracl=29 tracr=29 offset=400 ns=64 dt=4000
tracl=30 tracr=30 offset=400 ns=64 dt=4000
tracl=31 tracr=31 offset=400 ns=64 dt=4000
tracl=32 tracr=32 offset=400 ns=64 dt=4000
As we suspected, the traces are numbered successively. As we knew, all traces have
offset value 400.
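Instead of reading all 32 lines by eye, you can let awk count the distinct offset values; a single distinct value confirms that every trace shares offset 400. Here is a sketch that generates two sample lines in the style of test.txt (in practice you would pipe the real file through the same awk program):

```shell
# Count distinct "offset=" fields in test.txt-style lines.
printf 'tracl=1 tracr=1 offset=400 ns=64 dt=4000\ntracl=2 tracr=2 offset=400 ns=64 dt=4000\n' |
awk '{ for (i = 1; i <= NF; i++)
         if ($i ~ /^offset=/ && !(seen[$i]++)) n++ }
     END { print n, "distinct offset value(s)" }'
```

This prints “1 distinct offset value(s)”.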
file is big-endian (high byte) or little-endian (low byte). You don’t have to understand
endianness to understand the following sections. If you want to understand it, put
“endianness” in a web search engine.
13 24 traces:
14 tracl=(1,24) tracr=(1,24) fldr=10003 tracf=(1,24) cdp=(3,26)
15 cdpt=1 trid=1 nvs=1 nhs=1 duse=1
16 scalel=1 scalco=1 counit=1 delrt=4 muts=4
17 ns=1550 dt=4000
18 /home/david/suscripts/data/$
In this test:
• In line 1, I input the file to surange.
• Lines 3-4 show that surange cannot read the file.
• In line 5, because I suspect the problem is the endianness of the file, I use
suswapbytes to change the file’s endianness, then I pipe the altered file to
surange.
• Lines 6-10 show that surange can read the altered file.
• In line 11, I use suswapbytes to change the endianness of the input file, then I
redirect the output to a file with our naming convention.
• In line 12, I test the new file with surange.
• Lines 13-17 show that the new file can be read by surange.
• We do not expect dt, the sample interval key, to have more than one value.
• We expect tracf, the trace number key, to exhibit many values.
For a complete list of SU keys, use program sukeyword:
sukeyword -o
This program prints a document to the screen, one page at a time. Press the space bar
repeatedly to scroll to the next pages of the listing. The “-o” flag tells the program,
which prints the contents of file segy.h, to start the screen print at the first key definition.
Alternatively, you can send the listing to a file, like this:
sukeyword -o > sukeys.txt
Now you can either open file sukeys.txt (I made up the name.) in a text editor for
convenient reading or you can use more to scroll through the file:
more sukeys.txt
If you know the name of a key and want its definition, use the following command:
sukeyword [key]
For example, when you type:
sukeyword dt
the screen print of file segy.h opens and quickly scrolls to put the definition of dt just a
few lines below the top of the screen. Try it!
2125 4 96 25 230
We see from Figure 3.2 that offset decreases with increasing trace number. Before we
use sushw, we have to calculate the offset of the first trace, the farthest offset trace:
farthest offset = 230 + ( 95 * 25 ) = 2605
The following one-line command generates the appropriate offset values and creates a
new seismic file oz30h2.su.
sushw < oz30h1.su key=offset a=2605 b=-25 j=96 > oz30h2.su
Notice that a, the first trace, is assigned the value of the farthest offset and we use the
negative trace interval in b to count down from the farthest offset.
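As we understand sushw, with only a, b, and j set it assigns key(i) = a + b*(i mod j) to the 0-based trace index i. You can verify the two endpoints with shell arithmetic:

```shell
# sushw parameters from the command above.
a=2605 b=-25 j=96

# First trace (i=0): the farthest offset.
i=0; echo "trace 1:  $((a + b * (i % j)))"

# Last trace (i=95): the nearest offset.
i=95; echo "trace 96: $((a + b * (i % j)))"
```

This prints 2605 and 230, matching the offset range in the original data.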
Let's use surange to examine the trace headers:
surange < oz30h2.su
The surange output is:
96 traces:
tracl=(1,96) tracr=(1,96) fldr=10030 tracf=(1,96) cdp=30
cdpt=1 trid=1 nvs=1 nhs=1 duse=1
offset=(230,2605) scalel=1 scalco=1 counit=1 delrt=4
muts=4 ns=2175 dt=4000
Let's look at some of the trace header information:
1. tracl, tracr, tracf: Traces are numbered 1 to 96.
2. cdp: Trace cdp numbers are the single value "30".
3. offset: The offsets range from 230 to 2605.
4. dt: The sample interval is 4 ms.
5. delrt: The time delay to the first sample is 4 ms (one sample).
Note: You can use sugethw to confirm these key values.
There are richer (more complex) ways to use sushw than we have shown. We leave
that exploration to you.
Figure 3.3: Left: Trace 60 from oz30.su. Right: Trace 60 windowed from 1 to 3 seconds.
This chapter shows you how to create 2-D geologic models, create synthetic shot
gathers from the models, and examine the shot gathers for quality control (QC the
gathers). Chapter 5 shows you how to use the models to “acquire” (create synthetic)
seismic data sets.
Models developed in this chapter have layers that are homogeneous and isotropic.
Each layer has a single acoustic (P-wave) velocity.
This chapter and the next are computer-intensive in two ways: (1) the scripts are
complex, and (2) it will probably take your computer from two hours to most of a day to
generate the shot gathers. If you want to skip this complexity, skim these two chapters to
familiarize yourself with the geologic models, then simply use the synthetic data set
generated from Model 4 (Chapter 6) that accompanies this Primer. Seismic data
generated from Model 4 are processed in Chapters 7, 8, and 9.
Three Simple Models
6
7 # Experiment Number
8 num=1
9
10 # Name output binary model file
11 modfile=model${num}.dat
12
13 # Name output encapsulated Postscript image file
14 psfile=model${num}.eps
15
16 # Remove previous .eps file
17 rm -f $psfile
18
19 trimodel xmin=0 xmax=6 zmin=0 zmax=2 \
20 1 xedge=0,6 \
21 zedge=0,0 \
22 sedge=0,0 \
23 2 xedge=0,2,4,6 \
24 zedge=0.30,0.50,0.20,0.30 \
25 sedge=0,0,0,0 \
26 3 xedge=0,2,4,6 \
27 zedge=0.55,0.75,0.45,0.55 \
28 sedge=0,0,0,0 \
29 4 xedge=0,2,4,6 \
30 zedge=0.65,0.85,0.55,0.65 \
31 sedge=0,0,0,0 \
32 5 xedge=0,2,4,6 \
33 zedge=1.30,1.30,1.60,1.20 \
34 sedge=0,0,0,0 \
35 6 xedge=0,6 \
36 zedge=2,2 \
37 sedge=0,0 \
38 kedge=1,2,3,4,5,6 \
39 sfill=0.1,0.1,0,0,0.44,0,0 \
40 sfill=0.1,0.4,0,0,0.40,0,0 \
41 sfill=0.1,0.6,0,0,0.35,0,0 \
42 sfill=0.1,1.0,0,0,0.30,0,0 \
43 sfill=0.1,1.5,0,0,0.25,0,0 > $modfile
44 ## x,z
45
46 # Create a Postscript file of the model
47 # Set gtri=1.0 to see sloth triangle edges
48 spsplot < $modfile > $psfile \
49 gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
50 title="Earth Model - 5 layers [M${num}]" \
51 labelz="Depth (km)" labelx="Distance (km)" \
52 dxnum=1.0 dznum=0.5 wbox=6 hbox=2
53
54 # Exit politely from shell
55 exit
56
This script can be divided into sets:
• System: Line 1 invokes the shell, line 5 turns on messages, and line 55 exits the
shell.
• Variables:
o Line 8 lets us vary the number of each run as we perfect the script. This
number becomes part of output file names. Examples: 1, 101, 1a.
o Line 11 assigns a name to the output binary model file that is used on line 43.
The output binary model file is the prime reason for this script. Line 11 encloses
the variable name in braces, ${num}, which is good practice. It is also good
practice when using two variables next to each other:
model${variable1}${variable2}.dat
o Line 14 assigns a name to the output .eps image file that is used on line 48.
• Bookkeeping: Line 17 removes a previous image file. This line is optional. On
some systems, the program crashes if a .eps file with the same name already
exists. Usually files are overwritten!
• Program trimodel: Lines 19-43 create the model. Notice that line 44 is a
comment. We use line 44 to remind us that the first two entries of line sfill are x-z
values. The very important sfill parameter is discussed below.
• Program spsplot: Lines 48-52 create the .eps image file.
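As a small illustration of the run-number idiom (a sketch only; the variable and file names follow lines 8, 11, and 14 of the script above):

```shell
# Sketch of the ${num} naming idiom: the run number becomes part of
# every output file name, so successive experiments do not overwrite
# each other.
num=1a
modfile=model${num}.dat
psfile=model${num}.eps
echo "$modfile $psfile"   # model1a.dat model1a.eps
```

Changing num to 101 would yield model101.dat and model101.eps, and so on.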
Let’s examine program trimodel. Program trimodel fills the model with triangles of
(1/velocity)2. While (1/velocity) is called “slowness,” (1/velocity)2 is called “sloth.” A
sloth is a slow-moving tree-dwelling mammal found in Central and South America.
Program trimodel’s use here can be divided into five parts:
1. Line 19 defines the model dimensions.
2. The six sets of xedge, zedge, and sedge triplets define layer boundaries and
velocity gradients. Within a set, xedge, zedge, and sedge must all have the same
number of values. For example, the set for interface 1, which defines the flat top
of the model, needs only two values, so its xedge, zedge, and sedge each contain
two values. On the other hand, the set for interface 2, which defines a curved
surface, has more than two values.
20 1 xedge=0,6 \
21 zedge=0,0 \
22 sedge=0,0 \
23 2 xedge=0,2,4,6 \
24 zedge=0.30,0.50,0.20,0.30 \
25 sedge=0,0,0,0 \
A triplet consists of an xedge, zedge pair that is an interface control point and an
sedge value that is the velocity gradient at the control point.
Line sedge has only zeros because all layers are isotropic and homogeneous. Line
sedge would have non-zero values if we want a layer to have velocity gradients.
3. In this model that has six boundaries, there are five simple layers. Therefore, there
are five sfill lines, lines 39-43. These lines are written for isotropic, homogeneous
layers, which is why most of the values are zero.
Table 4.1: Line sfill variables
x z x0 z0 s00 ds/dx ds/dz
Each x-z pair of sfill is a point in a layer. Each sfill line describes the sloth value
that fills the layer.
Figure 4.2: Left to right: Shot 1, Shot 13, Shot 27, Shot 40 from Model 1.
Remember, you can use the self-documentation to learn more about any SU program
by entering just the program name on a line (Section 1.11).
27 zedge=0.7,0.66,0.66,0.66 \
28 sedge=0,0,0,0 \
29 4 xedge=3,3.5,4,6 \
30 zedge=0.7,0.74,0.74,0.74 \
31 sedge=0,0,0,0 \
32 5 xedge=0,2,4,6 \
33 zedge=1.3,1.3,1.6,1.2 \
34 sedge=0,0,0,0 \
35 6 xedge=0,6 \
36 zedge=2,2 \
37 sedge=0,0 \
38 kedge=1,2,3,4,5,6 \
39 sfill=1,0.1,0,0,0.44,0,0 \
40 sfill=1,0.7,0,0,0.40,0,0 \
41 sfill=4,0.7,0,0,0.30,0,0 \
42 sfill=1,1.5,0,0,0.20,0,0 > $modfile
43 ## x,z
44
45 # Create a Postscript file of the model
46 spsplot < $modfile > $psfile \
47 gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
48 title="Earth Model - High Vel. Intrusion [M${num}]" \
49 labelz="Depth (km)" labelx="Distance (km)" \
50 dxnum=1.0 dznum=0.5 wbox=6 hbox=2
51
52 # Exit politely from shell
53 exit
54
Table 4.3: Values of sfill in Model 2
x (km) z (km) s00 (sloth) velocity = 1/√s00 (m/s)
1 0.1 0.44 1508
1 0.7 0.40 1581
4 0.7 0.30 1826
1 1.5 0.20 2236
In this isotropic, homogeneous model, sfill supplies (fills) a single sloth value for a
layer when an x-z point is specified in the appropriate layer.
Below are 60-trace split-spread shot gathers acquired from source positions x=2.0,
x=2.6, x=3.3, and x=3.95. Acquisition of these data is explained in Chapter 5.
Figure 4.4: Left to right: Shot 1, Shot 13, Shot 27, Shot 40 from Model 2.
32 5 xedge=-1.,1.0,3.0,5.0 \
33 zedge=1.3,1.3,1.6,1.2 \
34 sedge=0,0,0,0 \
35 6 xedge=-1,5 \
36 zedge=2,2 \
37 sedge=0,0 \
38 kedge=1,2,3,4,5,6 \
39 sfill=1.0,0.1,0,0,0.44,0,0 \
40 sfill=1.0,0.6,0,0,0.40,0,0 \
41 sfill=2.1,0.7,0,0,0.30,0,0 \
42 sfill=1.0,1.5,0,0,0.20,0,0 > $modfile
43 ## x,z
44
45 # Create a Postscript file of the model
46 spsplot < $modfile > $psfile \
47 gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
48 title="Earth Model - Diffractor [M${num}]" \
49 labelz="Depth (km)" labelx="Distance (km)" \
50 dxnum=1.0 dznum=0.5 wbox=6 hbox=2
51
52 # Exit politely from shell
53 exit
54
Table 4.4: Values of sfill in Model 3
x (km) z (km) s00 (sloth) velocity = 1/√s00 (m/s)
1.0 0.1 0.44 1508
1.0 0.6 0.40 1581
2.1 0.7 0.30 1826
1.0 1.5 0.20 2236
The .eps image is still made 1:1 because in line 50, the box dimensions are six units
wide by two units high.
Below are 60-trace split-spread shot gathers acquired from source positions x=1.0,
x=1.6, x=2.3, and x=2.95. Acquisition of these data is explained in Chapter 5.
Figure 4.6: Left to right: Shot 1, Shot 13, Shot 27, Shot 40 from Model 3.
sets with the models. Mostly, the images are just fun to see.
The scripts are adapted from the examples in the SU demonstration directories
demos/Synthetic/Tri/Models and demos/Synthetic/Tri/Rays. Our scripts create .eps
images, then overlay them. The scripts are called “psmerge” after the psmerge
program that merges .eps files. They all start from a previously created .eps model
file.
In brief:
• We pick a location for a single shot,
usually at the surface (z=0.0).
• We decide the number of rays that
emanate from the source, defined by
“number of angles” (nangle).
• We decide the angle of the fan of rays
that emanate from the source, defined
by “first angle” (fangle) and “last
angle” (langle).
In the example above, nangle=12, fangle=-30, and langle=45. The larger the value of
nangle, the longer it takes to produce the image. However, a complicated model might
require many rays to ensure that some will reflect to surface receivers.
• We also decide whether the rays transmit (refract) through a layer or reflect from
it. For example:
o refseq=2,0 means the rays refract through interface 2
o refseq=5,1 means the rays reflect at interface 5
Because .eps files are merged, they should have the same height and width. Also, the
z-axis label and x-axis label of ray and wavefront images either should match the labels
of the model image or the labels should not be generated. In the scripts below, labels are
not generated for ray and wavefront images since the labels are already on the model
image.
Axis ranges and numbering increments (x1beg, x1end, d1num, etc.) are specified
for the ray and wavefront images because the default values might not match the
values used for the model image. When the same values are specified for each
image, the images overlap exactly.
The main program, triray, generates the ray and wavefront information. It is up to us
whether we save this information. Program psgraph is called once to map the ray
information to a .eps image and a second time to map the wavefront information to a
separate .eps image. Program psmerge is used once to merge however many .eps
images were created.
13
14 # Output files
15 rayendsfile=rayends${num}a.dat
16 rayfile=ray${num}a.dat
17 raypsfile=ray${num}a.eps
18 psmergefile=psmerge${num}a.eps
19
20 # Assign values to variables
21 nangle=10 fangle=-45 langle=-15 nxz=301
22
23 # Shoot the rays
24 triray < $modelfile > $rayendsfile rayfile=$rayfile \
25 nangle=$nangle fangle=$fangle langle=$langle \
26 xs=4.5 zs=0.0 nxz=$nxz \
27 refseq=2,0 refseq=3,0 refseq=4,0 refseq=5,1
28
29 # Plot the rays
30 psgraph < $rayfile > $raypsfile \
31 nplot=`cat outpar` n=$nxz hbox=2.0 wbox=6.0 \
32 x1beg=0.0 x1end=2.0 x2beg=0 x2end=6 \
33 d1num=0.5 d2num=1.0 style=seismic linegray=0
34
35 # Merge model + rays
36 psmerge in=$modelpsfile in=$raypsfile > $psmergefile
37
38 # Exit politely from shell
39 exit
40
The rays are black because linegray=0 in psgraph (line 33).
This script (psmerge1a.sh) and the next script (psmerge1b.sh) both use files created
during our earlier work with Model 1. The input files, lines 11 and 12 of psmerge1a.sh
and psmerge1b.sh, both use variable ${num} where num=1. The output files of
psmerge1a.sh use variable ${num}a to distinguish the output files of this script from the
output files of the next script.
38
39 # Transpose the wavefile
40 transp < $wavefile > $wavetrans n1=$nt n2=$nangle nbpe=8
41
42 # Plot the wavefronts
7 # Experiment number
8 num=2
9
10 # Input files
11 modelfile=model${num}.dat
12 modelpsfile=model${num}.eps
13
14 # Output files
15 rayendsfile=rayends${num}a.dat
16 rayfile=ray${num}a.dat
17 raypsfile=ray${num}a.eps
18 psmergefile=psmerge${num}a.eps
19
20 # Assign values to variables
21 nangle=15 fangle=0 langle=50 nxz=301
22
23 # Shoot the rays
24 triray < $modelfile > $rayendsfile rayfile=$rayfile \
25 nangle=$nangle fangle=$fangle langle=$langle \
26 xs=3.0 zs=0.0 nxz=$nxz \
27 refseq=2,0 refseq=3,0 refseq=4,0 refseq=5,1
28
29 # Plot the rays
30 psgraph < $rayfile > $raypsfile \
31 nplot=`cat outpar` n=$nxz hbox=2 wbox=6 \
32 x1beg=0 x1end=2 x2beg=0 x2end=6 \
33 d1num=0.5 d2num=1.0 style=seismic linegray=0
34
35 # Merge model + rays
36 psmerge in=$modelpsfile in=$raypsfile > $psmergefile &
37
38 # Exit politely from shell
39 exit
40
The rays are black because linegray=0 in psgraph (line 33).
1 #! /bin/sh
2 # File: psmerge2b.sh
3
4 # Set messages on
5 set -x
6
7 # Experiment number
8 num=2
9
10 # Input files
11 modelfile=model${num}.dat
12 modelpsfile=model${num}.eps
13
14 # Output files
15 rayendsfile=rayends${num}b.dat
16 rayfile=ray${num}b.dat
17 raypsfile=ray${num}b.eps
18 wavefile=wave${num}b.dat
19 wavetrans=wavetrans${num}b.dat
20 wavepsfile=wave${num}b.eps
21 psmergefile=psmerge${num}b.eps
22
23 # Assign values to variables
24 nangle=65 fangle=0 langle=50 nxz=301 nt=20
25
26 # Shoot the rays
27 triray < $modelfile > $rayendsfile \
28 rayfile=$rayfile wavefile=$wavefile \
29 nangle=$nangle fangle=$fangle langle=$langle \
30 nxz=$nxz nt=$nt xs=3.0 zs=0.0 \
31 refseq=2,0 refseq=3,0 refseq=4,0 refseq=5,1
32
33 # Plot the rays
34 psgraph < $rayfile > $raypsfile \
35 nplot=`cat outpar` n=$nxz hbox=2 wbox=6 \
36 x1beg=0 x1end=2 x2beg=0 x2end=6 \
37 d1num=0.5 d2num=1.0 style=seismic linegray=0
38
39 # Transpose the wavefile
40 transp < $wavefile > $wavetrans n1=$nt n2=$nangle nbpe=8
41
42 # Plot the wavefronts
43 psgraph < $wavetrans > $wavepsfile \
44 linewidth=0.0 mark=8 marksize=2 \
45 nplot=$nt n=$nangle hbox=2.0 wbox=6.0 \
46 x1beg=0.0 x1end=2.0 x2beg=0.0 x2end=6.0 \
47 d1num=0.5 d2num=1.0 style=seismic linegray=1
48
49 # Merge model + rays + wavefronts
50 psmerge in=$modelpsfile in=$raypsfile in=$wavepsfile > $psmergefile &
51
52 # Exit politely from shell
53 exit
54
Now the rays are black due to linegray=0 in line 37. The wavefronts are white due to
linegray=1 in line 47.
Figure 4.11: Rays generated through Model 3. Program triray parameter nxz=301.
The source was placed over the diffractor so rays shoot through and around it. In a
real earth, raypaths would be quite scattered near the diffractor edges. Here, the edges are
not treated realistically. Notice that only six of the ten rays reach the surface. In the first
use of this script, triray parameter nxz=301 (lines 21 and 26). To make all ten rays reach
the surface, nxz was increased in increments of 50. All rays reached the surface with
nxz=701 (see Figure 4.12, below). It is interesting to observe that the default value of nxz
is 101. Setting nxz so high (701) is probably unrealistic and probably a waste of
computer time, although without examining the source code to understand exactly
what role nxz plays, it is not easy to be certain. Nonetheless, through trial and error we
obtained a reasonable solution.
Figure 4.12: Rays generated through Model 3. Program triray parameter nxz=701.
While hbox=2.0 and wbox=6.0 ensure that the model and ray images have the same
dimensions (line 31 in this script and line 50 in the model script), psgraph still must be
told the x and z extents of the image (line 32). Notice that x2beg=-1 and x2end=5.
1 #! /bin/sh
2 # File: psmerge3.sh
3
4 # Turn on messages
5 set -x
6
7 # Experiment number
8 num=3
9
10 # Input files
11 modelfile=model${num}.dat
12 modelpsfile=model${num}.eps
13
14 # Output files
15 rayendsfile=rayends${num}.dat
16 rayfile=ray${num}.dat
17 raypsfile=ray${num}.eps
18 psmergefile=psmerge${num}.eps
19
20 # Assign values to variables
21 nangle=10 fangle=0 langle=+25 nxz=701
22
23 # Shoot the rays
24 triray < $modelfile > $rayendsfile rayfile=$rayfile \
25 nangle=$nangle fangle=$fangle langle=$langle \
26 xs=2.0 zs=0.0 nxz=$nxz \
27 refseq=2,0 refseq=3,0 refseq=4,0 refseq=5,1
28
29 # Plot the rays
30 psgraph < $rayfile > $raypsfile \
31 nplot=`cat outpar` n=$nxz hbox=2 wbox=6 \
32 x1beg=0 x1end=2 x2beg=-1 x2end=5 \
33 d1num=0.5 d2num=1.0 style=seismic linegray=0
34
35 # Merge model + rays
36 psmerge in=$modelpsfile in=$raypsfile > $psmergefile &
37
38 # Exit politely from shell
39 exit
40
In the previous chapter, we developed three simple models and used them to generate
single shot gathers and images of rays and wavefronts. In this chapter, we use the same
models to acquire 2-D lines of seismic data.
Three Simple Models: Acquire 2-D Lines
36 END`
37 fldr=`bc -l <<-END
38 $i + 1
39 END`
40
41 j=0
42 while [ "$j" -ne "60" ]
43 do
44
45 fg=`bc -l <<-END
46 $i * 0.05 + $j *0.05
47 END`
48 gx=`bc -l <<-END
49 $i * 50 + $j * 50 - 1475
50 END`
51 offset=`bc -l <<-END
52 $j * 50 - 1475
53 END`
54 tracl=`bc -l <<-END
55 $i * 60 + $j + 1
56 END`
57 tracf=`bc -l <<-END
58 $j + 1
59 END`
60
61 echo " Sx=$sx Gx=$gx fldr=$fldr Offset=$offset tracl=$tracl\
62 fs=$fs fg=$fg"
63
64 k=2
65 while [ "$k" -ne "6" ]
66 do
67
68 triseis < $inmodel xs=2,3.95 xg=0.525,5.425 zs=0,0 zg=0,0 \
69 nangle=$nangle fangle=$fangle langle=$langle \
70 kreflect=$k krecord=1 fpeak=12 lscale=0.5 \
71 ns=1 fs=$fs ng=1 fg=$fg nt=$nt dt=$dt |
72 suaddhead nt=$nt |
73 sushw key=dt,tracl,tracr,fldr,tracf,trid,offset,sx,gx \
74 a=4000,$tracl,$tracl,$fldr,$tracf,1,$offset,$sx,$gx >> temp$k
75
76 k=`expr $k + 1`
77
78 done
79 j=`expr $j + 1`
80
81 done
82 i=`expr $i + 1`
83
84 done
85
86 echo " --End looping over triseis."
87
88 #=================================================
89
90 # Sum contents of the "temp" files
91 echo " --Sum files."
92 susum temp2 temp3 > tempa
93 susum tempa temp4 > tempb
94 susum tempb temp5 > $outseis
95
96 # Remove temp files
97 echo " --Remove temp files."
98 rm -f temp*
99
100 # Exit politely from shell script
101 echo " --Finished!"
102 exit
103
geophone moves across the model. Both fg and gx are the geophone position. Variable fg
is in model units, kilometers, and is used by triseis; variable gx is an integer number of
meters and is computed for a header value.
Variable fldr (line G) is a header value that identifies each shot gather. Variable tracl
(line P) sequentially numbers the traces of the line (TRACe number in the Line). Variable
tracf (line Q) sequentially numbers the traces of each gather (TRACe number in the File).
A i=0
B while i ne 40
C do
D
E fs = i * 0.05 triseis
F sx = i * 50 sushw
G fldr = i + 1 sushw
H
I j=0
J while j ne 60
K do
L
M fg = i * 0.05 + j *0.05 triseis
N gx = i * 50 + j * 50 - 1475 sushw
O offset = j * 50 - 1475 sushw
P tracl = i * 60 + j + 1 sushw
Q tracf = j + 1 sushw
R
S echo " $sx $gx $fldr $offset $tracl $fs $fg"
T
U k=2
V while k ne 6
W do
X
Y triseis < $inmodel xs=2,3.95 xg=0.525,5.425 zs=0,0 zg=0,0 \
Z nangle=$nangle fangle=$fangle langle=$langle \
AA kreflect=$k krecord=1 fpeak=12 lscale=0.5 \
BB ns=1 fs=$fs ng=1 fg=$fg nt=$nt dt=$dt |
CC suaddhead nt=$nt |
DD sushw key=dt,tracl,tracr,fldr,tracf,trid,offset,sx,gx \
EE a=4000,$tracl,$tracl,$fldr,$tracf,1,$offset,$sx,$gx >> temp$k
FF
GG k = k + 1
HH
II done
JJ j = j + 1
KK
LL done
MM i= i + 1
NN
OO done
Variable offset (line O) ranges from -1475 to +1475, incrementing by 50 meters. The
values of offset as it passes through zero offset are … -125, -75, -25, +25, +75, +125, etc.
This simple numbering scheme cleverly skips the geophysically impossible zero offset
acquisition position.
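The offset pattern is easy to verify in isolation. This sketch reproduces line O of the pseudo-code for one shot (pure shell; the script itself uses bc, but expr suffices for integers and no SU programs are needed):

```shell
# Reproduce the 60 split-spread offsets: j*50 - 1475 for j = 0..59.
# The sequence runs -1475, -1425, ..., -25, +25, ..., +1475 and
# never equals zero.
j=0
while [ "$j" -ne 60 ]
do
  offset=`expr $j \* 50 - 1475`
  echo "$offset"
  j=`expr $j + 1`
done
```

The first line printed is -1475, the last is 1475, and no line is 0.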
Let's use surange to examine the trace headers:
surange < seis1.su
The surange output is:
2400 traces:
tracl=(1,2400) tracr=(1,2400) fldr=(1,40) tracf=(1,60) trid=1
offset=(-1475,1475) sx=(0,1950) gx=(-1475,3425) ns=751 dt=4000
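These extremes can be checked directly against the loop formulas (a sketch; 40 shots and 60 geophones at 50-m spacing, with i = 0..39 and j = 0..59 as in the pseudo-code above):

```shell
# Largest/smallest header values predicted by the loop arithmetic:
# sx = i*50, gx = i*50 + j*50 - 1475, tracl = i*60 + j + 1.
sx_max=`expr 39 \* 50`
gx_min=`expr 0 \* 50 + 0 \* 50 - 1475`
gx_max=`expr 39 \* 50 + 59 \* 50 - 1475`
tracl_max=`expr 39 \* 60 + 59 + 1`
echo "$sx_max $gx_min $gx_max $tracl_max"   # 1950 -1475 3425 2400
```

These agree with the surange output: sx=(0,1950), gx=(-1475,3425), tracl=(1,2400).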
48 gx=`bc -l <<-END
49 $i * 50 + $j * 50 + 525
50 END`
Let's use surange to examine the trace headers:
surange < seis2.su
The surange output is:
2400 traces:
tracl=(1,2400) tracr=(1,2400) fldr=(1,40) tracf=(1,60) trid=1
offset=(-1475,1475) sx=(2000,3950) gx=(525,5425) ns=751 dt=4000
We see that headers sx and gx of seis2.su differ from seis1.su. All other headers are
the same as before.
The only other ways acq2.sh differs from acq1.sh are lines 2 and 7:
2 # File: acq2.sh
7 num=2
That is, the file name is internally documented and Model 2 is called instead of Model 1.
48 gx=`bc -l <<-END
49 $i * 50 + $j * 50 - 475
50 END`
Let's use surange to examine the trace headers:
5.8 Computer Note 8: Use, Modify a Command from the History List
The shell has a record of your previous commands. You can use this “history” to
repeat previous commands or recall and modify previous commands.
which made a .eps file of Model 2, shot 25. Now suppose you want to make a .eps file of
Model 3, shot 25. In other words, you want to change “2” to “3.”
1. Press the up arrow key once to return to the previous command.
2. Press the left arrow key several times, until the cursor is between the “2” and the
“25.”
3. Press the delete key once. The “2” is now erased.
4. Press the I key once. This tells the shell you want to “insert” on the line.
5. Press “3.” The model number is now changed.
6. Press “Enter” or “Return” to execute the command.
If you want to make other changes on the line, like changing the shot number, use the
left or right arrow keys to move the cursor to the right of the delete/insert point. Then, use
“delete” or the “I” key to make your change.
Also note:
7. Pressing the “Escape” key cancels the “insert” mode.
8. If you are not in “insert” mode, you can press “U” to undo a previous change.
When you are finished changing the line, execute your new command.
Notice that lines 68-74 and lines 91-94 are commented out with two “#” characters.
We know only one is necessary; we use two to mark comments we intend to be temporary.
This script would have created 2400 traces. The output file contains 2400 lines and
occupies less than 2 Mbytes. Below are the first ten lines and the last ten lines of
survey2.txt.
Sx=2000 Gx=525 fldr=1 Offset=-1475 tracl=1 fs=0 fg=0
Sx=2000 Gx=575 fldr=1 Offset=-1425 tracl=2 fs=0 fg=.05
Sx=2000 Gx=625 fldr=1 Offset=-1375 tracl=3 fs=0 fg=.10
Sx=2000 Gx=675 fldr=1 Offset=-1325 tracl=4 fs=0 fg=.15
Sx=2000 Gx=725 fldr=1 Offset=-1275 tracl=5 fs=0 fg=.20
Sx=2000 Gx=775 fldr=1 Offset=-1225 tracl=6 fs=0 fg=.25
Sx=2000 Gx=825 fldr=1 Offset=-1175 tracl=7 fs=0 fg=.30
Sx=2000 Gx=875 fldr=1 Offset=-1125 tracl=8 fs=0 fg=.35
Sx=2000 Gx=925 fldr=1 Offset=-1075 tracl=9 fs=0 fg=.40
Sx=2000 Gx=975 fldr=1 Offset=-1025 tracl=10 fs=0 fg=.45
Build Model 4, Acquire a Line, Display Gathers, QC
In the previous two chapters, we developed three simple models and used them to
acquire 2-D lines of seismic data. In this chapter, we combine some of the attributes of
those models to create a fourth model and use the model to acquire a 2-D line of seismic
data.
In the following two sections, we explain the model and acquisition scripts in detail;
you do not have to be familiar with the related scripts that are in the previous two
chapters.
The next three sections of this chapter explain how to build the model, acquire
seismic data, and view selected gathers. While those sections are important, the last two
sections about quality control (QC) are equally important. The QC sections explain how
you can acquire survey information and seismic data from selected portions of the model.
You can examine preliminary survey information and seismic data to increase your
confidence in your model and your acquisition script before spending time acquiring the
full seismic data set.
35 6 xedge=1.80,2.00,2.20 \
36 zedge=0.98,1.05,1.03 \
37 sedge=0,0,0 \
38 7 xedge=7.0,7.5,8.0,8.5,9.0,9.50,10.0,11.0,12. \
39 zedge=0.5,0.6,0.7,0.8,0.9,0.91,0.88,0.72,0.5 \
40 sedge=0,0,0,0,0,0,0,0,0 \
41 8 xedge=7.0,7.5,8.0,8.50,9.00,9.50,10.0,10.5,11.0,11.5,12.0 \
42 zedge=0.5,0.7,0.9,1.02,1.15,1.25,1.25,1.20,1.15,1.08,1.00 \
43 sedge=0,0,0,0,0,0,0,0,0,0,0 \
44 9 xedge=-2,12 \
45 zedge=2,2 \
46 sedge=0,0 \
47 kedge=1,2,3,4,5,6,7,8,9 \
48 sfill=0.0,0.20,0,0,0.44,0,0 \
49 sfill=0.0,0.50,0,0,0.16,0,0 \
50 sfill=0.0,1.20,0,0,0.08,0,0 \
51 sfill=0.0,1.80,0,0,0.07,0,0 \
52 sfill=2.0,1.02,0,0,0.11,0,0 \
53 sfill=10.,1.00,0,0,0.09,0,0 > $modfile
54 ## x,z
55
56 # Create Encapsulated PostScript (EPS) image of model
57 spsplot < $modfile > $psfile \
58 gedge=0.5 gtri=2.0 gmin=0.0 gmax=5.0 \
59 title="Earth Model $num" \
60 labelz="Depth (km)" labelx="Distance (km)" \
61 wbox=5.25 hbox=0.75 dxnum=2.0 dznum=1.0
62
63 # Exit politely from shell
64 exit
65
This script can be divided into sets:
• System: Line 1 invokes the shell, line 5 turns on messages, and line 64 exits the
shell.
• Variables: Line 8 lets us vary the number of each run as we perfect the script (2a,
2b, etc.). This number becomes part of output file names. Line 11 assigns a name
to the output binary model file that is used on line 53. The output binary model
file is the prime reason for this script. Line 14 assigns a name to the output .eps
image file that is used on line 57.
• Bookkeeping: Line 17 removes a previous image file. This line is optional. On
some systems, the program crashes if a .eps file with the same name already
exists. Usually files are overwritten.
• Program trimodel: Lines 19-53 create the model. Notice that line 54 is a
comment. We use line 54 to remind us that the first two entries of line sfill are x-z
values. The very important sfill line is discussed below.
• Program spsplot: Lines 57-61 create the .eps image file.
Let’s examine program trimodel. Program trimodel fills the model with triangles of
(1/velocity)2. While (1/velocity) is called “slowness,” (1/velocity)2 is called “sloth.” A
sloth is a slow-moving tree-dwelling mammal found in Central and South America.
Program trimodel’s use here can be divided into five parts:
1. Line 19 defines the model dimensions.
2. The nine sets of xedge, zedge, and sedge triplets define layer boundaries
(interfaces). Each xedge, zedge, sedge triplet must have the same number of
values. For example, triplet 1 that defines the top of the model has two values to
define the flat layer:
20 1 xedge=-2,12 \
21 zedge=0,0 \
22 sedge=0,0 \
And, parameters xedge, zedge, sedge for interface two have eight values (as many
values as we thought necessary to define the interface):
23 2 xedge=-2.,0.00,2.0,4.0,6.0,8.00,10.0,12.0 \
24 zedge=0.3,0.32,0.3,0.6,0.2,0.25,0.25,0.25 \
25 sedge=0,0,0,0,0,0,0,0 \
Line sedge has only zeros because all layers are isotropic and homogeneous. Line
sedge would have non-zero values if we want a layer to have velocity gradients.
3. Use kedge, line 47, to list your interface numbers. This is important for your
acquisition script (the next section). An interface NOT listed here will NOT be
seen by a later acquisition script.
4. In this model that has nine boundaries, there are six layers (Figure 6.1): four
somewhat horizontal layers that extend across the model, a buried channel
(between interfaces 5 and 6), and a pinchout (between interfaces 7 and 8).
Therefore, there are six sfill lines, lines 48-53. These lines are written for
isotropic, homogeneous layers, which is why most of the values are zero.
Figure 6.1: Model 4 is a 2-D slice of a marine environment (the top layer is water).
(TOP) The earth model. (MIDDLE) The simple model comprised of four layers; that is,
interfaces 1 (top), 2, 3, 4, and 9 (bottom). (BOTTOM) The model with the small channel
diffractor (interfaces 5 and 6) and the pinchout (interfaces 7 and 8). Interface 7 exactly
matches interface 3 from x = 7 km to x = 12 km. Interfaces 7 and 8 close (pinch out) at
x = 7 km.
defines the sloth value that fills the layer. Remember, this model contains only
isotropic, homogeneous layers.
Table 6.2: Values of sfill in Model 4
x (km) z (km) s00 (sloth) velocity = 1/√s00 (m/s) Layer boundaries
0.0 0.20 0.44 1508 1-2
0.0 0.50 0.16 2500 2-3
0.0 1.20 0.08 3536 3-4
0.0 1.80 0.07 3780 4-9
2.0 1.02 0.11 3015 5-6
10. 1.00 0.09 3333 7-8
Figure 6.2 shows that the sloth values generally decrease (P-wave velocities
generally increase) with depth.
Figure 6.2: Model 4 layer velocities. Top: Sloth values. Bottom: Velocities in m/s.
To make interfaces 5 and 6 (below) create a layer, the two ends of the interfaces
intersect (see the bold x,z values below).
32 5 xedge=1.80,2.00,2.20 \
33 zedge=0.98,1.00,1.03 \
34 sedge=0,0,0 \
35 6 xedge=1.80,2.00,2.20 \
36 zedge=0.98,1.05,1.03 \
37 sedge=0,0,0 \
Interfaces 7 and 8 (below) have the same starting xedge (7.0) and zedge (0.5)
values (in bold below), ensuring that they form a pinchout. The interfaces form a
closed layer because the right ends intersect the right edge of the model.
38 7 xedge=7.0,7.5,8.0,8.5,9.0,9.50,10.0,11.0,12. \
39 zedge=0.5,0.6,0.7,0.8,0.9,0.91,0.88,0.72,0.5 \
40 sedge=0,0,0,0,0,0,0,0,0 \
41 8 xedge=7.0,7.5,8.0,8.50,9.00,9.50,10.0,10.5,11.0,11.5,12.0 \
42 zedge=0.5,0.7,0.9,1.02,1.15,1.25,1.25,1.20,1.15,1.08,1.00 \
43 sedge=0,0,0,0,0,0,0,0,0,0,0 \
5. Line 53 has the variable name of the output binary model file.
Program spsplot plots a triangulated sloth function as an encapsulated Postscript file.
So, we see that spsplot is designed to accompany trimodel! If you want to see the
triangles that are used to build the model, change gtri=2.0 to gtri=1.0.
On line 61, wbox=5.25 and hbox=0.75 ensure that the .eps image has the same
proportions as the actual model (line 19). It is important for us to specify the same width
and height in different programs within the same script so images created by different
programs exactly overlap. Also on line 61, dxnum and dznum specify the annotation
increments in the respective directions.
15
16 # Name output seismic file
17 outseis=seis${num}.su
18
19 # Remove survey file
20 rm -f survey${num}.txt
21 # Name survey file
22 survey=survey${num}.txt
23
24 #=================================================
25 # Create the seismic traces with "triseis"
26 # i-loop = 200 source positions
27 # j-loop = 60 geophone positions (split-spread)
28 # per shot position
29 # k-loop = layers 2 through 8
30 # (do not shoot layers 1 and 9)
31
32 echo " --Begin looping over triseis."
33
34 i=0
35 while [ "$i" -ne "200" ]
36 do
37
38 fs=`bc -l <<-END
39 $i * 0.05
40 END`
41 sx=`bc -l <<-END
42 $i * 50
43 END`
44 fldr=`bc -l <<-END
45 $i + 1
46 END`
47
48 j=0
49 while [ "$j" -ne "60" ]
50 do
51
52 fg=`bc -l <<-END
53 $i * 0.05 + $j *0.05
54 END`
55 gx=`bc -l <<-END
56 $i * 50 + $j * 50 - 1475
57 END`
58 offset=`bc -l <<-END
59 $j * 50 - 1475
60 END`
61 tracl=`bc -l <<-END
62 $i * 60 + $j + 1
63 END`
64 tracf=`bc -l <<-END
65 $j + 1
66 END`
67
68 echo " Sx=$sx Gx=$gx fldr=$fldr Offset=$offset tracl=$tracl\
69 fs=$fs fg=$fg"
70 echo " Sx=$sx Gx=$gx fldr=$fldr Offset=$offset tracl=$tracl\
71 fs=$fs fg=$fg" >> $survey
72
73 k=2
74 while [ "$k" -ne "9" ]
75 do
76
77 triseis < $inmodel xs=0,9.95 xg=-1.475,11.425 zs=0,0 zg=0,0 \
Line 10 also assigns the number of time samples (nt=501) and the time sample
interval in seconds (dt=0.004). The output traces are 2 seconds long.
Note: Check the unit of the time sample interval in each SU program. The unit varies.
Lines 70 and 71 are almost the same as the previous two lines. However, where lines
68 and 69 write to the screen, lines 70 and 71 write values to a disk file. Although the
survey file for model 4 contains 12000 lines, it occupies only 1 megabyte. Line 20
removes this file at the beginning of each run. If this line were not used, successive runs
would be appended (>>) to the end of previous runs. The survey file is named on line 22.
20 rm -f survey${num}.txt
22 survey=survey${num}.txt
70 echo " Sx=$sx Gx=$gx fldr=$fldr Offset=$offset tracl=$tracl\
71 fs=$fs fg=$fg" >> $survey
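The append behavior that line 20 guards against is easy to demonstrate in any shell. The sketch below is a throwaway illustration, not part of the acquisition script; the temporary file name comes from mktemp:

```shell
#! /bin/sh
# Demonstrate why the survey file must be removed before each run:
# ">>" appends, so a second "run" would pile onto the first.
f=`mktemp`
echo "run 1" >> "$f"    # first run writes one line
echo "run 2" >> "$f"    # a later run appends a second line
wc -l < "$f"            # two lines: both runs are in the file
rm -f "$f"
```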
The core of the script is the three do-while loops (Section 2.3.13) over triseis,
suaddhead, and sushw. Each time these programs are used, triseis creates a seismic
trace, suaddhead creates a trace header on the trace, and sushw writes values to the trace
headers.
• The outermost i-loop goes over 200 source positions, 0 to 199 (lines 34 and 35).
• The j-loop goes over 60 geophone positions, 0 to 59 (lines 48 and 49).
• The innermost k-loop goes over the reflectors, 2 to 8 (lines 73 and 74).
Consider the innermost loop. The value of k signifies the reflector from which triseis
records a reflection (kreflect=$k, line 79) and the temp file in which the trace is stored
(temp$k, line 83). Because k loops over reflectors 2 to 8, seven temp files are created.
(The top and base of the model are ignored.) Each temp data set has (in this case) 12000
traces. After all traces are created, lines 101 to 106 “sum” the seven data sets, trace-for-
trace.
The following pseudo-code of the do-while loops is presented to help us consider
how the variables are used. The program in which the variable is used is printed in bold
to the right on the assignment line. Variable k is underlined in lines T, Y, and CC to
remind us of its use as discussed in the preceding paragraph.
Variable fs (line E) is the source location relative to the start of the source surface.
The source surface is defined in line Y as variable xs. As i increments, the source moves
across the model. Both fs and sx are the source position. Variable fs is in model units,
kilometers, and is used by triseis; variable sx is an integer number of meters and is
computed for a header value.
Variable fg (line M) is the geophone location relative to the start of the geophone
surface. The geophone surface is defined in line Y as variable gx. As j increments, the
geophone moves across the model. Both fg and gx are the geophone position. Variable fg
is in model units, kilometers, and is used by triseis; variable gx is an integer number of
meters and is computed for a header value.
Variable fldr (line G) is a header value that identifies each shot gather. Variable tracl
(line P) numbers the traces continuously throughout the line. Variable tracf (line Q)
numbers the traces within each gather.
A i=0
B while i ne 200
C do
D
E fs = i * 0.05 triseis
F sx = i * 50 sushw
G fldr = i + 1 sushw
H
I j=0
J while j ne 60
K do
L
M fg = i * 0.05 + j * 0.05 triseis
N gx = i * 50 + j * 50 - 1475 sushw
O offset = j * 50 - 1475 sushw
P tracl = i * 60 + j + 1 sushw
Q tracf = j + 1 sushw
R
S k=2
T while k ne 9
U do
V
W triseis < $inmodel xs=0,9.95 xg=-1.475,11.425 zs=0,0 zg=0,0 \
X nangle=$nangle fangle=$fangle langle=$langle \
Y kreflect=$k krecord=1 fpeak=40 lscale=0.5 \
Z ns=1 fs=$fs ng=1 fg=$fg nt=$nt dt=$dt |
AA suaddhead nt=$nt |
BB sushw key=dt,tracl,tracr,fldr,tracf,trid,offset,sx,gx \
CC a=4000,$tracl,$tracl,$fldr,$tracf,1,$offset,$sx,$gx >> temp$k
DD
EE k = k + 1
FF
GG done
HH j = j + 1
II
JJ done
KK i = i + 1
LL
MM done
Variable offset (line O) ranges from -1475 to +1475, incrementing by 50 meters. The
values of offset as it passes through zero offset are … -125, -75, -25, +25, +75, +125, etc.
This simple numbering scheme cleverly skips the geophysically impossible zero offset
acquisition position.
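The offsets nearest the source can be listed with a small standalone loop in the style of the acquisition script (shell arithmetic is used here for brevity instead of bc):

```shell
#! /bin/sh
# Offsets for the six traces nearest the source (j = 27 through 32)
j=27
while [ "$j" -ne 33 ]
do
  echo $(( j * 50 - 1475 ))
  j=$(( j + 1 ))
done
# prints -125 -75 -25 25 75 125 -- zero offset never occurs
```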
Let's use surange to examine the trace headers:
surange < seis4.su
The surange output is:
12000 traces:
tracl=(1,12000) tracr=(1,12000) fldr=(1,200) tracf=(1,60) trid=1
offset=(-1475,1475) sx=(0,9950) gx=(-1475,11425) ns=501 dt=4000
acq4.sh). We do it for the same reason we make the first model interface the top
interface. We do it because this approach simplifies designing the k-loop (script
acq4.sh, lines 73-87). By making the model’s bottom interface the last one (and
the top interface the first), the k-loop (triseis, suaddhead, sushw) begins at the
Figure 6.3: Shot gather 45 from Model 4, shot almost directly over the channel.
The images below show selected shot gathers acquired over Model 4.
Figure 6.4.1: Shot gathers from Model 4. Numbers correspond to shot points.
Figure 6.4.2: Shot gathers from Model 4. Numbers correspond to shot points.
Figure 6.4.3: Shot gathers from Model 4. The channel creates the diffraction in SP 40
and SP 50. We first see the right side pinch-out in SP 150. Due to divergent ray paths, the
bottom layer does not return reflections in SP 150, SP 160, and SP 170.
9
10 # Set messages on
11 set -x
12
13 # Window one "field record"
commands shotrwd2.sh to start here (./), go to subdirectory data, read file seis4.su (line
14), extract shot 150, and write file shot4150.su to subdirectory data below the current
directory (./) (line 16). Then line 19 says: from here (./), go to subdirectory data, read
the single gather of field record 150, and display it as a wiggle plot.
Note: You do not need to use ./ in the above script. The following script is just as
effective.
1 #! /bin/sh
2
3 # shotrwd3.sh: Window one "field record" from file seis#.su
4 # where # represents the model number.
5 # Outputs: .su file of the shot gather
6 # wiggle image of the shot gather
7 # Use: shotrwd3.sh model shot
8 # Example: shotrwd3.sh 3 20
9
10 # Set messages on
11 set -x
12
13 # Window one "field record"
14 suwind < data/seis$1.su \
15 key=fldr min=$2 max=$2 \
16 > data/shot$1$2.su
17
18 # Make wiggle plot
19 suxwigb < data/shot$1$2.su \
20 title="SP # $2 [$1]" key=offset \
21 label1=" Time (s)" label2="Offset (m)" \
22 x2beg=-1500 x2end=1500 perc=99 &
23
24 # Exit politely from shell
25 exit
26
Our script is in directory scripts. On line 14, “../” directs the system to go up one
directory (to directory worksu). The rest of the path name specifies the data file going
down from directory worksu.
You can use “../” repeatedly to climb up the directory structure. For example,
suwind < ../../worksu/data/seis$1.su \
tells the system to go from directory scripts up to worksu, up to forel, down to worksu,
down to data. Don’t specify file names going up.
11 set -x
12
13 # Window one temporary "field record"
14 suwind < seis$1shot.su key=fldr min=$2 max=$2 > temp$1$2.su
15
16 # Make wiggle plot
17 suxwigb < temp$1$2.su title="SP # $2 [$1]" key=offset \
18 label1=" Time (s)" label2="Offset (m)" \
19 x2beg=-1500 x2end=1500 perc=99 &
20
21 # Create .eps image of a shot gather
22 supswigp < temp$1$2.su title="SP # $2 [$1]" key=offset \
23 label1="Time (s)" label2="Offset (m)" \
24 x2beg=-1500 x2end=1500 perc=99 > shot$1$2.eps &
25
26 # Remove temporary gather
27 rm -f temp$1$2.su
28
29 # Exit politely from shell
30 exit
31
Entering
showshotB.sh 4 50
commands showshotB.sh to read file seis4shot.su, extract shot 50, and create file
shot450.eps. That is, the output .eps file name has the number of the model and the
number of the shot, as highlighted in line 24. The title on the output images (lines 17 and
22) contains the abbreviation SP (shot point), the number of the shot gather, and in square
brackets, the number of the seismic file (Figure 6.3).
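The file-name construction is plain positional-parameter concatenation. This sketch simulates the command line showshotB.sh 4 50:

```shell
#! /bin/sh
# Simulate "showshotB.sh 4 50": $1 and $2 concatenate with no separator
set -- 4 50
echo "shot$1$2.eps"    # shot450.eps
```

Note that the pattern is ambiguous: shot450.eps could in principle also be model 45, shot 0. A separator or zero-padded shot number would avoid that, at the cost of longer names.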
This script does not output a .su file. A temporary .su file is created on line 14, is used
by suxwigb (line 17) and supswigp (line 22), and is removed by line 27. If you want a
permanent .su file, follow the instructions in section 6.5.
From the previous chapter, we have a 2-D line of seismic data. In this chapter, we:
• sort the shot gathers to common midpoint (CMP) gathers and
• perform velocity analysis on selected CMP gathers.
Because our synthetic data are noise-free, we can proceed quickly from shot gathers to
migration. If we wanted to, we could use suaddnoise to add noise to the data; this is
often useful. You should consider adding some noise to the data set, then processing it
using the examples we provide in this chapter and the next.
Let’s discuss “common-depth-point” (CDP) and “common midpoint” (CMP) (Sheriff,
2002). We try to avoid the term CDP because there is no common (same) point at the
reflector if the reflector dips. On the other hand, common-midpoints almost always exist
because they are defined by the geometric midpoint between sources and receivers.
Because SU does not have a cmp key, we reluctantly use the cdp key.
Figure 7.1: Left: The common-depth-point (CDP) and the common-midpoint (CMP) are
the same because the reflector is flat. Right: The reflections in this CMP gather do not
share a CDP because the reflector dips.
7-1
Model 4: Sort, Velocity Analysis
9 # file 1 2
10 susort > cmp4.su cdp offset
11
12 # Exit politely from shell
13 exit 0
14
Let’s try to understand how the first part of the script works. Typically, when we
assign cdp values, we want the output CMP gathers to increment by “1.”
Recall that receiver positions increment by 50 meters and the source positions also
increment by 50 meters. In the equation above, scalar “a” is set to 50 meters greater than
the largest positive offset. Scalar “d” is chosen to normalize the values computed in the
numerator so cdp values increment simply.
Table 7.1 shows headers sx and gx of the first and last traces of the first ten shots, as
well as the computed cdp values. You can see that cdp values slowly increase in a regular
way.
Table 7.1: cdp values computed from headers sx and gx

                      cdp    sx     gx
Shot 1   tracf = 1      1     0  -1475
         tracf = 60    60     0   1475
Shot 2   tracf = 1      3    50  -1425
         tracf = 60    62    50   1525
Shot 3   tracf = 1      5   100  -1375
         tracf = 60    64   100   1575
Shot 4   tracf = 1      7   150  -1325
         tracf = 60    66   150   1625
Shot 5   tracf = 1      9   200  -1275
         tracf = 60    68   200   1675
Shot 6   tracf = 1     11   250  -1225
         tracf = 60    70   250   1725
Shot 7   tracf = 1     13   300  -1175
         tracf = 60    72   300   1775
Shot 8   tracf = 1     15   350  -1125
         tracf = 60    74   350   1825
Shot 9   tracf = 1     17   400  -1075
         tracf = 60    76   400   1875
Shot 10  tracf = 1     19   450  -1025
         tracf = 60    78   450   1925
Table 7.2 shows headers sx and gx of the first and last traces of the last ten shots, as
well as the computed cdp values. You can see that cdp values slowly increase. Also,
notice that the last cdp value is 458 – recall that there are 176 shot gathers!
Table 7.2: cdp values computed from headers sx and gx

                       cdp    sx     gx
Shot 191  tracf = 1    381  9500   8025
          tracf = 60   440  9500  10975
Shot 192  tracf = 1    383  9550   8075
          tracf = 60   442  9550  11025
This script assumes that all seismic data are in files of the same type of name as
underlined in line 20 (cmp1.su, cmp2.su, etc.), and in the same directory as the script.
1 #! /bin/sh
2
3 # File: showcmp.sh: Window one CMP from file seis#.su
4 # where # represents the model number.
5 # Specify input model number and output CMP
6 # Input: CMP file
7 # Outputs: CMP gather in .su format
8 # wiggle image of the CMP gather
9 # .eps file of the CMP gather
10 # Use: showcmp.sh [model] [cmp]
11 # Example: showcmp.sh 4 20
12
13 # Set messages on
14 ##set -x
15
16 # Display a CMP gather between 1 and 458
17 if [ $2 -le 458 ]; then
18 if [ $2 -ge 1 ]; then
19
20 suwind < cmp$1.su key=cdp min=$2 max=$2 > cmp$1$2.su
21
22 suxwigb < cmp$1$2.su title="CMP # $2 [$1]" key=offset \
23 label1=" Time (s)" label2="Offset (m)" \
24 perc=99 &
25
26 supswigp < cmp$1$2.su title="CMP # $2 [$1]" key=offset \
27 label1="Time (s)" label2="Offset (m)" \
28 x2beg=-1500 x2end=1500 perc=99 > cmp$1$2.eps
29
30 exit
31
32 fi
33 fi
34
35 echo usage: showcmp.sh [model number] [CMP between 1 and 458]
36
37 # Exit politely from shell
38 exit
39
If you think there is a problem with the script or how you are using the script,
uncomment line 14.
7.5 Fold
After sorting to CMP-order, we have 458 gathers. But, the first gathers and the last
gathers of the line are not full fold. See Figure 7.2, below. (We describe the program that
generated the data for Figure 7.2, sukeycount, in Appendix C.) Upon inspection, we find
that the first full-fold CMP gather is CMP 60 and the last full fold CMP gather is CMP
400. Full fold is 30 traces, in contrast to the shot gather full fold of 60 traces. This is an
expected outcome of sorting based on the acquisition geometry (see Sheriff, 2002:
Stacking Chart).
To make reasonably detailed velocity analysis at a regular interval along the line, we
choose to analyze eighteen full-fold CMPs: 60, 80, 100, …, 360, 380, 400.
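The analysis locations are simply every twentieth CMP; a one-liner confirms the count (seq is assumed available, as on most Linux systems):

```shell
#! /bin/sh
# CMPs 60, 80, ..., 400 in steps of 20: should be eighteen of them
seq 60 20 400 | wc -l    # 18
```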
ns=1575 dt=4000
Figure 7.3: Original data file oz14.su has 48 traces and is 6.3 seconds long.
The following script, oz14prep.sh, prepares oz14.su for velocity analysis.
1 #! /bin/sh
2
3 # File: oz14prep.sh: Prepare oz14.su for velocity analysis
4 # Input (1): shot gather
5 # Output (1): modified CMP gather
6 # Use: oz14prep.sh
7
8 # Set messages on
9 set -x
10
11 # Name data sets
12 indata=oz14.su
13 outdata=oz14h.su
14
15 # Use only first 4 seconds of data
16 suwind < $indata tmax=4 |
17
18 # Gain
19 sugain tpow=2 |
20
21 # Convert from shot gather to CMP gather
22 # by making all cdp values = 14
23 suchw key1=cdp a=14 b=0 |
24
25 # Add offset values to offset key
26 suchw key1=offset a=11250 b=-220 key2=tracf |
27
28 # Mute
29 sumute key=tracf \
30 xmute=1,46,48 \
31 tmute=2.250,0.500,0.004 \
32 > $outdata
33
34 # Plot
35 suxwigb < $outdata key=offset perc=90 &
36
37 # Exit politely from shell
38 exit
39
In oz14prep.sh, the data are prepared in the following ways:
• Line 16 windows the first 4 seconds of data.
• Line 19 applies a classic power of 2 time-varying gain.
• Line 23 “effectively” converts the shot gather to a CMP gather by making all
values of key cdp the same. (See Section 3.7.)
• Line 26 puts values into key offset according to either of Yilmaz’ books: Table 1-
8 of Seismic Data Processing (1987) or Table 1-13 of Seismic Data Analysis
(2001): the nearest offset trace is 69 feet from the source and trace spacing is 220
feet. Since the first trace is the farthest offset trace, parameter a has the farthest
offset and parameter b is used to subtract the trace spacing from parameter a.
• Lines 29-31 mute the refractions. Figure 7.4, below left, shows the data after
oz14prep.sh. For comparison, the right side of Figure 7.4 shows the data after all
the processing of oz14prep.sh, but without the mute.
Figure 7.4: Left: Output of script oz14prep.sh. Right: Same as Left, but without mute.
Figure 7.5: CMP 60 from Model 4. The lower and upper lines (hyperbolas) are the
lower and upper velocities necessary for velocity analysis.
(Remember that color figures and those requiring detail are included as TIFF files on
a CD accompanying this book, as listed in Appendix E.)
Unfortunately, it is not yet possible to push a button and have the computer figure out
the velocities that flatten the primary reflections. Remember, real seismic data have
multiples and noise that interfere with moveout analysis. The synthetic gather in Figure
7.5 also has noise, but not so much that we can't easily see the reflectors.
Here we use suxcontour to show black contour lines on a white background. We could
also use suximage to create color-area displays on a computer monitor.
Parameter nc is a sort of volume control – by varying the number of contour levels,
we coarsen or fine-tune the output of the semblance calculation.
Figure 7.7: Left: oz14h.su after NMO for multiples. Right: oz14h.su after NMO for
primaries.
the panel provide visual emphasis to the best NMO velocity, but the center CMP of each
panel is the one we study.
Figure 7.8: A constant velocity stack (CVS) of 10 panels made from CMP 60, Model 4.
Also, remember that the velocity spacing between panels (here) is 300 m/s. We
cannot determine a precise stacking velocity from the CVS plot, but it is valuable when
used with the velan. We will show this in a later section when we combine the original
CMP display, the velan, and the CVS plot in an interactive velocity analysis script.
interpolates velocity functions between CMPs, the interpolation is guided throughout the
time section.
The corrected CMP should exhibit flat primary reflections. After NMO is applied, an
upward curved primary reflection means the picked velocity is too slow (overcorrected)
and a downward curved primary reflection means the picked velocity is too fast
(undercorrected).
A fourth QC tool is provided when you decide to re-pick a CMP: the previous picks
are placed on the velan. This allows you to easily use or avoid previous picks.
Finally, how frequently along the seismic line must you do velocity analysis? It
depends on the complexity of the geology.
minimum reading: Section 7.6.6.2 (iva Displays) and Section 7.6.6.4 (iva.sh – User-
supplied Values).
7.6.6.1 iva.scr
You can do velocity analysis by creating a velan for each selected CMP along a line
(script oz14velan.sh, Section 7.6.2), then picking t-v pairs from the velan. After you have
t-v pairs for those selected CMPs, you can put them into a script that does NMO for the
line by interpolating between analyzed CMPs. However, if you do not like the flattened
CMPs or the later stack section, this style of velocity analysis is tedious to repeat. Script
iva.sh allows you to make picks interactively, view the flattened CMPs, and if necessary,
refine your picks.
As we explained in the previous section, we start interactive scripts by running a .scr
script. Here is iva.scr:
1 #! /bin/sh
2 # File: iva.scr
3 # Run this script to start script iva.sh
4
5 xterm -geom 80x20+10+545 -e iva.sh
6
Line 1 invokes the shell. Line 5 opens a dialog window of specified size and position.
Line 5 also starts iva.sh, the processing script, within that window.
• Put the mouse pointer over the first major event, at approximately 0.375 seconds
and velocity approximately 1500 m/s, then press letter “S.”
• Put the mouse pointer over the second major event, at approximately 0.7 seconds
and velocity approximately 2700 m/s, then press letter “S.”
• Put the mouse pointer over the third major event, at approximately 1.09 seconds
and velocity approximately 2900 m/s, then press letter “S.”
• You picked all the major events. Now make a final pick near the end time:
Following the t-v trend of the last two picks, put the mouse pointer near the last
time, 2 seconds, then press letter “S.”
• We are finished picking t-v pairs, so we press letter “Q.”
Figure 7.9: Left: The velan. Middle: The CVS plot. Right: A plot of the input CMP.
Figure 7.10: After picking. Left: The t-v graph. Middle-left: The CVS plot. Middle-right: The CMP after NMO. Right:
The stack trace repeated eight times.
Figure 7.11: The display for re-picking. Left: The color velan with the previous picks
superimposed. Middle-left: The CVS plot. Middle-right: The CMP after NMO. Right: A
plot of the input CMP.
The other plots are not just for show. At any time during picking, you can click the
middle mouse button in any window to get information. For example, by clicking near
the apex of the third reflector in the CMP window (Figure 7.12 below), the time, 1.06596
seconds, tells you where to expect the third reflector to show maximum semblance in the
velan. You can then use the middle mouse button to click in the velan window to find the
semblance maximum that corresponds to 1.06596 seconds. (Are the numbers really
accurate to five decimal places? No.)
Figure 7.12: The upper left corner of a plot shows y,x information when you click the
middle mouse button in a window. Left: In the CMP window, y,x corresponds to
time,offset. Right: In the velan window, y,x corresponds to time,velocity.
Note: Whenever an suximage window is active, you can scroll through color palettes
by repeatedly pressing "r," "R," "h," or "H." The following are lines from the ximage
selfdoc. (RGB stands for Red, Green, Blue; HSV stands for Hue, Saturation, Value.)
X Functionality:
Button 1 Zoom with rubberband box
Button 2 Show mouse (x1,x2) coordinates while pressed
q or Q key Quit
s key Save current mouse (x1,x2) location to file
p or P key Plot current window with pswigb (only from disk files)
(Move mouse cursor out and back into window for r,R,h,H to take effect)
Once “Q” is pressed when the velan window is active, the plots of Figure 7.10 appear.
The t-v graph looks good because (1) it starts at the top of the graph and ends near the
bottom, (2) it is smooth, and (3) it generally increases with depth.
The middle-right plot, the CMP after NMO, is also pleasing because it shows flat
reflectors.
The right plot, the repeat stack, shows strong amplitude peaks for the reflectors.
The dialog window shows the following:
t-v PICKS CMP 60
----------------------
0.0211082 1526.7
0.37467 1526.7
0.69657 2685
1.08707 2952.3
1.98945 3115.65
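iva.sh later reshapes these two-column picks into the tnmo=/vnmo= form that sunmo expects (the par.# files described in the script's file list). A hypothetical awk sketch of that reshaping -- the file name picks.txt and the awk approach are ours, not the script's:

```shell
#! /bin/sh
# Hypothetical reshaping of "t v" pick pairs into sunmo-style par lines
cat > picks.txt << 'EOF'
0.37467 1526.7
0.69657 2685
1.08707 2952.3
EOF
awk '{t = t sep $1; v = v sep $2; sep = ","}
     END {print "tnmo=" t; print "vnmo=" v}' picks.txt
# prints: tnmo=0.37467,0.69657,1.08707
#         vnmo=1526.7,2685,2952.3
rm -f picks.txt
```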
Return. (You will see this below in script iva.sh.) We press Return. The dialog window
shows:
-- CLOSING CMP 60 WINDOWS --
plan to re-run the script right away, you can remove the temporary files by
entering a command similar to line 127:
rm panel.* picks.* par.* tmp*
Line 158 sets variable repick to “false” since, at this point, the user has not yet
made a first pick.
• Lines 160-440 are the core processing and plotting.
• Lines 441-453 create the velocity picks output file (outpicks).
• Lines 454-461 write a message to the screen telling us the name of the output file
of picks, pause the script, remove all temporary files, and close the script. The
pause of line 459 also gives us the chance to examine the temporary files before
they are removed.
1 #! /bin/sh
2 # File: iva.sh
3 # Run script iva.scr to start this script
4
5 # Set messages on
6 ##set -x
7
8 #================================================
9 # USER AREA -- SUPPLY VALUES
10 #------------------------------------------------
11 # CMPs for analysis
12
13 cmp1=60 cmp2=80 cmp3=100
14 cmp4=120 cmp5=140 cmp6=160
15 cmp7=180 cmp8=200 cmp9=220
16 cmp10=240 cmp11=260 cmp12=280
17 cmp13=300 cmp14=320 cmp15=340
18 cmp16=360 cmp17=380 cmp18=400
19
20 numCMPs=18
21
22 #------------------------------------------------
23 # File names
24
25 indata=cmp4.su # SU format
26 outpicks=vpick.txt # ASCII file
27
28 #------------------------------------------------
29 # display choices
30
31 myperc=98 # perc value for plot
32 plottype=0 # 0 = wiggle plot, 1 = image plot
33
34 #------------------------------------------------
35 # Processing variables
36
37 # Semblance variables
38 nvs=100 # number of velocities
39 dvs=27 # velocity intervals
40 fvs=1200 # first velocity
41
42 # CVS variables
43 fc=1200 # first CVS velocity
44 lc=3900 # last CVS velocity
45 nc=10 # number of CVS velocities (panels)
46 XX=11 # ODD number of CMPs to stack into central CVS
47
48 #================================================
49
50 # HOW SEMBLANCE (VELAN) VELOCITIES ARE COMPUTED
51
52 # Last Vel = fvs + (( nvs-1 ) * dvs ) = lvs
53 # 4910 = 500 + (( 99-1 ) * 45 )
54 # 3873 = 1200 + (( 100-1 ) * 27 )
55
56 # Compute last semblance (velan) velocity
57 lvs=`bc -l <<-END
58 $fvs + (( $nvs - 1 ) * $dvs )
59 END`
60
61 #------------------------------------------------
62
63 # HOW CVS VELOCITIES ARE COMPUTED
64
65 # dc = CVS velocity increment
66 # dc = ( last CVS vel - first CVS vel ) / ( # CVS - 1 )
67 # m = CVS plot trace spacing (m = d2, vel units)
68 # m = ( last CVS vel - first CVS vel ) / ( ( # CVS - 1 ) * XX )
69
70 # j=1
71 # while [ j le nc ]
72 # do
73 # vel = fc + { [( lc - fc ) / ( nc-1 )] * ( j-1) }
74 # j = j + 1
75 # done
76 # EXAMPLE:
77 # vel = 1200+ ( (( 3900 - 1200 ) / ( 10-1 )) * ( 1-1) )
78 # vel = 1200+ ( (( 3900 - 1200 ) / ( 10-1 )) * ( 2-1) )
79 # .
80 # .
81 # .
82 # vel = 1200 + ( (( 3900 - 1200 ) / ( 10-1 )) * (10-1) )
83
84 #================================================
85
86 # FILE DESCRIPTIONS
87
88 # tmp0 = binary temp file for input CVS gathers
89 # tmp1 = binary temp file for output CVS traces
90 # tmp2 = ASCII temp file for managing picks
91 # tmp3 = binary temp file for stacked traces
92 # tmp4 = ASCII temp file for "wc" result (velan)
93 # tmp5 = ASCII temp file for stripping file name from tmp4 (velan)
94 # tmp6 = ASCII temp file to avoid screen display of "zap"
95 # tmp7 = ASCII temp file for picks
96 # tmp8 = binary temp file for NMO (flattened) section
97 # panel.$picknow = current CMP windowed from line of CMPs
98 # picks.$picknow = current CMP picks arranged as "t1 v1"
99 # "t2 v2"
100 # etc.
101 # par.# (# is a sequential index number; 1, 2, etc.)
102 # = current CMP picks arranged as
103 # "tnmo=t1,t2,t3,...
104 # "vnmo=v1,v2,v3,...
231
232 # Calculate trace spacing for CVS plot (m = d2, vel units)
233 # m = ( last CVS vel - first CVS vel ) / ( ( # CVS - 1 ) * XX )
234 m=`bc -l <<-END
235 ( $lc - $fc ) / ( ( $nc - 1 ) * $XX )
236 END`
237
238 # CVS velocity loop
239 j=1
240 while [ $j -le $nc ]
241 do
242
243 vel=`bc -l <<-END
244 $fc + $dc * ( $j - 1 )
245 END`
246
247 # uncomment to print CVS velocities to screen
248 ## echo " vel = $vel"
249
250 sunmo < tmp0 vnmo=$vel verbose=0 |
251 sustack >> tmp1
252
253 j=`expr $j + 1`
254 done
255
256 # Compute lowest velocity for annotating CVS plot
257 # loV = first CVS velocity - ( ( CMP range - 1 ) / 2 ) * vel inc
258 loV=`bc -l <<-END
259 $fc - ( $X / 2) * $m
260 END`
261
262 suximage < tmp1 xbox=322 ybox=10 wbox=300 hbox=450 \
263 title="CMP $picknow Constant Velocity Stacks" \
264 label1=" Time (s)" label2="Velocity (m/s)" \
265 f2=$loV d2=$m verbose=0 \
266 perc=$myperc n2tic=5 cmap=rgb0 &
267
268 fi
269
270 #------------------------------------------------
271 # Picking instructions
272 #------------------------------------------------
273
274 echo " "
275 echo "Preparing CMP $i of $numCMPs for Picking "
276 echo "Location is CMP $picknow "
277 echo " Start CVS CMP = $k1 End CVS CMP = $k2"
278 echo " "
279 echo " Use the semblance plot to pick (t,v) pairs."
280 echo " Type \"s\" when the mouse pointer is where you want a pick."
281 echo " Be sure your picks increase in time."
282 echo " To control velocity interpolation, pick a first value"
283 echo " near zero time and a last value near the last time."
284 echo " Type \"q\" in the semblance plot when you finish picking."
285
286 #------------------------------------------------
287 # Plot semblance (velan) (left)
288 #------------------------------------------------
289
290 # repick: 1=false, 0=true
291 if [ $repick -eq 0 ] ; then
292
293 #--- --- --- --- --- --- --- --- --- ---
357 fi
358
359 #------------------------------------------------
360 # Stack window (right)
361 #------------------------------------------------
362
363 j=1
364 while [ $j -le 8 ]
365 do
366
367 # Append stack trace into tmp3 multiple times
368 sustack < tmp8 >> tmp3
369
370 j=`expr $j + 1`
371 done
372
373 suxwigb < tmp3 xbox=946 ybox=10 wbox=200 hbox=450 \
374 title="CMP $picknow repeat stack trace" \
375 label1=" Time (s)" d2num=50 key=cdp \
376 verbose=0 perc=$myperc &
377
378 #------------------------------------------------
379 # Manage picks (2): Prepare picks for vel profile
380 #------------------------------------------------
381
382 sed < par.$i '
383 s/tnmo/xin/
384 s/vnmo/yin/
385 ' > par.uni.$i
386
387 #------------------------------------------------
388 # Velocity profile (left)
389 #------------------------------------------------
390
391 unisam nout=$nt fxout=$tstart dxout=$dt \
392 par=par.uni.$i method=mono |
393 xgraph n=$nt nplot=1 d1=$dt f1=$tstart x2beg=$fvs x2end=$lvs \
394 label1=" Time (s)" label2="Velocity (m/s)" \
395 title="CMP $picknow Stacking Velocity Function" \
396 -geometry 300x450+10+10 -bg white style=seismic \
397 grid1=solid grid2=solid linecolor=2 marksize=1 mark=0 \
398 titleColor=black axesColor=blue &
399
400 #------------------------------------------------
401 # Dialogue with user: re-pick ?
402 #------------------------------------------------
403
404 echo " "
405 echo " t-v PICKS CMP $picknow"
406 echo "----------------------"
407 cat picks.$picknow
408 echo " "
409 echo " Use the velocity profile (left),"
410 echo " the NMO-corrected gather (middle-right),"
411 echo " and the repeated stack trace (right)"
412 echo " to decide whether to re-pick the CMP."
413 echo " "
414 echo "Picks OK? (y/n) " > /dev/tty
415 read response
416
417 rm tmp*
418
419 # "n" means re-loop. Otherwise, continue to next CMP.
425 repick=0
426 cp picks.$picknow tmp7
427 ;;
428 *)
429 echo "$picknow $i" >> par.cmp
430 i=`expr $i + 1`
431 repick=1
432 echo "-- CLOSING CMP $picknow WINDOWS --"
433 zap xwigb > tmp6
434 zap ximage > tmp6
435 zap xgraph > tmp6
436 ;;
437 esac
438
439 done
440
441 #------------------------------------------------
442 # Create velocity output file
443 #------------------------------------------------
444
445 mkparfile < par.cmp string1=cdp string2=# > par.0
446
447 i=0
448 while [ $i -le $numCMPs ]
449 do
450 sed < par.$i 's/$/ \\/g' >> $outpicks
451 i=`expr $i + 1`
452 done
453
454 #------------------------------------------------
455 # Remove files and exit
456 #------------------------------------------------
457 echo " "
458 echo " The output file of t-v pairs is "$outpicks
459 pause
460 rm -f panel.* picks.* par.* tmp*
461 exit
462
Lines 13-18 list the CMPs. Here, the CMPs increment evenly by 20, but any values are acceptable. The CMPs are listed on six lines, but they could be listed on three lines, on two lines, or one per line (using 18 lines); the number of CMPs per line is not important. However, the style of these lines is very important. When the eval command operates (line 169), it
the number of single velocities to apply to those eleven CMPs; in other words, the
number of panels.
45 nc=10 # number of CVS velocities (panels)
46 XX=11 # ODD number of CMPs to stack into central CVS
If the first CMP is 60, CMP 60 is the panel’s center CMP. Lines 218-223 compute the minimum and maximum CMP values of this series of eleven CMPs. Line 224 windows the eleven CMPs from the input file and puts them in file tmp0. In lines 239-254, as each of the ten velocities is computed, NMO is applied to tmp0 and the CMPs are stacked. Line 251 uses append redirection (>>) to concatenate successive panels into file tmp1.
251 sustack >> tmp1
File tmp1 is plotted by suximage, lines 262-266. The CVS horizontal axis is velocity. Lines 234-236 compute the velocity trace spacing (m) and lines 258-260 compute the first (lowest) annotation velocity (loV) for the CVS suximage plot (line 265).
Lines 274-284 print the picking instructions to the screen. They also print information about the CMP that is about to be displayed (variables numCMPs, picknow, k1, and k2 in lines 275-277). Notice that lines 173-174 previously printed this information. We repeat the information because the system messages printed when sunmo runs scroll earlier text out of the viewing area.
Lines 315-321 create and plot the velan if this is the first time the velan is created for
the current CMP. Lines 299-311 are used if the velan is being displayed for re-picking. In
both cases, user-supplied values for nvs, dvs, and fvs are used (lines 304, 306, 315, 317).
For re-picking, we have to tell suximage, in line 311, the name of the file of the previous
picks (parameter curve) and how many t-v picks are in the file (parameter npair).
311 curve=tmp7 npair=$npair curvecolor=white
Why does file tmp7 hold the picks? Here we explain how picks are made and how
they get into tmp7. Notice that suximage, suxwigb, and xgraph always end with an
ampersand (&) except the two times suximage is used for picking: lines 305-311 and
lines 316-321. In these two uses of suximage, parameter mpicks is used and has the value
picks.$picknow:
310 grid1=solid grid2=solid mpicks=picks.$picknow \
321 grid1=solid grid2=solid mpicks=picks.$picknow
Because these uses of suximage do not use the ampersand (&), script control goes to
ximage. When the velan plot is active and you press “Q,” control leaves ximage and
returns to the script (see the ximage selfdoc).
Whether we are picking a CMP the first time, the second time, or … the picks go in
the same file. However, every time we choose to re-pick (the re-pick yes/no case is lines
420-437), the old picks are copied to tmp7 by line 426:
426 cp picks.$picknow tmp7
So, tmp7 has the old picks to overlay on the velan while new picking is going on.
Now we return to our travel down the script.
Lines 299-301 put the number of t-v picks (the number of lines) of tmp7 into variable npair. Unfortunately, command wc -l also outputs the file name. For example, wc -l on the iva.sh script returns:
·····449 iva.sh
Those dots show that the output line count (449) is preceded by five blank spaces. The
second part of line 299, the sed command, removes the preceding blank spaces (Lamb,
1990, page 91, number 11). We know that the name of the t-v pick file is tmp7; we use
that information on line 300 to remove the file name from tmp4. The sort on line 301 is
merely a device to copy the contents of tmp5 (the line count) to variable npair.
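As an aside, the file-name nuisance disappears if the pick file is redirected into wc instead of named as an argument; a minimal sketch (with made-up picks in tmp7, mirroring the script's file name):

```shell
# Three fake (t, v) picks, one per line, as tmp7 would hold them
printf '0.015 1500\n0.40 1520\n0.99 2600\n' > tmp7

# Redirection keeps the file name out of the wc output
npair=`wc -l < tmp7`
npair=`echo $npair`   # unquoted echo strips any leading blanks
echo "npair = $npair"

rm -f tmp7
```

Some systems still pad the count with leading blanks, which the unquoted echo removes, so no sed step is needed.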
After picking (after “Q” is pressed), lines 333-336 create a parameter (par) file
formatted for input to sunmo. Command mkparfile is an SU function that has a selfdoc.
The following are the contents of tmp2 after picking CMP 60:
cdp=60
tnmo=0.0158311,0.369393,0.69657,1.08707,1.98945
vnmo=1497,1526.7,2670.15,2952.3,3130.5
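The reshaping that produces a row like the one above can be imitated in a few lines of awk. This is only a sketch of the conversion from a two-column (t, v) pick file to the comma-separated tnmo= and vnmo= rows; the script itself uses the SU program mkparfile, and the picks below are abbreviated.

```shell
# Two fake picks in the two-column (time, velocity) format of picks.60
printf '0.0158311 1497\n0.369393 1526.7\n' > picks.60

# Join column 1 into tnmo= and column 2 into vnmo=, comma-separated
tnmo=`awk '{printf "%s%s", s, $1; s=","}' picks.60`
vnmo=`awk '{printf "%s%s", s, $2; s=","}' picks.60`
echo "tnmo=$tnmo"
echo "vnmo=$vnmo"

rm -f picks.60
```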
File tmp2 is used by sunmo in line 346 to make the flattened CMP for a QC display.
The flattened CMP, file tmp8, is used by the next QC display, the repeat stack trace (lines
363-376). You might want fewer or more traces than the eight specified in line 364.
Lines 382-392 create a par file for the QC t-v graph. While sunmo requires tnmo-vnmo pairs, unisam requires xin-yin pairs (lines 383-384). SU program unisam creates a smooth line from the series of point pairs (lines 391-392), which xgraph then plots.
By the time we arrive at line 415, we are viewing the t-v graph, the CVS plot, the
flattened CMP, and the repeat stacks. Also, the dialog window shows the current t-v pairs
(line 407). Here, we have to choose whether to re-pick the CMP or accept our picks. No
matter which we choose, all files with the tmp prefix are removed at line 417 to clear the
files for the next round of displays.
If we choose to re-pick, the old picks are copied to tmp7 for overlay on the next velan
(line 426). If we choose to accept the current picks, the CMP counter increments (line
430), and all the plot windows are closed by lines 433-435. These lines direct (>) the
command zap to file tmp6 in order to minimize system messages to the screen.
Note: Line 2 of this file must be removed before it can be used in a script. A comment
line is not acceptable within a script command (see Section 8.2).
Model 4: T-V Picks QC, NMO, Stack
Now that we have a file of time-velocity (t-v) picks at selected CMPs, we check the
quality of those picks. Then, we apply NMO to the data. After reviewing the results of
NMO, it is simple (a one-line command) to stack the gathers.
For the sake of presentation in Section 8.2.1, what was one line in vpick4.txt (line 1) is now two lines in tvQC.sh (lines 13-14). Notice that THERE ARE NO SPACES AT THE END OF LINE 13 AND NO SPACES AT THE BEGINNING OF LINE 14.
We removed line 2 from file vpick4.txt. That line would be between lines 14 and 15
below. We removed it because A COMMENT LINE WITHIN THE LINES OF AN SU
COMMAND WILL MAKE THE SCRIPT CRASH.
We also removed the continuation mark from the end of line 50.
8.2 Time-Velocity QC
At the end of interactive velocity analysis (Section 7.6.6.10), we have a file that
contains a row of cdp values. Also within that file, there is a pair of tnmo values and
vnmo values for each cdp value. Before we use these cdp-tnmo-vnmo values in normal
moveout correction (NMO), we want to check the quality of our picks.
21 tnmo=0.0211082,0.437995,0.844327,0.949868,1.16623,1.98945 \
22 vnmo=1497,1526.7,2551.35,2432.55,2685,2685 \
23 tnmo=0.0105541,0.569921,1.00792,1.30343,1.99472 \
24 vnmo=1497,1541.55,2595.9,2848.35,3011.7 \
25 tnmo=0.0158311,0.686016,1.12401,1.04591,1.98945 \
26 vnmo=1482.15,1541.55,2729.55,3160.2,3293.85 \
27 tnmo=0.0158311,0.78628,1.19789,1.5409,1.99472 \
28 vnmo=1482.15,1511.85,3011.7,3694.8,3798.75 \
29 tnmo=0.0105541,0.765172,1.1029,1.45119,1.98945 \
30 vnmo=1497,1511.85,2937.45,3665.1,3858.15 \
31 tnmo=0.0158311,0.627968,0.907652,1.25594,1.99472 \
32 vnmo=1497,1556.4,2595.9,3056.25,3353.25 \
33 tnmo=0.0158311,0.469657,0.686016,1.07124,1.98417 \
34 vnmo=1482.15,1556.4,2521.65,2818.65,3026.55 \
35 tnmo=0.0158311,0.3219,0.501319,0.923483,1.98417 \
36 vnmo=1482.15,1541.55,2462.25,2670.15,2774.1 \
37 tnmo=0.0158311,0.226913,0.395778,0.870712,1.98417 \
38 vnmo=1482.15,1511.85,2521.65,2699.85,2848.35 \
39 tnmo=0.0105541,0.23219,0.390501,0.918206,1.99472 \
40 vnmo=1482.15,1526.7,2566.2,2878.05,3234.45 \
41 tnmo=0.0158311,0.258575,0.46438,1.99472 \
42 vnmo=1497,1526.7,2699.85,3368.1 \
43 tnmo=0.0158311,0.311346,0.538259,0.612137,1.98945 \
44 vnmo=1482.15,1526.7,2759.25,2952.3,3442.35 \
45 tnmo=0.0211082,0.353562,0.622691,0.744063,1.99472 \
46 vnmo=1467.3,1511.85,2774.1,3056.25,3457.2 \
47 tnmo=0.0211082,0.369393,0.691293,0.812665,1.99472 \
48 vnmo=1482.15,1511.85,2878.05,2982,3279 \
49 tnmo=0.0211082,0.348285,0.728232,0.881266,1.20844,1.98945 \
50 vnmo=1482.15,1511.85,2833.5,3026.55,3145.35,3323.55
51
52 # Exit politely from shell
53 exit
54
Before running tvQC.sh, we modified one time value on line 25. Below are the screen
messages from tvQC.sh.
tvnmoqc: This file has 18 CDPs.
tvnmoqc: tnmo values must increase for use in NMO
tvnmoqc: For cdp=160, check times 1.12401 and 1.04591.
tvnmoqc: End of cdp-tnmo-vnmo check.
First, if there had been no time series problems, only the first and last lines would
have printed to the screen. Second, no matter how many time series problems the input
file has, tvnmoqc reads the entire file. It is up to you, the user, to correct time series
problems before you use these lines for NMO (program sunmo). See Section 8.2.3.
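The monotonicity check that tvnmoqc applies can be sketched with a short awk stand-in: scan a comma-separated tnmo list and report any value that does not increase. The input is the bad time series from line 25; this is only an illustration, not the SU program's code.

```shell
# The altered tnmo series from line 25 (1.04591 is out of order)
bad=`echo "0.0158311,0.686016,1.12401,1.04591,1.98945" |
  awk -F, '{ for (i = 2; i <= NF; i++)
               if ($i + 0 <= $(i-1) + 0)
                 printf "check times %s and %s\n", $(i-1), $i }'`
echo "$bad"
```

Run on this series, the sketch flags the same pair that tvnmoqc reports: "check times 1.12401 and 1.04591".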
is 60. If we set prefix=velqc, the name of the first output file is velqc.60. The
contents of velqc.60 are:
0.0158311 1511.85
0.390501 1511.85
0.686016 2640.45
1.06069 2922.6
1.99472 3249.3
To overlay t-v picks on the velan, suximage needs data in columns, not rows.
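That row-to-column reshaping can be sketched with tr and paste: expand each comma list to one value per line, then pair the two lists. The tnmo/vnmo values below are the first two picks of velqc.60 shown above.

```shell
tnmo=0.0158311,0.390501
vnmo=1511.85,1511.85

# One value per line, then pair time with velocity
echo "$tnmo" | tr ',' '\n' > tmp.t
echo "$vnmo" | tr ',' '\n' > tmp.v
pairs=`paste tmp.t tmp.v`
echo "$pairs"

rm -f tmp.t tmp.v
```

The result is the two-column (t, v) layout that suximage parameter curve expects.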
Figure 8.1 shows CMP 60 with the velan and the picks on the velan. Figure 8.2 shows CMP 160 with the velan and the picks on the velan. The time value that we altered earlier has not been corrected, so we see a sharp upward excursion at the altered time. The contents of file velqc.160 are:
0.0158311 1482.15
0.686016 1541.55
1.12401 2729.55
1.04591 3160.2
1.98945 3293.85
As you can see in the file at the end of Chapter 7, the fourth time, 1.04591, should be 1.44591.
Below are the first screen messages from velanQC. Line numbers are added for discussion.
1
2 tvnmoqc: This file has 18 CDPs.
3
4 tvnmoqc: tnmo values must increase for use in NMO
5
6 tvnmoqc: For cdp=160, check times 1.12401 and 1.04591.
7
8 tvnmoqc: End of cdp-tnmo-vnmo check.
9
10 *** VELOCITY ANALYSIS QC ***
11
12 Preparing CMP 1 of 18 for Display
13
14 Picks for CMP 60:
15 -------------------------
16 0.0158311 1511.85
17 0.390501 1511.85
18 0.686016 2640.45
19 1.06069 2922.6
20 1.99472 3249.3
21 -------------------------
22
23 Press Return or Enter to see next gather
24 Or enter "x" to exit
25 x
26 Terminated
27 kill: 6548: no such process
28 Terminated
29 kill: 6557: no such process
30
31 ==> Closing velanQC
32
33 press return key to continue
34
Lines 1-8 are the same as those printed when mode=1 because the same checks are done to the cdp-tnmo-vnmo values while tvnmoqc reads them. After error-checking and diagnostic printing, all the files of t-v values are created.
Lines 12-24 are printed as each t-v file is read from disk, and as the CMP and velan
images are made on the screen.
In this example, instead of continuing to the next gather, the user enters “x” to exit the
script: line 25. As a result, messages are printed to the screen as the seismic display
windows are closed (lines 26-29) and the script announces that it is finished processing
(line 31). After the user presses Enter or Return (as instructed on line 33), the script ends.
46 240,260,280,300,320,340,360,380,400 \
47 tnmo=0.0158311,0.390501,0.686016,1.06069,1.99472 \
48 vnmo=1511.85,1511.85,2640.45,2922.6,3249.3 \
49 tnmo=0.0158311,0.353562,0.701847,1.0343,1.99472 \
50 vnmo=1482.15,1511.85,2625.6,2774.1,2982 \
51 tnmo=0.0158311,0.364116,0.765172,0.875989,1.08707,1.98945 \
52 vnmo=1511.85,1526.7,2595.9,2714.7,2625.6,2699.85 \
53 tnmo=0.0211082,0.437995,0.844327,0.949868,1.16623,1.98945 \
54 vnmo=1497,1526.7,2551.35,2432.55,2685,2685 \
55 tnmo=0.0105541,0.569921,1.00792,1.30343,1.99472 \
56 vnmo=1497,1541.55,2595.9,2848.35,3011.7 \
57 tnmo=0.0158311,0.686016,1.12401,1.04591,1.98945 \
58 vnmo=1482.15,1541.55,2729.55,3160.2,3293.85 \
59 tnmo=0.0158311,0.78628,1.19789,1.5409,1.99472 \
60 vnmo=1482.15,1511.85,3011.7,3694.8,3798.75 \
61 tnmo=0.0105541,0.765172,1.1029,1.45119,1.98945 \
62 vnmo=1497,1511.85,2937.45,3665.1,3858.15 \
63 tnmo=0.0158311,0.627968,0.907652,1.25594,1.99472 \
64 vnmo=1497,1556.4,2595.9,3056.25,3353.25 \
65 tnmo=0.0158311,0.469657,0.686016,1.07124,1.98417 \
66 vnmo=1482.15,1556.4,2521.65,2818.65,3026.55 \
67 tnmo=0.0158311,0.3219,0.501319,0.923483,1.98417 \
68 vnmo=1482.15,1541.55,2462.25,2670.15,2774.1 \
69 tnmo=0.0158311,0.226913,0.395778,0.870712,1.98417 \
70 vnmo=1482.15,1511.85,2521.65,2699.85,2848.35 \
71 tnmo=0.0105541,0.23219,0.390501,0.918206,1.99472 \
72 vnmo=1482.15,1526.7,2566.2,2878.05,3234.45 \
73 tnmo=0.0158311,0.258575,0.46438,1.99472 \
74 vnmo=1497,1526.7,2699.85,3368.1 \
75 tnmo=0.0158311,0.311346,0.538259,0.612137,1.98945 \
76 vnmo=1482.15,1526.7,2759.25,2952.3,3442.35 \
77 tnmo=0.0211082,0.353562,0.622691,0.744063,1.99472 \
78 vnmo=1467.3,1511.85,2774.1,3056.25,3457.2 \
79 tnmo=0.0211082,0.369393,0.691293,0.812665,1.99472 \
80 vnmo=1482.15,1511.85,2878.05,2982,3279 \
81 tnmo=0.0211082,0.348285,0.728232,0.881266,1.20844,1.98945 \
82 vnmo=1482.15,1511.85,2833.5,3026.55,3145.35,3323.55
83
84 #================================================
85
86 # HOW SEMBLANCE (VELAN) VELOCITIES ARE COMPUTED
87
88 # Last Vel = fvs + (( nvs-1 ) * dvs ) = lvs
89 # 5000 = 500 + (( 99-1 ) * 45 )
90 # 3900 = 1200 + (( 100-1 ) * 27 )
91
92 # Compute last semblance (velan) velocity
93 lvs=`bc -l <<-END
94 $fvs + (( $nvs - 1 ) * $dvs )
95 END`
96
97 #================================================
98
99 # FILE DESCRIPTIONS
100
101 # tmp1 = ASCII temp file to avoid screen display of "zap"
102 # tmp2 = ASCII temp file for "wc" result (velan)
103 # tmp3 = ASCII temp file for stripping file name from tmp2 (velan)
109 #================================================
110
111 echo " "
112 echo " *** VELOCITY ANALYSIS QC ***"
113 echo " "
114
115 #------------------------------------------------
116
117 # Remove old files.
118 rm -f panel.* picks.* tmp*
119
120 #------------------------------------------------
121 # Get ns, dt, first time from seismic file
122 nt=`sugethw ns < $seisdata | sed 1q | sed 's/.*ns=//'`
123 dt=`sugethw dt < $seisdata | sed 1q | sed 's/.*dt=//'`
124 ft=`sugethw delrt < $seisdata | sed 1q | sed 's/.*delrt=//'`
125
126 # Convert dt from header value in microseconds
127 # to seconds for velocity profile plot
128 dt=`bc -l <<-END
129 scale=6
130 $dt / 1000000
131 END`
132
133 # If "delrt", use it; else use zero
134 if [ $ft -ne 0 ] ; then
135 tstart=`bc -l <<-END
136 scale=6
137 $ft / 1000
138 END`
139 else
140 tstart=0.0
141 fi
142
143 #================================================
144 # BEGIN QC LOOP
145 #------------------------------------------------
146
147 i=1
148 while [ $i -le $numCMPs ]
149 do
150
151 # set variable $picknow to current CMP
152 eval picknow=\$cmp$i
153
154 echo " "
155 echo "Preparing CMP $i of $numCMPs for Display "
156 echo " "
157
158 #------------------------------------------------
159 # Plot CMP (right)
160 #------------------------------------------------
161
162 suwind < $seisdata \
163 key=cdp min=$picknow max=$picknow > panel.$picknow
164
165 if [ $plottype -eq 0 ] ; then
166 suxwigb < panel.$picknow xbox=422 ybox=10 wbox=400 hbox=550 \
230 # Exit
231 #------------------------------------------------
232
233 # Exit
234 echo " "
Solution 1:
----------
25 tnmo=0.0158311,0.686016,1.12401,1.44591,1.98945 \
26 vnmo=1482.15,1541.55,2729.55,3160.2,3293.85 \
Our second solution is to increase the value of the fourth time by one millisecond (we do not change the corresponding velocity). We change the fourth time from 1.12401 to 1.12501. This change has almost no effect on NMO, and solution 2 takes less editing time than solution 1.
Solution 2:
----------
25 tnmo=0.0158311,0.686016,1.12401,1.12501,1.44591,1.98945 \
26 vnmo=1482.15,1541.55,2729.55,2729.55,3160.2,3293.85 \
Our third solution is to re-pick the CMP. Below are lines from iva.sh (Section
7.6.6.3).
10 #------------------------------------------------
11 # CMPs for analysis
12
13 cmp1=60 cmp2=80 cmp3=100
14 cmp4=120 cmp5=140 cmp6=160
15 cmp7=180 cmp8=200 cmp9=220
16 cmp10=240 cmp11=260 cmp12=280
17 cmp13=300 cmp14=320 cmp15=340
18 cmp16=360 cmp17=380 cmp18=400
19
20 numCMPs=18
21
22 #------------------------------------------------
To re-pick CMP 160, we change the lines as shown below. What was “cmp6” is now
“cmp1,” and parameter numCMPs is now “1.” Nothing else in iva.sh needs to change.
10 #------------------------------------------------
11 # CMPs for analysis
12
13
14 cmp1=160
15
16
17
18
19
20 numCMPs=1
21
22 #------------------------------------------------
After we re-pick CMP 160, we replace the old tnmo-vnmo pair with the new tnmo-
vnmo pair.
now two lines in nmo4.sh (lines 18-19). Also, we removed line 2 from file vpick4.txt
(refer to the end of Chapter 7). That line would be between lines 19 and 20 below.
1 #! /bin/sh
2 # File: nmo4.sh
Figure 8.3: Left: NMO applied to CMP 120. Center: NMO applied to CMP 130. Right:
NMO applied to CMP 140. For CMP 130, the NMO t-v values were linearly interpolated
between CMP 120 and CMP 140.
Recall that t-v picks were made every 20th CMP starting at the first full-fold CMP,
CMP 60, and ending at the last full-fold CMP, CMP 400 (see Section 7.5). Program
sunmo used the values of script nmo4.sh to interpolate between CMPs.
How accurate is the interpolation? Mathematically, it is quite accurate. How useful is
the interpolation? Figure 8.3 shows CMPs 120, 130 and 140 after NMO. We think we
made precise picks to flatten CMPs 120 and 140; their reflectors look reasonably flat. But
the third reflector of CMP 130, with interpolated picks, appears to curve quite a bit.
When we stack CMP 130, we expect that the third reflector will smear; that is, it will not
stack to a zero-offset spike. What is worse, the smear might contribute to smearing the
second reflector. (We are showing the worst CMP of nmo4.su.)
To improve our stack, we modify script iva.sh to pick only CMP 130. After doing so,
we show the flattened gather in Figure 8.4.
This is not perfect flattening, but it is better than before. Below is the new NMO script, nmo4a.sh.
1 #! /bin/sh
2 # File: nmo4a.sh
3 # Apply NMO (flatten) 2-D line of CMPs
4 # Input (1): 2-D line of CMPs
5 # Output (1): NMO-corrected 2-D line of CMPs
6 # Use: nmo4a.sh
7 #
8 # NMO correction is interpolated between named CMPs.
9
10 # Set debugging on
11 set -x
12
13 # Name data sets
14 indata=cmp4.su
15 outdata=nmo4a.su
16
17 sunmo < $indata \
18 cdp=60,80,100,120,130,140,160,180,200,220,\
19 240,260,280,300,320,340,360,380,400 \
20 tnmo=0.0158311,0.390501,0.686016,1.06069,1.99472 \
21 vnmo=1511.85,1511.85,2640.45,2922.6,3249.3 \
22 tnmo=0.0158311,0.353562,0.701847,1.0343,1.99472 \
23 vnmo=1482.15,1511.85,2625.6,2774.1,2982 \
24 tnmo=0.0158311,0.364116,0.765172,0.875989,1.08707,1.98945 \
25 vnmo=1511.85,1526.7,2595.9,2714.7,2625.6,2699.85 \
26 tnmo=0.0211082,0.437995,0.844327,0.949868,1.16623,1.98945 \
27 vnmo=1497,1526.7,2551.35,2432.55,2685,2685 \
28 tnmo=0.0211082,0.506596,0.912929,1.05013,1.24011,1.99472 \
29 vnmo=1511.85,1526.7,2595.9,2982,2714.7,3145.35 \
30 tnmo=0.0105541,0.569921,1.00792,1.30343,1.99472 \
31 vnmo=1497,1541.55,2595.9,2848.35,3011.7 \
32 tnmo=0.0158311,0.686016,1.12401,1.44591,1.98945 \
33 vnmo=1482.15,1541.55,2729.55,3160.2,3293.85 \
34 tnmo=0.0158311,0.78628,1.19789,1.5409,1.99472 \
35 vnmo=1482.15,1511.85,3011.7,3694.8,3798.75 \
36 tnmo=0.0105541,0.765172,1.1029,1.45119,1.98945 \
37 vnmo=1497,1511.85,2937.45,3665.1,3858.15 \
38 tnmo=0.0158311,0.627968,0.907652,1.25594,1.99472 \
39 vnmo=1497,1556.4,2595.9,3056.25,3353.25 \
40 tnmo=0.0158311,0.469657,0.686016,1.07124,1.98417 \
41 vnmo=1482.15,1556.4,2521.65,2818.65,3026.55 \
42 tnmo=0.0158311,0.3219,0.501319,0.923483,1.98417 \
43 vnmo=1482.15,1541.55,2462.25,2670.15,2774.1 \
44 tnmo=0.0158311,0.226913,0.395778,0.870712,1.98417 \
45 vnmo=1482.15,1511.85,2521.65,2699.85,2848.35 \
46 tnmo=0.0105541,0.23219,0.390501,0.918206,1.99472 \
47 vnmo=1482.15,1526.7,2566.2,2878.05,3234.45 \
48 tnmo=0.0158311,0.258575,0.46438,1.99472 \
49 vnmo=1497,1526.7,2699.85,3368.1 \
50 tnmo=0.0158311,0.311346,0.538259,0.612137,1.98945 \
51 vnmo=1482.15,1526.7,2759.25,2952.3,3442.35 \
52 tnmo=0.0211082,0.353562,0.622691,0.744063,1.99472 \
53 vnmo=1467.3,1511.85,2774.1,3056.25,3457.2 \
54 tnmo=0.0211082,0.369393,0.691293,0.812665,1.99472 \
55 vnmo=1482.15,1511.85,2878.05,2982,3279 \
56 tnmo=0.0211082,0.348285,0.728232,0.881266,1.20844,1.98945 \
57 vnmo=1482.15,1511.85,2833.5,3026.55,3145.35,3323.55 \
58 > $outdata
59
60 # Exit politely from shell
61 exit
62
Script nmo4a.sh is two lines longer than nmo4.sh. Line 18 is modified to include
“130.” The t-v picks for CMP 130 are the fifth tnmo-vnmo pair, lines 28-29.
20 # processing variables
21 sortkey=cdp # sort key (usually fldr or cdp)
22 firsts=10 # first sort (fldr or cdp) value
23 lasts=458 # last sort (fldr or cdp) value
24 increment=10 # sort key increment
25 tracekey=offset # trace label key
26
27 #================================================
28 # file descriptions
29
30 # tmp1 = binary temp file for input gather
31
32 #------------------------------------------------
33
34 echo " "
35 echo " *** VIEWER ***"
36 echo " "
37 echo " INPUT: $indata"
38
39 #------------------------------------------------
40 # Remove old temporary file
41 rm -f tmp*
42
43 #------------------------------------------------
44 # BEGIN LOOP
45 #------------------------------------------------
46
47 i=$firsts
48 while [ $i -le $lasts ]
49 do
50
51 echo " "
52 echo "Reading gather $i of $indata"
53 echo "First gather = $firsts Last gather = $lasts"
54
55 suwind < $indata key=$sortkey min=$i max=$i > tmp1
56
57 if [ $plottype -eq 0 ] ; then
58 suxwigb < tmp1 xbox=10 ybox=10 wbox=$Wplot hbox=$Hplot \
59 title="$sortkey $i" \
60 label1=" Time (s)" label2="$tracekey" key=$tracekey \
61 perc=$myperc verbose=0 &
62 else
63 suximage < tmp1 xbox=10 ybox=10 wbox=$Wplot hbox=$Hplot \
64 title="$sortkey $i" \
65 label1=" Time (s)" \
66 perc=$myperc verbose=0 &
67 fi
68
69 echo " "
70 echo "Press Return or Enter to see next gather"
71 echo " Or enter \"x\" to exit"
72 > /dev/tty
73 read response
74
75 case $response in
76 [xX])
77 zap xwigb > tmp1
INPUT: nmo4a.su
...
8.5 Stack
A one-line command is sufficient to stack the data. The default stack key is cdp.
sustack < nmo4a.su > stack4.su
Below is the surange output of file stack4.su.
surange < stack4.su
458 traces:
tracl=(1,458) tracr=(1,12000) fldr=(1,200) tracf=(1,60) cdp=(1,458)
trid=1 nhs=(1,30) sx=(0,9950) gx=(-1475,11425) ns=501
dt=4000
Figure 8.5: An ximage plot of stack4.su. The horizontal axis is CMP number. The sand channel diffraction is near CMP 110, at
approximately 0.8 seconds.
Figure 8.6: Arrows point to velocity analysis CMP locations. Between those CMPs, the
top reflector thickens.
When we look at steep portions of the top reflector (Figure 8.6), we see that at CMP
locations where we did velocity analysis, the top reflector is thin. We might say it is well
focused. Between velocity analysis CMPs where sunmo interpolated t-v values, the top
reflector is broader (out of focus, diffuse). The lesson: The steeper the reflector, the more
often velocity analysis should be done.
Another lesson (Figure 8.7): Velocity analysis should be done more frequently where
velocity changes rapidly (vertically or laterally). The channel on the left side of the stack,
the diffraction with an apex near CMP 110 at approximately 0.8 seconds, is such a
velocity discontinuity.
Figure 8.7: The arrow points to a diffraction hyperbola, the time section expression of
the sand channel.
9. Model 4: Migration
9.1 Introduction
In this chapter, we develop tools that migrate data and that display migrated data.
Figure 9.1: A reduced stacking chart of Model 4. Circles are receivers, squares are
sources.
If we supply a value less than the Nyquist frequency for fmax, we can reduce the number of frequencies that sustolt migrates. For example, the Nyquist frequency of Model 4 (4 ms sample interval) is 125 Hz. If we decide we do not have usable signal above 75 Hz, we could use fmax=75 and sustolt would migrate 75 frequencies instead of 125, a saving of 40%. However, we are pleased with the speed of sustolt, so we will not use parameter fmax.
We experimented with the length of the taper on the sides, lstaper, and the length of
the taper on the bottom, lbtaper. We decided to use lstaper=20 (traces) and lbtaper=100
(samples). We encourage you to make your own tests.
The sfill values we used to build Model 4 (Section 6.2) are the rms velocities. Since
we know the rms velocities, we will migrate a portion of Model 4, the sand channel
diffractor, to determine the correct value of sustolt parameter vscale. Once we know the
value of vscale that collapses the sand channel diffraction using a velocity of
Figure 9.2: When the input to script migcvp.sh contains 50 traces and migcvp.sh uses six
different velocities, the output file contains 300 traces.
We use lines 20 and 21 to select a window of CMPs around CMP 110; we migrate
CMPs 80-150. Line 22 uses our calculated value of CDP bin spacing.
Since migcvp.sh does constant-velocity migration,
• we supplied 1.0 for smig, the stretch factor (line 23), and
• we supplied “0” for tmig (line 79).
What should we use for the value of vscale? After several tests, we found that a
velocity scale factor of 1.9 (line 25) yielded the best migration of the sand channel. We
determined vscale by repeatedly migrating the sand channel diffraction until the best
migration image was obtained for a migration velocity of 3000 m/s, the value we know to
be the true interval velocity. Figure 9.3 was obtained using vscale=1.5 and Figure 9.4
was obtained using vscale=1.9.
(In seismic data processing, we sometimes have to know the answer to get the right
answer!)
As discussed in the previous section, the length of the taper on each side (lstaper) is
set to 20 traces (line 26) and the length of the bottom taper (lbtaper) is set to 100 samples
(line 27).
Lines 30-32 specify that the first panel is migrated at 1200 m/s, the last panel is
migrated at 5000 m/s, and the panel interval is 200 m/s. These numbers mean migcvp.sh
migrates the input data 20 times (see line 41):
( (5000-1200) / 200 ) + 1 = 20
Line 34 is for testing. Set variable numVtest to “1” or “2” or another small value to
limit the number of panels processed (lines 45-47).
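The same panel count can be computed with POSIX shell integer arithmetic instead of bc; a minimal sketch using the values from the script:

```shell
firstv=1200
lastv=5000
increment=200
# Number of constant-velocity panels, same formula as line 41 of migcvp.sh
numV=$(( (lastv - firstv) / increment + 1 ))
echo "numV = $numV"
```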
1 #! /bin/sh
2 # File: migcvp.sh
3 # Create one panel for each migration velocity
4 # Each panel has the same "fldr" value
5 # The migration velocity is in key "offset"
6 # Total number of panels is in key "nvs"
7
8 # Set messages on
9 ##set -x
10
11 #================================================
12 # USER AREA -- SUPPLY VALUES
13 #------------------------------------------------
14
15 # Seismic files
16 indata=stack4.su # SU format
17 outdata=migcvp.su # migration Constant Velocity Panels
18
19 # Migration variables
20 cdpmin=80 # Start CDP value
21 cdpmax=150 # End CDP value
22 dxcdp=70.71 # distance between adjacent CDP bins (m)
23 smig=1.0 # stretch factor (0.6 typical if vrms increasing)
24 # [the "W" factor] (Default=1.0)
25 vscale=1.9 # scale factor to apply to velocities (Default=1.0)
26 lstaper=20 # length of side tapers (traces) (Default=0)
27 lbtaper=100 # length of bottom taper (samples) (Default=0)
28
29 # Velocity panel variables
30 firstv=1200 # first velocity value
31 lastv=5000 # last velocity value
32 increment=200 # velocity increment
33
34 numVtest=100 # use to limit number of velocity panels
35 # otherwise, use very large value (100)
36
37 #================================================
38
39 # Compute number of velocity panels
40
41 numV=`bc -l <<-END
42 ( ( $lastv - $firstv ) / $increment ) + 1
43 END`
44
45 if [ $numVtest -lt $numV ] ; then
46 numV=$numVtest
47 fi
48
49 #------------------------------------------------
50
51 # FILE DESCRIPTIONS
52 # tmp1 = binary temp file of input data
53
54 #------------------------------------------------
55
56 cp $indata tmp1
57 migV=$firstv
58 echo " "
59
60 #------------------------------------------------
61 # Loop through Migration Constant Velocity Panels
62 # Each panel has the same "fldr" value
63 # Panel migration velocity is in key "offset"
64 # Total number of panels (numV) is in key "nvs"
65 #------------------------------------------------
66
67 i=1
68 while [ $i -le $numV ]
69 do
70
71 echo " iteration number = $i Velocity = $migV"
72
73 suwind < tmp1 key=cdp min=$cdpmin max=$cdpmax |
74 sushw key=fldr a=$i |
75 sushw key=offset a=$migV |
76 sushw key=nvs a=$numV |
77
78 sustolt cdpmin=$cdpmin cdpmax=$cdpmax dxcdp=$dxcdp \
79 tmig=0 vmig=$migV smig=$smig vscale=$vscale \
80 lstaper=$lstaper lbtaper=$lbtaper \
81 >> $outdata
82
83 i=`expr $i + 1`
84 migV=`expr $migV + $increment`
85
86 done
87
88 #------------------------------------------------
89 # Remove files and exit
90 #------------------------------------------------
91
92 echo " "
93 echo " Output file = $outdata"
94 echo " "
95
96 rm -f tmp*
97 exit
98
Line 56 copies the input data to a temporary file (tmp1) to avoid changing the original
data. Line 57 sets the first migration velocity.
The loop is lines 60-87. Line 71 writes a message to the screen: the current migration
panel number and its migration velocity. Line 73 windows the appropriate CMPs from
the temporary file. Line 74 sets key fldr to the iteration counter i for all traces in a panel.
Line 75 puts the migration velocity in key offset for all traces in a panel. Line 76 puts the
number of panels in key nvs of every trace. Stolt migration is done in lines 78-80. Line 81
concatenates the output panels. Line 83 increments the loop (and panel) counter. Line 84
increments the velocity for the next panel.
Below are the screen messages for this run:
iteration number = 1 Velocity = 1200
iteration number = 2 Velocity = 1400
iteration number = 3 Velocity = 1600
iteration number = 4 Velocity = 1800
Because these keys contain these values, and because the panel number and migration
velocity are displayed in each plot, we do not have to remember migcvp.sh processing
parameters.
Use line 16 to supply the input file name and lines 19-22 to set your plot preferences.
Downloaded 06/03/14 to 134.153.184.170. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/
Default settings: The panels are displayed from the first (line 27) to the last. Line 28
is a comment line – the actual last panel value comes from key nvs, retrieved by line 52.
Panel display increment is “1” (line 29), and the x-axis label is key cdp (line 30).
1 #! /bin/sh
2 # File: iviewcvp.sh
3 # View seismic panels from a migration line of
4 # Constant Velocity Panels -- key fldr
5 # Migration velocity of each panel is in key offset
6 # Total number of panels is in key nvs
7
8 # Set messages on
9 ##set -x
10
11 #================================================
12 # USER AREA -- SUPPLY VALUES
13 #------------------------------------------------
14
15 # Input seismic data
16 indata=migcvp.su # SU format
17
18 # Plot choices
19 myperc=98 # perc value for plot
20 plottype=1 # 0 = wiggle plot, 1 = image plot
21 Wplot=300 # Width of plot (pixels)
22 Hplot=500 # Height of plot (pixels)
23
24 #================================================
25 # Processing variables
26 sortkey=fldr # sort key [Do Not Change]
27 firsts=1 # first sort (fldr) value:
28 # lasts # last sort (fldr) value: key nvs
29 increment=1 # sort key (fldr) increment
30 tracekey=cdp # trace label key
31
32 #------------------------------------------------
33 # file descriptions
34
35 # tmp1 = binary file of input panel
36 # tmp2 = ASCII file to reduce "zap" screen messages
37
38 #------------------------------------------------
39
40 echo " "
41 echo " *** MIGRATION CONSTANT VELOCITY PANEL VIEWER ***"
42 echo " "
43 echo " INPUT: $indata"
44
45 #------------------------------------------------
46 # Remove old temporary files
47 rm -f tmp*
48
49 #------------------------------------------------
50 # Get first trace - total number of panels in key nvs
51
52 lasts=`sugethw nvs < $indata | sed 1q | sed 's/.*nvs=//'`
53
54 #------------------------------------------------
55 # BEGIN PANEL LOOP
56 #------------------------------------------------
57
58 i=$firsts
the first trace of tmp1. (We saw this syntax in Section 7.6.6 in script iva.sh.) Line 62
windows the “current” panel to ensure that line 65 extracts the correct velocity from key
offset.
Line 66 uses “expr” to ensure that what is extracted from the trace header is purely
numeric. In Figure 9.3, the migration velocity printed on the plot includes a strange
character. When we include line 66, we do not get the strange character (Figure 9.4).
Figure 9.3: Images from iviewcvp.sh using vscale=1.5. Left: Migration velocity 3400
m/s is too small to collapse the sand channel diffraction. The diffraction is under-
migrated. Right: Migration velocity 4200 m/s is too large. Due to over-migration, the
diffraction is now a “smile.” The strange character after “3400” and “4200” is
discussed in the text.
Remember, suximage reads the first tracekey value (here it is cdp) and increments by
“1” (see Section 7.7). If tracekey values do not increment by “1” (for example, due to
Figure 9.4: vscale=1.9. Left: Migration velocity for this panel, 2600 m/s, under-
migrates the sand channel. Right: Migration velocity for this panel, 3400 m/s, over-
migrates the sand channel.
The definitions of these parameters are slightly different in the suxmovie and xmovie
documentation. The following are lines from the xmovie selfdoc:
Button 1 Zoom with rubberband box
Button 2 reverse the direction of the movie.
Button 3 stop and start the movie.
q or Q key Quit
s or S key stop display and switch to Step mode
b or B key set frame direction to Backward
f or F key set frame direction to Forward
n or N key same as 'f'
c or C key set display mode to Continuous mode
If you switch from Movie mode to Step mode by pressing the S key, you can step
forward with the F key or backward with the B key. To restart the movie, press the C key.
1 #! /bin/sh
2 # File: migmovie.sh
3 # Run a "movie" of the migration panels
4 # Plot "title" shows panel velocity
5 # Enter "xmovie" for mouse and keyboard options
6
7 set -x
8
9 indata=migcvp.su
10 perc=98
11
12 loop=1 # run panels forward continuously
13 # 2 = run panels back and forth continuously
14 # 0 = load all panels then stop
15
16 n1=501 # number of time samples
17 d1=0.004 # time sample interval
18 n2=71 # number of traces per panel
19 d2=1 # trace spacing
20
21 width=300 # width of window
22 height=500 # height of window
23
24 fframe=1200 # velocity of first panel for title annotation
25 dframe=200 # panel velocity increment for title annotation
26
27 suxmovie < $indata perc=$perc loop=$loop \
28 n1=$n1 d1=$d1 n2=$n2 d2=$d2 \
29 width=$width height=$height \
30 fframe=$fframe dframe=$dframe \
31 title="Velocity %g" &
32
33 exit
34
Parameters fframe and dframe (lines 24-25) are used to calculate a value for each
panel. As the panel (frame) increments, the value of dframe is incrementally added to
fframe. On line 31, “%g” prints the calculated value in the title. If you do not use
parameters fframe and dframe, you can use “%g” to print the frame number in the title.
Observe that the sand channel moves updip as the migration velocity increases.
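Assuming xmovie computes each title value as fframe plus (frame number minus 1) times dframe, so that the first panel is labeled 1200, the annotation for any panel can be predicted; a sketch:

```shell
fframe=1200   # velocity label of the first panel
dframe=200    # velocity increment per panel
frame=3       # panel (frame) number
vel=$(( fframe + (frame - 1) * dframe ))
echo "Panel $frame velocity = $vel"
```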
In the previous section, vmig was a single value and smig was 1.0 (unused). We used
migcvp.sh to determine a value for vscale based on the known modeled velocity of the sand channel.
Here, we use sustolt for 1-D (time-varying) migration in script migStolt.sh. Our script
has a single time-varying velocity function that is applied across the section, regardless of
bed dip. We designed our t-v function to fit the data best near CMP 110, the location of
the sand channel diffraction. We selected velocities at different times based on the panels
of constant-velocity migrations. We found that a migration velocity of 1500 m/s was
appropriate at 0.70 seconds; greater migration velocities provided improved images at
later times.
Our migration script, migStolt.sh, does not output the migrated data, just a plot
(Figure 9.5). By now, you know how to modify the script to produce a data file and make
a plot from that data (for example, lines 13-19 of showshot.sh, Section 5.6.)
1 #! /bin/sh
2 # File: migStolt.sh
3 # Stolt migration stacked data
4 # Input: stack data
5 # Output: plot of migrated data
6 # Use: migStolt.sh
7 # Example: migStolt.sh
8
9 # Velocities are stacking (Vrms)
10 # Here, we use the false assumption that stacking
11 # and migrating velocities do not change laterally.
12
13 # smig = stretch factor (0.6 typical if vrms increasing)
14 # vscale = scale factor to apply to velocities
15
16 # Set messages on
17 set -x
18
19 time=0.00,0.70,0.75,0.80,0.85,0.90,2.00
20 vels=1500,1500,2500,3000,2800,3100,3500
21
22 sustolt < stack4.su \
23 cdpmin=1 cdpmax=458 dxcdp=70.71 \
24 tmig=$time vmig=$vels \
25 smig=0.6 vscale=1.9 lstaper=20 lbtaper=100 |
26
27 suximage xbox=10 ybox=10 wbox=800 hbox=400 \
28 title="Migration: Stolt T = $time V = $vels" \
29 label1=" Time (s)" label2="CMP" key=cdp \
30 perc=99 verbose=0 &
31
32 # Exit politely from shell
33 exit
34
The difference between our two uses of sustolt (aside from our use of t-v pairs here)
is that we set smig=0.6 because we are using increasing RMS velocities.
The kink in the second interface at approximately CMP 240 is from the velocity
change from 1500 m/s to 2500 m/s at 0.7 seconds. This velocity change works well for
the sand channel, but not well at CMP 240. A time-varying and spatially-varying
migration would improve the image.
In the pre-migration stack section (Figure 8.3), the sand channel is a strong diffraction
and the lower synclines near CMP 180 are unrealistically narrow. After migration (Figure
9.5), the sand channel is greatly collapsed and the lower synclines are realistically gentle.
Careful velocity analysis around CMP 110 (not shown) at the diffractor time of 0.84
seconds produced a stacking velocity of 2670 m/s. It is generally accepted that migration
velocities are approximately 10% greater than stacking velocities. 3015 m/s is 13%
greater than 2670 m/s.
Figure 9.5: An ximage plot of stack4.su after Stolt migration. Parameter vscale=1.9. The horizontal axis is CMP number. The
sand channel diffraction near CMP 110, at approximately 0.8 seconds, is greatly reduced (collapsed). Compare to Figure 8.3.
Nankai Data: Examine, Resample, Sort, Gain
We are going to process a real 2-D line of seismic data provided by Prof. Greg Moore
of the University of Hawaii. The data were collected near the coast of Japan, over the
Nankai trough, where the Philippines plate is subducting beneath Eurasia. The “Nankai”
data were collected by the University of Texas, the University of Tulsa, and the
University of Tokyo. Based on this data set, a paper was published (Moore, et al., 1990)
in which this line is called NT62-8.
The following Nankai seismic files are available:
Table 10.1: Nankai seismic data
Bytes Name Description Traces Gathers
423827680 Nshots.su shot-ordered gathers 19057 326
26509560 Nstack.su stacked data, as published 2869 1
Figure 10.2: Stack section Nstack.su as used for publication by Moore et al., 1990.
Although we see little reflection character below eight seconds (see Figure 10.3), we will use all 11 seconds to ensure that we
include diffraction limbs for migration.
Figure 10.3: Two views of Nstack.su. Left: CMPs 900-1300. Right: CMPs 900-1300,
time 5.5-8 seconds.
Remember that on CMP gathers, shallow reflections generally exhibit large moveout
(large curvature) and deep reflections exhibit little moveout (little curvature). The Nankai
data were collected in deep water, so we do not expect to see much moveout on the CMP
gathers. Lack of moveout will make velocity analysis difficult, but the quality of the stack
is not strongly dependent on the exact velocities used beneath the water bottom.
The soft link command (ln -s; see Section 10.2) has the effect of placing a virtual subdirectory (Ndata) in my directory
without actually placing the large data files there. From my suscripts directory, I can
view a listing of the contents of directory Ndata by typing
/home/forel/suscripts> ls -lF Ndata
total 160
-r--r--r-- 1 pennin geofac 423827680 May 22 16:46 Nshots.su
-r--r--r-- 1 pennin geofac 26509560 May 22 16:44 Nstack.su
Note: To prevent accidentally deleting the data, we recommend you set permissions
so the Nankai files are only readable. A brief discussion of permissions is in Section
2.3.5. We also recommend that you store copies of these files in another directory or on a
DVD so you can quickly replace deleted files.
Figure 10.4: Upper left: Gather 1707, 0-11 sec (30 traces). Upper right: Gather 1847,
0-11 sec (69 traces). Lower left: Gather 1707, 5.8-7.8 sec. Lower right: Gather 1847,
5.8-7.8 sec.
This data set does not appear to have any noisy traces. (The original processors
probably deleted or killed noisy traces. The left plots in Figure 10.4 show a killed trace!)
Note: While the smaller data set is faster to view, you do not see the entire gather.
Keep in mind that it is the processor’s responsibility to see and comprehend the data. The
first time we viewed the Nankai shot gathers, we viewed every 10th gather using all 11
seconds. A problem found early saves much time later!
(For more on the Nyquist frequency and the folding frequency, see “Frequency Aliasing,” Yilmaz,
1987, p. 11 and Yilmaz, 2001, p. 30.) Before we resample our 2 ms data, we need to
eliminate frequencies higher than 125 Hz to avoid “folding” them into lower frequencies
during resampling.
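The folding rule itself is simple: a frequency f above the new Nyquist fn aliases to 2*fn - f. A small sketch with the post-resampling Nyquist of 125 Hz:

```shell
fn=125   # Nyquist after resampling to 4 ms
f=150    # an example frequency left above Nyquist
falias=$(( 2 * fn - f ))
echo "$f Hz folds to $falias Hz"
```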
Remember that if you click the middle mouse button in either plot, information about
where you click is shown in the upper left corner of the plot. A click in the left plot shows
frequency-trace mouse location; a click in the right plot shows time-trace mouse location.
Figure 10.5: Top: Shot gather 1707, f-x plot and x-t plot. Bottom: Shot gather 1847, f-x
plot and x-t plot.
Also, based on examining the f-x plots of Figure 10.5, we will remove low
frequencies that appear to be random noise. We will begin the low-frequency taper at 16
Hz and completely pass frequencies at 21 Hz.
The x-axis of the f-x plot is labeled with tracr values (the default key); this file does
not have valid f2 key values (see Section 7.7).
3
4 # - Filter: cut freq below 21 Hz & above 95 Hz (anti-alias)
5 # - Resample shot gathers from 2 ms to 4 ms sample interval
6 # - Sort to cdp order
7
8 indata=Ndata/Nshots.su
9 outdata=Ndata/Ncdps4.su
10
11 sufilter < $indata f=16,21,85,95 amps=0,1,1,0 |
12 suresamp nt=2750 dt=.004 |
13 susort > $outdata cdp offset
14
15 # Exit politely from shell
16 exit
17
Note: The size of Ncdps4.su is 214 Mbytes. If you work in a group, we suggest that
only one person sort the shot gathers and everyone in the group use a soft link (Section
10.2) to refer to the file of CMP gathers.
Below is the surange output of Ncdps4.su.
surange < Ncdps4.su
19057 traces:
tracl=(1,19057) tracr=(1,19057) fldr=(1687,2012) tracf=(28,96) cdp=(900,1300)
cdpt=(1,69) trid=(1,2) offset=(-2435,-170) scalel=-10000 scalco=-10000
counit=1 muts=(0,11000) ns=2750 dt=4000 day=206
hour=(21,22) minute=(0,59) sec=(1,59)
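The 214-Mbyte size mentioned above can be checked from the SU trace layout (a 240-byte SEG-Y-style header followed by 4-byte float samples); this arithmetic sketch uses the surange values:

```shell
ntr=19057   # traces (from surange)
ns=2750     # samples per trace (from surange)
bytes=$(( ntr * (240 + ns * 4) ))
mbytes=$(( bytes / 1000000 ))
echo "$bytes bytes (about $mbytes Mbytes)"
```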
We have 401 gathers. But, the first gathers of the line are not full fold (Figure 10.6).
(We describe the program, sukeycount, which generated the data for Figure 10.6 in
Appendix C.) Upon inspection, we find that the first full-fold CMP gather (48 traces) is
CMP 933.
To make reasonably detailed velocity analysis at a regular interval along the line, we
choose to analyze every 25th CMP starting with CMP 933; that is, fifteen full-fold CMPs:
933, 958, 983, 1008, … 1208, 1233, 1258, 1283.
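The arithmetic behind that list is worth a quick sanity check; a one-line sketch:

```shell
first=933   # first full-fold CMP
inc=25      # analysis interval (CMPs)
n=15        # number of analysis locations
last=$(( first + (n - 1) * inc ))
echo "last analysis CMP = $last"
```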
Figure 10.7: Velocity analysis locations (CMPs) based on Figure 10.3, right.
33
34 echo " "
35 echo " Supply minimum and maximum key values for dB amplitude display."
36 echo " For the best display, use only one to three traces."
37 echo " for example: 300 300"
38 echo " or: 250 450"
39 echo " or: -450 -250"
40 > /dev/tty
41 read mykey1 mykey2
42
43 #------------------------------------------------
44 # Log preliminary information
45
46 echo " *** INTERACTIVE GAIN TEST ***" > tmp4
47 echo "Input file = $indata perc = $myperc" >> tmp4
48 echo "key = $mykey min value = $mykey1 max value = $mykey2" >> tmp4
49
50 #------------------------------------------------
51 # Show original gather and spectra first
52 #------------------------------------------------
53
54 suxwigb < $indata xbox=10 ybox=10 wbox=400 hbox=600 \
55 label1=" Time (s)" label2="$mykey" \
56 title="Original gather" key=$mykey \
57 perc=$myperc verbose=0 &
58
59 suwind < $indata key=$mykey min=$mykey1 max=$mykey2 |
60 suattributes mode=amp |
61 suop op=db > tmp0
62
63 suximage < tmp0 xbox=420 ybox=10 wbox=190 hbox=600 \
64 label1=" Time" label2="Amplitude" title="Amplitude" \
65 grid1=dot grid2=dot legend=1 units=dB \
66 cmap=hsv1 verbose=0 &
67
68 suxgraph < tmp0 -geometry 190x600+620+10 \
69 label1="Time" label2="Amplitude" \
70 title="$mykey $mykey1 $mykey2" grid1=dot grid2=dot \
71 nTic2=2 -bg white verbose=0 &
72
73 #------------------------------------------------
74 # Amplitude correction
75 #------------------------------------------------
76
77 new=true # true = first test
78 ok=false # false = continue looping
79
80 while [ $ok = false ]
81 do
82
83 rm -f tmp0 # remove earlier copy of file to be gained
84
85 if [ $new = true ] ; then
86 cp $indata tmp0
87 echo " -> Original data" >> tmp4
88 else
89 echo " "
90 echo "Enter A to add another gain correction"
96 [sS])
97 cp $indata tmp0
98 echo " -> Using original data"
99 echo " -> Using original data" >> tmp4
100 ;;
101 [aA])
102 cp tmp1 tmp0
103 echo " -> Using modified data"
104 echo " -> Using modified data" >> tmp4
105 ;;
106 esac
107
108 fi
109
110 echo " "
111 echo "Select Gain Correction Method:"
112 echo " Enter A for automatic gain correction"
113 echo " Enter B to add an overall bias value"
114 echo " Enter C to clip data"
115 echo " Enter E to multiply data by exp(t*epow)"
116 echo " Enter J to use Jon Claerbout values:"
117 echo " tpow=2 gpow=.5 qclip=.95"
118 echo " Enter M to balance by dividing by mean"
119 echo " Enter R to balance data by 1/rms"
120 echo " Enter S to scale data"
121 echo " Enter T to multiply data by t^tpow"
122 > /dev/tty
123 read choice2
124
125 case $choice2 in
126 [aA])
127 echo " Supply window length in seconds:"
128 > /dev/tty
129 read wagc
130 echo " -> AGC: window length = $wagc s"
131 echo " -> AGC: window length = $wagc s" >> tmp4
132 sugain < tmp0 agc=1 wagc=$wagc > tmp1
133 ;;
134 [bB])
135 echo " Supply overall bias value"
136 > /dev/tty
137 read bias
138 echo " -> Bias with value $bias"
139 echo " -> Bias with value $bias" >> tmp4
140 sugain < tmp0 bias=$bias > tmp1
141 ;;
142 [cC])
143 echo " Supply clip value between 0.00 and 1.00:"
144 > /dev/tty
145 read qclip
146 echo " -> Clip by $qclip of absolute values"
147 echo " -> Clip by $qclip of absolute values" >> tmp4
148 sugain < tmp0 qclip=$qclip > tmp1
149 ;;
150 [eE])
151 echo " Supply exponent epow:"
152 > /dev/tty
153 read epow
217
218 echo " "
219 echo "Enter 1 for more Amplitude corrections"
220 echo "Enter 2 to output gained seismic data and EXIT"
221 > /dev/tty
We applied t^tpow gain (option T) three times, each time independent of the others.
(See the log file below.) Table 10.4 shows how the various gain selections changed the
data’s dynamic range.
Table 10.4: dB ranges of Figures 10.8 and 10.9
Figure 10.8: Top: CMP 1280 without gain. Bottom: CMP 1280 with gain, tpow=1.8.
Figure 10.9: Top: CMP 1280 with gain, tpow=2.2. Bottom: CMP 1280 with gain, tpow=2.5.
10.7 Summary
In this chapter, we worked with large files and real data. When we processed model
data, we moved easily from shot gathers to sorting to CMP gathers to velocity analysis. Real
data require inspection, gain, frequency analysis, and much more, as we will see in later
chapters.
Below is the user area of iva.sh changed for the Nankai data. As stated in Section
10.4.2, we will do velocity analysis on every 25th CMP starting with CMP 933 (lines 13-
16). We change the perc value to 95 (line 29), an accommodation for real data.
Semblance values go from 1000 m/s (line 38) to 7000 m/s (nvs*dvs + fvs). CVS panels
range from 1000 m/s to 6000 m/s (lines 41-44).
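The semblance maximum quoted above follows directly from the iva.sh variables; a quick check:

```shell
nvs=240   # number of velocities
dvs=25    # velocity interval
fvs=1000  # first velocity
vmax=$(( nvs * dvs + fvs ))
echo "maximum semblance velocity = $vmax m/s"
```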
8 #================================================
9 # USER AREA -- SUPPLY VALUES
10 #------------------------------------------------
11 # CMPs for analysis
12
13 cmp1=933 cmp2=958 cmp3=983 cmp4=1008
14 cmp5=1033 cmp6=1058 cmp7=1083 cmp8=1108
15 cmp9=1133 cmp10=1158 cmp11=1183 cmp12=1208
16 cmp13=1233 cmp14=1258 cmp15=1283
17
18 numCMPs=15
19
20 #------------------------------------------------
21 # File names
22
23 indata=Ndata/Ncdps4g.su # SU format
24 outpicks=Nvpick.txt # ASCII file
25
26 #------------------------------------------------
27 # display choices
28
29 myperc=95 # perc value for plot
30 plottype=1 # 0 = wiggle plot, 1 = image plot
31
32 #------------------------------------------------
33 # Processing variables
34
35 # Semblance variables
36 nvs=240 # number of velocities
37 dvs=25 # velocity intervals
38 fvs=1000 # first velocity
39
40 # CVS variables
41 fc=1000 # first CVS velocity
42 lc=6000 # last CVS velocity
43 nc=10 # number of CVS velocities (panels)
44 XX=11 # ODD number of CMPs to stack into central CVS
45
46 #================================================
By looking at the Nankai stack, Figure 10.3 left, we know the water layer is
approximately 6 seconds (two-way time) and we think there is not much reflection
character below 7.8 seconds. The left side of Figure 11.1 is the velan of CMP 933. It
appears that the time for picking events is restricted to between 6 seconds and 8 seconds.
Nankai: Velocity Analysis, NMO, Stack, Migration
Figure 11.1: Left: Velan of CMP 933, 0-11 seconds. Right: Velan of CMP 933, 5.5-11
seconds.
Instead of making velocity picks in a narrow space on the screen, we window the file
of CMPs:
suwind < Ncdps4g.su tmin=5.5 tmax=11 > Ncdps4g5511.su
and use this as the input to iva.sh
23 indata=Ndata/Ncdps4g5511.su # SU format
We chose the full time (tmax=11) because on CMP 933 we see events around 9.5
seconds and we don’t want to miss these and other potentially later events.
The result of using this new input file is shown on the right side of Figure 11.1. This
makes better use of the screen.
Because we are not picking from zero time, we will have to modify the output pick
file, adding zero time to the beginning of all the tnmo series and adding the water velocity
to the beginning of all the vnmo series.
Below is the output file, Nvpick.txt. For the sake of presentation, we made line 1 into
two lines. Line 1 as presented here has no spaces at the end of the first part and no spaces
at the beginning of the second part, making this a usable file.
1 cdp=933,958,983,1008,1033,1058,1083,\
1108,1133,1158,1183,1208,1233,1258,1283 \
2 #=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 \
3 tnmo=5.52263,6.44307,6.97119,7.65775,8.50274,10.9849 \
4 vnmo=1522.81,1460.57,1622.4,1834.01,2331.93,2892.08 \
5 tnmo=5.53018,6.49588,7.55967,9.19685,10.9774 \
6 vnmo=1497.92,1497.92,1634.84,2966.77,3016.56 \
7 tnmo=5.52263,6.35254,7.40878,9.27229,10.9849 \
8 vnmo=1497.92,1460.57,1659.74,2307.03,2991.67 \
9 tnmo=5.53772,7.48422,8.93279,10.9925 \
10 vnmo=1497.92,1634.84,2282.14,3564.27 \
11 tnmo=5.55281,6.99383,7.5144,8.48011,10.9925 \
12 vnmo=1522.81,1659.74,1734.43,2319.48,2991.67 \
13 tnmo=5.53018,6.69959,7.46159,9.45336,10.9774 \
14 vnmo=1485.47,1572.6,1871.35,2282.14,2929.43 \
15 tnmo=5.53018,6.79012,7.58985,8.88752,10.9774 \
16 vnmo=1497.92,1622.4,1771.77,2406.61,2979.22 \
17 tnmo=5.53772,6.29973,7.3786,8.7668,10.9698 \
18 vnmo=1497.92,1497.92,1759.32,2842.29,3589.17 \
19 tnmo=5.53772,7.30315,9.16667,10.9925 \
20 vnmo=1510.36,1672.19,2630.68,3053.91 \
21 tnmo=5.52263,7.39369,9.60425,10.9849 \
22 vnmo=1497.92,1721.98,2443.96,3004.11 \
23 tnmo=5.53018,6.93347,7.55213,10.9849 \
24 vnmo=1535.26,1634.84,1734.43,2867.19 \
25 tnmo=5.52263,7.43896,8.2915,10.9849 \
26 vnmo=1510.36,1647.29,2282.14,2979.22 \
27 tnmo=5.53018,7.49931,8.13306,10.9698 \
28 vnmo=1510.36,1659.74,1921.15,2555.99 \
29 tnmo=5.54527,7.35597,10.9849 \
30 vnmo=1510.36,1697.08,3004.11 \
31 tnmo=5.53772,7.29561,8.19342,9.08368,10.9925 \
32 vnmo=1522.81,1746.88,2456.41,2879.64,4248.91 \
Our picking philosophy was simple: slowly increasing velocity with depth. We did
not consider any picks that deviated strongly from this. We think this is adequate for a
first look at stack and migration. (In the beginning, simple is good.) After we examine the
stack data and the migrated data, we can reconsider our picks.
The file Nvpick.txt must be modified before it can be used for NMO:
• Line 2 must be removed because a comment line cannot be in the midst of a
command. The continuation mark “\” continues a command onto the next line.
(However, a comment line can be between piped (|) commands.)
• We picked from a seismic file that did not go to zero time, so we must add the
zero time and the velocity at zero time to the tnmo and vnmo lines. The result is in
the script in the next section.
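Both edits are mechanical, so they can be scripted. Below is a sketch of our own (not from SU) that uses sed on a small sample in the Nvpick.txt format: it deletes the "#=" comment line, prepends zero time to each tnmo series, and prepends the 1500 m/s water velocity to each vnmo series. The file name picks.txt and the sed recipe are ours; for the real file, run the same sed command on Nvpick.txt.

```shell
# Sketch (ours, not from the text): clean up a velocity-pick file.
# Delete the "#=" comment line, prepend 0.0 to tnmo series and the
# water velocity (1500 m/s) to vnmo series.
cat > picks.txt <<'EOF'
cdp=933,958 \
#=1,2 \
tnmo=5.52263,6.44307 \
vnmo=1522.81,1460.57 \
EOF
sed -e '/^#=/d' \
    -e 's/^tnmo=/tnmo=0.0,/' \
    -e 's/^vnmo=/vnmo=1500.,/' picks.txt
```

The output keeps the cdp line, drops the comment line, and starts each series at zero time: tnmo=0.0,5.52263,... and vnmo=1500.,1522.81,...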
Note: If we had also windowed the input file to exclude late times (for example, starting the window at 5.5 seconds and ending it at 10 seconds):
suwind < Ncdps4g.su tmin=5.5 tmax=10 > Ncdps4g5510.su
then we would have to edit Nvpick.txt at both the start and the end of each series. At the end of each tnmo series we would add 11.0, and at the end of each vnmo series we would repeat the last picked velocity.
Because of the rough sea-floor topography and the spatial-temporal interpolation of
the velocity profiles, it may be wise to pick velocities at much more closely spaced CMP
locations. We leave this exercise to the reader, who should be able to improve on our
final image by performing this additional task.
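Denser picking means a longer cdp= list. A small sh loop, in the style of the other scripts in these notes, can generate it; the range 933 to 1283 comes from our pick file, and the increment of 5 CMPs is an arbitrary choice for illustration.

```shell
# Sketch: build a denser cdp= list (every 5 CMPs instead of every 25)
# for velocity analysis and sunmo.
list=""
cdp=933
while [ $cdp -le 1283 ]; do
  list="$list$cdp,"
  cdp=`expr $cdp + 5`
done
echo "cdp=${list%,}"
```

This prints a single cdp= line with 71 values, from 933 to 1283.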
For the sake of presentation, we made line 18 into two lines. Line 18 as presented
here has no spaces at the end of the first part and no spaces at the beginning of the second
part, making this a usable file.
1 #! /bin/sh
2 # File: Nnmo.sh
3 # Apply NMO (flatten) 2-D line of CMPs
4 # Input (1): 2-D line of CMPs
5 # Output (1): NMO-corrected 2-D line of CMPs
6 # Use: Nnmo.sh
7 #
8 # NMO correction is interpolated between named CMPs.
9
10 # Set debugging on
11 set -x
12
13 # Name data sets
14 indata=Ndata/Ncdps4g.su
15 outdata=Nstack4.su
16
17 sunmo < $indata \
18 cdp=933,958,983,1008,1033,1058,1083,\
1108,1133,1158,1183,1208,1233,1258,1283 \
19 tnmo=0.0,5.52263,6.44307,6.97119,7.65775,8.50274,10.9849 \
20 vnmo=1500.,1522.81,1460.57,1622.4,1834.01,2331.93,2892.08 \
21 tnmo=0.0,5.53018,6.49588,7.55967,9.19685,10.9774 \
22 vnmo=1500.,1497.92,1497.92,1634.84,2966.77,3016.56 \
23 tnmo=0.0,5.52263,6.35254,7.40878,9.27229,10.9849 \
24 vnmo=1500.,1497.92,1460.57,1659.74,2307.03,2991.67 \
25 tnmo=0.0,5.53772,7.48422,8.93279,10.9925 \
26 vnmo=1500.,1497.92,1634.84,2282.14,3564.27 \
27 tnmo=0.0,5.55281,6.99383,7.5144,8.48011,10.9925 \
28 vnmo=1500.,1522.81,1659.74,1734.43,2319.48,2991.67 \
29 tnmo=0.0,5.53018,6.69959,7.46159,9.45336,10.9774 \
30 vnmo=1500.,1485.47,1572.6,1871.35,2282.14,2929.43 \
31 tnmo=0.0,5.53018,6.79012,7.58985,8.88752,10.9774 \
32 vnmo=1500.,1497.92,1622.4,1771.77,2406.61,2979.22 \
33 tnmo=0.0,5.53772,6.29973,7.3786,8.7668,10.9698 \
34 vnmo=1500.,1497.92,1497.92,1759.32,2842.29,3589.17 \
35 tnmo=0.0,5.53772,7.30315,9.16667,10.9925 \
36 vnmo=1500.,1510.36,1672.19,2630.68,3053.91 \
37 tnmo=0.0,5.52263,7.39369,9.60425,10.9849 \
38 vnmo=1500.,1497.92,1721.98,2443.96,3004.11 \
39 tnmo=0.0,5.53018,6.93347,7.55213,10.9849 \
40 vnmo=1500.,1535.26,1634.84,1734.43,2867.19 \
41 tnmo=0.0,5.52263,7.43896,8.2915,10.9849 \
42 vnmo=1500.,1510.36,1647.29,2282.14,2979.22 \
43 tnmo=0.0,5.53018,7.49931,8.13306,10.9698 \
44 vnmo=1500.,1510.36,1659.74,1921.15,2555.99 \
45 tnmo=0.0,5.54527,7.35597,10.9849 \
46 vnmo=1500.,1510.36,1697.08,3004.11 \
47 tnmo=0.0,5.53772,7.29561,8.19342,9.08368,10.9925 \
48 vnmo=1500.,1522.81,1746.88,2456.41,2879.64,4248.91 |
49 sustack > $outdata
50
51 # Exit politely from shell
52 exit
53
We plot the stack file with suximage.
suximage < Nstack4.su key=cdp perc=95 &
We see that our stack file (Figure 11.2) does not resemble the original stack file
(Figure 10.3, right). We suspect the original file Nstack.su is actually a stacked and
migrated file. (In fact, this is the case, as is clear from the publication by Moore et al.,
1990.)
The distance between CDPs (reported in Moore et al., 1990) is 16.667 m (parameter dxcdp, line 22). Because we do
not know rms velocities for these data, we set vscale=1.0 (line 25). We process for the image,
not for velocities. Our first velocity is 1400 m/s and our last is 2000 m/s, a narrow range.
11 #================================================
12 # USER AREA -- SUPPLY VALUES
13 #------------------------------------------------
14
15 # Seismic files
16 indata=Nstack4.su # SU format
17 outdata=Ndata/Nmigcvp.su # migration Constant Velocity Panels
18
19 # Migration variables
20 cdpmin=900 # Start CDP value
21 cdpmax=1300 # End CDP value
22 dxcdp=16.667 # distance between adjacent CDP bins (m)
23 smig=1.0 # stretch factor (0.6 typical if vrms increasing)
24 # [the "W" factor] (Default=1.0)
25 vscale=1.0 # scale factor to apply to velocities (Default=1.0)
26 lstaper=20 # length of side tapers (traces) (Default=0)
27 lbtaper=100 # length of bottom taper (samples) (Default=0)
28
29 # Velocity panel variables
30 firstv=1400 # first velocity value
31 lastv=2000 # last velocity value
32 increment=200 # velocity increment
33
34 numVtest=100 # use to limit number of velocity panels
35 # otherwise, use very large value (100)
36
37 #================================================
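With these USER AREA values, the number of constant-velocity panels the script makes follows directly from the velocity range:

```shell
# Panels implied by the values above: 1400, 1600, 1800, 2000 m/s.
firstv=1400
lastv=2000
increment=200
echo `expr \( $lastv - $firstv \) / $increment + 1`   # prints 4
```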
We do not want to waste viewing space on the deep-water layer, and we see no
significant reflections below 7.8 seconds.
Figure 11.3: Left: Original migrated data file. Right: Our constant-velocity Stolt
migration at 1600 m/s.
We find that our best match to the original data, and our best image, is the 1600 m/s
migration (Figure 11.3). Notice that the migration “smiles” above the water bottom have
been muted on the original data set, cosmetically improving the data set’s appearance.
11.3.4 Summary
The Nankai data show many diffractions. As you look at the various migrations, you
can see that the reflectors do not change position; the geology is generally flat. When
the data are under-migrated, the diffractions are not collapsed; they remain "frowns."
When the data are over-migrated, the diffractions change to "smiles." The original data
set appears to be slightly over-migrated.
After we migrated Model 4 (in Chapter 9), we used CMP 110 to compare the stacking
velocity to the migration (rms) velocity. That was a valid comparison because we had
calibrated the velocity scale factor, vscale, using our model velocity. However, we cannot
expect to derive geologic velocities by migrating the Nankai data because we did not do a
similar calibration.
The Nankai data set is difficult to process because the geology is under 4.5 km of
water. Because of this, the moveout that we expect to see and use for velocity analysis is
subtle. Proper velocity analysis of this data set demands patience and some prior
knowledge of the local velocities, particularly the water velocity. We encourage you to
improve upon our image.
We supply the CDP spacing (16.667 m) for dx, sugazmig's name for the CDP bin distance. The velocities are an approximation
based on the previous Stolt migrations.
Note: For sugazmig, the velocities are INTERVAL VELOCITIES.
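As background for readers unfamiliar with the distinction: Dix's equation converts a pair of rms-velocity picks to the interval velocity between them. The numbers below are made up for illustration; they are not picks from this data set.

```shell
# Dix interval velocity between two rms picks (illustrative values):
# v_int = sqrt( (v2^2*t2 - v1^2*t1) / (t2 - t1) )
awk 'BEGIN {
  t1 = 5.5; v1 = 1500    # shallower pick (s, m/s)
  t2 = 7.0; v2 = 1650    # deeper pick
  printf "%.0f\n", sqrt((v2*v2*t2 - v1*v1*t1) / (t2 - t1))
}'   # prints 2111
```

Note how a modest rms increase (1500 to 1650 m/s) implies a much faster interval velocity (about 2111 m/s) between the two picks.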
1 #! /bin/sh
2 # File: Ngazmig.sh
3 # Phase shift migration of Nankai stacked data.
4 # CDP spacing (dx) is 16.667 meters.
5
6 # Set messages on
7 set -x
8
9 sugazmig < Nstack4.su dx=16.6667 \
10 tmig=0,5,6,7,8,9 \
11 vmig=1500,1500,1950,2000,2050,2100 \
12 > Ngazmig.su
13
14 # Exit politely from shell
15 exit
16
Figure 11.4: Left: Original migrated data file. Right: Our migration using sugazmig.
This migration took half an hour to run on a Sun UltraSPARC III with four
processors. Figure 11.4 shows that our migration is a fair match to the original migration.
Although sugazmig does some internal zero-padding, you can see minor migration
artifacts on the sides of the section at about 6 seconds. If you look at the bottom of the
time section (11 seconds), you can also see migration artifacts. However, the bottom
artifacts are unimportant since there are no strong reflections below 10 seconds. Program
sugazmig does not have taper parameters, in contrast to sustolt's lstaper and lbtaper parameters.
We used the following command to display the migrated file:
suwind < Ngazmig.su tmin=5.5 tmax=8 |
suximage perc=95 title="Nankai stack + sugazmig" &
Our third data set is another 2-D line of seismic data provided by Prof. Greg Moore of
the University of Hawaii. This “Taiwan” data set was collected near the coast of Taiwan
in 1995 (Berndt & Moore, 1999) by the University of Hawaii, San Jose State University,
and National Taiwan University. The size of our shot gather file, Tshot.su, is 411 Mbytes.
(We have only one Taiwan data set, the original shot gathers.) This data set is more
difficult to process than the Nankai data set.
Below is the surange output of Tshot.su.
surange < Tshot.su
25344 traces:
tracl=(114769,140112) tracr=(1,25344) fldr=(800,975) tracf=(1,144) ep=(740,915)
cdp=(4027,4816) cdpt=(1,144) trid=1 nhs=1 offset=(-3663,-88)
sdepth=80000 swdep=(2120000,3540000) scalel=-10000 scalco=1 sx=(-4773100,-3965250)
gx=(-4764600,-3600343) gy=(766,94805) counit=3 tstat=12 ns=3999
dt=4000 gain=9 afilf=160 afils=72 lcf=3
hcf=160 lcs=6 hcs=72 year=95 day=260
hour=(12,13) minute=(0,59) sec=(0,59)
Table 12.1: Some Tshot.su key values
Key      Range
fldr     800 to 975
cdp      4027 to 4816
offset   -3663 to -88
ns       3999
dt       4000
Taiwan Data: Examine, Zero Traces, Re-examine
Keys tracl and tracr number the 25344 traces (although they use different starting
numbers).
Key fldr tells us there are 176 shot gathers (975-800+1).
Since the number of samples per trace (ns) is 3999 and the trace sample interval (dt)
is 4000 microseconds, the trace length is 16 seconds (more precisely, 15.996 seconds).
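The arithmetic, with dt in microseconds:

```shell
# Trace length from the header values: ns samples at dt microseconds.
ns=3999
dt=4000
echo "$ns $dt" | awk '{ printf "%.3f seconds\n", $1 * $2 / 1000000 }'
# prints 15.996 seconds
```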
Notice that the Taiwan shot gather trace headers already contain cdp values.
This data set has many acquisition key values such as source x-coordinate (sx),
receiver x-coordinate (gx), receiver y-coordinate (gy), and source depth (sdepth).
Figure 12.1 is a chart of the number of traces per shot gather. (We describe
sukeycount, the program that generated the data for the figure, in Appendix C.) The chart
shows that every shot gather has 144 traces.
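A quick consistency check against the surange output:

```shell
# 176 shot gathers (fldr 800 to 975) times 144 traces per gather
# should equal the 25344 traces reported by surange.
gathers=`expr 975 - 800 + 1`
echo `expr $gathers \* 144`   # prints 25344
```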
We do not want to process 16 seconds of data. Let’s use the first 5 seconds:
suwind < Tshot.su tmax=5 > Tshot5.su
The size of our windowed shot gather file is 133 Mbytes.
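Both file sizes follow from the SU trace format, a 240-byte header followed by ns four-byte floating-point samples. With tmax=5, suwind keeps 1251 samples per trace (0 to 5 s at dt = 4 ms).

```shell
# File-size check: bytes = traces * (240-byte header + ns * 4).
ntraces=25344
echo `expr $ntraces \* \( 240 + 3999 \* 4 \)`   # 411485184 (~411 Mbytes)
echo `expr $ntraces \* \( 240 + 1251 \* 4 \)`   # 132903936 (~133 Mbytes)
```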
14 # plot choices
15 myperc=95 # perc value for plot
16 plottype=0 # 0 = wiggle plot, 1 = image plot
17 Wplot=900 # Width of plot (pixels)
18 Hplot=600 # Height of plot (pixels)
19
20 # processing variables
21 sortkey=fldr # sort key (usually fldr or cdp)
22 firsts=800 # first sort (fldr or cdp) value
23 lasts=975 # last sort (fldr or cdp) value
24 increment=5 # sort key increment
25 tracekey=tracf # trace label key
26
27 #================================================
One of the important reasons we examine the shot gathers is to determine whether we
have bad hydrophones (or for land data, geophones). A bad “phone” is easy to find –
usually the entire trace has extremely high amplitudes or zero amplitudes (a “dead”
trace). An extremely high amplitude (noisy) trace should usually have its amplitudes
replaced by zeros (“killed”); a dead trace usually does not cause processing problems.
Figure 12.2 shows shot gather (fldr) 930. If you zoom the figure, you can see that
traces (tracf) 61, 62, and 143 seem to have anomalously high amplitudes. We confirmed
this with program sudumptrace. (We describe program sudumptrace in Appendix C.)
Below is the command we entered to use sudumptrace and the last ten lines of the
screen output. We used suwind twice to extract four traces from shot (fldr) 930.
suwind < Tdata/Tshot5.su key=fldr min=930 max=930 |
suwind key=tracf min=60 max=63 | sudumptrace
1242 4.968 -1.7802e-01 6.6929e+03 6.6929e+03 -1.0794e+00
1243 4.972 -9.1427e-02 6.6929e+03 6.6929e+03 -8.5523e-01
1244 4.976 -4.6168e-01 6.6929e+03 6.6929e+03 -5.7248e-01
1245 4.980 -1.3838e-01 6.6929e+03 6.6929e+03 -6.9779e-01
1246 4.984 7.6338e-01 6.6929e+03 6.6929e+03 -1.3278e+00
1247 4.988 1.2673e+00 6.6929e+03 6.6929e+03 -1.5405e+00
1248 4.992 7.0116e-01 6.6929e+03 6.6929e+03 -1.3003e+00
1249 4.996 -8.4290e-02 6.6929e+03 6.6929e+03 -1.1742e+00
1250 5.000 1.1331e-01 6.6929e+03 6.6929e+03 -1.0646e+00
1251 5.004 3.0200e-01 6.6929e+03 6.6929e+03 -6.6413e-01
The first column is time sample, the second column is time value (seconds). The next
four columns are amplitude values for traces 60-63. The amplitude values of traces 61
and 62 are constant at 6692.9 while the amplitudes of traces 60 and 63 vary at lower
amplitudes.
Below is the command we entered to use sudumptrace on the last four traces in shot
(fldr) 930. Below that are the last ten lines of the screen output.
suwind < Tdata/Tshot5.su key=fldr min=930 max=930 |
suwind key=tracf min=141 max=144 | sudumptrace
1242 4.968 -3.2838e+00 -2.9376e-01 1.7135e+03 -6.8847e-01
1243 4.972 -3.4147e+00 9.3122e-01 1.7135e+03 -6.8070e-01
1244 4.976 -2.8242e+00 1.7697e+00 1.7135e+03 4.7962e-01
1245 4.980 -3.0289e+00 9.1051e-01 1.7135e+03 5.8127e-01
1246 4.984 -3.4756e+00 4.9018e-01 1.7135e+03 -8.4847e-01
1247 4.988 -3.3356e+00 1.4016e+00 1.7135e+03 -8.5439e-01
1248 4.992 -3.4664e+00 1.8355e+00 1.7135e+03 5.6005e-01
1249 4.996 -3.2585e+00 8.9739e-01 1.7135e+03 3.7785e-01
1250 5.000 -3.1534e+00 1.8295e-02 1.7135e+03 -4.5137e-01
1251 5.004 -3.5228e+00 4.7177e-02 1.7135e+03 6.9378e-01
The first column is time sample, the second column is time value (seconds). The next
four columns are amplitude values for traces 141-144. The amplitude values of trace 143
are constant at 1713.5 while the amplitudes of the other traces vary at lower amplitudes.
Required parameters
min= first trace to kill (one-based)
Optional parameters
count= 1 number of traces to kill
Program sukill does not use a key; it finds traces by counting them. For our first shot
gather, we have to use sukill twice:
min=61 count=2
min=143 count=1
For our second shot gather, we have to use sukill twice:
min=205 count=2
min=287 count=1
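The pattern behind these numbers: sukill counts traces through the whole file, and each gather adds 144 traces, so for gather n (counting from zero) the bad traces fall at n*144 + 61 (and 62) and n*144 + 143.

```shell
# Bad-trace numbers for gather n (0-based); 144 traces per gather.
n=1
echo `expr $n \* 144 + 61` `expr $n \* 144 + 143`   # prints 205 287
```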
We need 2 x 176 uses of sukill. Below is script killer.sh that generates text for sukill.
1 #! /bin/sh
2 # File: killer.sh
3 # Generate sukill values for Taiwan shots
4
5 # Set messages on
6 ##set -x
7
8 outtext=killer.txt
9 rm -f $outtext # remove earlier trials
10
11 increment=144 # number of traces per shot gather
12 firsts=800 # first fldr value
13 lasts=975 # last fldr value
14
15 bad2=61 # bad traces count 61,62
16 bad1=143 # bad trace count 143
17
18 #------------------------------------------------
19
20 k=0 # fldr counter
21 j=`expr $lasts - $firsts + 1`
22 i=1 # loop counter
23 while [ $i -le $j ]
24 do
25
26 two=`expr $k + $bad2`
27 one=`expr $k + $bad1`
28 echo "sukill min=$two count=2 | sukill min=$one count=1 |" >> $outtext
29
30 k=`expr $k + $increment` # fldr counter
31 i=`expr $i + 1` # loop counter
32
33 done
34
35 #------------------------------------------------
36
37 # Exit politely from shell
38 exit
39
Below is script killer2.sh that zeros the problem traces. Lines 11-186 were generated by
killer.sh (except that on line 11 we added "< $indata" and on line 186 we replaced
the pipe (|) with "> $outdata").
1 #! /bin/sh
2 # File: killer2.sh
3 # kill regularly appearing bad traces
4
5 # Set messages on
6 set -x
7
8 indata=Tdata/Tshot5.su
9 outdata=Tdata/Tshot5k.su
10
11 sukill < $indata min=61 count=2 | sukill min=143 count=1 |
12 sukill min=205 count=2 | sukill min=287 count=1 |
13 sukill min=349 count=2 | sukill min=431 count=1 |
14 sukill min=493 count=2 | sukill min=575 count=1 |
15 sukill min=637 count=2 | sukill min=719 count=1 |
16 sukill min=781 count=2 | sukill min=863 count=1 |
17 sukill min=925 count=2 | sukill min=1007 count=1 |
18 sukill min=1069 count=2 | sukill min=1151 count=1 |
19 sukill min=1213 count=2 | sukill min=1295 count=1 |
20 sukill min=1357 count=2 | sukill min=1439 count=1 |
21 sukill min=1501 count=2 | sukill min=1583 count=1 |
22 sukill min=1645 count=2 | sukill min=1727 count=1 |
23 sukill min=1789 count=2 | sukill min=1871 count=1 |
24 sukill min=1933 count=2 | sukill min=2015 count=1 |
25 sukill min=2077 count=2 | sukill min=2159 count=1 |
26 sukill min=2221 count=2 | sukill min=2303 count=1 |
27 sukill min=2365 count=2 | sukill min=2447 count=1 |
28 sukill min=2509 count=2 | sukill min=2591 count=1 |
29 sukill min=2653 count=2 | sukill min=2735 count=1 |
30 sukill min=2797 count=2 | sukill min=2879 count=1 |
31 sukill min=2941 count=2 | sukill min=3023 count=1 |
32 sukill min=3085 count=2 | sukill min=3167 count=1 |
33 sukill min=3229 count=2 | sukill min=3311 count=1 |
34 sukill min=3373 count=2 | sukill min=3455 count=1 |
35 sukill min=3517 count=2 | sukill min=3599 count=1 |
36 sukill min=3661 count=2 | sukill min=3743 count=1 |
37 sukill min=3805 count=2 | sukill min=3887 count=1 |
38 sukill min=3949 count=2 | sukill min=4031 count=1 |
39 sukill min=4093 count=2 | sukill min=4175 count=1 |
40 sukill min=4237 count=2 | sukill min=4319 count=1 |
41 sukill min=4381 count=2 | sukill min=4463 count=1 |
42 sukill min=4525 count=2 | sukill min=4607 count=1 |
43 sukill min=4669 count=2 | sukill min=4751 count=1 |
44 sukill min=4813 count=2 | sukill min=4895 count=1 |
45 sukill min=4957 count=2 | sukill min=5039 count=1 |
46 sukill min=5101 count=2 | sukill min=5183 count=1 |
47 sukill min=5245 count=2 | sukill min=5327 count=1 |
48 sukill min=5389 count=2 | sukill min=5471 count=1 |
49 sukill min=5533 count=2 | sukill min=5615 count=1 |
50 sukill min=5677 count=2 | sukill min=5759 count=1 |
51 sukill min=5821 count=2 | sukill min=5903 count=1 |
52 sukill min=5965 count=2 | sukill min=6047 count=1 |
53 sukill min=6109 count=2 | sukill min=6191 count=1 |
54 sukill min=6253 count=2 | sukill min=6335 count=1 |
55 sukill min=6397 count=2 | sukill min=6479 count=1 |
56 sukill min=6541 count=2 | sukill min=6623 count=1 |
57 sukill min=6685 count=2 | sukill min=6767 count=1 |
58 sukill min=6829 count=2 | sukill min=6911 count=1 |
59 sukill min=6973 count=2 | sukill min=7055 count=1 |
Figure 12.4 shows shot gathers 930 and 820 after zeroing tracf traces 61, 62, and 143.
The ellipses on shot gather 820 show where 1-D frequency analysis is done in the next
section.
Figure 12.4: Shot (fldr) gathers 930 (top) and 820 (bottom) from file Tshot5k.su.
Note: In the Taiwan shot-order data, these bad traces have regular positions. In CMP-order
data, they would not. If we had not examined the gathers before sorting them to
CMP order, we would not have recognized the regularity of these traces (that they were
caused by particular bad hydrophones) and we would have had to identify them another
way, for example, by their consistently high amplitudes.
Figure 12.5 shows 1-D frequency analysis of one-half second of data from traces
windowed from shot gather 820, at the locations shown by the ellipses in the bottom shot
gather of Figure 12.4.
In Figure 12.5, the left side example is from a low-frequency noise area: tracf=55,
4-4.5 seconds.
suwind < Tdata/Tshot5k.su key=fldr min=820 max=820 |
suwind key=tracf min=55 max=55 tmin=4 tmax=4.5 > T820f55t4.su
We created the left side plots with the following command:
tf.sh T820f55t4.su
The right side example in Figure 12.5 is from a good signal area: tracf=93, 2-2.5
seconds.
suwind < Tdata/Tshot5k.su key=fldr min=820 max=820 |
suwind key=tracf min=93 max=93 tmin=2 tmax=2.5 > T820f93t2.su
We created the right side plots with the following command:
tf.sh T820f93t2.su
Figure 12.5: 1-D frequency analysis of wavelets from shot gather 820. Left: low
frequency noise wavelet. Right: good high frequency wavelet.
Script tf.sh makes three plots. The top plot is the input time series, the middle plot is
the frequency transform of the time series, and the bottom plot is the phase spectrum of
the time series. Below is script tf.sh.
1 #! /bin/sh
2 # File: tf.sh
3 # Time to frequency transform of a trace
4 # Output = 3 plots: time, freq amplitude, phase spectrum
5 # Use: tf.sh [input.su]
6 # Example: tf.sh wave1.su
7
8 # Set messages on
9 set -x
10
11 suxgraph < $1 -geometry 400x200+10+10 -bg white \
12 title="Input: $1" \
13 label1="Time (s)" label2="Amplitude" \
14 style=normal linecolor=0 &
15
16 ##suxwigb < $1 wt=1 va=2 style=normal labelcolor=black \
17 ## wbox=400 hbox=200 xbox=420 ybox=10 \
18 ## title="Input: $1" titlecolor=black \
19 ## label1="Time (s)" label2=" Amplitude" &
20
21 sufft < $1 | suamp mode=amp |
22 suxgraph -geometry 400x200+10+245 -bg white \
23 title="Amplitude Spectrum" \
24 label1="Frequency (Hz)" label2="Amplitude" \
25 style=normal linecolor=0 &
26
27 sufft < $1 | suamp mode=phase |
28 suxgraph -geometry 400x200+10+480 -bg white \
29 title="Phase Spectrum" \
30 label1="Frequency (Hz)" label2="Phase (rad)" \
31 style=normal linecolor=0 &
32
33 # "press return key to ..."
34 pause remove the plots
35
36 zap xgraph
37 ##zap xwigb
38
39 # Exit politely from shell
40 exit
41
• Line 6: As the example shows, the input file must be supplied when the script is
used. The input file is variable $1.
• Lines 11-14 make the time series plot with suxgraph.
• Line 21: sufft transforms the data and suamp mode=amp outputs the amplitude
spectrum of the information from sufft.
• Lines 22-25 plot the amplitude spectrum with suxgraph.
• Line 27: sufft transforms the data and suamp mode=phase outputs the phase
spectrum of the information from sufft.
• Lines 28-31 plot the phase spectrum with suxgraph.
• Lines 16-19 make an alternative time series plot using suxwigb. These lines can
be uncommented for use.
Note: If you middle click the mouse in an xwigb window, mouse location
information is printed in the upper left of the window. An xgraph window does not have
this feature.
• After the plots are made, Line 34 causes the following line to be printed to the terminal: "press return key to remove the plots." When you press Return, line 36 removes the plots with zap.
Taiwan: Gain-Filter, Filter-Gain
We now have 5 seconds of Taiwan data, and we have zeroed the bad hydrophones. In this
chapter we will
1. apply spherical divergence correction, then band-pass filter the data; and
2. apply a band-pass filter, then apply spherical divergence correction.
We will compare gathers from these two paths and consider whether one path is better.
13.2 Gain-Filter
After several trials, we decide to use a spherical divergence correction of 1.8 (Figures
13.1 and 13.2).
We apply spherical divergence correction with the command below.
sugain < Tdata/Tshot5k.su tpow=1.8 > Tdata/Tshot5kg.su
Figure 13.1: Top: Shot 820 without gain. Bottom: Shot 820 with gain, tpow=1.8.
Figure 13.2: Top: Shot 930 without gain. Bottom: Shot 930 with gain, tpow=1.8.
• a perc value for the wiggle plot of the input file, and
• a key (trace header) to label the x-axis of the wiggle plot.
The script creates two displays: a wiggle plot of the input traces and a corresponding
frequency display. The script transforms each input seismic trace to a frequency trace.
The script repeats these three questions and two displays until you quit the script. No
data are saved after processing.
If you make a mistake while typing, use the Delete key, not the Back Space key.
Figure 13.3: Plots of f-x and t-x after gain. Top: Shot 820. Bottom: Shot 930.
The top of Figure 13.3 shows that the low frequency noise overwhelms the
reflections.
Note: The x-axis annotation of the frequency plot (an xwigb window) shows tracr
values, the first default key. See Section 7.7.
Based on Figure 13.3, we will use the following command to filter the gained data:
sufilter < Tdata/Tshot5kg.su f=18,23,55,60 amps=0,1,1,0 > Tdata/Tshot5kgf.su
Figure 13.4: After gain, then band-pass filter. Top: Shot 820. Bottom: Shot 930.
Let’s continue our experiments by reversing the process order.
13.3 Filter-Gain
We use the two shot gather files windowed from the 2-D line at the beginning of Section 13.2: T5k820.su and T5k930.su.
We examine these files with script fxdisp.
Figure 13.5: Plots of f-x and t-x. Top: Shot 820. Bottom: Shot 930.
Compare Figure 13.5, no gain, to Figure 13.3, fxdisp after gain. Without gain, the low
frequency noise on the near-offset traces of gather 820 does not overwhelm the
reflections.
After discussion, we decide to use the same sufilter values we used in Section 13.2.
sufilter < Tdata/Tshot5k.su f=18,23,55,60 amps=0,1,1,0 > Tdata/Tshot5kf.su
Figure 13.6: Top: Shot 820 without gain. Bottom: Shot 820 with gain, tpow=2.0.
This time, we think a tpow value of 2.0 nicely balances the data (Figures 13.6, 13.7).
Figure 13.7: Top: Shot 930 without gain. Bottom: Shot 930 with gain, tpow=2.0.
Table 13.2 shows the amplitude range increase from spherical divergence correction,
which we apply with sugain and tpow=2.0.
             without gain   with gain
Shot 820:
  maximum dB      0             0
  minimum dB    -94          -146.9
Shot 930:
  maximum dB      0             0
  minimum dB    -86.59       -151.5
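The dB values compare each amplitude with the trace maximum, so the maximum itself is 0 dB. As a reminder of the scale (the formula is standard; the 0.5 ratio is only an example), an amplitude at half the maximum sits about 6 dB down:

```shell
# Amplitude ratio in decibels: dB = 20 * log10(a / a_max).
awk 'BEGIN { printf "%.1f\n", 20 * log(0.5) / log(10) }'   # prints -6.0
```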
Figure 13.8: After band-pass filter, then gain. Top: Shot 820. Bottom: Shot 930.
13.4 Discussion
When we compare Figures 13.4 and 13.8, we see minor differences, but not major
differences. Recall that we used different gain values in the two tests. While we used the
same frequency pass-band for both tests, we did consider passing slightly more of the lower frequencies.
f-k Filter and Deconvolution
Real data have noise of various kinds. Some signal enhancement programs are used
prestack (before stacking) and some are used post-stack (after stacking CMPs). In the
previous chapter, we used gain and band-pass filtering on prestack data. We only used
one option out of many in sugain, but you are free to test other sugain options. In this
chapter, we present two popular signal enhancement tools. These tools could prove useful
in processing the Taiwan data set; however, we were unable to improve that data set
using them. (Perhaps you can find a way!) To demonstrate their capability, we use them
on Oz files (shot gathers) that we get from the Colorado School of Mines Center for
Wave Phenomena web site (Section 3.3).
The first tool is a frequency-wavenumber (f-k) filter, a two-dimensional filter. We see
how to use an f-k filter through the sudipfilt program. For a brief description of the f-k
domain, see Sheriff (2002): f-k domain.
The second tool is deconvolution. We implement a deconvolution (decon) script that
uses suacor, an autocorrelation program, and supef, a prediction-error filter.
Figure 14.1: Input gather oz08w.su. Left: Seismic after gain (tpow=2.0). Right: f-k plot
of seismic after gain.
23 Slopes: 4,5,6,7
24
25 Enter 1 for more f-k filter testing
26 Enter 2 to EXIT
27 2
28
29 Slopes are in ==> fk.txt
30 Passed data are in ==> fkpass.su
Figure 14.2: Output of slope values 4,5,6,7. Top: Data passed by filter. Bottom: Data
rejected by filter.
192
193 # Plot seismic passed data
194 if [ $plottype -eq 0 ] ; then
195 suxwigb < tmp2 xbox=10 ybox=10 wbox=300 hbox=500 \
196 label1=" Time (s)" label2="Offset" \
197 title="After f-k PASS filter" key=offset \
198 perc=$myperc verbose=0 &
199 else
200 suximage < tmp2 xbox=10 ybox=10 wbox=300 hbox=500 \
201 label1=" Time (s)" \
202 title="After f-k PASS filter" \
203 perc=$myperc verbose=0 &
204 fi
205
206 # Plot f-k passed data
207 suspecfk < tmp2 dx=$dx dt=$dt |
208 suximage xbox=320 ybox=10 wbox=300 hbox=500 \
209 label1=" Frequency (Hz)" label2="Wavenumber (k)" \
210 title="f-k Spectrum after PASS filter" \
211 cmap=hsv2 legend=1 units=Amplitude verbose=0 \
212 grid1=dots grid2=dots perc=99 &
213
214 # Apply reject filter
215 sudipfilt < tmp1 dx=$dx dt=$dt slopes=$slopes amps=1,0,0,1 > tmp3
216
217 # Plot seismic rejected data
218 if [ $plottype -eq 0 ] ; then
219 suxwigb < tmp3 xbox=630 ybox=10 wbox=300 hbox=500 \
220 label1=" Time (s)" label2="Offset" \
221 title="After f-k REJECT filter" key=offset \
222 perc=$myperc verbose=0 &
223 else
224 suximage < tmp3 xbox=630 ybox=10 wbox=300 hbox=500 \
225 label1=" Time (s)" \
226 title="After f-k REJECT filter" \
227 perc=$myperc verbose=0 &
228 fi
229
230 # Plot f-k rejected data
231 suspecfk < tmp3 dx=$dx dt=$dt |
232 suximage xbox=940 ybox=10 wbox=300 hbox=500 \
233 label1=" Frequency (Hz)" label2="Wavenumber (k)" \
234 title="f-k Spectrum after REJECT filter" \
235 cmap=hsv2 legend=1 units=Amplitude verbose=0 \
236 grid1=dots grid2=dots perc=99 &
237
238 #------------------------------------------------
239 # More f-k or exit
240 #------------------------------------------------
241
242 echo " "
243 echo "Enter 1 for more f-k filter testing"
244 echo "Enter 2 to EXIT"
245 > /dev/tty
246 read choice2
247
248 case $choice2 in
249 1)
250 ok=false
251 ;;
252 2)
253 cp tmp2 fkpass.su
254 cp tmp3 fkrejj.su
255 echo "sudipfilt < $indata dx=$dx dt=$dt \\" >> tmp4
256 echo " slopes= amps=" >> tmp4
257 echo " "
258 echo "Processing log is in ==> fk.txt"
259 echo " Passed data are in ==> fkpass.su"
260 echo " Passed data are in ==> fkpass.su" >> tmp4
261 echo "Rejected data are in ==> fkrejj.su"
262 echo "Rejected data are in ==> fkrejj.su" >> tmp4
263 cp tmp4 fk.txt
264 pause exit
265 zap xwigb > tmp5
266 zap ximage > tmp5
267 ok=true
268 ;;
269 esac
270
271 new=false # true = first test
272
273 done
274
275 #------------------------------------------------
276 # Exit
277 #------------------------------------------------
278
279 # Remove temporary files
280 rm -f tmp*
281
282 # Exit politely from shell
283 exit
284
The user-supplied values are on lines 13-22. Lines 21 and 22 are where trace spacing
and the time sample interval are set. The next lines, 26-40, discuss these parameters.
Lines 42-51 are written to the screen as soon as the script starts so the user will know
what outputs to expect.
Line 54 removes temporary files that might be left from a previously crashed run.
Lines 60-67 document the use of internal files.
Lines 73-77 ask the user if t^(power) gain is desired.
Line 80 copies the seismic data to an internal file while applying gain, or no gain.
Lines 83-85 write the input file name and the gain value to the temporary log files.
Lines 91-101 plot the input seismic data using suxwigb or suximage.
Line 103 transforms the input seismic data to f-k space. Lines 104-108 plot the 2-D
transform of the input seismic data.
Actual filter testing occurs between lines 114 and 273.
Line 114, parameter new, is a test for the if block of lines 122-168. If this is a first test
(the first time through the loop), the input data are copied from tmp0 to tmp1 (line 124).
If this is not the first test, the user is faced with choice1, lines 127-130. If the user
chooses to start fresh, the original data in tmp0 are copied to tmp1 (line 134). If the user
chooses to re-filter data already filtered, another choice (choice3) must be made: whether
to reprocess the data from the “pass” filter or from the “reject” filter (lines 141-144).
After a previous test, the “pass” data are in tmp2 (line 191) and the “reject” data are in
tmp3 (line 215). The user’s choice (choice3) determines which file is copied to
(overwrites) tmp1 for the next test (line 149 or line 155). At the bottom of the test loop,
line 271, new=false ensures that, after the first time through the loop, the user will always
have to see choice1.
If this is not the first time through the loop, lines 165-166 remove the previous “pass”
and “reject” data files.
Lines 175-181 explain the input format of the slopes and read the slopes from the
user.
Line 191 applies the “pass” filter (amps=0,1,1,0). Lines 194-204 plot the seismic
“pass” data. Line 207 transforms the “pass” seismic data to f-k space. Lines 208-212 plot
the 2-D transform of the “pass” seismic data.
Line 215 applies the “reject” filter (amps=1,0,0,1). Lines 218-228 plot the
seismic “reject” data. Line 231 transforms the “reject” seismic data to f-k space. Lines
232-236 plot the 2-D transform of the “reject” seismic data.
Lines 243-246 offer the user the choice (choice2) to re-filter or exit the script.
If the user chooses to re-filter (choice2=1), line 250 lets the script pass the user to the
top of the loop. If the user chooses to exit (choice2=2):
• Lines 253-254 create permanent disk files from the last processed “pass” and
“reject” test.
• Lines 255-256 put a template of the sudipfilt parameters into the temporary log
file.
• Lines 258-262 write information to the screen and the log file.
• Line 263 copies the temporary log file to a permanent disk file.
• Line 264 makes the script pause so the user can read the screen messages.
• Lines 265-266 close the plot windows and re-direct some of the accompanying
system messages to a temporary file.
• Line 267 sets a flag (ok=true) that ends cycling through the loop (see line 117).
After the first pass through the loop, line 271 (new=false) ensures that the user is directed
to the else portion of the if block (lines 122-168).
Line 280 removes all temporary files.
Line 283 exits the script.
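The loop control at the heart of ifk.sh reduces to a read/case/flag pattern. The sketch below replaces the interactive read with a scripted list of choices (an assumption for illustration only): choice 1 leaves ok=false so the test loop repeats, and choice 2 sets ok=true so the loop ends.

```shell
# Minimal sketch of the ifk.sh loop-control pattern.
# The scripted list "1 1 2" stands in for interactive "read choice2".
ok=false
for choice2 in 1 1 2
do
  case $choice2 in
    1) ok=false ;;   # more testing: loop again
    2) ok=true ;;    # exit: the enclosing "while [ $ok = false ]" stops
  esac
done
echo "ok=$ok"
```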
71 traces:
tracl=(26,96) tracr=(26,96) fldr=10008 tracf=(26,96) cdp=(33,103)
cdpt=1 trid=1 nvs=1 nhs=1 duse=1
scalel=1 scalco=1 counit=1 delrt=4 muts=4
ns=750 dt=4000
From surange, we know the sample interval is 0.004 seconds (dt=4000). From
Table 1-8 of Seismic Data Processing (Yilmaz, 1987) or Table 1-13 of Seismic Data
Analysis (Yilmaz, 2001), we know the trace spacing is 50 meters.
Table 14.1: Absolute and relative filter slope values

  Absolute (samples/trace)      4           5           6           7
  Relative (seconds/meter)   0.016/50    0.020/50    0.024/50    0.028/50
                            = 0.00032   = 0.00040   = 0.00048   = 0.00056
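The arithmetic behind Table 14.1 is: relative slope = n × dt / dx, where n is the absolute slope in samples/trace. A quick sketch (plain awk, using dt=0.004 s and dx=50 m from above):

```shell
# Relative slope (s/m) = (samples/trace) * dt / dx, for dt=0.004, dx=50.
slopes=$(awk -v dt=0.004 -v dx=50 'BEGIN {
  for (n = 4; n <= 7; n++)
    printf "%s%g", (n > 4 ? "," : ""), n * dt / dx
}')
echo "slopes=$slopes"
```

The printed list matches the relative slopes we type into the ifk.sh dialog below.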
In script ifk.sh, we change lines 21-22:
20 # Processing variables [ Instructions below ]
21 dx=50 # trace spacing
22 dt=0.004 # time sample interval
Below is the screen dialog with our input highlighted. Line numbers are added for
discussion. We input values on lines 14, 21, and 27. We supply our slope values (relative
units) on dialog line 21.
1 ------------------------------------------------------
2 f-k Filter Test
3 ---------------
4 From your slope values, two filters are created --
5 a pass filter and a reject filter.
6 When you exit, the following files are output:
7 Slopes are in ==> fk.txt
8 Passed data are in ==> fkpass.su
9 Rejected data are in ==> fkrejj.su
10 ------------------------------------------------------
11
12 Supply gain power value for t^(power)
13 For no gain, supply 0
14 2
15
16 Supply filter slopes.
17 Input: s1,s2,s3,s4 where s1 < s2 < s3 < s4
18 Example: 3.0,3.5,4.0,4.5
19 or: -4.5,-4.0,-3.5,-3.0
20 Use commas. Do not use spaces.
21 .00032,.00040,.00048,.00056
22
23 Slopes: .00032,.00040,.00048,.00056
24
25 Enter 1 for more f-k filter testing
26 Enter 2 to EXIT
27 2
28
29 Processing log is in ==> fk.txt
30 Passed data are in ==> fkpass.su
31 Rejected data are in ==> fkrejj.su
32
Figure 14.3: Input gather oz08w.su. Left: Seismic after gain (tpow=2.0). Right: f-k plot
of seismic after gain.
Figure 14.4 shows the four output plots: a t-x plot and an f-k plot after application of
the pass filter and a t-x plot and an f-k plot after application of the reject filter. (The test
can be repeated.)
We rename the output files so they are not overwritten by a later run. (Note: use a
semicolon to separate multiple commands on the same line.)
mv fkpass.su fkpass08rel.su; mv fkrejj.su fkrejj08rel.su
We are happy to see that Figure 14.2 and Figure 14.4 look the same. We compare
these data sets in the next section.
Using the following commands, we concatenate the three files with the difference file
in the middle. The first command creates the file that eventually holds all three files.
cat fkrejj08abs.su > fkrejj08BIG.su
cat fkrejj08diff.su >> fkrejj08BIG.su
cat fkrejj08rel.su >> fkrejj08BIG.su
Figure 14.6: File fkrejj08BIG.su, the concatenation of the absolute method reject file
(left), the relative method reject file (right), and the difference of the two (middle).
In Figure 14.6, we see that the amplitudes of the difference file are much smaller than
the amplitudes of the other two files. Figure 14.6 shows vertical bands because the
computer screen cannot clearly show many traces squeezed into a narrow space.
To understand the vertical and horizontal annotations on the f-k plots, remember the
definition of the Nyquist frequency. The Nyquist frequency, introduced in Section 10.4,
is the folding frequency. Mathematically, it is the inverse of twice the sample interval.
Units of temporal frequency (f) are the number of cycles per second. Units of spatial
frequency (wavenumber or k) are the number of cycles per unit distance. In Table 14.3,
we present the distance and time Nyquist frequencies for the absolute and relative
examples we used earlier.
Table 14.3: Nyquist spatial and temporal frequencies.

             Absolute                    Relative
  ifk.sh     dx=1  dt=1                  dx=50  dt=0.004
  Nyquist    kN = 0.5   fN = 0.5 Hz      kN = 0.01   fN = 125 Hz
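The entries in Table 14.3 follow directly from kN = 1/(2 dx) and fN = 1/(2 dt). A quick check for the relative case (dx=50 m, dt=0.004 s):

```shell
# Nyquist wavenumber and frequency for dx=50 m, dt=0.004 s.
dx=50
dt=0.004
kN=$(awk -v dx="$dx" 'BEGIN { printf "%g", 1 / (2 * dx) }')
fN=$(awk -v dt="$dt" 'BEGIN { printf "%g", 1 / (2 * dt) }')
echo "kN=$kN cycles/m  fN=$fN Hz"
```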
Figure 14.7 is a stretched version of Figure 14.1, right. Here, the horizontal
annotations are easier to see.
Figure 14.7: An f-k plot of the input data, made using absolute dx and dt values.
Figure 14.8 is a stretched version of Figure 14.3, right. Here again, the horizontal
annotations are easier to see.
For both Figures 14.7 and 14.8, the maxima of the axes (negative and positive) are the
respective Nyquist frequencies, spatial and temporal.
Figure 14.8: An f-k plot of the input data, made using relative dx and dt values.
14.5 Deconvolution
“Deconvolution is a process that improves the temporal resolution of seismic data by
compressing the basic seismic wavelet” (Yilmaz, 1987; Yilmaz, 2001: Chapter 2).
Figure 14.9: Top: Autocorrelation of trace with reverberation. Bottom: Same trace after
deconvolution using prediction distance α and operator length n (Robinson and Treitel,
2000, Figure 12-3). Used with permission.
ns=1325 dt=4000
We want to use only the first 3 seconds of this data set (4 ms sample interval), and we
want to mute the strong refractions. We examine the data using the following one-line
command:
sugain < oz16.su tpow=2 | suxwigb perc=95 &
The following command mutes the early arrivals and windows the first 3 seconds:
sumute < oz16.su key=tracl tmute=1.200,0.395 xmute=1,48 |
suwind tmax=3 > oz16m3.su
Below is the surange output of oz16m3.su.
surange < oz16m3.su
48 traces:
tracl=(1,48) tracr=(1,48) fldr=10016 tracf=(1,48) cdp=(16,63)
cdpt=1 trid=1 nvs=1 nhs=1 duse=1
scalel=1 scalco=1 counit=1 delrt=4 muts=(395,1200)
ns=750 dt=4000
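For traces between the two (xmute, tmute) control points, sumute interpolates the mute time linearly; the muts=(395,1200) values in the surange output above are these end-point times in milliseconds. A sketch of the interpolation (plain awk; traces 1, 24, and 48 chosen only for illustration):

```shell
# Linear interpolation of mute times between (tracl=1, 1.200 s)
# and (tracl=48, 0.395 s), evaluated at traces 1, 24, and 48.
mutes=$(awk 'BEGIN {
  x1 = 1; t1 = 1.200; x2 = 48; t2 = 0.395
  split("1 24 48", tr, " ")
  for (i = 1; i <= 3; i++)
    printf "%s%.3f", (i > 1 ? " " : ""), \
      t1 + (t2 - t1) * (tr[i] - x1) / (x2 - x1)
}')
echo "$mutes"
```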
As we stated in the introduction, script idecon uses programs supef and suacor.
Program supef does the deconvolution. Script idecon uses suacor to make plots of each
trace’s autocorrelation, an important deconvolution analysis tool. Below is the parameter
list from the supef selfdoc:
SUPEF - Wiener predictive error filtering
Required parameters:
dt is mandatory if not set in header
Optional parameters:
minlag=dt first lag of prediction filter (sec)
maxlag=last lag default is (tmax-tmin)/20
pnoise=0.001 relative additive noise level
mincorr=tmin start of autocorrelation window (sec)
maxcorr=tmax end of autocorrelation window (sec)
showwiener=0 =1 to show Wiener filter on each trace
mix=1,... array of weights (floats) for moving
average of the autocorrelations
Notice that supef has windowing parameters for the autocorrelation (parameters mincorr
and maxcorr).
Below is the suacor selfdoc:
SUACOR - auto-correlation
Optional Parameters:
ntout=101 odd number of time samples output
norm=1 if non-zero, normalize maximum absolute output to 1
sym=1 if non-zero, produce a symmetric output from
lag -(ntout-1)/2 to lag +(ntout-1)/2
Notice that suacor does not have any windowing parameters.
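What suacor computes per trace is, in essence, a lag-by-lag sum of products; with norm=1 the zero-lag value is scaled to 1. A toy sketch on a three-sample series (the series 1, 2, 3 is an arbitrary example, not seismic data):

```shell
# Toy normalized autocorrelation of the series 1 2 3.
# a[lag] = sum_i x[i]*x[i+lag]; norm=1 divides by the zero-lag value.
acor=$(awk 'BEGIN {
  n = split("1 2 3", x, " ")
  for (lag = 0; lag < n; lag++) {
    s = 0
    for (i = 1; i <= n - lag; i++)
      s += x[i] * x[i + lag]
    a[lag] = s
  }
  for (lag = 0; lag < n; lag++)
    printf "%s%g", (lag ? " " : ""), a[lag] / a[0]
}')
echo "$acor"
```

Because the window choice changes the autocorrelation, and supef (not suacor) has the mincorr/maxcorr parameters, the idecon script windows the traces itself (with suwind) before piping them to suacor.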
seismic file and the percent white noise for supef (supef parameter pnoise). All other
parameters are input in response to questions that appear on the screen. Below is the
screen dialog with our input highlighted. Line numbers are added for discussion. We
input values on lines 8, 14, 18, 22, 37, 40, 42, 50, 52, 56, 60, 74, 77, 79, 87, 89, and 93.
1
2 *** DECONVOLUTION TEST ***
3
4 ENVIRONMENT QUESTIONS ...
5
6 Supply gain power value for t^(power)
7 For no gain, supply 0
8 2
9
10 Supply "perc" value for plots
11 Typical values are 100, 98, 95, 90, 85
12 If you need a perc value less than 70,
13 your data are probably poorly gained.
14 98
15
16 Supply "key" for plot x-axis annotation
17 Typical values are: tracl tracf offset
18 tracl
19
20 For wiggle traces, enter w
21 For image display, enter i
22 w
23
24 ITERATION QUESTIONS ...
25
26 oz16m3.su:
27
28 ----------------------------------------
29 Sample interval is .004000 seconds
30 Data last time is 3.000000 seconds
31 ----------------------------------------
32
33 Autocorrelation window start time is .004000
34 Autocorrelation window last time is 3.000000
35
36 Change autocorrelation analysis window ? (y/n)
37 y
38
39 Supply first time (seconds)
40 1.
41 Supply last time (seconds)
42 2.8
43
44 oz16m3.su:
45
46 Autocorrelation first time is 1.
47 Autocorrelation last time is 2.8
48
49 Enter prediction length (seconds)
50 .030
51 Enter operator length (seconds)
52 0.175
53
54 Enter 1 for more decon testing
55 Enter 2 for EXIT
56 1
57
Figure 14.10: Left: Seismic data for deconvolution. Right: Autocorrelation of the
seismic traces. In this first decon test, the autocorrelation plot was made using seismic
traces windowed (suwind) from 1 second to 2.8 seconds. (See dialog lines 36-42).
Figure 14.11: Left: Seismic data after deconvolution. Right: Autocorrelation of the
windowed portion of the deconvolved seismic traces.
After the user supplies a prediction length and an operator length (dialog lines 49-52),
the plots of Figure 14.11 are made (a gapped deconvolution test).
For the second test, the user decides to deconvolve the output of the first test instead
of using the original input (dialog lines 58-61). The user is reminded of the seismic file’s
sample interval and total time (dialog lines 66-67) as well as the current settings for the
autocorrelation window (dialog lines 70-71).
Again, the user decides to change the autocorrelation window, supplying values that
correspond to the entire trace (dialog lines 73-79).
For the new test, the user supplies a prediction length of one sample and an operator
length (dialog lines 86-89), and the plots of Figure 14.12 are made for a “spiking”
deconvolution. The autocorrelation plot of Figure 14.12 is made from the entire trace.
The seismic plots show the prediction length and the operator length, while the
autocorrelation plots show the extent of the autocorrelation windows.
Figure 14.12: Left: Seismic data after spiking deconvolution. Right: Autocorrelation
of the deconvolved seismic traces (using the entire trace).
Below are the contents of idecon.txt, the log file. Line numbers are added for
discussion. Line 2 includes the percent white noise and the gain (t^(power)) values.
1 *** DECONVOLUTION TEST ***
2 Input file = oz16m3.su
3 Percent white noise = 0.001 Gain = 2
4 Sample interval is .004000 seconds
5 Data last time is 3.000000 seconds
6 --------------------------
7 -> Original data
8 Autocorrelation first time is 1.
9 Autocorrelation last time is 2.8
10 Prediction length = .030
11 Operator length = 0.175
12 -> Using modified data
13 Autocorrelation first time is .004
14 Autocorrelation last time is 3.000
15 Prediction length = .004
1 #! /bin/sh
2 # File: idecon.scr
3 # Run this script to start script idecon.sh
4
5 xterm -geom 60x18+10+544 -e idecon.sh
6
Line 1 invokes the shell. Line 5 opens a dialog window 60 characters wide by 18
characters high. Line 5 also places the dialog window 10 pixels from the left side of the
viewing area and 544 pixels from the top of the viewing area. Line 5 also starts idecon.sh,
the processing script, within that window.
47 # |
48 # **
49 # | *
50 # | *
51 # | *
52 # ------*------************************--*
53 # | * * **
54 # | * *
55 # | **
56 # |
57 #
58 # Output wavelet
59 #
60 #
61 # REFERENCE: Robinson, E.A., Treitel, S., 2000, Geophysical
62 # Signal Analysis, Society of Exploration
63 # Geophysicists, Tulsa, p. 278-279.
64 #
65 #------------------------------------------------
66 # Describe temporary files
67 #------------------------------------------------
68
69 # tmp0 = input seismic file, after gain
70 # tmp1 = input to deconvolution:
71 # tmp0 (reproc=s) or tmp3 (reproc=r)
72 # tmp2 = input seismic windowed for autocorrelation
73 # tmp3 = output of the prediction error filter
74 # tmp4 = output of the prediction error filter
75 # windowed for autocorrelation
76 # tmp5 = ASCII log file of processing parameters
77 # tmp6 = ASCII file to reduce screen display of "zap"
78
79 #------------------------------------------------
80
81 echo " "
82 echo " *** DECONVOLUTION TEST ***"
83 echo " "
84 echo " ENVIRONMENT QUESTIONS ..."
85
86 # Remove temporary files
87 rm -f tmp*
88
89 #------------------------------------------------
90 # Supply gain value to t^(power)
91 # Supply perc value for plots
92 # Supply key value for plots
93
94 echo " "
95 echo "Supply gain power value for t^(power)"
96 echo "For no gain, supply 0"
97 > /dev/tty
98 read tpow
99
100 echo " "
101 echo "Supply \"perc\" value for plots"
102 echo " Typical values are 100, 98, 95, 90, 85"
103 echo " If you need a perc value less than 70,"
104 echo " your data are probably poorly gained."
168 taa=$tsamp
169 # initial setting for autocorrelation analysis window end
170 tzz=$tend
171
172 #------------------------------------------------
173
174 new=true # true = 1st time through loop
175 finish=false # true = exit decon script
176
177 #------------------------------------------------
178 # Begin testing loop
179 #------------------------------------------------
180
181 echo " "
182 echo " ITERATION QUESTIONS ..."
183
184 while [ $finish = false ]
185 do
186
187 if [ $new = true ] ; then
188
189 cp tmp0 tmp1
190 echo " -> Original data" >> tmp5
191
192 else
193
194 echo " "
195 echo "Enter R to re-process deconvolved data"
196 echo "Enter S to start over"
197 > /dev/tty
198 read reproc
199
200 case $reproc in
201 [sS])
202 cp tmp0 tmp1
203 echo " -> Using original data"
204 echo " -> Using original data" >> tmp5
205 ;;
206 [rR])
207 cp tmp3 tmp1
208 echo " -> Using modified data"
209 echo " -> Using modified data" >> tmp5
210 ;;
211 esac
212
213 fi
214
215 rm -f tmp3 tmp4
216
217 #------------------------------------------------
218 # Plot seismic
219
220 if [ $wiggle -eq 0 ] ; then
221 suxwigb < tmp1 perc=$myperc xbox=10 ybox=10 wbox=300 hbox=500 \
222 label1=" Time (s)" label2=$mykey key=$mykey \
223 windowtitle="Test input" \
224 title="Input to deconvolution" verbose=0 &
225 else
226 suximage < tmp1 perc=$myperc xbox=10 ybox=10 wbox=300 hbox=500 \
227 label1=" Time (s)" \
228 windowtitle="Test input" \
229 title="Input to deconvolution" verbose=0 &
230 fi
231
232 #------------------------------------------------
233 # Tell user current autocorrelation window settings
234
235 echo " "
299
300 #------------------------------------------------
301 # Prediction length & Operator length
302
303 echo " "
304 echo "Enter prediction length (seconds)"
305 > /dev/tty
306 read minlag
307 echo "Enter operator length (seconds)"
308 > /dev/tty
309 read maxlag
310
311 echo " Prediction length = $minlag" >> tmp5
312 echo " Operator length = $maxlag" >> tmp5
313
314 #------------------------------------------------
315 # Deconvolution
316
317 supef < tmp1 minlag=$minlag maxlag=$maxlag \
318 mincorr=$taa maxcorr=$tzz pnoise=$wnoise > tmp3
319
320 #------------------------------------------------
321 # After deconvolution -- plot data
322
323 if [ $wiggle -eq 0 ] ; then
324 suxwigb < tmp3 perc=$myperc xbox=634 ybox=10 wbox=300 hbox=500 \
325 label1=" Time (s)" label2=$mykey key=$mykey \
326 title="Pred length=$minlag, Oper length=$maxlag" \
327 windowtitle="Deconvolution" verbose=0 &
328 else
329 suximage < tmp3 perc=$myperc xbox=634 ybox=10 wbox=300 hbox=500 \
330 label1=" Time (s)" \
331 title="Pred length=$minlag, Oper length=$maxlag" \
332 windowtitle="Deconvolution" verbose=0 &
333 fi
334
335 #------------------------------------------------
336 # After deconvolution -- plot autocorrelation
337
338 suwind < tmp3 tmin=$taa tmax=$tzz > tmp4
339
340 if [ $wiggle -eq 0 ] ; then
341 suacor <tmp4 ntout=101 sym=0 |
342 suxwigb perc=$myperc xbox=946 ybox=10 wbox=300 hbox=500 \
343 label1=" Time (s)" label2=$mykey key=$mykey \
344 title="Acor: first time = $taa, last time = $tzz" \
345 windowtitle="Acor after decon" verbose=0 &
346 else
347 suacor < tmp4 ntout=101 sym=0 |
348 suximage perc=$myperc xbox=946 ybox=10 wbox=300 hbox=500 \
349 label1=" Time (s)" \
350 title="Acor: first time = $taa, last time = $tzz" \
351 windowtitle="Acor after decon" verbose=0 &
352 fi
353
354 #------------------------------------------------
355 # Do more decon or exit
356
362
363 case $selection in
364 1)
365 finish=false
366 ;;
367 2)
368 cp tmp3 idecon.su
369 cp tmp5 idecon.txt
370 echo " "
371 echo "Output seismic file is ==> idecon.su"
372 echo " Output log file is ==> idecon.txt"
373 pause exit
374 zap xwigb > tmp6 # decrease screen messages
375 zap ximage > tmp6 # decrease screen messages
376 finish=true
377 ;;
378 esac
379
380 new=false
381
382 done
383
384 #------------------------------------------------
385 # Exit
386 #------------------------------------------------
387
388 # Remove temporary files
389 rm -f tmp*
390
391 # Exit politely from shell
392 exit
393
Script idecon.sh has the following major sections:
• Lines 19-23 are user-supplied values: the input file and a percent white noise
value.
• Lines 27-64 describe prediction length and operator length and provide a
reference. Also, lines 66-77 describe the temporary files.
• Lines 81-182 operate before the deconvolution test loop.
o Line 87 removes old temporary files.
o Lines 95-98 get the user’s gain value.
o Lines 101-106 get the user’s perc value.
o Lines 109-112 get the user’s key plot value.
o Lines 118-130 get the user’s preference for wiggle or image plots.
o Line 135 applies gain to the input data and holds the output in tmp0.
o Lines 141-153 compute the sample interval (seconds) and the time length of
the traces (seconds).
o Lines 158-163 write information to the log file.
o Lines 167-170 create initial autocorrelation window analysis start and stop
times – the values computed in lines 141-153.
o Line 174 sets a flag that means this is the first time through the test loop.
o Line 175 sets a flag that means the user is not finished testing.
Returning to the Taiwan data: we now have 5 seconds of data in which we have zeroed
the bad hydrophones, applied a spherical divergence correction, and band-pass filtered the
shot gathers. The result is Tshot5kgf.su. We will now
• sort these shot gathers to CMP order,
• analyze the CMPs for stacking velocities,
• use the stacking velocities to apply NMO, and
• stack the CMPs.
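Behind the NMO step is the hyperbolic moveout equation t(x) = sqrt(t0^2 + x^2/v^2); sunmo shifts each sample up by t(x) - t0. A worked sketch for one hypothetical pick (t0 = 1 s, v = 1500 m/s; these values are for illustration, not from our picks) at a 600 m offset:

```shell
# Hyperbolic NMO for t0=1 s, v=1500 m/s, offset x=600 m (hypothetical values).
nmo=$(awk 'BEGIN {
  t0 = 1.0; v = 1500; x = 600
  tx = sqrt(t0 ^ 2 + (x / v) ^ 2)   # moveout hyperbola
  printf "tx=%.4f shift=%.4f", tx, tx - t0
}')
echo "$nmo"
```

After NMO, a reflection that arrived at 1.0770 s on this offset is flattened to 1 s, so the stack sums it in phase across the CMP.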
15.2 Sort
Sorting to CMP order is a one-line command:
susort < Tdata/Tshot5kgf.su cdp offset > Tdata/Tcmp5kgf.su
We send the sorted data to the Tdata directory since this file is just as large as the file of
windowed (0-5 seconds) shot gathers: 133 Mbytes.
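In the susort command, cdp is the primary sort key and offset is the secondary key: traces are ordered by cdp and, within each cdp, by offset. The same two-key ordering can be illustrated with the standard sort utility on a toy table of cdp/offset pairs (the numbers are arbitrary):

```shell
# Two-key numeric sort: primary key cdp (column 1), secondary key
# offset (column 2) -- the ordering susort applies to trace headers.
sorted=$(printf '%s\n' '4200 150' '4175 300' '4200 50' '4175 100' |
  sort -k1,1n -k2,2n | paste -sd';' -)
echo "$sorted"
```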
Taiwan: Sort, Velocity Analysis, NMO, Stack
plotting the velan with picks overlain and once for plotting the original velan. See lines
305 and 316 in Section 7.6.6.C.
suvelan < panel.$picknow nv=$nvs dv=$dvs fv=$fvs |
suximage xbox=10 ybox=10 wbox=300 hbox=450 perc=97 \
By changing perc from 99 (in Section 7.6.6.C) to 97, we clip the display at a lower
amplitude, which “increases the volume” of the weaker semblance values.
Below are two of the semblance plots with picks overlain. Our philosophy was to
pick generally increasing velocities. Better picks might be made by following subtle
events. We encourage you to experiment.
Figure 15.2: Semblance plots. Left: CMP 4225. Right: CMP 4575.
Below is the file of t-v picks. For the sake of presentation, we split line 1 into two
lines. As presented here, line 1 has no spaces at the end of the first part and no spaces at
the beginning of the second part, so the file remains usable. Remember to remove line 2
before using this file in sunmo.
1 cdp=4175,4200,4225,4250,4275,4300,4325,4350,4375,4400,4425,\
4450,4475,4500,4525,4550,4575,4600,4625,4650,4675,4700 \
2 #=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22 \
3 tnmo=0.0527704,0.527704,2.26913,3.219,3.7467,4.96042 \
4 vnmo=1485.83,1574.17,2325,2634.17,3252.5,4224.17 \
5 tnmo=0.0659631,1.0686,3.33773,5 \
6 vnmo=1463.75,1706.67,3053.75,3826.67 \
7 tnmo=0.0527704,0.844327,2.00528,3.37731,4.93404 \
8 vnmo=1507.92,1640.42,2325,3031.67,3426.41 \
9 tnmo=0.0395778,0.897098,2.41425,3.49604,4.97361 \
10 vnmo=1530,1640.42,2347.08,3009.58,4069.58 \
11 tnmo=0.0527704,0.620053,2.71768,3.64116,4.98681 \
12 vnmo=1463.75,1596.25,2479.58,2832.92,3583.75 \
13 tnmo=0.0395778,0.633245,2.62533,3.48285,4.98681 \
14 vnmo=1463.75,1596.25,2656.25,3208.33,3892.92 \
15 tnmo=0.0527704,0.633245,3.7467,4.98681 \
16 vnmo=1441.67,1552.08,3451.25,3848.75 \
17 tnmo=0.0527704,0.699208,2.8496,4.98681 \
18 vnmo=1507.92,1662.5,2567.92,3053.75 \
19 tnmo=0.0263852,1.04222,3.85224,4.97361 \
20 vnmo=1441.67,1772.92,3164.17,3716.25 \
21 tnmo=0.0527704,1.22691,3.10026,4.12929,5 \
22 vnmo=1463.75,1728.75,2590,3539.58,3804.58 \
23 tnmo=0.0527704,1.18734,2.4934,3.66755,5 \
24 vnmo=1463.75,1684.58,2413.33,2899.17,3252.5 \
25 tnmo=0.0527704,1.10818,2.94195,3.69393,4.97361 \
26 vnmo=1485.83,1706.67,2413.33,2921.25,3296.67 \
27 tnmo=0.0527704,0.91029,3.68074,4.98681 \
28 vnmo=1507.92,1772.92,3495.42,4091.67 \
29 tnmo=0.0659631,1.21372,3.69393,4.97361 \
30 vnmo=1530,1596.25,2965.42,3738.33 \
31 tnmo=0.0527704,1.21372,3.44327,5.01319 \
32 vnmo=1485.83,1662.5,2766.67,3694.17 \
33 tnmo=0.0395778,1.26649,3.79947,4.97361 \
34 vnmo=1463.75,1640.42,2877.08,3097.92 \
35 tnmo=0.0527704,2.24274,4.1161,4.97361 \
36 vnmo=1463.75,1883.33,2457.5,2943.33 \
37 tnmo=0.0395778,1.93931,3.25858,5.01319 \
38 vnmo=1441.67,1927.5,2457.5,3451.25 \
39 tnmo=0.0659631,1.27968,3.25858,4.97361 \
40 vnmo=1485.83,1618.33,2810.83,3826.67 \
41 tnmo=0.0527704,1.54354,3.13984,4.97361 \
42 vnmo=1485.83,1861.25,2766.67,3318.75 \
43 tnmo=0.0527704,1.41161,3.50923,4.98681 \
44 vnmo=1507.92,1905.42,2943.33,3340.83 \
45 tnmo=0.0527704,1.20053,2.3219,3.0343,4.98681 \
46 vnmo=1485.83,1839.17,2722.5,3031.67,3407.08 \
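Line 2 of the picks file (the "#=1,2,..." counter line) must be removed before the file goes to sunmo. One way is a sed one-liner that deletes comment lines; the file name tvpicks.txt below is a hypothetical stand-in for wherever you saved the picks, and a short dummy file is built in place of the real one:

```shell
# Delete comment lines (leading "#") from a picks file before sunmo.
# tvpicks.txt is a hypothetical name; a short stand-in file is built here.
printf '%s\n' 'cdp=4175,4200 \' '#=1,2 \' 'tnmo=0.05,0.5 \' > tvpicks.txt
sed '/^#/d' tvpicks.txt > tvpicks_clean.txt
cat tvpicks_clean.txt
```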
7 # Set messages on
8 set -x
9
10 # User values
11 indata=Tdata/Tcmp5kgf.su
12 outdata=Tstack.su
13 myperc=95
14
15 # NMO
16 sunmo < $indata \
17 cdp=4175,4200,4225,4250,4275,4300,4325,4350,4375,4400,4425,\
4450,4475,4500,4525,4550,4575,4600,4625,4650,4675,4700 \
18 tnmo=0.0527704,0.527704,2.26913,3.219,3.7467,4.96042 \
19 vnmo=1485.83,1574.17,2325,2634.17,3252.5,4224.17 \
20 tnmo=0.0659631,1.0686,3.33773,5 \
21 vnmo=1463.75,1706.67,3053.75,3826.67 \
22 tnmo=0.0527704,0.844327,2.00528,3.37731,4.93404 \
23 vnmo=1507.92,1640.42,2325,3031.67,3426.41 \
24 tnmo=0.0395778,0.897098,2.41425,3.49604,4.97361 \
25 vnmo=1530,1640.42,2347.08,3009.58,4069.58 \
26 tnmo=0.0527704,0.620053,2.71768,3.64116,4.98681 \
27 vnmo=1463.75,1596.25,2479.58,2832.92,3583.75 \
28 tnmo=0.0395778,0.633245,2.62533,3.48285,4.98681 \
29 vnmo=1463.75,1596.25,2656.25,3208.33,3892.92 \
30 tnmo=0.0527704,0.633245,3.7467,4.98681 \
31 vnmo=1441.67,1552.08,3451.25,3848.75 \
32 tnmo=0.0527704,0.699208,2.8496,4.98681 \
33 vnmo=1507.92,1662.5,2567.92,3053.75 \
34 tnmo=0.0263852,1.04222,3.85224,4.97361 \
35 vnmo=1441.67,1772.92,3164.17,3716.25 \
36 tnmo=0.0527704,1.22691,3.10026,4.12929,5 \
37 vnmo=1463.75,1728.75,2590,3539.58,3804.58 \
38 tnmo=0.0527704,1.18734,2.4934,3.66755,5 \
39 vnmo=1463.75,1684.58,2413.33,2899.17,3252.5 \
40 tnmo=0.0527704,1.10818,2.94195,3.69393,4.97361 \
41 vnmo=1485.83,1706.67,2413.33,2921.25,3296.67 \
42 tnmo=0.0527704,0.91029,3.68074,4.98681 \
43 vnmo=1507.92,1772.92,3495.42,4091.67 \
44 tnmo=0.0659631,1.21372,3.69393,4.97361 \
45 vnmo=1530,1596.25,2965.42,3738.33 \
46 tnmo=0.0527704,1.21372,3.44327,5.01319 \
47 vnmo=1485.83,1662.5,2766.67,3694.17 \
48 tnmo=0.0395778,1.26649,3.79947,4.97361 \
49 vnmo=1463.75,1640.42,2877.08,3097.92 \
50 tnmo=0.0527704,2.24274,4.1161,4.97361 \
51 vnmo=1463.75,1883.33,2457.5,2943.33 \
52 tnmo=0.0395778,1.93931,3.25858,5.01319 \
53 vnmo=1441.67,1927.5,2457.5,3451.25 \
54 tnmo=0.0659631,1.27968,3.25858,4.97361 \
55 vnmo=1485.83,1618.33,2810.83,3826.67 \
56 tnmo=0.0527704,1.54354,3.13984,4.97361 \
57 vnmo=1485.83,1861.25,2766.67,3318.75 \
58 tnmo=0.0527704,1.41161,3.50923,4.98681 \
59 vnmo=1507.92,1905.42,2943.33,3340.83 \
60 tnmo=0.0527704,1.20053,2.3219,3.0343,4.98681 \
61 vnmo=1485.83,1839.17,2722.5,3031.67,3407.08 |
62
63 # Stack
64 sustack > $outdata
65
66 suximage < $outdata perc=$myperc label2=cdp key=cdp \
67 title="$outdata perc=$myperc" &
68
The output of sunmo is piped to sustack (line 64).
Lines 66-67 make a plot using suximage. Remember, suximage reads the first key
value (here it is cdp) and increments by “1” (see Section 7.7). If key values do not
increment by “1” (for example, due to missing CMPs), the x-axis annotation will be
wrong.
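One quick check is to verify that the key values really do step by "1" before trusting the annotation. The sketch below is our own (the helper name and the sugethw pipeline are assumptions, not from the text):

```shell
# Flag places where a sorted integer key does not step by 1.
check_increments() {
  awk 'NR > 1 && $1 != prev + 1 { print "jump:", prev, "->", $1 }
       { prev = $1 }'
}

# With real data you might pipe header values in, for example:
#   sugethw < Tstack.su key=cdp output=geom | check_increments
# Here we demonstrate on three hand-typed cdp values:
printf '4175\n4176\n4178\n' | check_increments
```

On this input the missing cdp 4177 is reported as `jump: 4176 -> 4178`; any such jump means the suximage x-axis annotation cannot be trusted.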
Figure 15.3 shows our stack of the Taiwan data.
Taiwan: Migration
6
7 set -x
8
9 indata=Tdata/Tmigcvp.su
10 perc=95
11
12 loop=1 # 1 = run panels forward continuously
13 # 2 = run panels back and forth continuously
14 # 0 = load all panels then stop
15
16 n1=1251 # number of time samples
17 d1=0.004 # time sample interval
18 n2=790 # number of traces per panel
19 d2=1 # trace spacing
20
21 width=550 # width of window
22 height=700 # height of window
23
24 fframe=1000 # velocity of first panel for title annotation
25 dframe=100 # panel velocity increment for title annotation
26
27 suxmovie < $indata perc=$perc loop=$loop \
28 n1=$n1 d1=$d1 n2=$n2 d2=$d2 \
29 width=$width height=$height \
30 fframe=$fframe dframe=$dframe \
31 title="Velocity %g" &
32
33 exit
34
Below is the surange output of Tmigcvp.su. Several facts from this output are useful
for supplying values to Tmigmovie.sh. On line 16, you have to supply the number of time
samples – the value of key ns. On line 17, you have to supply the time sample interval –
the value of key dt (dt is stored in microseconds, so dt=4000 corresponds to d1=0.004 s).
surange < Tmigcvp.su
24490 traces:
tracl=(1,790) tracr=(1,25344) fldr=(1,31) tracf=(1,144) ep=(740,915)
cdp=(4027,4816) cdpt=(1,144) trid=1 nvs=31 nhs=(1,48)
offset=(1000,4000) sdepth=80000 swdep=(2120000,3540000) scalel=-10000
scalco=1
sx=(-4773100,-3965250) gx=(-4764600,-3600343) gy=(2277,94805) counit=3
tstat=12
ns=1251 dt=4000 gain=9 afilf=160 afils=72
lcf=3 hcf=160 lcs=6 hcs=72 year=95
day=260 hour=(12,13) minute=(0,59) sec=(0,59)
The panel velocities are in the offset key and the number of velocities is in the nvs
key. Using these two keys, you can calculate the values for fframe and dframe (if you do
not want to check the Tmigcvp.sh values of firstv and increment).
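Using the surange output above, that arithmetic can be checked in the shell (the variable names here are ours):

```shell
# offset=(1000,4000) holds the panel velocities; nvs=31 is the panel count.
vmin=1000
vmax=4000
nvs=31

fframe=$vmin                              # velocity of the first panel
dframe=$(( (vmax - vmin) / (nvs - 1) ))   # velocity increment between panels
echo "fframe=$fframe dframe=$dframe"
```

This prints fframe=1000 dframe=100, matching the values used in Tmigmovie.sh.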
Refer to Section 9.4 or the xmovie selfdoc for help using the mouse and keyboard
with the movie. If you switch from Movie mode to Step mode by pressing the S key, you
can step forward with the F key or backward with the B key. To restart the movie, press
the C key. To quit, press the Q key.
Viewing the movie of single-velocity migrations, we found it difficult to pick a
“best” migration. We think 2500 m/s gives a reasonable image, but we are not happy with it.
Figure 16.1: Left: A portion of the 2-D line off the shelf area southwest of Taiwan. From
Berndt and Moore (1999), Figure 6a (used with permission). Right: Our 2500 m/s
constant velocity Stolt migration.
Figure 16.1 puts our migration next to an image reported by Berndt and Moore
(1999).
16.2 Discussion
While viewing the shot gathers, we did not mention that we saw sideswipe. Sideswipe
is “evidence of a structural feature which lies off to the side of a line or traverse”
(Sheriff, 2002). The problem with a 2-D survey is that real geology is 3-D. A 2-D survey is
processed to image information from below the line of acquisition, but acoustic energy
can reflect from side features as well as from “down.”
Figure 16.2 shows a strong sideswipe event between 3.5 and 4 seconds. Other shot
gathers show sideswipe at various times, generally at the same angle as in Figure 16.2.
Figure 16.2: Taiwan shot gather 950. The arrow points to sideswipe.
When sideswipe (also called “out-of-plane reflection”) occurs on 2-D data, we
cannot “image” it because a 2-D line does not have enough information. We usually try
various ways to eliminate sideswipe. One way is tau-p filtering. The tau-p filter is also
called slant-stack processing and Radon filtering (Yilmaz, 1987; Yilmaz, 2001).
Although we do not discuss tau-p filtering in this Primer, SU does have suradon and
sutaup. Look at the examples in demos/Tau_P.
The paper by Berndt and Moore (1999) discusses several methods to attenuate
sea-floor multiples, one of which they applied to this data set (their Figure 6). Two processes
we recommend that you investigate are:
• prestack f-k or tau-p filtering and
• time-varying migration.
References
Berndt, C. & Moore, G.F., 1999, Dependence of multiple-attenuation techniques on the
geologic setting: A case study from offshore Taiwan, The Leading Edge, Vol. 18,
No. 1, 74-80.
Cohen, J.K. & Stockwell, Jr. J.W., 2002, CWP/SU: Seismic Unix Release No. 36: a free
package for seismic research and processing, Center for Wave Phenomena,
Colorado School of Mines.
Lamb, L., 1990, Learning the vi Editor, Sebastopol, California: O’Reilly & Associates,
Inc.
Lindseth, R.O., 1982, Digital Processing of Geophysical Data – A Review, Tulsa: Society
of Exploration Geophysicists (reprint).
Moore, G.F., Shipley, T.H., Stoffa, P.L., Karig, D.E., Taira, A., Kuramoto, S.,
Tokuyama, H., Suyehiro, K., 1990, Journal of Geophysical Research, Vol. 95, No.
B6, 8753-8765.
Robinson, E.A., Treitel, S., 2000, Geophysical Signal Analysis (reprint), Tulsa: Society
of Exploration Geophysicists, p. 278-279.
Sheriff, R.E., 2002, Encyclopedic Dictionary of Applied Geophysics, 4th Edition, Tulsa:
Society of Exploration Geophysicists.
Stockwell, Jr. J.W., & Cohen, J.K., 2002, The New SU User’s Manual, Golden: CWP,
Colorado School of Mines.
Yilmaz, Ö., 1987, Seismic Data Processing, Tulsa: Society of Exploration Geophysicists.
Yilmaz, Ö., 2001, Seismic Data Analysis, Tulsa: Society of Exploration Geophysicists.
Appendix A: Seismic Un*x Legal Statement
Warranty Disclaimer:
NO GUARANTEES, OR WARRANTIES, EITHER EXPRESS OR IMPLIED, ARE
PROVIDED BY CWP, CSM, ANY EMPLOYEE OR MEMBER OF THE AFORESAID
ORGANIZATIONS, OR BY ANY CONTRIBUTOR TO THIS SOFTWARE
PACKAGE, REGARDING THE ACCURACY, SAFETY, FITNESS FOR A
PARTICULAR PURPOSE, OR ANY OTHER QUALITY OR CHARACTERISTIC OF
THIS SOFTWARE, OR ANY DERIVATIVE SOFTWARE.
Limited License:
The CWP/SU Seismic Un*x package (SU) is not public domain software, but it is
available free under the following terms and conditions:
1. Permission to use, copy, and modify this software for any purpose without fee and
within the guidelines set forth below is hereby granted, provided that the above copyright
notice, the warranty disclaimer, and this permission notice appear in all copies, and the
name of the Colorado School of Mines (CSM) not be used in advertising or publicity
pertaining to this software without the specific, written permission of CSM.
2. The simple repackaging and selling of the SU package as is, as a commercial software
product, is expressly forbidden without the prior written permission of CSM. Any
approved repackaging arrangement will carry the following restriction: only a modest
profit over reproduction costs may be realized by the reproducer.
Cohen, J. K. and Stockwell, Jr. J. W., (200_), CWP/SU: Seismic Un*x Release No. __: a
free package for seismic research and processing, Center for Wave Phenomena, Colorado
School of Mines.
Acknowledgements:
SU stands for CWP/SU:Seismic Un*x, a processing line developed at Colorado School of
Mines, partially based on Stanford Exploration Project (SEP) software.
Appendix B: Seismic Un*x at MTU
C. Utility Programs
C.1 Introduction
Sometimes the library of Seismic Unix (SU) programs doesn’t have exactly what we
want. Sometimes it is just fun to write a program that does exactly what we want. This
Appendix presents three C programs that do simple tasks in SU.
• sukeycount.c
• sudumptrace.c
• tvnmoqc.c
These three programs were written while we used Seismic Un*x version 38. You
might find these programs in a later SU version. We are glad to contribute to the SU
library.
Before we examine these programs, we recommend the programming sections of
“The New SU User’s Manual” (Stockwell & Cohen, 2002). The Manual web site is:
http://www.cwp.mines.edu/sututor/sututor.html
The Manual’s Contents web page is:
http://www.cwp.mines.edu/sututor/node1.html
The Manual web page that describes shell programming, “Extending SU by shell
programming,” is:
http://www.cwp.mines.edu/sututor/node128.html
This Appendix does not discuss shell programming. We think you have seen many
shell examples in previous chapters. However, we recommend the above link because it
helped us learn SU shell programming.
The SU web page that begins the section about C programming, “How to Write an
SU Program,” is:
http://www.cwp.mines.edu/sututor/node136.html
Under this section, there are three web pages:
• “Setting up the Makefile” –
http://www.cwp.mines.edu/sututor/node137.html
• “A template SU program” –
http://www.cwp.mines.edu/sututor/node138.html
• “Writing a new program: suvlength” –
http://www.cwp.mines.edu/sututor/node139.html
You will understand the examples in this Appendix faster if you first study the
Manual pages. Reading and understanding the Makefile web page will save you the effort
of compiling and linking your program – the Makefile helps you compile and link.
In this Appendix, we do not teach C programming; we presume you are familiar with
C. This Appendix is written to help you extend your knowledge of C into SU. However,
we do not explain our programs in detail. We hope that our simple examples help you
start on the road to becoming a C programmer in SU.
Required parameters:
key=key1 One key word.
Optional parameters:
verbose=0 quiet
=1 chatty
Examples:
sukeycount < stdin key=fldr
sukeycount < stdin key=fldr > out.txt
Program sukeycount reads the first trace. After that, each trace’s key value is
compared with the previous trace’s key value. If the new value is the same as the
previous value, a counter is incremented. The next trace is read and the comparison is
made again. When the key value changes, sukeycount writes (to the screen or to a log
file) the key name, the value that was just tested, and the count of traces with that
value. After each write, the counter is reset.
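The same run-length logic can be sketched in awk. This is our own stand-in for illustration, not the sukeycount source, and the output format is simplified:

```shell
# Print each distinct value and how many consecutive traces carried it:
# write only when the value changes, plus once after the last value.
count_runs() {
  awk 'NR == 1 { prev = $1 }
       $1 != prev { printf "cdp = %s has %d trace(s)\n", prev, n
                    prev = $1; n = 0 }
       { n++ }
       END { printf "cdp = %s has %d trace(s)\n", prev, n }'
}

# cdp values of nine CMP-sorted traces:
printf '1\n2\n3\n3\n4\n4\n5\n5\n5\n' | count_runs
```

The last line printed is `cdp = 5 has 3 trace(s)` — for CMP-sorted data, exactly a fold count for that CMP.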
If the input data are a 2-D line sorted in CMP-order, a count of traces within CMPs is
equivalent to a fold count. Suppose the first 16 traces of cmp4.su have the following cdp
values:
  sequential      cdp
  trace number    value
        1           1
        2           2
        3           3
        4           3
        5           4
        6           4
        7           5
        8           5
        9           5
       10           6
       11           6
       12           6
       13           7
       14           7
       15           7
       16           7
Below, we use sukeycount on the CMP-ordered 2-D data.
sukeycount < cmp4.su key=cdp > model4Fold.txt
Below are the first 7 lines of output:
110
111 /* Don't write just because first trace is new */
112 if ( itotal == 0 ) sortold = sort ;
113
114 if ( sort != sortold )
115 {
116 printf(" %8s = %f", key[0], sortold) ;
117 printf(" has %d trace(s)", gatherkount) ;
118 printf("\n") ;
119 sortold = sort ;
120 gatherkount = 0 ;
121 }
122 ++gatherkount ;
123 ++itotal ;
124 }
125 else /* non-float header */
126 {
127 isort = vtoi(type1,vsort) ;
128
129 /* Don't write just because first trace is new */
130 if ( itotal == 0 ) isortold = isort ;
131
132 if ( isort != isortold )
133 {
134 printf(" %8s = %d", key[0], isortold) ;
135 printf(" has %d trace(s)", gatherkount) ;
136 printf("\n") ;
137 isortold = isort ;
138 gatherkount = 0 ;
139 }
140 ++gatherkount ;
141 ++itotal ;
142 }
143 }
144
145 /* Write after last trace is read */
146 if (*type1 == 'f')
147 {
148 printf(" %8s = %f", key[0], sortold) ;
149 printf(" has %d trace(s)", gatherkount) ;
150 printf("\n") ;
151 }
152 else
153 {
154 printf(" %8s = %d", key[0], isortold) ;
155 printf(" has %d trace(s)", gatherkount) ;
156 printf("\n") ;
157 }
158 printf("\n %d trace(s) read\n\n", itotal) ;
159
160 return(CWP_Exit()) ;
161 }
162
Below is the selfdoc of sudumptrace. The default value num=4 means sudumptrace
prints values for the first four traces it reads. No key names are required. If the user
supplies key names, the key names and their values are printed above each trace. If hpf=1,
trace key values are printed in exponential format; the default format is floating point.
Optional parameters:
num=4 number of traces to dump
key=key1,key2,... key(s) to print above trace values
hpf=0 header print format is float
=1 print format is exponential
Examples:
sudumptrace < inseis.su PRINTS: 4 traces, no headers
sudumptrace < inseis.su key=tracf,offset
sudumptrace < inseis.su num=7 key=tracf,offset > info.txt
sudumptrace < inseis.su num=7 key=tracf,offset hpf=1 > info.txt
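The two header print formats correspond to the C conversions %f and %e, which you can compare with plain printf (the precision here is arbitrary, not sudumptrace's exact field widths):

```shell
# A velocity-like header value printed both ways:
printf 'float:       %.4f\n' 1485.83   # hpf=0 style (floating point)
printf 'exponential: %.4e\n' 1485.83   # hpf=1 style (exponential)
```

The first line prints 1485.8300; the second prints 1.4858e+03.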
97 tracefp = etmpfile();
98 headerfp = etmpfile();
99
100 do
101 {
102 ++ntr;
103 efwrite(&tr, HDRBYTES, 1, headerfp);
104 efwrite(tr.data, FSIZE, nt, tracefp);
105
106 /* Get header values */
107 for (ikey=0; ikey<nkeys; ++ikey)
108 {
109 Value val;
110 float fval;
111
112 gethdval(&tr, key[ikey], &val) ;
113 type = hdtype(key[ikey]) ;
114 fval = vtof(type,val) ;
115 hedr[(ntr-1)*nkeys+ikey] = fval ;
116 }
117
118 }
119 while (ntr<numtr && gettr(&tr)) ;
120
121 }
122 else /* user-supplied tmpdir */
123 {
124 char directory[BUFSIZ];
125 strcpy(directory, tmpdir);
126 strcpy(tracefile, temporary_filename(directory));
127 strcpy(headerfile, temporary_filename(directory));
128 /* Handle user interrupts */
129 signal(SIGINT, (void (*) (int)) closefiles);
130 signal(SIGQUIT, (void (*) (int)) closefiles);
131 signal(SIGHUP, (void (*) (int)) closefiles);
132 signal(SIGTERM, (void (*) (int)) closefiles);
133 tracefp = efopen(tracefile, "w+");
134 headerfp = efopen(headerfile, "w+");
135 istmpdir=cwp_true;
136
137 do
138 {
139 ++ntr;
140 efwrite(&tr, HDRBYTES, 1, headerfp);
141 efwrite(tr.data, FSIZE, nt, tracefp);
142
143 /* Get header values */
144 for (ikey=0; ikey<nkeys; ++ikey)
145 {
146 Value val;
147 float fval;
148
149 gethdval(&tr, key[ikey], &val) ;
150 type = hdtype(key[ikey]) ;
151 fval = vtof(type,val) ;
152 hedr[(ntr-1)*nkeys+ikey] = fval ;
153 }
154
155 }
156 while (ntr<numtr && gettr(&tr)) ;
157
158 }
159
223 }
224
225 putchar('\n') ;
226
227 printf("\nCounter Time Values\n") ; /* Column titles */
228
229 for (i=1; i<=ntime; ++i) /* Print trace values */
230 {
231 printf(" %6d ", i) ;
232 printf(" %8.3f ", dt*(i)+delrt) ;
233 for (j=1; j<=ntr; ++j)
234 {
235 printf("%11.4e\t", data[(j-1)*ntime+(i-1)]) ;
236 }
237 putchar('\n') ;
238 }
239 putchar('\n') ;
240
241 }
242
243
244 /* for graceful interrupt termination */
245 static void closefiles(void)
246 {
247 efclose(headerfp);
248 efclose(tracefp);
249 eremove(headerfile);
250 eremove(tracefile);
251 exit(EXIT_FAILURE);
252 }
253
In tvnmoqc.c:
Lines 99-139 were taken almost line-by-line from sunmo. These lines check that the
cdp-tnmo-vnmo values have acceptable values for sunmo.
Lines 137-138 are the diagnostic print to the screen.
53 " tn vn ",
54 " ",
55 " One file is output for each input pair of tnmo-vnmo series. ",
56 " ",
57 " A CDP VALUE MUST BE SUPPLIED FOR EACH TNMO-VNMO ROW PAIR. ",
58 " ",
59 " Prefix of each output file is the user-supplied value of ",
60 " parameter PREFIX. ",
61 " Suffix of each output file is the cdp value. ",
62 " For the example above, output files names are: ",
63 " PREFIX.15 PREFIX.35 ... PREFIX.95 ",
64 " ",
65 NULL};
66
67 /* Credits:
68 * MTU: David Forel (adapted from SUNMO)
69 */
70 /**************** end self doc ***************************************/
71
72 void mktvfile(char outfile[], int ntnmo, float *tnmo, float *vnmo);
73 int main(int argc, char **argv)
74 {
75 int k; /* index used in loop */
76 int mode; /* mode=1: qc; mode=2: qc + make cdp-t-v file */
77 int icdp; /* index into cdp array */
78 int ncdp; /* number of cdps specified */
79 int *cdp; /* array[ncdp] of cdps */
80 int nvnmo; /* number of vnmos specified */
81 float *vnmo; /* array[nvnmo] of vnmos */
82 int ntnmo; /* number of tnmos specified */
83 float *tnmo; /* array[ntnmo] of tnmos */
84 cwp_String prefix; /* prefix of output files */
85 char dot[] = "."; /* for output file name */
86 char outfile[80]; /* output file name */
87
88 /* Hook up getpar */
89 initargs(argc, argv);
90 requestdoc(1);
91
92 /* Get parameters */
93 if(!getparint("mode",&mode))mode=1;
94 if(mode==2)
95 if (!getparstring("prefix", &prefix))
96 err("When mode=2, you must supply a prefix name.");
97
98 /* Are there cdp values and vnmo-tnmo sets for each cdp? */
99 ncdp = countparval("cdp");
100 warn("This file has %i CDPs.",ncdp);
101 if (ncdp>0) {
102 if (countparname("vnmo")!=ncdp)
103 err("A vnmo set must be specified for each cdp");
104 if (countparname("tnmo")!=ncdp)
105 err("A tnmo set must be specified for each cdp");
106 } else {
107 err("A cdp value must be supplied for each tnmo-vnmo set");
108 }
109
110 /* Get cdp values */
111 cdp = ealloc1int(ncdp);
112 if (!getparint("cdp",cdp))
113 err("A cdp value must be supplied for each tnmo-vnmo set");
114
115 /* Get tnmo-vnmo values */
C.5 Conclusion
As the SU User’s Manual states, “The secret to efficient SU coding is finding an
existing program similar to the one you want to write.” For example, if you want part of
your program to modify trace headers (keys), study suchw.c. If you want to modify trace
amplitudes in a single-trace process (Section 10.6), we suggest you study sugain.c.
D. Makefiles: Alternative to Shell Scripts

D.1 Introduction
Makefiles are commonly used for software installation on Unix systems. But make
can be used for other purposes as well. As a small example, I will describe a makefile
(found in Section D.4) that reproduces the functionality of the shell script model1.sh
(Section 4.2).
First, some general comments about working with makefiles. There can be only one
makefile in a directory and, for our purposes, it will always be named “makefile.” It is a
text file like any other and can therefore be edited by whatever tools you use to edit shell
scripts. I use vi, which is a simple editor built into every Unix system. To speed things
up, I define an alias in my environment called vimake:
alias vimake 'vi makefile'
There are variations from system to system, but this kind of alias is usually defined in
a “dot” file, such as .login or .cshrc (talk to your local system expert for advice). With
this alias in place, you can navigate to the working directory you want, then type vimake
and be editing the makefile.
A makefile can perform a simple or complex series of interrelated tasks. For example,
the Stanford Exploration Project at Stanford University uses make to generate processing
results, figures for publication, and even entire books. To be specific, at this time they use
a variant called gmake, but I will describe generic make here.
Lines that begin with “#” are comments. Comments are ignored by make, but very
useful for anyone who wants to understand your makefile (including you at some later
date). So the lines:
# Experiment Number
num=1
give a comment and then a value is assigned to the parameter named num. Note that in
the actual makefile these lines are flush left. Makefiles are touchy about alignments and
spaces versus tabs, so it is always a good idea to copy a working makefile and then
modify it to your needs; otherwise, a lot of time can be wasted trying to figure out
formatting problems. Parameter num now has the value 1, and to use it somewhere later
we need to refer to it in a particular way. For example, we can use $(num) or ${num}; for
our purposes these are equivalent. The next four lines of the makefile include two more
comments and two parameter assignments:
# Name output binary model file
modfile=model${num}.dat
# Name output encapsulated Postscript image file
psfile=model${num}.eps
Notice that, as in shell scripts, the parameter assignments can be embedded, meaning
that parameters can be set based on previously set parameters. Parameters modfile and
psfile are thus defined in terms of the num parameter.
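You can watch this expansion with a throwaway makefile, assuming make is installed. The file name makefile.demo1 and the target show are ours, and using make -f keeps your real makefile untouched:

```shell
# Recipe lines must start with a tab, so build the file with printf '\t...'.
{
  printf 'num=1\n'
  printf 'modfile=model${num}.dat\n'
  printf 'show:\n'
  printf '\t@echo ${modfile}\n'
} > makefile.demo1

make -f makefile.demo1 show
```

make expands ${modfile} to model${num}.dat and then ${num} to 1, so this prints model1.dat.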
So much for parameter assignments; now on to the main action. The default action of
the makefile is described by:
default:
@echo " "
@echo "Available Makes:"
@echo " model${num}.dat ps clean"
@echo " Example: make model${num}.dat"
@echo " "
Again, it is very important that “default:” begin in column 1 and end with a colon,
and equally important that all lines in the default routine begin with a tab, not a sequence
of spaces. Make interprets these lines in this way: “When the user types make
with no following characters, I will go to the first executable step, which happens to be
named default in this case, and it tells me to echo 5 lines of text back to the command
line, then stop.”
When we enter make, we again see:
Available Makes:
model1.dat ps clean
Example: make model1.dat
verifying that parameters like ${num} have been replaced with their values, thus
generating an echo like model1.dat from model${num}.dat.
The next make item actually gets something done:
${modfile}:
rm -f ${psfile}
trimodel xmin=0 xmax=6 zmin=0 zmax=2 \
xedge=0,6 \
.
.
.
sfill=0.1,1.5,0,0,0.25,0,0 \
> ${modfile}
The name of this item is given by parameter modfile, which in turn depends on
parameter num. In fact, it is not easy to tell what we type to invoke this item. But from
earlier typing of make, we know it is:
% make model1.dat
that will get the job done.
The first action is to remove the postscript file whose name is stored in parameter
psfile. Next, SU program trimodel is executed with several lines of parameters (not all
are shown here), and the resulting model file is created with a name given by parameter
modfile. As with SU shell scripts, each continuation line must end in a “\”, not a space. To
get secondary indentation (good style), use a tab and 2 spaces. The number of spaces is
up to you, but the tab is necessary or the makefile will not work.
The next item in our makefile is invoked by:
% make ps
This executes the following make code:
ps: ${modfile}
# Create a PostScript file of the model
# Set gtri=1.0 to see sloth triangle edges
spsplot < ${modfile} \
gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
title="Earth Model - 5 layers [M${num}]" \
labelz="Depth (km)" labelx="Distance (km)" \
dxnum=1.0 dznum=0.5 wbox=6 hbox=2 \
> ${psfile}
Note the name of this item is hardwired to “ps” and that something sits on the name
line to the right of the colon (:). Make interprets this line this way: “To make ps, I need a
file called model1.dat. If it exists in the current directory, then I will proceed to execute
this item. If it does not exist, then I will make it before continuing.” This is an example
of a conditional make. To make ps, we need to make model1.dat, unless a file of that
name is already in the directory. This is a powerful idea. For example, a finite difference
simulation might take 24 hours to create data that we want to display. With a conditional
make item, the simulation will not be redone unless the data file is missing. If the data
file is there, then only the display will be done.
For this to work right, make items have to be strictly matched to their dependencies.
Consider the following example make items:
data:
suspike ... > data.su
ps: data
supsimage .... > fig.ps
If we enter “make ps”, then make thinks it needs a file named “data.” Not finding it,
make will run the make item “data” and generate a data.su file. There is still no file
named “data,” but make is happy it did something logical and it goes on to create the
image. So this is not a conditional make, but an absolute one. Invoking “make ps” will
always make new data. The way to make this conditional is:
data.su:
suspike ... > data.su
ps: data.su
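Under the stated assumption that make is installed, the conditional behavior can be verified with a throwaway makefile (the file name makefile.demo2 and the echo text are ours):

```shell
# Build a makefile whose "ps" item depends on the file data.su.
# Recipe lines must start with a tab, hence printf '\t...'.
{
  printf 'data.su:\n'
  printf '\ttouch data.su\n'
  printf 'ps: data.su\n'
  printf '\t@echo figure built from data.su\n'
} > makefile.demo2

make -f makefile.demo2 ps   # first run: makes data.su, then the figure step
make -f makefile.demo2 ps   # second run: data.su exists; only the figure step runs
```

On the first run make creates data.su before printing the figure message; on the second run data.su is up to date, so only the figure step executes.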
default:
@echo " "
@echo "Available Makes:"
@echo " model${num}.dat ps clean"
@echo " Example: make model${num}.dat"
@echo " "
${modfile}:
rm -f ${psfile}
trimodel xmin=0 xmax=6 zmin=0 zmax=2 \
xedge=0,6 \
zedge=0,0 \
sedge=0,0 \
xedge=0,2,4,6 \
zedge=0.30,0.50,0.20,0.30 \
sedge=0,0,0,0 \
xedge=0,2,4,6 \
zedge=0.55,0.75,0.45,0.55 \
sedge=0,0,0,0 \
xedge=0,2,4,6 \
zedge=0.65,0.85,0.55,0.65 \
sedge=0,0,0,0 \
xedge=0,2,4,6 \
zedge=1.30,1.30,1.60,1.20 \
sedge=0,0,0,0 \
xedge=0,6 \
zedge=2,2 \
sedge=0,0 \
kedge=1,2,3,4,5,6 \
sfill=0.1,0.1,0,0,0.44,0,0 \
sfill=0.1,0.4,0,0,0.40,0,0 \
sfill=0.1,0.6,0,0,0.35,0,0 \
sfill=0.1,1.0,0,0,0.30,0,0 \
sfill=0.1,1.5,0,0,0.25,0,0 \
> ${modfile}
ps: ${modfile}
# Create a PostScript file of the model
# Set gtri=1.0 to see sloth triangle edges
spsplot < ${modfile} \
gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
title="Earth Model - 5 layers [M${num}]" \
labelz="Depth (km)" labelx="Distance (km)" \
dxnum=1.0 dznum=0.5 wbox=6 hbox=2 \
> ${psfile}
clean:
rm *.dat *.eps
28 <TAB> zedge=0.65,0.85,0.55,0.65 \
29 <TAB> sedge=0,0,0,0 \
30 <TAB> xedge=0,2,4,6 \
31 <TAB> zedge=1.30,1.30,1.60,1.20 \
32 <TAB> sedge=0,0,0,0 \
33 <TAB> xedge=0,6 \
34 <TAB> zedge=2,2 \
35 <TAB> sedge=0,0 \
36 <TAB> kedge=1,2,3,4,5,6 \
37 <TAB> sfill=0.1,0.1,0,0,0.44,0,0 \
38 <TAB> sfill=0.1,0.4,0,0,0.40,0,0 \
39 <TAB> sfill=0.1,0.6,0,0,0.35,0,0 \
40 <TAB> sfill=0.1,1.0,0,0,0.30,0,0 \
41 <TAB> sfill=0.1,1.5,0,0,0.25,0,0 \
42 <TAB>> ${modfile}
43
44 ps: ${modfile}
45 <TAB># Create a PostScript file of the model
46 <TAB># Set gtri=1.0 to see sloth triangle edges
47 <TAB>spsplot < ${modfile} \
48 <TAB> gedge=0.5 gtri=2.0 gmin=0 gmax=1 \
49 <TAB> title="Earth Model - 5 layers [M${num}]" \
50 <TAB> labelz="Depth (km)" labelx="Distance (km)" \
51 <TAB> dxnum=1.0 dznum=0.5 wbox=6 hbox=2 \
52 <TAB>> ${psfile}
53
54 clean:
55 <TAB>rm *.dat *.eps
Appendix E: On the CDs
Seismic Data Processing with Seismic Un*x: A 2D Seismic Data Processing Primer
Supplementary material
http://link.aip.org/link/mm/doi=10.1190/1.9781560801948.supp&filename=262E_supp.zip