Seismic Processing
and Analysis
Training Manual
Volume 1 for R5000.8.3
This publication has been provided pursuant to an agreement containing restrictions on its use. The publication is also protected by
Federal copyright law. No part of this publication may be copied or distributed, transmitted, transcribed, stored in a retrieval system,
or translated into any human or computer language, in any form or by any means, electronic, magnetic, manual, or otherwise, or
disclosed to third parties without the express written permission of Halliburton.
Trademark Notice
3D Drill View, 3D Drill View KM, 3D Surveillance, 3DFS, 3DView, Active Field Surveillance, Active Reservoir Surveillance, Adaptive Mesh
Refining, ADC, Advanced Data Transfer, Analysis Model Layering, ARIES, ARIES DecisionSuite, Asset Data Mining, Asset Decision Solutions,
Asset Development Center, Asset Development Centre, Asset Journal, Asset Performance, AssetConnect, AssetConnect Enterprise, AssetConnect
Enterprise Express, AssetConnect Expert, AssetDirector, AssetJournal, AssetLink, AssetLink Advisor, AssetLink Director, AssetLink Observer,
AssetObserver, AssetObserver Advisor, AssetOptimizer, AssetPlanner, AssetPredictor, AssetSolver, AssetSolver Online, AssetView, AssetView
2D, AssetView 3D, BLITZPAK, CasingLife, CasingSeat, CDS Connect, Channel Trim, COMPASS, Contract Generation, Corporate Data Archiver,
Corporate Data Store, Crimson, Data Analyzer, DataManager, DataStar, DBPlot, Decision Management System, DecisionSpace, DecisionSpace 3D
Drill View, DecisionSpace 3D Drill View KM, DecisionSpace AssetLink, DecisionSpace AssetPlanner, DecisionSpace AssetSolver, DecisionSpace
Atomic Meshing, DecisionSpace Nexus, DecisionSpace Reservoir, DecisionSuite, Deeper Knowledge. Broader Understanding., Depth Team,
Depth Team Explorer, Depth Team Express, Depth Team Extreme, Depth Team Interpreter, DepthTeam, DepthTeam Explorer, DepthTeam Express,
DepthTeam Extreme, DepthTeam Interpreter, Design, Desktop Navigator, DESKTOP-PVT, DESKTOP-VIP, DEX, DIMS, Discovery, Discovery
3D, Discovery Asset, Discovery Framebuilder, Discovery PowerStation, DMS, Drillability Suite, Drilling Desktop, DrillModel, Drill-to-the-Earth-
Model, Drillworks, Drillworks ConnectML, DSS, Dynamic Reservoir Management, Dynamic Surveillance System, EarthCube, EDM, EDM
AutoSync, EDT, eLandmark, Engineer's Data Model, Engineer's Desktop, Engineer's Link, ESP, Event Similarity Prediction, ezFault, ezModel,
ezSurface, ezTracker, ezTracker2D, FastTrack, Field Scenario Planner, FieldPlan, For Production, FrameBuilder, FZAP!, GeoAtlas, GeoDataLoad,
GeoGraphix, GeoGraphix Exploration System, GeoLink, Geometric Kernel, GeoProbe, GeoProbe GF DataServer, GeoSmith, GES, GES97,
GESXplorer, GMAplus, GMI Imager, Grid3D, GRIDGENR, H. Clean, Handheld Field Operator, HHFO, High Science Simplified, Horizon
Generation, I2 Enterprise, iDIMS, Infrastructure, Iso Core, IsoMap, iWellFile, KnowledgeSource, Landmark (as a service), Landmark (as software),
Landmark Decision Center, Landmark Logo and Design, Landscape, Large Model, Lattix, LeaseMap, LogEdit, LogM, LogPrep, Magic Earth, Make
Great Decisions, MathPack, MDS Connect, MicroTopology, MIMIC, MIMIC+, Model Builder, NETool, Nexus (as a service), Nexus (as software),
Nexus View, Object MP, OpenBooks, OpenJournal, OpenSGM, OpenVision, OpenWells, OpenWire, OpenWire Client, OpenWire Server,
OpenWorks, OpenWorks Development Kit, OpenWorks Production, OpenWorks Well File, PAL, Parallel-VIP, Parametric Modeling, PetroBank,
PetroBank Explorer, PetroBank Master Data Store, PetroStor, PetroWorks, PetroWorks Asset, PetroWorks Pro, PetroWorks ULTRA, PlotView,
Point Gridding Plus, Pointing Dispatcher, PostStack, PostStack ESP, PostStack Family, Power Interpretation, PowerCalculator, PowerExplorer,
PowerExplorer Connect, PowerGrid, PowerHub, PowerModel, PowerView, PrecisionTarget, Presgraf, PressWorks, PRIZM, Production, Production
Asset Manager, PROFILE, Project Administrator, ProMAGIC, ProMAGIC Connect, ProMAGIC Server, ProMAX, ProMAX 2D, ProMax 3D,
ProMAX 3DPSDM, ProMAX 4D, ProMAX Family, ProMAX MVA, ProMAX VSP, pSTAx, Query Builder, Quick, Quick+, QUICKDIF,
Quickwell, Quickwell+, Quiklog, QUIKRAY, QUIKSHOT, QUIKVSP, RAVE, RAYMAP, RAYMAP+, Real Freedom, Real Time Asset
Management Center, Real Time Decision Center, Real Time Operations Center, Real Time Production Surveillance, Real Time Surveillance, Real-
time View, Reference Data Manager, Reservoir, Reservoir Framework Builder, RESev, ResMap, RTOC, SCAN, SeisCube, SeisMap, SeisModel,
SeisSpace, SeisVision, SeisWell, SeisWorks, SeisWorks 2D, SeisWorks 3D, SeisWorks PowerCalculator, SeisWorks PowerJournal, SeisWorks
PowerSection, SeisWorks PowerView, SeisXchange, Semblance Computation and Analysis, Sierra Family, SigmaView, SimConnect, SimConvert,
SimDataStudio, SimResults, SimResults+, SimResults+3D, SIVA+, SLAM, SmartFlow, smartSECTION, Spatializer, SpecDecomp, StrataAmp,
StrataMap, StrataModel, StrataSim, StratWorks, StratWorks 3D, StreamCalc, StressCheck, STRUCT, Structure Cube, Surf & Connect, SynTool,
System Start for Servers, SystemStart, SystemStart for Clients, SystemStart for Servers, SystemStart for Storage, Tanks & Tubes, TDQ, Team
Workspace, TERAS, T-Grid, The Engineer's DeskTop, Total Drilling Performance, TOW/cs, TOW/cs Revenue Interface, TracPlanner, TracPlanner
Xpress, Trend Form Gridding, Trimmed Grid, Turbo Synthetics, VESPA, VESPA+, VIP, VIP-COMP, VIP-CORE, VIPDataStudio, VIP-DUAL,
VIP-ENCORE, VIP-EXECUTIVE, VIP-Local Grid Refinement, VIP-THERM, WavX, Web Editor, Well Cost, Well H. Clean, Well Seismic Fusion,
Wellbase, Wellbore Planner, Wellbore Planner Connect, WELLCAT, WELLPLAN, WellSolver, WellXchange, WOW, Xsection, You're in Control.
Experience the difference, ZAP!, and Z-MAP Plus are trademarks, registered trademarks, or service marks of Halliburton.
All other trademarks, service marks and product or service names are the trademarks or names of their respective owners.
Note
The information contained in this document is subject to change without notice and should not be construed as a commitment by
Halliburton. Halliburton assumes no responsibility for any error that may appear in this manual. Some states or jurisdictions do not
allow disclaimer of expressed or implied warranties in certain transactions; therefore, this statement may not apply to you.
Contents
Agenda
Day 1
Day 2
Day 3
Day 4
Preface
Conventions
Extract Information from the SEGY File and Write JavaSeis Dataset
Day 1
• Initial Demonstration
• Flowbuilding Exercise
• Basic Trace Display
• Data Selection and Sorting
2D Marine Workflow
Day 2
Land 3D Workflow
Day 3
Residual statics
Velocity Analysis
Day 4
3D Poststack Migration
3D SPS Geometry
If any time remains on Day 4, consult with the instructor and choose from the following sections
in Volume 2 of the manual, which are summarized below.
The first three topics use the same Land 3D Line/Subproject that is covered in Volume 1. The
geometry database must be completed and the shots dataset with updated headers must be ready
to use for these. These topics may be exercised in conjunction with the Volume 1 exercises.
Plotting
NOTE: Third party software is required in order to put data on a paper plot.
Normal training facilities do not have plotting capability.
Archival Methods
• SEG-Y output
• Tape Data Output
• Archive Wizard
Most if not all of the class time will be spent using Volume 1, which can
be thought of as having three main sections.
After the class, you will find the manuals useful as a supplement to the
online documentation of the application.
Conventions
MB1 refers to an operation using the left mouse button. MB2 is the
middle mouse button. MB3 is the right mouse button.
• Shift-Click: Hold the shift key while depressing the mouse button.
• Drag: Hold down the mouse button while moving the mouse.
In some instances mouse buttons may not work properly if either Caps
Lock or Num Lock is on.
Exercise Organization
Each exercise consists of a series of steps that take the user
through a specific workflow. In most instances this involves creating a
“flow” composed of one or more modules, with information about
parameter selection and module interaction. The flow is executed and
the results analyzed or an interactive tool exercised. With the
introduction of JavaSeis data format, a number of exercises involve
launching an interactive display and/or analysis tool directly from the
dataset, rather than building and executing a flow. Many of the steps
give a detailed explanation of how to correctly pick parameters or use
the functionality of interactive processes.
The flow examples list key parameters for each process of the exercise.
As you progress through the exercises, familiar parameters will not
always be listed in the flow example.
You will find many images of tool menus. In general, arrows are
used to highlight parameters that you must change from their
default values.
The exercises are organized such that your dataset is used throughout the
training session. Carefully follow the instructor’s direction when
assigning geometry and checking the results of your flow. An
improperly generated dataset or database may cause a subsequent
exercise to fail.
Before you can start the Navigator, there is a daemon called the
“sitemanager” which needs to be running. In a typical production
environment, this daemon will be configured to start on the head node
of the cluster when the system boots and will be running as the “root”
user. This is the general recommended mode of operation and as such
the class will be conducted in this manner.
For some classes your instructor may have started the sitemanager on
your machine. If the sitemanager is not running, your instructor will
provide simple instructions on how to start it.
You can start the navigator by executing the script SSclient from the
command line. Please leave the UNIX window open where you start
SeisSpace. SeisSpace will be writing information to this window that
may be useful for diagnosing problems. This script sits in the “home
directory” of the user:
> ./SSclient
You will see a number of messages scroll through the window, and it
will take several seconds for the Navigator to appear.
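For reference, here is a minimal sketch of the startup sequence from a
terminal; the ps check is simply a generic Unix way to confirm that the
sitemanager daemon is running, not a site-specific command:

> ps -ef | grep sitemanager     (confirm the sitemanager daemon is running)
> cd ~                          (the script sits in your home directory)
> ./SSclient                    (start the Navigator; leave this window open)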
On the upper left of the Navigator, click on Edit to get a pulldown menu,
then click on Preferences at the bottom of the pulldown. This opens the
following dialog:
For a white background choose “D21 Light” for the Look and Feel.
You may also want to select “Large” for the Size of the Toolbar Icons,
and possibly choose a larger value for the Scalar for the UI fonts. A
value of 1.2 to 1.4 is as big as most people will want.
Click OK. If you change only the background color, you do not need to
restart the Navigator. The other two changes require that you exit and
re-open the Navigator for those choices to be implemented.
Screen images for the remainder of this manual are made with the white
background, as this produces much better printed quality.
1. The Folders section, which is also called the “tree view”, on the left
side of the Navigator. The default configuration closes this view
when you open a flow for editing. You will find it more convenient
to keep the tree view open.
2. The Tabbed view section, which allows several views for flow
editing, data selection, etc. The Tabbed View is the middle panel of
the Navigator. This is where the Flow Editor appears, and is where
you will spend most of your time.
The general state of the Navigator is saved when you exit, and restored
when you restart the Navigator. User preferences allow each user to
customize the restart behavior and selected mouse button behaviors and
other options. These preferences will be discussed later.
For example, MB1 on a Project name in the Folders view opens the list
of Subprojects in the Tabbed view (center section) of the Navigator.
MB3 on a Project name will reveal a menu of actions you can take on
that project, such as Copy, Paste, Delete. MB1 on a flow name will open
that flow for editing in the Tabbed view. MB3 on a flow name opens a
long menu of actions you might choose.
• Help: Access to the help files and information about the version
of SeisSpace.
When starting the Navigator for the first time, you will have access to
any “Shared” Data_Homes that were set up by the administrator. You
may also be able to add your own “private” data_homes. Under each
data_home is a list of “Projects”. Each Project is likely to have multiple
subprojects. A Subproject directory contains all of the information for
either a single 2D line or a single 3D survey.
Click on the “toggle” next to one of the project folder names to see a list
of your available sub-projects. You will create a new Project, a new
Sub-project and a Flow to read some data and get familiar with the Trace
Display tool.
Click on the text name of Data_Home to be used for the class. This will
list the current Projects (or Areas) in the table view. You can either use
the File --> New Project option from the pull down menus, the “white
page with the sun” icon under the word File in the upper left corner of
the Navigator, or use the MB3 --> New Project option from the MB3
menu in the open space in the table view to create a new project.
Please use your own name for the Project name (e.g., Fred’s
Project). You can use up to 32 characters to describe your Project, with
any characters. Blank spaces are allowed as well as most special
characters. However, it is recommended that you use alphabetic,
numeric, the hyphen ( - ) and the underscore ( _ ) characters.
Click on your new project name in the Folders view on the left. Click
the name and also open the toggle to the left of it. Right now
there is nothing under your new Project name.
Click MB3 on your Project name and choose New from the menu, then
type in 2D Marine Line in the dialog box. There are several other ways
to open the same dialog:
• Click on the Project name to set the focus on it, then use the
keyboard Alt-F, Alt-N, Alt-S to open the dialog.
• Or click on the left-most icon on the tool bar (a Folder with a
starburst).
• Or click on File > New > Subproject.
Click on the 2D Marine Line text to highlight it and use either the File
--> New Flow, or MB3 options menu to add a new flow. Give the new
flow the name 01 SEGY Input. This will open the Flow Editor.
The flow naming convention we will use in the class is to number the
flows as they are built. This is strictly for convenience, to help organize
the flows.
Flow Editor
Type the desired module name here.
Once you are in the flow editor you can select the processing modules
you want to use to build the flow. There are many ways to build flows.
Sometimes we will create new flows or copy existing flows. You can
have more than one flow visible in the Flow Editor and use Copy and
Paste or Drag and Drop to move modules from one flow to another. You
can choose modules from the Processes List by clicking on the module
name; however, the list of modules is very long, so few people use this
method.
We will build this flow by typing a few characters of the module wanted
and use the “quick-search” option. The flow we will build is:
SEGY Input
Automatic Gain Control
Trace Display
One of the big advantages of the flow-builder is the option to open more
than one flow at a time in the Flow Editor tab. In order to make it more
obvious which flow is active, the background color of the active flow is
light yellow.
2. Add the Automatic Gain Control module. You may use the
shortcut of typing AGC, which is the common acronym.
3. Add the Trace Display module. A very useful shortcut for Trace
Display is CED, which comes from TraCE Display. Be sure to
choose the module name with the blue circle icon.
You can use common abbreviations like “nmo” for Normal Moveout
Correction and “agc” for Automatic Gain Control. If you don’t know the
name of a particular tool, try typing a few letters of what you want to do,
such as “display” for producing some type of display, or “mig” for a
migration module.
When the Flow Editor is “active”, here are the main tool bar options
available on the top of the Navigator with their pulldown menu options.
Notice that the icons on the pulldown menus are the same as
corresponding function icons across the top of the Navigator:
• View: The options under the pulldown menu are: Log Viewer to
see the log of the most recent execution of the highlighted flow
(this is equivalent to the job.output file of ProMAX); Job Viewer
to show the history and status of submitted jobs; Flow Viewer to
view all the parameters in a flow in ASCII, XML or as a flow
diagram format; Print Flow to generate an html file of the flow
under the flow directory and open it in a browser. Most of the
remaining options modify the layout and visibility of various
sections of the interface. The Templatize/Detemplatize Flow is
the key option for the flow replication capabilities, which may be
discussed later in this class.
• Tools: Allows the user to open the Replica Editor, the Replica
Job tables, and start DBTools.
The icons across the top of the Navigator GUI perform the same
functions as the corresponding item on the pull down menus.
To view the help file for the highlighted module use the F1 key. You
can also use the Edit --> Help on Processes from the pull down or
MB3 options menus.
NOTE: There are only two parameters in the SEG-Y Input menu
that you will change from their default values for this exercise. The
input file happens to work nicely with the default values. Please see
the Help File for this module for details. The majority of default
parameters adhere to the published standards for SEG-Y format data
(i.e., byte locations and format, coordinate scalar usage, etc.). There
are numerous options to override the standards, remap header
values, etc., to accommodate deviations from the standard.
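For orientation, a few of the commonly used trace header byte locations
from the published SEG-Y standard are listed below; see the Help File for
the byte locations this module actually defaults to:

Bytes        Standard meaning
9-12         Original field record number (FFID)
13-16        Trace number within the field record (channel)
21-24        CDP ensemble number
37-40        Source-to-receiver offset
71-72        Scalar applied to coordinates
73-80        Source X and Y coordinates
81-88        Group (receiver) X and Y coordinates
115-118      Number of samples and sample interval (microseconds)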
5. Notice that if the window is active and you let the mouse hover over
any parameter in a menu, you will get a pop-up with a short
explanation of the parameter. This is referred to as “mouse help”.
Change the storage parameter from Tape to Disk, and enter the input file:
/home/student/misc_files/marine2d_shots.segy
7. Select Automatic Gain Control with MB2 to open the menu. Set
the AGC operator length to 1500 ms.
To change this value simply place your cursor on the existing value
(it will be highlighted), and type in the number 1500.
For now, do not change any of the values. We will discuss many of
these options in the next chapter. At that point, you will have the
opportunity to test and explore the various options.
10. Use the Test Parameters feature until you get the message
“Successful Init Phase!”. Then submit the flow for execution using
the Submit on this machine icon.
Note that the columns may not appear in the same order as shown in
the diagram. You can reorder the columns by dragging them to the
desired location and turn off columns that you are not interested in
with the column selector icon from the icon bar.
In the Job Viewer, you can hold down MB2 while the mouse is on
the 01 SEGY Input to view the tail end of the job log.
11. Select the Next Screen icon with MB1. This icon is the black
triangle pointing to the right.
This ends the job. All we intended to do with this flow was to verify
that we can read the SEG-Y file.
Return to the Flow Editor. Let’s modify the flow to write the trace data
to an output dataset instead of displaying it.
13. Click MB3 on the AGC module and again on the Trace Display
module. This “deactivates” these modules, so the next time we
execute the flow those modules will be ignored. Moreover, it is
convenient because we can “re-activate” those modules by clicking
MB3 on them again. This is a very convenient option for testing,
and spares you from having to reselect menu parameters every time
you want to re-use a module.
14. Type DDO in the tool selector box and choose Disk Data Output.
Your flow should now look like this:
Notice the word INVALID next to the module name. This is a reminder
that you must parameterize this menu by selecting or adding a dataset
name that the module will write the trace data to.
15. Click MB2 to open the DDO menu, then click on the word
INVALID. This action changes the Panel view to the Datasets list
for the current subproject. There are no datasets in the list, so you
need to click MB3 in the list view and select New ProMAX
Dataset.
16. Enter the name “01 Shots from SEGY” in the dialog and click
OK. The naming convention we have chosen for datasets is to use
the numeric prefix of the flow as a prefix for any dataset written by
that flow. This is purely for convenience, and is not a requirement.
However, it does make it much easier to manage and identify your
datasets.
17. Submit the job to the local machine. Select the job name in the Job
Viewer, and when the job completes, select the Display job output
icon. There are several other places in the Navigator that allow you
to view the job log.
18. If the Job Viewer Status indicates a job failure, the job log should
show some reasonable error message indicating the problem. Ask
your instructor for assistance if you cannot determine the solution.
There is a great deal of valuable information in the job log related to the
SEGY data, such as minimum and maximum values found for various
permutations of trace header bytes. Have a look through the log and see
what you can discover. Ask your instructor to explain anything you do
not understand.
Your first look at the data was the first shot with all channels. After
clicking the Next Ensemble icon, you saw the next shot. What if you
wanted to look at every other shot? What if you only wanted to look at
a subset of the channels? What if you wanted to sort the data to CDP and
then display it? All these options and more are available in Disk Data
Input.
2. Open Disk Data Input menu with MB2 and click where the menu
reads Get All for Trace Read Option. This toggles the read option
to Sort, and the menu will automatically add several new options:
4. Leave the secondary sort set to NONE; this means that the default
sorting of traces within ensembles will be used. This default was set
when the dataset was written out to disk.
5. Type in the text box for the sort order list. If the list you plan to use
is very long, you can select the pencil-on-paper icon to the right of
Sort order for dataset to open a text editor window. A
format and example are given at the bottom of this window.
This specifies that only SOURCE numbers 1 and 3 will be read into
the flow. The slash mark is used to separate an optional second list
of primary keys to read. It is generally good practice to end the text
strings with the slash ( / ) character.
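For example, with SOURCE as the primary sort key, each of the following
is a valid sort order string (the range-with-increment form is described
later in this chapter):

1,3/          read SOURCE numbers 1 and 3
1-176(2)/     read every other source from 1 through 176
*/            read all sources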
When the last source is displayed, the Next Screen icon becomes
inactive. To exit this display, select File --> Exit/Stop Flow.
2. Select CHAN for the secondary trace header entry. This will allow
you to sort each SOURCE ensemble by channel number, and also
limit the number of channels to be processed.
Note
If you only select a primary sort key, then only one set of values should be specified
in the sort order for dataset. If you select both a primary and a secondary sort key,
then two sets of values, separated by a colon, are necessary in the sort order. This is
a common place for new users to have job failures.
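For example, to read SOURCE numbers 1 and 3 with only the first 60
channels of each (the exercise that follows), the sort order string is:

1,3:1-60/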
You will see the first shot and all subsequent shots display with only
the first 60 channels.
6. Move your cursor into the trace display area. Notice that the mouse
button help gives a listing of the current SOURCE and CHAN.
Trace Display will always give you a listing of the values for the
current Secondary and Primary sort keys.
Recall that the primary trace header entry specifies the type of ensemble
to build, and also the range of that ensemble to read. The secondary sort
key allows you to select and sort the traces within each ensemble.
NOTE: The SEGY file that was read already contains CDP numbers,
offset, CDP and shot X-Y coordinates and various other header values.
The default behavior of the SEGY Input tool reads values from standard
locations and puts these into the ProMAX trace headers. Refer to the
module documentation for details. Do not expect that every SEGY file
will have meaningful or valid values for any byte positions. Always QC
any data that you get from elsewhere.
2. Select CDP for the Primary trace header entry. This tells the
program to build CDP gathers from the input dataset.
3. Select AOFFSET for the secondary trace header entry. This tells
the program to order the traces within each CDP gather by the
AOFFSET header.
• 500-600(25) This selects every 25th CDP between 500 and 600.
6. Notice that we have now displayed a CDP gather, even though the
input dataset is stored on disk as shot gathers.
7. Move your cursor into the trace display area, and confirm that the
displayed gather has Primary and Secondary sorts of CDP and
AOFFSET.
4. Set the sort order for dataset to *:*/. The asterisk is the wildcard
character and indicates that “all” shots and “all” channel values are
requested.
5. Submit the job and step through the data panels. Select File -->
Exit/Stop Flow when finished.
The following pages explain the most commonly used options and
features of the Trace Display interactive tool.
• Rewind: Shows the first ensemble in the sort order. It is not active if
the user does not specify Interactive Data Access in the input flow,
or if the first ensemble in the sort order is currently displayed.
• Save Image: Save the current screen image. Annotation and picked
events are saved with the trace data, to be viewed later.
• Paint Brush: Use this tool to apply picked Trace Kills, Reversals,
and Mutes to the display. This tool is only active when you are
picking a parameter table. The paintbrush tool is a toggle button,
select once to apply the active tables, select again to undo.
• Zoom Tool: Click and drag using MB1 to select an area to zoom. If
you release MB1 outside the window, the zoom operation is
canceled. If you just click MB1 without dragging, this tool will
unzoom. You can use the zoom tool in the horizontal or vertical axis
area to zoom in one direction only.
• Annotation Tool: When active, you can add, change, and delete
text annotation in the trace and header plot areas. For adding text,
activate the icon, then click MB1 where you want the text to appear.
For changing text, the pointer changes to a circle when it is over
existing text annotation; move the text by dragging it with MB1,
delete it by clicking MB2, and edit the text or annotation color with
MB3.
Zoom
The Zoom feature is active when the icon is selected.
Press and hold MB1 to define the first corner of the zoom window.
Continue to hold the button and drag the cursor to the other corner.
Release the Mouse button and the display will zoom into that rectangle.
2. A single MB1 click on the data area will unzoom the display.
Press and hold MB1 in the column of numbers to define the start time.
Continue to hold the button and drag the cursor to the maximum time.
Release the mouse button and the display will zoom vertically.
4. A single MB1 click will unzoom the display. Click in the data
region to unzoom completely. Click on the time annotation region
to unzoom only vertically. Click on the header annotation region to
unzoom only in the horizontal direction.
Press and hold MB1 in the row of numbers to define the start trace.
Continue to hold the button and drag the cursor to the maximum trace.
Release the mouse button and the display will zoom horizontally.
Add Annotation
Text annotation can be added anywhere on the display, which is useful
in screen captures to identify specific features of the display.
Text may be added, moved (in position on the display), and/or edited
using the appropriate mouse button as described at the bottom of the
display in the mouse button help area.
Click MB1 anywhere on the data area where you wish to place annotation and
the “Edit Text” window appears. Type in some text and press the OK button.
The text will appear on the display where you clicked.
You may move, delete or edit this text by placing the cursor on the text.
The size of the text can be controlled as an X-resource by editing the
appropriate X-resources file. This is typically owned by the Administrator, but
is found at $PROMAX_HOME/port/misc/lib/X11/app-defaults/TraceDisplay.
You can add additional text labels by clicking somewhere else on the display.
Velocity Measurement
With the dx/dt analysis feature you can measure the apparent velocity of
linear or hyperbolic events that appear on the display. This feature will
only work if the trace offset values in the headers exist and are accurate.
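The measurement itself is simple: the apparent velocity of an event is
the offset distance it spans divided by the corresponding time
difference,

v_apparent = (change in offset) / (change in time)

so, for example, an event spanning 1500 m of offset over 0.6 s has an
apparent velocity of 2500 m/s.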
Select any trace with MB1 and the trace header list will open in a
separate window. Click on any other trace to show its header values.
Click MB2 to open a header list for another trace for comparison.
You can remove the header list by clicking on the icon to de-activate it.
You may also find resizing and moving the windows to be useful.
Save Screen
This icon saves the current screen image in memory. These screens can
be recalled from memory and then can be reviewed in different
sequences.
Click on the icon ONCE to save the current view to memory.
Animate Screens
After you save at least two screens, the Animation Icon becomes
available (i.e., it is no longer “greyed out”). Click on the Animation
icon to open the Animation Tool dialog.
With the Animation feature active you can review the saved screens.
You may elect to view them circularly, one at a time in sequence, or compare
two of the saved screens.
The speed of the circulation can also be controlled by the Speed Slide bar.
The speed can be changed as the screens are swapping.
The first two methods will be discussed here. The database selection
method will be covered in a later chapter.
The Interactive Data Access dialog provides Next, Previous, and
Rewind to First Ensemble controls, plus a text box to type in a new
selection entry.
4. Notice that the Primary, Secondary, and Tertiary sorts are displayed
for reference only; you cannot change the attribute names, only the
range of values.
Also notice that you can select a previous sort list from the middle
box, or select a previously saved sort list from the file menu.
Menu Bar
Note:
Use caution when using the stop option. For example, assume that you have a flow
that contains Disk Data Input to read in ten ensembles followed by Disk Data
Output and Trace Display. If you execute this flow and use the Exit/Stop Flow
option after viewing the first five ensembles, then only the five ensembles that you
viewed will be stored in the output dataset as opposed to writing out ten ensembles.
If you use the Exit/Continue Flow option instead, then all ten ensembles will be
written out.
You may also elect to change the numbers plotted above the traces. For
example, you may want to look at the FFID numbers and the offsets.
The best way to learn these features is to play with them and see what
happens.
Watch the difference between the Apply and OK buttons. The Apply
button will make the changes but the selection window will remain. The
OK button will make the changes and dismiss the window.
Once you have selected a parameter table for your picks, a new icon will
appear in the icon bar.
NOTE: The first break picking capabilities under the Picking options
are not discussed in this part of the course. It may be discussed by your
instructor, if time allows.
2. Select to read the first, middle and last shot gathers on the line
(Sources 1,88,176).
Type in a descriptive name for your table.
Type in a new table name, for example top mute - direct arrival,
and press the OK button. It is recommended that you describe the
purpose of the table as well as the type of data that it was picked on.
When you create a new table, another window appears listing trace
headers to choose the secondary key from.
Some parameter tables require a top pick and a bottom pick, such as
a surgical mute or a miscellaneous time gate. Once you have picked
the top element such as the top of a time gate, depress MB3
anywhere inside the trace portion of Trace Display. A new menu
appears allowing you to pick an associated layer (New Layer).
Some of the other options allow you to snap your pick to the nearest
amplitude peak, trough or zero crossing.
6. Pick a mute.
7. Click MB3 in the display field and choose Project from the popup
menu to display the interpolation/extrapolation of your picks on all
offsets and to the other shots in the display.
8. Pick a different mute on the last shot and click MB3 > Project
again. Watch how the projected mute on the center shot is
interpolated based on the first and last shots.
When you finish picking, your mute should look similar to the
following. Your mute should be below the direct arrival (around 200
ms on the near offset), but should not cut into the water bottom
reflector or refractor, for this exercise:
Direct arrival
9. When you are happy with your mute, save the table to disk by
clicking on File > Save Picks. It is a good practice to save your
picks occasionally, in case you get taken away from your work.
This mute will be used when you process this data “for real” in
Chapters 3-6.
You can toggle the mute on and off with the Paint Brush Icon. Edit
the mute if you are not happy. Remember you can only edit picks
when the picking icon is highlighted.
NOTE: The Paint Brush icon only shows graphically what the mute
would look like if applied. You must save the table, then run another
flow to actually apply the mute to real traces.
12. Using MB1 select some traces to be killed (use your imagination).
MB2 can be used to remove previously selected traces from the list.
The traces to be killed will be marked with a red line.
13. Select File --> Exit/Stop Flow. When you choose to exit, you are
prompted to save the picks you have just made. The picks are saved
in parameter tables which can be used later in processing.
1. Let’s start by editing your previous flow and inserting the modules
Trace Muting and Trace Kill/Reverse.
Now go to the icon bar across the top of the Navigator and click MB1
on the icon indicated below -- this icon reveals a pulldown set of three
additional icons. Click MB1 on the second of the three icons.
Compact style
Scrollable list
The second “pulldown” icon changes the Flow Editor to the “scrolled
parameter list” presentation. Notice how the Flow Editor has changed.
You still see the list of modules, but now the menu parameters for all
modules are shown to the right.
Click on a module name on the left and its parameter menu shifts to the
top of the scrollable list. The scroll bar on the right allows you to move
up or down anywhere in the list. If you prefer this Flow Editor style, you
can set this in the Flow Editor tab of the User Preferences dialog (Edit
--> Preferences). If you prefer the normal style of the Flow Editor,
simply click on the same icon location that got you here and select the
first pulldown icon for “Compact” style.
3. In the Trace Kill/Reverse menu select the table containing the list
of traces to be killed.
Notice the effect Trace Muting has on your data. Also, be aware that
this effect is only applied to the display. There is no Disk Data
Output tool in the flow, so no data is being saved.
NOTE: The “0.0” setting for the Record length to output parameter
means output the entire trace, according to the length found in the
input data. If desired, the trace length can also be redefined using the
tool Trace Length. Do not do this in this exercise.
3. The first display will have the first three shots, if you followed
instructions. If you have only one shot in view, then step forward to
the third shot.
4. Use the File --> Exit/Stop Flow pull down menu to stop the flow.
The Job Viewer will show the job status as User Terminated.
5. Exit from the flow and click on the Navigator tab and then
Datasets in the Folders list.
Notice that this file contains only the first three shots that were
displayed. Because the Flow was halted, no additional shots were
processed.
8. Use the File --> Exit/Continue Flow pull down menu. This closes
the Trace Display but allows the flow to continue sending data
through all the other modules.
9. This very small job finishes quickly. Notice that the Job Viewer
will show status of Completed instead of User Terminated.
10. Click on Datasets in the Folders view, then click MB3 on the
dataset named temp and select Properties. Notice that the file now
contains 20 ensembles, which matches the selected range of values
in the Disk Data Input menu.
11. Delete the file named temp from disk by selecting the dataset in
either the folders or table view in the navigator and using the
MB3 --> Delete option.
12. Close the Flow Editor with the X on the upper right of the Editor
dialog.
Geometry Assignment creates the OPF (Ordered Parameter Files) Database and loads information
into the trace headers of the data. The sequence of steps, or flows, depends upon available
information. This chapter is an introduction to one of the different approaches for geometry
assignment. The Geometry Overview section in the online helpfile provides further details of the
geometry assignment process.
Shooting Geometry
Parameter          Value
Near offset        200 m
Far offset         3150 m
Group interval     25 m
Shot interval      50 m
Source depth       6 m
Streamer depth     11 m
14. There are no parameters for this module. Execute the flow to get the
following geometry assignment main menu.
15. Select Setup and fill in the boxes according to the acquisition
geometry table. The information that is input to the Setup menu is
used for QC purposes.
17. Fill in the parameters for this menu, and click OK.
Patterns spreadsheet
19. Open the Patterns spreadsheet.
Patterns Spreadsheet
The Pattern for this acquisition geometry uses two rows of the
spreadsheet.
• The channel number in the Min Chan column is always the channel
number closest to the boat. If Channel 1 was the far offset, then the
Chan Inc would be negative. For streamer patterns, the X Offset
and Y Offset columns always define the position of Min Chan
(channel closest to boat) relative to the X-Y coordinate of the
source array. The Grp Int column defines how the receiver positions
change from the near channel to the far channel. This parameter
uses the same sign convention as the X and Y offsets.
20. Select File --> Exit from the Pattern spreadsheet menu.
Sources spreadsheet
21. Open the Sources spreadsheet.
Action needed on the indicated columns
22. When filling columns in the spreadsheets, there are three steps:
Select all rows by clicking MB3 on one of the numbers in the Mark
Block column.
Select Edit --> Fill. This will bring up a window where you will
specify to fill the column starting at 1 and incrementing by 1. You
can also access this fill menu by clicking MB2 in a column heading
(remember to look at the mouse button help).
24. The X,Y coordinates for this spreadsheet are already set by the
Auto-2D calculation. X-coordinates start at 10000 and increment
by 8.7. Y-coordinates start at 10000 and increment by 49.2. (These
increments are calculated from the 50 meter nominal shot interval,
and the 10 degree shot line azimuth).
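These values follow from resolving the 50 m shot interval along the 10
degree azimuth, measured clockwise from North:

delta X = 50 x sin(10 deg) = 8.7 m
delta Y = 50 x cos(10 deg) = 49.2 m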
27. Fill the FFID column starting at 5650 and incrementing by 2. Scroll
to the bottom of the spreadsheet and confirm that shot station 176
has FFID of 6000.
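This is a quick arithmetic check: with a start of 5650 and an increment
of 2 per station,

FFID(station 176) = 5650 + (176 - 1) x 2 = 6000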
29. The streamer azimuth is calculated automatically from the shot line
azimuth. It is determined by orienting as if you were standing on
the boat and looking in the direction of the tail buoy. In this case,
the boat is traveling 10 degrees East of North, therefore, the
streamer azimuth will be 190 degrees. You can specify a feathering
angle with this parameter.
31. Enter 1 for the Src Pattern column. Remember that we set the
pattern number (name) to “1” when we edited the Patterns
Spreadsheet, and you must declare that relationship here.
32. The Shot Fold* column will be filled automatically when midpoints
are assigned during binning.
34. Select File --> Exit to exit and save the Sources spreadsheet.
Binning
35. Select Bin in the main 2D Marine Geometry Assignment window.
Assigning midpoints
37. Select Binning in the 2D Marine Binning window, and select the
Midpoints, user defined OFB parameters method.
Binning Midpoints
Six of seven boxes in the lower half of the window become active.
Fill these in with the following information. There are two
independent sets of boxes, the first three boxes describe the CDP
binning, and the second three describe the offset binning.
38. The first two boxes allow you to specify a CDP number to tie to a
given source station. Entering some value larger than the maximum
number of channels per shot is a good rule of thumb. In this case, if
we tie Station 1 to CDP 127, the first midpoint recorded of the line
will be CDP 1.
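As a sketch of where 127 comes from, using the nominal numbers for this
line: the CDP interval is half the 25 m group interval, or 12.5 m; the
deepest midpoint for any shot lies half the far offset behind the source,
3150 / 2 = 1575 m, which is 1575 / 12.5 = 126 CDP intervals; tying
station 1 to CDP 127 therefore places the first recorded midpoint at
CDP 127 - 126 = 1.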
Offset binning creates the OFB (offset bin) parameter files in the
database. These are necessary for any surface consistent processes
which you might run, but they may also be useful for other purposes.
There are a couple of different scenarios for the values to use for the
offset binning parameters. The first is to make each channel its own
offset bin. In this case you would set the offset bin increment equal to
the group interval.
The second choice is to assign offset bins so that each bin has continuous
CDP coverage. For a typical marine case, you would set the offset bin
increment equal to twice the shot increment. For this dataset, the shot
interval is 50m, so the offset bin increment would be 100m.
minimum bin center = 237.5 m
maximum bin center = 3137.5 m
offset bin increment = 100 m (twice the 50 m shot interval)
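As a worked check of these numbers: each 100 m bin gathers four channels
(offsets 200, 225, 250 and 275 m for the first bin), whose midpoint is
(200 + 275) / 2 = 237.5 m; stepping by 100 m, the last center is
237.5 + 29 x 100 = 3137.5 m, for a total of 30 offset bins.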
41. Enter the Minimum and Maximum Offset Bin Centers of 237.5
m and 3137.5 m.
43. When the boxes are filled, click OK. When successfully completed,
click OK in the final status window.
Binning Receivers
Finalize Database
50. Select File --> Exit to exit the 2D Marine Geometry Spreadsheet.
Open DBTools
NOTE: From DBTools you can launch another database display tool
called XDB. DBTools and XDB have a number of similar capabilities,
but they each have capabilities that are unique. These will be explored
as you work through this class. You open XDB by selecting File -->
XDB Database Display from the DBTools main menu. In this class we
will work primarily with DBTools.
2. In the DBTools main window click on the CDP tab and then double
click on FOLD to view a 2D plot of CDP versus FOLD.
4. Activate the Tracking Icon and move your mouse into the display
area.
5. Check the values of the first and last CDPs on the line. They should
be 1 and 819. The maximum fold on the line should be 30.
6. Zoom along the horizontal axis. Notice the repeating pattern of 30,
30, 30, 29, 30, 30, 30, 29 fold. This is due to having 119 channels
rather than the more typical 120 channels.
7. Slowly move your cursor across the bars in the histogram. Notice
the count of 441 CDPs with 30 fold and 154 CDPs with 29 fold. All
other histogram values are part of the taper-on / taper-off at either
end of the line.
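The nominal fold can be checked from the acquisition geometry with the
standard 2D relation:

fold = (channels x group interval) / (2 x shot interval)
     = (119 x 25) / (2 x 50) = 29.75

which is not a whole number, so the actual fold alternates between 30
and 29 in the pattern observed above.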
10. From the Create 2D Crossplot Matrix window click on the TRC
tab, then select TRC, OFFSET, CHN, CHN.
Zoom on the horizontal axis.
13. Activate the Tracking icon, and verify that the offset of channel 1 is
-200, and the offset of channel 119 is -3150. The values for current
cursor locations are displayed in the mouse button help area at the
bottom of the window.
14. Select View --> Close to close this display, but leave the main
DBTools window open for the next set of exercises.
1. The next step will be to load the database information to the trace
headers, and output a ProMAX format dataset. In your 2D Marine
Line add a flow named 03 Inline Header Load. Add the 3 tools as
shown below
2. In Disk Data Input, select the dataset 01 Shots from SEGY. Use
the Get All (default) option for choosing which traces to read from
the input data file.
5. Execute the flow. Confirm that the Job Viewer shows a status of
successful Completion.
1. Add a new flow and give it the name Display shots. Add the
modules Disk Data Input, AGC and Trace Display.
2. In Disk Data Input, select the 03 Shots with geometry dataset you
just created. Choose to sort by SIN (source index number), and
choose Yes for Interactive Data Access.
5. Execute the flow, and use the Header List icon to check that the
trace headers are populated. The velocity tool may also be
informative.
6. In the main DBTools window select the SIN tab and then double
click on FFID.
The selected region is shown in white. Send this selected set of data
to Trace Display with an MB1 click on the Bow and Arrow icon. Notice
that the Bow-and-Arrow icon is “bright” when DBTools is able to
communicate through PD.
A suite of processes is provided for you that allows convenient and flexible parameter testing and
data analysis. You will use one of these processes to test parameters for gain recovery. You will
then apply deconvolution to your dataset and create a brute stack.
o Parameter Test
o Pick a Deconvolution Time Gate
o Apply Preprocessing
o Create Brute Stack
Parameter Test
Parameter Test creates two header words. The first is called REPEAT
which is a sequential counter of the data copy. REPEAT is used to
distinguish each of the identical copies of input data. The second is
called PARMTEST and is an ASCII string uniquely interpreted by the
Trace Display tool as a display label for the traces.
Parameter Test will not work with Interactive Data Access, so set
it to No.
Select Yes for Apply dB/sec corrections, and enter five nines
(99999) for the dB/sec correction constant.
Note:
Entering five nines (99999) is a flag that tells the process to use the values
found in Parameter Test for this parameter.
7. Look at all groups of shots by using the Next Ensemble icon. The
Control Copy shows the unprocessed data copy for each shot.
Notice that you have the three test values annotated and “control
copy” for reference with the tested parameter not applied. The
control copy is always last, so it makes sense for dB/sec values to be
entered in decreasing order so the smallest value tested is next to the
control copy.
8. View the tests on all three shots, decide on the most appropriate
value for the dB/sec correction, and then select File --> Exit/Stop
Flow.
10. Edit your flow again, and change the Trace Display parameter
Number of ENSEMBLES (line segments) / screen to 1.
12. Use the Next Ensemble icon to step forward to the control copy for
the first record, then use the Animation tool to review the tests. Do
the same for the remaining shots.
1. Copy your Parameter Test flow to a new flow called 05 - Pick Decon
Gate. Your instructor can show you a variety of ways to copy a flow
or to copy the menus from one flow to another. One method is to
select the New Flow icon at the top of the Navigator, type in the new
name, and then highlight all the processes in the 04 - Parameter
Test flow and drag and drop them into the new flow.
2. Use the same parameters for Disk Data Input as before, that is to
input only SIN values 1, 88 and 176.
3. Remove the Parameter Test menu. Highlight it and hit the Delete
key.
7. Select Picking --> Pick Miscellaneous Time Gate and give the
gate a name such as decon gate, and select a secondary key of
AOFFSET in the next dialog box that appears. You are ready to
begin picking when the Pick Layers dialog appears with your gate
name highlighted.
8. Pick the top of the decon gate on Source 1 (the record on the left),
similar to what you see in the image above. Click MB3 in the data
region and choose Project on the pulldown menu. This adds a
green line showing the extrapolation and/or interpolation of the
gate onto the other records. You can use this as a reference for
picking the top of the gate on the other records. Pick the third
record (slightly different from the first), then choose Project again
to check the interpolation on the middle record. Pick the top for the
middle record if you think the interpolation would not be adequate.
9. Now you MUST initialize the bottom of the gate by clicking MB3
on the data and selecting New Layer from the pulldown menu. This
will add a new entry in the Pick Layers dialog that has the same
name as your table with a (2) prepended. Pick the bottom of the
gate on every record that has the top of gate picked.
NOTE: You MUST use the MB3 > New Layer feature to initialize and
pick the bottom of the gate. The miscellaneous time gate (GAT) table
must be picked with top and bottom pairs. Making a new table called
“(2) decon gate” will not work!
10. When you are happy with your picks, select File --> Exit/Stop
Flow, and select Yes to Save Edits Before Exiting. Your picks
should look similar to those pictured above.
Apply Preprocessing
2. In the first Disk Data Input make sure to select your 03 shots - with
geometry and choose to Get All traces. Do NOT leave this with a
sort for only 3 shots.
4. Open the Trace Muting menu, then click on the word INVALID to
get to the MUT table list and select the mute you picked in the
exercise in Chapter 2.
Click on INVALID
and select your
time gate from the
GAT table list
7. Select the dataset 06 Shots - with decon for the second Disk Data
Input menu. The dataset is empty now, but will be filled with traces
when this tool is executed in the flow, which is after Disk Data
Output has done its work.
10. The first “sub-flow” will run, reading data from 03 shots - with
geometry and apply the preprocessing and output to 06 Shots -
with decon. Transparently to the user, the job continues directly
into running the second “sub-flow”, reading data from 06 Shots -
with decon and displaying the data in the Trace Display tool.
Getting your first look at the stacked data will involve several
processing steps including reading in the data as CDP gathers, applying
NMO, and stacking.
1. Add a new flow named 07 - Brute Stack and add the modules as
shown below:
2. In Disk Data Input select your 06 Shots - with decon dataset and
choose to Sort by CDP. Enter */ for the sort order to select all
CDPs.
NOTE: Table files have a distinguishing name, and this VEL table
will automatically be added to the correct position in the folders of
your subproject.
Use a stretch mute of 30%. This is crude, but good enough for this class.
7. Add a new dataset named 07 Stack - Brute via the Disk Data
Output menu.
8. Use the Test Parameters icon to check your flow before you
submit it. If you have Intelligent Parameterization turned on, this
behavior is automatically being done and should show status
messages at the bottom of the Flow Editor window.
2. Select your stack dataset in the Disk Data Input menu and default
all other parameters so that you Get All traces.
4. Default all values for AGC. A 500 ms length is fine for this
example.
6. Execute the flow. You should have a Brute Stack similar to this.
You will generally want to prepare a special dataset for input to the velocity analysis tools. The
data must not have normal moveout corrections applied, but in order to improve the analysis you
would benefit from applying a standard bandpass filter, some type of amplitude scaling and
whitening applications. The input sort order will be CDP.
The output from this processing sequence will be a velocity parameter table which may be used in
subsequent processing.
You can operate on supergathers which combine several CDPs into a single location. Supergathers
can be helpful when analyzing low fold or poor signal/noise data.
o Velocity Analysis
o Using the Volume Viewer/Editor
o Velocity Smoothing with Volume Viewer/Editor
Velocity Analysis
Set the Minimum center CDP to 150, the Maximum center CDP to
750 and the CDP increment to 150 CDPs.
In this exercise we will use 1000 m/s and 5500 m/s for the minimum
and maximum velocities for the semblance analysis and display.
Enter 9 for the number of CDPs per stack strip. This matches the
number of CDPs available in the supergathers.
To generate velocity fan function strips for our velocity analysis, use
the default Top/base range for the method of computing stack
velocity functions. Also use the defaults of 500 for the Velocity
variation at time 0 and 1500 for Velocity variation at maximum time.
Set the Interactive Data Access option to Yes and allow the
remaining parameters to default.
In the menu image: note the explanation below about which items are
visible; the existing table is selected as the reference functions; and
the stretch mute for the gather panel is set to 60.0.
The input data for this tool is precomputed, which is the default
setting for the Velocity Analysis menu. (The VA Precompute tool is
optional because you could do the semblance and stack calculations
directly in the Velocity Analysis tool.)
Add a new table name such as Final velocities where your picked
functions will be stored.
Provide the existing table name initial picked vels as the Velocity
guide function table.
Set the Maximum stretch mute for NMO to 60.0. This value will be
applied to the gather panel only. Because the incoming data is
precomputed, the stretch mute is already applied for stack and
semblance data.
For the moment, select Yes for Set which items are visible. The
menu expands to show a large number of parameters. Check about
1/3 down the list and select Yes for Apply NMO on gather panel and
Yes for Animate NMO on gather panel. There are many other
options that we will not explore here.
Now select No for Set which items are visible to collapse the menu.
Your choices will remain in effect. They are only hidden to keep the
size of the menu small.
9. Execute the job. The IDA dialog and the Velocity Analysis tool
should appear on screen in a short time.
10. You can modify the display by selecting one of the pulldown menus
from the top of the Velocity Analysis display window.
Activate the picking icon (by default it is activated and black), and
begin picking a function with MB1. You can pick in either the
semblance display, or the stack display. As you pick velocities on the
semblance plot, the picks are also displayed on the stack strips, and
vice versa. Use the next ensemble icon to move to the next analysis
location.
As you add picks, the gather panel updates with the current
“function”. Beware of strong multiples on this data.
12. After you pick the first location and move to the second, you may
want to overlay the function that you just picked as a guide. You
can do this by clicking on the View --> Object visibility pulldown
menu and toggling on the Previous CDP checkbox. Other guide
function options are available from this dialog box.
Your velocity picks are automatically saved to the VEL table when
you move from one location to the next. You also have the option to
save picks using the Table/Save Picks option, which is a good
practice. When you choose to exit the tool, you will be prompted to
save picks if you have made new picks since the last save.
Be aware that the picks you see in the tool will overwrite an existing
velocity function in the output table as you move from location to
location.
As you pick velocities along a line using the Velocity Analysis tool, you
may want to QC the picked velocity field. This can be accomplished by
simultaneously viewing a color display of the entire velocity field. The
tool used for this is a standalone process called the Volume Viewer/
Editor. VVE communicates with the Velocity Analysis tool through the
Pointing Dispatcher (PD), which is initiated by the IDA option in Disk
Data Input. To establish communication, the Velocity Analysis tool
must be running before starting the Volume Viewer/ Editor. After
picking and saving at least one velocity analysis location, return to the
Navigator. You may choose to iconify the Velocity Analysis window.
1. Return to the Flow Editor for your velocity analysis flow. Click MB3
on all modules to toggle them off, and add the Volume Viewer/
Editor.
Select the velocity table Final velocity that you are using in the
Velocity Analysis tool.
Arrange the two tool windows on your screen until you have made an arrangement that is workable for you. If you have dual monitors, this should be easy.
If you have not picked any velocities (that is if the velocity table is
empty), the Volume Viewer/Editor display will contain zero
values, the display will be all blue, and the velocity scale will be very
large.
If you have only a single velocity function in the table, you will only
see a vertical color variation in the VVE Cross Section window. That
velocity function is extrapolated across the entire line/section.
Notice that the VVE tool shows light green vertical lines at CDPs 150 to 750, incrementing by 150. These are the locations for
data that is available to the Velocity Analysis tool. This information
is communicated through PD.
The Volume Controls dialog window will appear. Select the Cross-
section Nodes button, then click Ok. This will display vertical blue
lines in the Cross Section window indicating the positions of the
Velocity Analysis locations already picked and saved to the velocity
table. The VVE tool refers to these locations as velocity “nodes”.
When you are finished picking this new analysis location, select the
“Process next ensemble” icon again. This will not only move you to
the next analysis location, but will automatically send the velocity
picks just made to the Volume Viewer/Editor display.
9. With the “PD” icon activated, position the mouse cursor over a
node in the VVE display. The cursor should change from an arrow
to an “o” or small circle. Click MB1 on that location to retrieve that
velocity function into the Velocity Analysis display. Clicking MB2
deletes that function. You will have to re-pick that entire function if
you change your mind about the deletion.
10. Click on various locations in the Viewer and send data to the Velocity Analysis tool to get familiar with the PD behavior.
11. Continue picking velocities in Velocity Analysis until you finish all
of the locations on this project and are happy with the velocity field.
12. Once you have finished picking all locations, select the Bow and
Arrow icon to send the final picks to the Volume Viewer/Editor,
then deactivate the PD icon in the Volume Viewer/Editor.
13. Select File --> Exit/stop flow in the Velocity Analysis window.
14. Leave the Volume Viewer/Editor window active for the next
exercise.
Your stacking velocity picks have now been saved to disk for later use.
Before exiting the Volume Viewer/Editor, you are going to smooth the stacking velocities and save them to a NEW table name for later use in migration.
3. Notice that the velocity field is smoother, and the blue lines
indicating the node locations have changed in response to the
increment values you specified above.
Give your table the name smoothed for fkmig and click Ok.
Caution
Make sure that you select Save As to give your smoothed field a new name,
otherwise it will overwrite the input stacking velocity table and you will have to do
all the picking again.
o Final Stack
o Compare Brute and Final Stacks
o Poststack Migration Processes
o Tapering in Migration Modules
o Apply F-K Migration
o Compare the Stack and Migration
o JavaSeis Framework Create and SeismicCompare
Final Stack
Now that you have an updated velocity field, you should create a new
stack using the new velocities.
2. Select your new velocity Final velocity table in the NMO menu.
4. Select a new dataset name 10 Stack - final for Disk Data Output.
To compare the brute stack and final stack datasets, return to your earlier
flow that displayed the brute stack.
2. Add the process Disk Data Insert, and select your 10 Stack - final.
This process will insert your Final Stack into the flow after reading
the Brute Stack, but will not merge the datasets.
5. Use the Next screen icon to step forward to your Final stack. Both
stacks have been automatically saved as screen images.
Poststack Migrations
Migration Name          Category      Type   Velocity    V(x)  V(t/z)  Steep Dip  Rel Times
Memory Stolt F-K        F-K           Time   VRMS(x,t)   Poor  Poor    Fair        0.2
Steep Dip Explicit FD   FD (70 deg)   Time   VINT(x,t)   Fair  Good    Good       21.0
Time FD                 FD (50 deg)   Time   VINT(x,t)   Fair  Good    Fair       10.0
Reverse-Time T-K        Reverse Time  Time   VINT(t)     None  Good    Good        2.5
In the migration modules, the default tapers are: upper edge taper, 2 traces; lower edge taper, 20 traces; and bottom taper, 200 ms.
In this exercise, you will run a simple F-K migration on your data.
2. Select your 10 Stack - final dataset and use Get All in Disk Data
Input.
Select Yes for Get RMS velocities from database, and select your
smoothed velocity table smoothed for fkmig.
Set the percent velocity scale factor to 95. There is no special reason for this choice.
When the migration job finishes, you should compare the stack and
migration. Return to your earlier flow that displayed the brute stack and
final stack.
Look creates a temporary flow that includes Disk Data Input and Trace
Display, using the default values of these modules. There is no option
for additional processing to be done -- this is simply a convenient way to take a very quick “look” at the data.
Use the Look option to view any of the datasets in this project. You will
see a job with the name LookNNNN.0 in the Job Viewer. When you exit
the Trace Display, the temporary flow is deleted.
NOTE: The JavaSeis Framework Create tool adds two files that are
a logical part of the original ProMAX dataset. The ProMAX
components of the dataset are unchanged, and you can still use it as
input to any appropriate ProMAX tool.
o What is JavaSeis?
o JavaSeis Dataset Organization
o JavaSeis Terminology
o How does JavaSeis work?
o JavaSeis Dataset Examples
o JavaSeis Data Input - parallel distribution examples
Introduction to JavaSeis
What is JavaSeis?
JavaSeis is an open source data format for seismic data. The format was
defined through the open source JavaSeis Project which is hosted by
SourceForge.net. The JavaSeis.org website has a great deal of
information, but the content is mainly useful to software developers
rather than end users.
A JavaSeis dataset follows the rule of “a place for everything, and everything is in its place” (if each thing exists). The location of any unit of data within a JavaSeis dataset is defined by the indexed structure of the Framework.
The Trace axis has a fixed number of traces with a logical minimum,
maximum and increment value. A shot record is a set of traces, and we
often identify those traces by the recording channel number. The Trace
axis may use channel number (CHAN) as the logical labels.
JavaSeis Terminology
Here is a glossary of terms that are used with JavaSeis datasets. We start
with the generic names for the dimensions or axes of a JavaSeis dataset.
• range - the triplet of {start, end, increment} values that define the sampling or labeling of a dataset axis.
The Data Context describes the overall organization of the data within a
running job. There is typically only a small amount of data at any point
within a running job, but the system “understands” the entire Data
Context at each point in the job.
The basic unit of data passing through a job is a Frame. Processes may
be applied to individual traces within a Frame, but data moves from tool
to tool through the job as Frames.
There also are processes that operate on a volume of data at once, which
allows application of true 3D algorithms. Tools of this kind are
commonly implemented as Distributed Array tools, which allow the
coordinated use of the memory of multiple nodes. The Distributed Array
is loaded “a frame at time” until the complete logical volume is in the
array, then the chosen algorithm is applied. This will be explained in a
bit more detail in a later chapter. There are even more complex
algorithms being developed for the Distributed Array that allow
operations between and among multiple volumes, but that is beyond this
course.
Each shot record is a Frame, and the set of Frames makes a single volume of data, hence a 3D dataset. In essence, a shot ensemble is a Frame. Conventional prestack 2D data therefore has three dimensions as a JavaSeis dataset.
As a disk dataset, this will be read one Frame at a time. Each Frame is a
shot record, which should be familiar as an ordinary “ensemble” of
traces.
In this example the prestack data are sorted into a number of Volumes,
where each Volume contains a specific range of offsets and each offset
range is assigned an “offset bin number” (commonly OFB_NO). Within
each offset volume the data are organized in Frames of Inlines, and each
Inline contains the set of Traces that are the range of crosslines within
each Inline.
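To make the indexed structure concrete, here is a minimal sketch of how a logical label on each axis maps to an array index. The class and method names are illustrative, not the actual JavaSeis API, and the 24 offset bins are an assumed example.

    from dataclasses import dataclass

    @dataclass
    class Axis:
        label: str    # e.g. "XLINE_NO", "ILINE_NO", "OFB_NO"
        start: int
        end: int
        inc: int

        def index_of(self, logical):
            # map a logical label (e.g. crossline 205) to a 0-based array index
            if logical < self.start or logical > self.end or (logical - self.start) % self.inc:
                raise ValueError(f"{logical} is not on the {self.label} axis")
            return (logical - self.start) // self.inc

    trace_axis  = Axis("XLINE_NO", 1, 390, 1)   # Traces are crosslines
    frame_axis  = Axis("ILINE_NO", 1, 308, 1)   # Frames are inlines
    volume_axis = Axis("OFB_NO",   1,  24, 1)   # Volumes are offset bins (assumed)

    # where crossline 205 of inline 40 in offset bin 3 lives:
    print(volume_axis.index_of(3), frame_axis.index_of(40), trace_axis.index_of(205))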
1. JavaSeis Data Input reads traces from a JavaSeis dataset that was
created and populated by JavaSeis Data Output. Parameters in the
JDI menu control how data is distributed in a parallel job.
4. Inline Merge Sort reorganizes and physically sorts the traces into a
new JavaSeis context. Data are not output from this tool until all
traces have been read into it and been sorted. The tool will
exchange traces between nodes automatically to accomplish
sorting.
NOTE: The Data Context Editor only “describes” how the data is
organized and how it is to be passed through the system. This tool does
not sort traces, it does not physically rearrange Frames, it does not
change trace header values -- it does not physically change anything
about the trace data. It only describes the organization. If you describe
the data incorrectly, unexpected things may happen.
The exact options are dependent upon the Framework of the input
dataset.
• Block by Volume
• Block by Frame
• Circular by Volume
• Circular by Frame
There are three other options for “Workpile” which are typically only
used for shot migrations such as RTM. We will only illustrate the four
common methods here.
This indicates that we have 16 shot lines (S_LINE), each of which has
11 shot stations (SOU_SLOC). For this exercise, we really do not care how many traces there are per shot record, or what the trace length and sample rate are, because we read JavaSeis data “a frame at a time”. The size and content of the Frame do not matter to the parallel distribution method.
Let’s look at this as a fully populated dataset, meaning that every Frame
contains a shot record.
Notice above that the 4th Exec has less data to process -- it only gets two
rows of Frames, while the other Execs get three rows.
Notice that no data exists in Volumes 13-16, so the 4th Exec has no work
to do. This means you might have a node tied up (via the job queue) for
that Exec, but the node would be idle. This is highly undesirable. The
problem is that the system does not know where data exists in the
dataset. When a job runs, each Exec must check whether any data exists
for each Frame. It is up to the user to create datasets that are “compact” and have as few “empty” Frames as possible. You should avoid over-specifying the Framework size.
Here is a way that we can work around this problem, if it does occur. The
simple solution is to use Circular by Volume distribution:
In the example above, all four Exec processes will have a relatively
equal amount of data to process. No Exec will be completely idle, so
there will be no “wasted” resource.
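The difference between the Block and Circular distributions can be sketched in a few lines of illustrative code (this is not SeisSpace internals). With 16 Volumes, 4 Execs, and live data only in Volumes 1 through 12:

    def block(items, n_execs):
        # contiguous blocks: Exec 0 gets the first chunk, Exec 1 the next, ...
        size = -(-len(items) // n_execs)          # ceiling division
        return [items[i * size:(i + 1) * size] for i in range(n_execs)]

    def circular(items, n_execs):
        # round-robin: item i goes to Exec i % n_execs
        return [items[e::n_execs] for e in range(n_execs)]

    volumes = list(range(1, 17))                  # 16 Volumes (S_LINE 1-16)
    live = set(range(1, 13))                      # only Volumes 1-12 hold data

    for name, dist in (("Block by Volume", block), ("Circular by Volume", circular)):
        work = [sum(v in live for v in part) for part in dist(volumes, 4)]
        print(name, "-> live Volumes per Exec:", work)
    # Block by Volume    -> [4, 4, 4, 0]   (the 4th Exec is idle)
    # Circular by Volume -> [3, 3, 3, 3]   (every Exec has work)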
Project Overview
You will process a land 3D project from raw records and geometry
through prestack and poststack migration. The emphasis will be on
introducing features and options in the system rather than the variety of
geophysical tools.
We will treat this as a re-processing project. The SEGY data has full
shot and receiver geometry information in the headers, and we will use
this to initialize the database. There are no duplicated shot locations and
no duplicated FFID numbers. We have prior first break pick times and
NMO velocity data in ASCII format.
We know the specifications of the CDP binning grid, but we will also
look at a method to automatically calculate a binning grid that fits the
midpoint locations.
Project Specifications
• This project has a multi-cable rolling swath shooting geometry.
• The shot line numbers range from 1 to 154 and are 165 ft apart
• The shot station numbers range from 101 to 306 and are 165 ft apart
The full extraction process makes one very critical assumption in that
there must be some unique trace header value for all traces of the same
shot and of the same receiver. That is, there must be something unique
about each source and each receiver position in combination with the
channel number. The uniqueness can be based on any of the following:
FFID numbers, or SOURCE attribute numbers, or surface station and
line numbers, or XY coordinates, or recording date and time values.
In this exercise, you will read seven SEGY files and extract the
geometry from the headers to build a database. You will also output the
trace data to a JavaSeis dataset.
2. Click MB3 on the new subproject name, select New Flow from the
pulldown menu and give it the name 01 - Extract database files.
Initially, add only the modules shown below.
3. Parameterize the SEGY Input tool. Items flagged with arrows need
your attention for this exercise.
4. Select Disk for the Type of storage to use. Then choose Browse to
open a dialog to choose the SEGY files. Your instructor will
provide the directory name where you can find the seven SEGY
files.
When you are at the directory indicated by your instructor, you will
see the SEGY files in the Files panel. You may need to stretch the
dialog to a larger size to see the full length of names.
Use MB1 and Shft-MB1 to select the files shown in the Files panel,
then click on Add to put the pathnames for all files in the bottom
panel. With all seven files showing, click on Done to close the
dialog and return to the SEGY Input menu.
Attribute names are not sensitive to case, but you may find it easier
to read attribute names using upper case characters. The format
value “4I” indicates 4-byte integer format. The format value “4R”
indicates 4-byte real (floating point) format, and requires the
additional code for IEEE (as opposed to IBM) in order to read the
bytes correctly.
sou_sloc,,4I,,197/s_line,,4I,,201/srf_sloc,,4I,,205/r_line,,4I,,209/
fb_pick,,4R,IEEE,233/cable_id,,4I,,237/
Note that the format code 4I is written with the upper-case letter I, not the digit 1.
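If the comma-separated punctuation is hard to read, this short sketch (not a ProMAX utility; the field layout is inferred from the string above) unpacks each entry of the remap string:

    remap = ("sou_sloc,,4I,,197/s_line,,4I,,201/srf_sloc,,4I,,205/"
             "r_line,,4I,,209/fb_pick,,4R,IEEE,233/cable_id,,4I,,237/")

    for entry in filter(None, remap.split("/")):
        name, _, fmt, flag, byte = entry.split(",")
        kind = {"4I": "4-byte integer", "4R": "4-byte real"}[fmt]
        extra = f" ({flag})" if flag else ""
        print(f"{name.upper():9s} -> {kind}{extra} starting at byte {byte}")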
Test Parameters
The message Error trying to build device list indicates that the
pathname to a SEGY file is incorrect. Check the spelling carefully.
Fix any errors and select Test Parameters again, and repeat until you
get the message Successful Init Phase!
10. AFTER you have a Successful Init Phase, add the Data Context
Editor tool to the flow. The Init Phase gathers information from the
SEGY Input menu and from the SEGY file that is used to populate
a number of parameters in the DCE menu.
General Tab
The SEGY Input menu indicates that the input data are shot records, so
the fundamental data type is automatically set to SOURCE.
ProMAX Tab
Many ProMAX modules need to know the primary and secondary sort
keys, as well as certain other legacy information that is declared in the
ProMAX tab.
The value Yes for ProMAX TRACENO header values are valid is tied
to the use of Extract Database Files and writing a dataset to disk. Each
trace of the dataset uniquely maps to specific information that gets
written to the TRC Order of the geometry database.
Sample Tab
These menu values are taken from the SEGY binary header. Make sure
your values match what is shown above.
Trace Tab
The CHAN (recording channel number) is selected for the Trace axis
with values ranging from 1 to 848, with the maximum number taken
from the SEGY Input menu.
The Physical units can be changed to feet if you like. However, this
value is not used by any process, so it does not matter. The Physical
spacing of traces can be left at 1.0. These values are IGNORED by the
system.
Frame Tab
For this dataset we need to pay careful attention to the Frame and
Volume parameterization. This dataset was carefully checked to ensure
there is no duplication of shot station number within any shot line. That
is, every shot record can be uniquely identified by its shot station
number and shot line number.
A Frame is an Ensemble.
Volume Tab
The Volume axis will be S_LINE (Source line, swath or sailline) with
minimum and maximum values of 1 and 154. All SOU_SLOC values
within a given S_LINE will constitute a logical “volume” within the
dataset.
NOTE: This dataset has 2094 shot records. The range of SOU_SLOC
values is 206 and S_LINE values is 154. This accommodates 206 X 154
= 31,724 shots. However, only about every sixth shot station was
occupied and only half the shot lines were used, which reduces the
number by roughly a factor of 12. You will also see that the corner areas
of the project space were not shot.
12. Add a JavaSeis Data Output tool to the flow. Open the menu and
click on NONE to get to the datasets list, then click MB3 and add a
new JavaSeis dataset with the name “01 Shots - extracted”.
Be aware that there are some infrequent situations when the framework
can only be created at runtime.
13. Notice the previous comment! This job is a case where you must choose Yes to Create/recreate dataset at runtime. You can click on the Create button to see what the Framework looks like, but you will still have to select Yes as described above.
14. When you believe you have all the parameters set properly, click on
the Create button in the JDO menu. If parameters are valid, the
framework for the output dataset will be created and shown to you
at the bottom of the menu. Check the values and make sure they are
correct. It is possible to have a valid framework that has the wrong
ranges of values. If any errors or problems are found, it will report
(FAILED 1b) or very similar, and you will see messages at the
bottom of the flow to help diagnose what is wrong. Your instructor
can provide assistance in deciphering errors.
15. Execute the flow. This flow must be run with a single Exec (or
joblet). Your instructor will discuss Execs and joblets as we
progress further into the class and start running jobs in parallel.
There are two reasons this flow cannot be run as a parallel or multi-
joblet flow. First, SEGY files cannot be read in parallel, as they are
only read from the beginning to the end. Second, this job includes
the Extract Database Files module that writes to the OPF database
files. This MUST be done as a single joblet as we are not able to
write to the OPF database files in parallel.
In the ProMAX tab of the DCE menu, you selected Yes that the
TRACENO values in the trace headers are valid. This declares that
there is an explicit one-to-one match between the traces in the
dataset and the TRC OPF (Ordered Parameter File). This “match” is
how we associate the database information with the trace data.
Another term you may see is that the dataset has “valid trace
numbers,” permitting further processing with a consistent pairing
between the OPFs and the dataset.
Parameter Defaulting
User Defaults
As you build more and more flows, you may find that you want to
change the defaults of some parameters in some processes so that you
don’t have to continuously change them every time you use the same
tool. This section will introduce this concept by setting some parameters
in the JavaSeis data output process.
NOTE: This procedure is used for both ProMAX and SeisSpace menus.
The defaults are stored in a special flow. The administrator may set up
some defaults for everyone to use, but users can set up their own
defaults. On the Navigator tool bar, select Edit > Administration >
Edit Parameter Defaults > User Defaults as seen below:
A special flow editor is opened in a new tab called the Default Flows tab
and is automatically called User.0 [Parameter Defaulting].
Add the Trace Display tool (blue ProMAX version) and open the menu.
Near the bottom of the menu, select Entire Screen for the Trace scaling
option, and click on the checkbox to the left. It is the checkbox that
makes the default take effect, not just changing the value.
Save this flow. If you do not save the flow, you have not changed the
default.
Now, every time you select (blue) Trace Display for a flow, the menu
default for Trace scaling option will be Entire Screen. You can change
the default values for any parameters in the menu to suit your
preferences. Of course, these parameters can be changed to the value
you need in case your preferred default is not appropriate to a particular
flow.
You can add tools (modules) to this special non-executable flow as you
find more tools where you want to change the defaults. This flow is
hidden from your real Areas/Lines/Flows, as it has a unique purpose in
the system.
5. On the spreadsheet window, select View -> View All -> Basemap
to open a receiver and shot location map. When shot information
exists, it is automatically added to the view.
6. Select the Cross domain (double fold) icon. This icon gives access to a variety of information. Move the cursor onto the data area and
check the mouse help information below the map. Hold down MB1
to see the receivers for the shot closest to your cursor. Move the
cursor across the project to get a better understanding of the
shooting geometry. Hold down MB2 to highlight the shots that
were recorded by the receiver nearest the cursor.
8. Select the Zoom icon and use MB1 to zoom in. Re-select the
Cross-domain icon. Click and hold MB3 on any receiver, then drag
the cursor to check the spacing between receiver stations and
receiver lines. Distance and azimuth values are shown below the
map as you drag the cursor. Receiver spacing is ~165 feet and cable
spacing is ~330 feet.
9. Select Color > Bar to display the elevation color scale. Select Color > Extents to change the elevation range.
10. Select Views > Remove > Shot and Receiver based Field of
Elevation.
11. Return to the main menu and click on Sources to open the sources
spreadsheet.
Click MB1 on any shot location on the basemap. This action shifts
the spreadsheet to that shot.
After selecting a shot location with the Report feature, click MB1
on the BORDER of the spreadsheet window. Now you will see the
selected shot highlighted with a black box.
The Setup menu allows you to define global information applying to the
configuration and operation of the Geometry Spreadsheet. Much of this
is already set correctly by the Extract Database Files tool that
initialized the database.
Station Intervals section sets values that are primarily used for QC
functions that will be explained later.
Nominal shot station interval -- leave at 0.0. Shot spacing for this
project is too irregular for us to do any useful QC based on spacing.
Nominal Survey Azimuth -- 87.5. This is the correct value for the
project, but does not affect any QC that we will perform.
Source type -- Select Surface seismic source for this project. This
value sets the appropriate default value for certain menus related to
datum statics.
NOTE: The Units value only affects the annotation that you see on
displays, such as showing “m/sec” or “ft/sec”. It is assumed that all data
used in the system has the same unit type. If you have coordinate data in
metric units and need to import velocity data that has English units, you
will have to adjust the velocity values to metric unit values before you
use the velocities. This can be done in the ProTab editor, which you will
use later in this class.
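The conversion itself is a single multiplication, sketched below with an assumed example velocity:

    M_PER_FT = 0.3048                   # 1 foot = 0.3048 meters exactly
    v_ft_per_s = 10000.0                # a velocity from an English-unit table
    v_m_per_s = v_ft_per_s * M_PER_FT   # 3048.0 m/s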
2. Click the OK button to register these values and close the dialog.
Receiver Interval QC
1. Open the Receiver Spreadsheet for another QC option.
2. Click MB1 on Setup > Sort > Ascending. The following warning
appears.
We will sort the receiver data only temporarily. You will be warned
again before you can commit to saving the sorted data. And, even if
you did save, the tool will ensure that the integrity of the geometry data
will be retained. Click Ok.
3. Notice the help information at the bottom of the window. You need
to sort the receiver data by Line and by Station (within each line).
Click MB1 on the column heading Line, then click on Station.
The data is now sorted, even if you didn’t see anything happen.
The QC column compares each interval against the nominal station interval of 165 that was entered in the Setup menu. Scroll down to look for anomalous values. You will see a big difference at the change from the last station on a line to the first station on another line. Do not make any changes.
6. Select Setup > QC > No QC Fields. You must do this or the entire
spreadsheet program may fail.
Midpoint Assignment
This exercise explains the CDP binning procedures. We are treating this
as a reprocessing project and already know the details for the binning
grid that we need to use.
You will also see how to automatically calculate a binning grid that fits
exactly onto the midpoint positions. You can manually adjust the grid to
optimize its position.
1. In the main menu click MB1 on Bin. to open the dialog below.
A good way to remember this is to think of the binning grid based on the
back of your left hand, with your index finger pointing along the Y-axis
and your thumb pointing along the X-axis. In essence, the bin grid uses
quadrant I of a Cartesian system.
(Diagrams: example binning grids at azimuths of 225, 45 and 87.5 degrees, showing the positions of Inline 1, Xline 1 and CDP 1, and whether the Inline direction is parallel to the X or the Y axis.)
1. Select Define binning grid from the binning window and click Ok.
2. Select Display > Midpoint > Control Points > White (or choose
black if you prefer).
3. On the XYGraph display select Grid > Display, then Grid >
Parameterize.
This default grid has the X-Y Origin at the lower left corner of the
map and 10 by 10 cells with 100 by 100 spacing.
For this project we will assume that the grid details have been
provided from previous work. If you were given three XY corners,
some simple math would be needed to calculate the values required.
TIP: If you were not provided the bin grid parameters, you could do
some simple calculations based on shooting geometry and shot and
receiver spacing, as well as finding an appropriate azimuth using the
shot-receiver basemap features that were shown earlier.
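As a hedged sketch of that "simple math", the function below maps a midpoint (x, y) to inline and crossline numbers given the grid origin, azimuth and cell sizes. The axis and sign conventions shown are illustrative assumptions; verify them against your own grid definition before trusting the result.

    import math

    def xy_to_bin(x, y, x0, y0, azim_deg, dx=82.5, dy=82.5):
        # rotate the vector from the grid origin into the grid frame
        a = math.radians(azim_deg)
        u =  (x - x0) * math.cos(a) + (y - y0) * math.sin(a)   # crossline direction
        v = -(x - x0) * math.sin(a) + (y - y0) * math.cos(a)   # inline direction
        xline = int(round(u / dx)) + 1    # origin cell is Inline 1, Xline 1
        iline = int(round(v / dy)) + 1
        return iline, xline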
4. Zoom in on the midpoint map to see how well it fits over the data.
This project was carefully shot, and the midpoint data fall in
relatively tight clusters that generally fall nicely centered in the grid
cells.
5. Scroll to the northwest corner of the grid. Notice the origin grid cell has an X in it. The Origin X-Y is the center of this grid cell.
6. If your grid does not fit neatly over the midpoints, you can select
the Move Grid icon and drag the grid using MB1.
Notice the appearance of the grid along the north edge, and compare it to the next image after the grid display mode has been switched.
7. Select Grid > Drawing to change the grid display mode. In the alternate Drawing mode, the grid display changes to show the center of each grid cell as a “crosshair”. This lets you put the grid centers more precisely on the midpoint clusters.
8. Select Grid > Drawing a second time to change the grid display mode back to normal.
NOTE: The Grid > Drawing option is a toggle that changes the way
the grid is displayed. The grid parameters are not changed by using
this option. Select Grid > Drawing a second time to return to the
normal grid display mode.
9. Save the grid definition by selecting Grid --> Save to, give your grid a name such as Final typed-in grid, and click Ok.
10. Exit from the XYgraph by selecting File > Exit > Confirm.
Re-load the final CDP Binning info and Complete CDP Binning
1. Return to the 3D Binning and QC window and select Bin midpoints
> OK. Click on Load to bring in the final grid parameters into this
menu.
2. Set the Min offset to bin to 165.0 and Offset binning increment to
330.0. There are a variety of rationales for selecting the offset binning parameters. We chose to use twice the receiver interval.
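A sketch of what those two numbers imply, assuming bins start at the minimum offset and are one increment wide (the tool defines the exact convention):

    def offset_bin(aoffset, min_offset=165.0, bin_inc=330.0):
        if aoffset < min_offset:
            return None                        # below the minimum offset to bin
        return int((aoffset - min_offset) // bin_inc) + 1

    for off in (100.0, 200.0, 500.0, 830.0):
        print(off, "->", offset_bin(off))      # None, 1, 2, 3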
Enter values for the three parameters indicated; these are the only values needed to calculate a grid that fits exactly over the midpoints.
2. Enter values for Azimuth, Grid bin X dimension and Grid bin Y
dimension. (87.5, 82.5 and 82.5). Then click MB1 on Calc Dim on
the bottom left of this dialog. This will calculate values for the next
four parameters.
3. You now have parameters for a binning grid that fits a minimum
sized rectangle containing all the midpoint data of the project.
SAVE the Grid. Change the grid name in the menu above from
DEFAULT grid to something very descriptive, such as
calculated grid 87.5 Azim, 82.5x82.5.
4. Now you can return to the 3D Binning and QC window and select
the Define binning grid option and open XYGraph. Display the
midpoint data. Select Grid > Open and select the calculated grid
that you just saved.
6. Return to the 3D Land Midpoint Binning menu, Load the grid, set
all other parameters as discussed previously, then Apply the grid.
7. Examine the resulting plot, and ensure that inlines range from 1-
308 and crosslines range from 1-390. If your ranges are wrong, then
you probably set the inlines to be parallel to the X-axis instead of
the Y-axis in a previous step. Go back and correct it now, if needed.
8. Select File > Exit from the main spreadsheet menu to exit the
Geometry Spreadsheet. Close any remaining displays such as
XYGraph. It is a good idea to keep your workspace clear of
unneeded windows.
9. Open DBTools and go to View > LIN. A new dialog will appear.
Scroll down to the bottom of the window and click on View Bin
Design.
10. Make sure your bin display looks like the display below. Note the
location of the origin, the number of Inlines and Crosslines. If your
display does not match this, return to the Geometry Spreadsheet to
determine what you need to change.
12. Use Database --> Close to exit the Line Database Editor.
1. If the geometry in the database looks good, build the following flow,
03 - Load Geom to Headers:
2. In JavaSeis Data Input select your input dataset that contains the
shots after geometry extraction.
3. In Inline Geom Header Load select the option to match the traces
by their “valid trace numbers”.
Since the traces were read and counted with Extract Database Files,
you have a “valid trace number” to identify a trace. In case the grid
definition you used was a little different from the one in the book,
select Yes to the two Drop Traces questions to avoid any traces with
“null” trace values for CDP or receiver location.
NOTE: To compare multiple menus, open one menu with MB2, then
open the other menu by holding the Cntl key as you click MB2.
5. Execute this flow. This flow can be run either single or multiple
joblet depending on the training environment. Your instructor will
describe the options that are available for this class.
6. Use MB3 on the dataset name to open the Foldmap of the output
dataset and watch it populate as the job runs. On the Foldmap,
select Options -> Update Automatically.
7. After the job finishes, go to the Datasets list in the Navigator and
select MB3 -> Properties on the output dataset name. Review the
details of the dataset that are available here.
8. The dataset should contain exactly 1,699,444 traces and reflect that
both the Geometry and Trace numbers match the database.
The preprocessing flow that you will build will apply a first break
suppression mute and a simple spiking deconvolution. Therefore, you
must pick a top mute and a miscellaneous time gate (decon design gate)
to satisfy the parameterization requirements of these processes.
Since 3D shot records usually span multiple cables, they will typically
have some duplicate offsets. Sorting the shot record by offset may help
pick the parameter tables, since both tables are time values, interpolated
as a function of offset.
There are two basic ways we can approach the location issue. We could
go directly from the Foldmap and display an arbitrary selection of shots,
picking tables from each shot. This would be fine unless we wanted to
revisit those exact locations to make changes to the way the tables were
picked.
2. From the datasets list use the MB3 options menu and select Fold
Map for the 03 shots with geometry dataset.
3. The view above suggests locations you might use for picking
parameters.
You could instead output the five selected shots to a dataset that keeps the full Framework of 31,724 Frames (206 shot stations times 154 shot lines). The inconvenience of this is that
the Foldmap would have five tiny blue spots showing the shot locations
and you would have to zoom in on the Foldmap in order to find and click
on each location to bring the data into the Viewer. If you wanted to use
this sparse dataset in the Seismic Compare tool, finding the live shots
within the sparse dataset is even more difficult.
NOTE: This flow will include the module Data Context Editor,
commonly called DCE. This tool allows the user to change the data
context in the flow. When you have Intelligent Parameterization
“active”, the context of the flow is being evaluated as you add modules
and change parameter values. We recommend that you build the flow
and make parameter choices for all tools that come before DCE in the
flow. This especially applies regarding the input dataset in JDI, which
sets the initial context of the flow.
Add the DCE menu after you have made the key parameter choices.
When you add the DCE menu to the flow, its parameters are populated
with the data context of the flow at that instant. If you then change
parameters or modules, this may change the context of the flow, which
would then require changes in the DCE menu.
You can use the Reset button in the DCE menu to refresh all its
parameters to match the current context according to the tool and menu
choices above it in the flow.
BE AWARE that the Reset button refreshes ALL TABS of the DCE
menu.
You must type the numbers shown below into the selection list parameter of the JDI menu. Each entry selects one Frame in the form Volume:Frame, that is, S_LINE:SOU_SLOC. All parameters not shown further down in the menu can be defaulted.
24:159/24:265/80:218/135:133/135:294/
3. We use Trace Header Math to renumber the five shot records. Set
the Select mode option to Sequence renumber mode.
4. Type in SEQ as the name we will give to this new header.
5. Set the starting and increment values to 1. Your THM menu should look like this:
At this point in the flow we now have a convenient attribute called SEQ
that numbers the shot records from 1 to 5 (remember, we only have 5
records in this dataset). We will use this attribute as part of a new data
context.
7. Select the Frame tab of the DCE menu and choose the SEQ
attribute and set the first and last SEQ values to 1 and 5
respectively.
NOTE: If you feel you have made a mess of the DCE menu parameters,
you can click on the Reset button at the bottom. Beware that this resets
every menu item of every Tab to match the context above this point in
the flow. Use the Reset button with caution.
NOTE: Check the framework shown at the bottom of the menu and
confirm that it is 3-dimensional with SEQ 1-5(1) as the Frame axis.
9. Execute the flow. You should inspect the job log after the job
finishes. The summary at the bottom of the log should help you see
how much data was processed.
Notice on the Fold Map that the horizontal axis shows that there is only
one single Volume, which is evidence that this is a 3-dimensional
dataset. The vertical axis is labeled with SEQ(1-5,1), indicating there
are five Frames in this dataset.
2. Click on any of the five shot locations in the foldmap to display the
corresponding data in the viewer.
Notice that each shot record is displayed with the traces in their existing
sort order which is by channel number. You can see each of the eight
cables of the shooting geometry.
3. Change the sort order of the traces by selecting the A->Z icon on
the viewer, then select AOFFSET as the sort key. Click on the
CLOSE button to dismiss the sort dialog.
NOTE: The display contains a Frame of traces. We can sort the traces
within a Frame at any time, including while they are in this display tool.
4. Click on the Pick Editor icon and select Top mute. Alternatively,
you can select Edit on the tool bar and select Pick editor to
initialize picking. In the bottom of the dialog box you will need to
provide a name for the mute table that you will pick. All mute
tables appear in the same table list, so it is important to use
descriptive names for table. The name should include information
to remind you of the purpose of that mute. Something like “top
mute - pre-decon” is recommended.
7. As you pick, notice that the red line is simply a straight line
between pick locations on the screen. The green line is the
interpolated value based on the AOFFSET value of each trace in the
display. This is how the mute will actually be applied. Use the
Refresh icon on the Picks dialog box to see the effect of the mute.
NOTE: this is a graphical effect only; no traces are actually being
processed.
8. Create a “miscellaneous time gate” using the Edit --> Pick Editor
pull down or the Picking icon in the icon bar to use as a time
window for the deconvolution design gate. Do not include any first
break or refraction energy in this design gate. After you pick the
top of the miscellaneous time gate, move the cursor onto the trace
data area and click MB3 --> New Layer to add the bottom of the
miscellaneous time gate.
Notice that the Picks dialog highlights the active element that you are
picking. Be careful as you move from one shot to another that you select
the appropriate item in the Picks dialog. The “- base” is the bottom of
the time gate immediately above it in the list.
In the Trace Display window, red shows the active picks, green shows
the interpolation of the red picks and blue shows other pick elements
that are not active.
9. View all shots and adjust the top mute and deconvolution design
gate as necessary. It is a good practice to select the Save All button
occasionally in the Picks dialog.
10. Exit from the Trace Display (2D Viewer). You will be prompted to
Save Picks if you have not saved since your latest pick.
NOTE: If you close the Fold Map before exiting the Trace Display, you
will lose any unsaved picks. When picking parameter tables, you should
always save picks or exit the Trace Display before closing the Fold
Map.
In this section and the next section, we will show two different methods
for displaying power spectra and comparing data. In this section you
will create a ProMAX-style job flow and run the Interactive Spectral
Analysis (ISA) tool. In the subsequent section you will see how to use
SeismicCompare with its interactive processing capability and spectral
display view to perform a very similar exercise.
As with all tools and features shown in this class, you can choose what
is useful for your purposes.
2. Exit from the display using the File > Exit/Stop Flow pull down
menu.
3. Return to the flow and change the ISA Data selection mode to
Single Subset.
In this mode you can select a Single Subset of the available data for the purposes of computing the average power and phase spectra for a smaller Time-Offset window of the record.
5. Click on the Select Rectangular Region Icon. Use MB1 to set one
corner of the analysis window, then click MB1 a second time to set
the opposite corner of the window. The data window and spectral
windows will change configuration to match your data selection.
Exit from the display using the File > Exit/Stop Flow pull down
menu.
6. Return to the flow and change the ISA Data selection mode to
Multiple Subsets. Also select Yes to Freeze the selected subsets in
the ISA menu.
8. Select the Select Rectangular Region icon. Use MB1 to set one
corner of the analysis window, then click MB1 a second time to set
the opposite corner of the window. Select the Options > Spectral
Analysis pull down menu. If you select a new area and repeat the
Options > Spectral Analysis pull down selection, a new window
will appear. In this way you can compare the spectral results for
different areas.
2. This flow will read the first shot frame, apply a default AGC and
then apply the first break mute that you picked earlier. This frame
(shot record) will be duplicated and the second copy will have the
decon applied using the design gate you picked earlier and the
default parameters.
3. After the display comes up you can select the Options > Spectral
Analysis pull down menu to show the spectral estimate for the data
before decon.
5. Select the Options > Spectral Analysis pull down menu again to
show the spectral estimate for the data after decon.
You can experiment with selecting subsets of the shot record before
and after decon. Notice how the tool remembers the selection
window from the first copy to the second copy of a record. Each “new” shot record has a different configuration, so the window expands to the full record and you must select a new subset.
You will use the 04 shots for testing dataset, but the capabilities shown
in this section are generally applicable to any JavaSeis dataset.
1. In the Folders view for your subproject, click on or toggle open the
Datasets level.
2. Click MB3 on the 04 Shots for testing dataset and choose Seismic
Compare. The following display will appear.
The red border around a data panel indicates it is the selected panel
that will be acted on. Typically, the Hand (selection) should be
activated (click on it) in order to select a panel. Click on the data of
either panel to select it, as confirmed by the red border. Beware that the
Rectangular selection icon provides a different function that will be
exercised below.
You can use any tool in the Processes list that operates on individual
traces or on single ensembles. This capability makes parameter testing
very easy.
Left: the initial SeismicCompare flow dialog. Right: the flow dialog after adding the TAR tool.
5. Click on the Submit icon (Process Initial Data) on the bottom left.
This applies the processes in the Flow editor to the highlighted
Frame in the display. The default option of “New Tile” opens the
processed frame in a New Tile, giving you two tiles or panels in the
display.
6. You now have two copies of the shot record, one “raw” and one that
has 6 dB/sec gain applied. It may be easier to evaluate the data if
you zoom horizontally to show only one or two receiver lines of
data and change to wiggle-trace mode (short-cut: click MB3 on the
data area and choose Quick Display > Wiggle/Variable Area
Display).
7. The dataset name is shown below the data.
8. Click MB3 on the data area for a pulldown menu that allows you to
modify the display. The option to switch easily between variable
density and wiggle trace is very handy.
9. The red border around a tile indicates it is the “selected” tile. Click
on the panel on the right to select it, then click on the Delete key on
your keyboard. The selected tile should disappear from the display.
10. Let’s run a test to find a reasonable value for the gain correction.
Return to the SeismicCompare flow editor and open the menu for
True Amplitude Recovery. This flow editor has two special
features that make testing simple. One allows you to select a menu
parameter for testing multiple values of a parameter, and the other
allows the tested parameter values to be annotated automatically on
each data tile.
Use Cntl-MB2 to select a menu parameter for testing; use MB2 to annotate a parameter’s value on the data.
Enter the values to be tested, which are 3, 6, and 9, using the “pipe” ( | ) character as a separator. Spaces are not required, but they make it easier to see the values clearly.
You may add more tools to this editor, but you can test only one menu
parameter at a time. When you click Cntl-MB2 on another menu
parameter, this automatically turns off any other test parameter.
Previous test values for the testing option are remembered, in case you
want to review those tests.
You should now have a display with four tiles. Use the Location
Selection dialog to move to another shot location. Notice that the
processes are applied and you get the same set of panels, but with a new
data Frame.
13. Return to the flow editor and use Cntl-MB2 and MB2 to turn off
the test mode for the TAR menu.
15. Add the tool Trace Muting, and select the mute table that you
picked earlier.
16. Click on the Submit icon to apply these processes, and your display
should have two tiles.
17. Add the tool Spiking/Predictive Decon to the flow, select the
decon design gate that you picked earlier, and set the decon
operator length to 200.
18. Apply this set of processes, which adds a third tile to the display.
Notice in the display below that the third panel is "selected". The Flow dialog has a white background, indicating the processing that has been applied in that panel. The white background (instead of yellow) also indicates that nothing more can be applied to that panel. You can only "add" new tools or change parameters when the Flow dialog is yellow and associated with a panel that can have processing applied to it.
19. Locate and click on the Sort Traces icon (A-Z icon) and choose
AOFFSET from the selection dialog that appears. The traces in all
tiles are sorted immediately when you click on an attribute. Close
the selection dialog after making the sort selection.
SeismicCompare should now have three shot records in view, with the
traces sorted by offset in all tiles. The first tile is the raw shot, the second
tile has a top mute and gain recovery applied, and the third tile has a top
mute, gain recovery and spiking decon applied.
20. Click MB1 on the Amplitude Spectrum icon (bottom icon on the
left side of the SeismicCompare display). This opens a spectral
display for the entire selected tile.
The display opens with the area for dataset names virtually hidden. We
will be comparing several datasets, so drag the divider to stretch open
that part of the display so you can see the name(s). You can re-size the
display in whatever way you wish.
TIP: Hold the cursor over each icon of the Amplitude Spectrum
window to see a variety of options.
22. Click on the Hand icon on SeismicCompare, then select (click on)
a different data tile. Now click on the Add spectrum icon on the
Spectrum display. Now you have two spectra showing.
Alternatively, you can add the spectra for all tiles by clicking Shft-
MB1 on the Amplitude spectrum icon.
NOTE: When you “select” a tile that has processing applied to it, the
Flow Editor shows exactly the processing that has been applied to that
panel. Notice also that the Flow Editor turns from yellow to white,
indicating that you cannot edit anything in that mode. If you want to add
more processing, you need to re-select the first (original) tile.
23. Play with the added processing and spectral display option until
you are comfortable with the key features. Step to different Frames
(shot records) using the Location Selector dialog as well.
IMPORTANT:
25. Go back to the Navigator and add a new flow and give it the name
08 Preprocessing.
26. Make sure you have the first tile selected in SeismicCompare.
Go to the SeismicCompare Flow Editor, select all three tools using
MB1 and Shft-MB1, then drag those tools and drop them onto your
new flow. You just saved a lot of time, effort and possible error for
parameterizing your batch production job.
REMINDER: You can choose which tools and workflow methods are
most useful and effective for your situation.
SeismicCompare has far simpler steps and greater flexibility for changing the applied processes and for navigating and selecting data.
• Compute static time shifts to take the seismic data from their
original recorded times, to a time reference as if the data were
recorded on a final datum (usually flat) using a replacement
velocity (usually constant).
• Partition the total statics into two parts, the Pre (before) NMO term
and Post (after) NMO terms relative to N_DATUM.
• Apply the Pre (before) -NMO portion of the statics and write the
remainder to the trace header.
The first three steps occur in the calculation phase and the last step in the
apply phase. The calculation phase uses your input parameters in
combination with the information in the database and then results are
saved in the database. The apply phase reads the information from the
database and transfers it to the trace headers. ProMAX offers several
options for both phases; which option you should use depends on how
you are processing your data.
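A minimal sketch of the first step, assuming a flat final datum and a constant replacement velocity (the function and sign convention are illustrative, not the ProMAX implementation):

    def elevation_static_ms(src_elev, rcv_elev, final_datum, v_repl):
        # one-way vertical travel times from each elevation to the datum
        t_src = (final_datum - src_elev) / v_repl
        t_rcv = (final_datum - rcv_elev) / v_repl
        return (t_src + t_rcv) * 1000.0        # total static in milliseconds

    # example: datum 900 ft, shot and receiver at 860 ft, v_repl 8000 ft/s
    print(elevation_static_ms(860.0, 860.0, 900.0, 8000.0))   # 10.0 ms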
Apply Elevation Statics repeats the statics calculation for the whole project, even though you are only updating the headers for the input
dataset. In a large project, the time spent doing the redundant datum
statics calculation can be substantial, especially if combined with having
to wait to get access to the database.
In a typical workflow for large volume land processing, you would run
Datum Statics Calculation once to update the entire project database and
then run Datum Statics Apply for each dataset comprising the project.
Since Datum Statics Apply only reads the precalculated and saved
information in the database and transfers it to the trace headers, you
avoid repeating the calculation phase in Apply Elevation Statics.
Processing time is saved and the possibility of having several flows
trying to write to the database at the same time is eliminated.
(Diagram: datum statics geometry for a shot point, CDP and receiver, showing the surface elevation, the floating datum N_DATUM with its NMO_STAT shifts, and the final datum F_DATUM with FNL_STAT.)
Database Attributes:
N_DATUM = floating datum
NA_STAT = statics of less than one sample period which are not yet applied
Select ELEV for the Vertical axis, and ILINE_NO for both Color and for Histogram. Make sure your selections are as shown here:
4. The initial view does not make much sense. We need to focus on a
smaller amount of data. Click in the histogram at Inline number
100.
(Display: curves of Elevation, C_STAT01 and C_STAT02 against a histogram of ILINE number. C_STAT02, from a 101 point filter, is smooth; C_STAT01, from a 51 point filter, is not as smooth.)
One major criterion that you might use to help diagnose a reasonable
smoothing value is to look at the value of C_STAT in the area of a
proposed Super-Gather for velocity analysis. You would prefer that all
CDPs in a Super Gather have the same (or very similar) C_STAT value.
It is likely that higher values for smoothing are necessary in areas with
rapidly changing elevations. The channel feature of this survey requires
a smoother that is larger than the default value.
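Conceptually, the two attributes differ only in the length of the running average applied to the raw statics; a minimal sketch (the actual database smoother may differ):

    import numpy as np

    def smooth_statics(c_stat, npoints):
        # simple running mean over npoints CDPs
        kernel = np.ones(npoints) / npoints
        return np.convolve(c_stat, kernel, mode="same")

    # e.g. c_stat01 = smooth_statics(raw_statics, 51)
    #      c_stat02 = smooth_statics(raw_statics, 101)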
You will use the 02 version of the statics when you build the processing
flow later.
NOTE: The elevation range for this project is very small. This exercise
is to demonstrate features and functions in the system. If we were
processing this project “for real”, we would not be very concerned about
such a small elevation range and small datum static range.
Trace Statistics provides one method for identifying bad data and killing
traces. We use it as a way to introduce many other features of the
system, such as DBTools. There are many other modules and workflow
options for killing traces and removing noise from your data. Explore
the variety of modules in the Processes list.
In this exercise you will try to identify bad traces with Trace Statistics.
Based on the values computed for each trace, you will edit the data
volume to remove abnormal traces.
4. Click on User-defined File and enter the path and filename of the
first break picks as specified by your instructor (e.g., /.../misc_files/
salt3dfbjs). The file has an extension “.a_db” which you should
NOT include. This file was exported by ProMAX and has a
recognized format for importing.
6. When all data are displayed, click on Cancel at the lower right
corner of the import dialog.
12. Parameterize the IF menu to select traces with the FB_PICK value
between 0.0-4000.0 msec. This is to prevent traces with “NULL”
first break pick times from being used in Trace Statistics, which
would cause the job to fail.
13. In the Trace Statistics menu, select all of the available statistics,
choose to use first breaks, output to the Database & Headers, and
add a description of the statistics.
NOTE: This data has a sample rate of 8 msec. The “pre-first break”
attributes require 10 live samples above the first break pick time. Traces
with a pick time less than 88 msec will output a NULL value for the
PFBAMP and PFBFRQ attributes.
14. Output the traces to a new JavaSeis dataset 07- Shots with Trace
Statistics
Note ranges of values for each statistic for which you might elect to kill
the traces. Some simple examples are shown below.
If you have time, you may run Ensemble Statistics for several
attributes. The PFBAMP for receivers on this dataset clearly shows
some areas where there is probably some kind of mechanical pump or
other noise.
1. Edit the 07 Trace statistics flow to display the shot gathers with and
without the trace edits. We will use the module JavaSeis Data
Match to bring the original data into the flow, then display the traces
with and without the trace kills.
3. Add the modules as shown below on the left side. Parameterize the
menus from IF through Trace Display Label as seen on the right
side. The header attribute DS_SEQNO (dataset sequence number) is used to control data in the flow. When DS_SEQNO is 2, the traces only get a new Trace Label value; the second dataset in the flow, 03 Shots with geometry, enters through the JavaSeis Data Match tool. All traces from the first dataset in the flow (DS_SEQNO = 1) are passed into the ELSE to ENDIF sequence, which is where traces will be killed based on various Trace Statistics header values in the 07 Shots with trace statistics dataset.
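The branching logic of that IF/ELSE/ENDIF sequence can be sketched as follows. The Trace class, the TRC_TYPE dead-trace code and the thresholds in is_bad are illustrative assumptions; the real kill criteria come from your header plots.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Trace:
        headers: dict
        samples: np.ndarray
        label: str = ""

    def is_bad(h):
        # illustrative thresholds; determine real ranges from the header plots
        return h["PFBAMP01"] > 1e6 or h["TRCAMP01"] < 1.0

    def process_trace(tr):
        if tr.headers["DS_SEQNO"] == 2:       # trace came from JavaSeis Data Match
            tr.label = "original data"        # only gets a new Trace Label value
        else:                                 # DS_SEQNO == 1: the statistics dataset
            if is_bad(tr.headers):
                tr.headers["TRC_TYPE"] = 2    # mark as dead (assumed code)
                tr.samples[:] = 0.0
            tr.label = "after trace kills"
        return tr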
Be careful that your kill conditions identify the bad data. If you get it wrong, you kill the good data and pass the bad data.
6. Near the bottom of the Trace Display menu set Number of display
panels to 2 and Trace scaling option to Entire Screen.
8. When Trace Display opens, select View > Header Plot >
Configure...
9. Select the following headers to plot, and choose a different color for
each one by clicking on the black box next to “Line color”:
• PFBAMP01 (red)
• TRCAMP01 (blue)
• TRC_TYPE (black)
11. Use the header plots and the database display tools to determine
ranges and values of particular statistics that will be useful in
performing trace kills.
12. Use header plots of the statistics and TRC_TYPE (black) to see
which traces have been killed. Adjust parameters as required to edit
the appropriate traces.
13. Select File > Exit/Continue Flow to ensure that the entire dataset
is processed and attribute information is written to the database.
14. When the flow completes, open DBTools. Select the TRC tab and
double click on the TRC_TYPE parameter. If you put your cursor
over the small red box on the right edge in the histogram you can
see the number of traces that have been killed.
Preprocessing Flow
You should be able to drag-and-drop some of the tool menus from other
earlier flows.
580, 662, 731, 917, 920, 995, 1091, 1131, 1217, 1700, 1708
These shots are known to be bad. This is the type of information that
you might get from Observer Notes.
These values will kill about 90,000 traces or about 5% of the data.
Triple check your parameter choices for the range of values to kill.
5. Include Trace Muting and select the top mute that you picked
earlier.
8. Datum Statics Apply should use the attributes with the 101 point smoother that were previously calculated in the N_DATUM test exercise. These are the “02” attributes.
9. Add Trace Display Label and indicate that “decon and elev
statics” have been applied.
10. In JavaSeis Data Output add a new dataset name, then click on the
Create button to build the framework of the output dataset.
11. Execute the flow. You may choose to run this with 2 or more
“Execs per node”, and you may monitor the job via the output
dataset’s Foldmap. Open the Foldmap and select Options > Update
Automatically.
2. On the Create Table Dataset dialog box, use the pulldown menu,
scroll down and choose VEL (RMS (stacking) Velocity).
3. Type in imported from ASCII for the name. Click OK. This
creates an empty table.
4. In the Folders view, click on Tables to show the list of all tables in
the center panel. Click MB3 on imported from ASCII and select
Edit using ProTab from the pulldown menu.
Click on the File > ASCII Import pull down menu in the upper left
corner to open the Import File Selection dialog.
8. The essential steps are to highlight the X Coord button in the upper
left corner. Then drag the mouse over the “X coor” columns in the
lower window. Do the same for Y Coord, Time and Vel_RMS.
Do NOT select CDP. If this data came from another system, would you trust that the CDP numbering system is identical to the numbering for this system? In general, it is X and Y coordinates that can be shared and trusted when exchanging data. Inline and Xline numbers are not in this file.
How confident would you be that they match your numbering scheme in
X-Y space? If done with care, you may be able to use inline and
crossline numbers, but ONLY if you ensure the X-Y coordinates match
the inline-crossline numbers correctly.
9. The example below shows the completed selections. After you have
defined the data to import, click File > Continue.
10. You will get a pop-up message confirming the number of rows
imported. Click OK to dismiss the message.
11. Now you will have the values loaded in the table as shown below.
Notice that the CDP column is automatically filled to match the
project numbering scheme.
NOTE:
This ASCII input file contains velocity functions with CDP numbers and
XY coordinates as reference. ProMAX 3D parameter tables rely upon valid
X and Y coordinates that are consistent with the LIN order of the database.
The LIN contains the relationship of XY space and inline-crossline-CDP
numbering for the project.
When you import data from another vendor or source, be very careful in
choosing the information to import. You will probably need to “resolve”
other fields of the table based on what you know to be valid information for
your project. XY coordinates are the preferred reference because they are a
common reference for virtually everyone. Inline, crossline and CDP number
systems may change when a project is reprocessed.
12. Use the File > Resolve pull down menu to compute the Inline and
Crossline values from coordinates.
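The Resolve computation relies on the grid relationship held in the LIN order. As a rough illustration only, the following Python sketch shows how inline and crossline numbers might be computed from X-Y coordinates for a simple rectangular grid; the origin, azimuth, bin sizes and first line numbers below are hypothetical placeholders, not values from this project.

import math

def resolve_iline_xline(x, y,
                        origin_x=500000.0, origin_y=3800000.0,  # assumed grid origin
                        azimuth_deg=87.5,                       # assumed grid azimuth
                        xline_bin=82.5, iline_bin=82.5,         # assumed bin sizes
                        first_iline=1, first_xline=1):
    # Rotate the world X-Y point into grid coordinates, then bin it.
    az = math.radians(azimuth_deg)
    dx, dy = x - origin_x, y - origin_y
    along = dx * math.cos(az) + dy * math.sin(az)    # distance along the line direction
    across = -dx * math.sin(az) + dy * math.cos(az)  # distance across the lines
    xline = first_xline + int(round(along / xline_bin))
    iline = first_iline + int(round(across / iline_bin))
    return iline, xline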
13. Click on File > Save and File > Close All to save the parameter
table and exit from the editor.
14. Check the table for correctness by going back to the list of tables in
the Navigator and selecting Edit with MB3.
Notice that the table does not contain the Inline and Crossline values
that we resolved previously. This is NORMAL behavior. The
Inline and Crossline numbers are not stored with the table because
those values can be resolved for viewing whenever you choose. The
table is smaller on disk by not storing duplicated information.
15. Click on File > Close All to exit from the editor.
[Figure: a triangle of known x, y, v, t points (a, b, c) in x-y-t space enclosing an interpolated x, y, v, t point (p)]
1. Parameterize the menu as shown in the example. Notice the option
for sorting the traces within the Frame. We cannot use Floating Point
attributes for the Framework, but we can sort the traces by AOFFSET
and then let them be indexed by SEQNO, which is a sequential counter.
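As a minimal Python illustration of this indexing idea (with made-up offsets), the traces in a frame are ordered by the floating point AOFFSET value, and SEQNO is then just a sequential integer counter over that order:

offsets = [1320.4, 165.8, 990.1, 495.6]                  # AOFFSET per trace (floats)
order = sorted(range(len(offsets)), key=lambda i: offsets[i])
seqno = {trace: n + 1 for n, trace in enumerate(order)}  # 1-based integer counter
# seqno maps each original trace index to its SEQNO after the offset sort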
2. Submit the job and on completion, review the job log to confirm
the sorted range of inlines, crosslines and the maximum fold. You
should find values in the log as follows:
• Volumes = 308
• FramePerVolume = 390
• TracesPerFrame = 41
• TraceInSort = 1699444
• Min logical volume = 1
• Max logical volume = 308
• Logical volume increment = 1
• Min logical frame = 1
• Max logical frame = 390
• Logical frame increment = 1
• Min logical trace = 1
• Max logical trace = 41
• Logical trace increment = 1
• Label4 = ILINE_NO
• Label3 = XLINE_NO
• Label2 = SEQNO
The details above describe the data context for the sort map.
The purpose of the sort map is to allow access to the traces needed to
resolve a new ensemble organization from the dataset. There are two
distinct ways in which you can use a sort map:
1) direct the reading of required traces in JavaSeis Data Input for a
batch flow, or 2) direct the reading of required traces via the Fold Map
for the 2D Viewer.
The sort map can be utilized in JavaSeis Data Input such that the
required traces for each Frame (CDP gather in this example) can be read
directly from disk. For a batch job this can be a very inefficient method
because the job has to “seek and read” randomly from the disk to get the
required traces. The preferred method for sorting data in a production
job is to use the module Inline Merge Sort which allows JDI to stream
data directly from disk into memory, and then perform the sorting
operation in memory by using as many nodes as needed to hold the
dataset.
NOTE: The terms Fold Map and sort map identify very distinct things.
• A Fold Map is a display that shows the number of traces in each
Frame of a dataset, and from it you can launch the 2D Viewer.
• A sort map is a part of the dataset, and it provides the location for
the system to access the traces that form ensembles of a different
sort order. You can open the Fold Map for a sort map.
The more common use of the sort map is to access data through the Fold
Map of a dataset for display and analysis purposes.
3. You now have a second Fold Map open for this dataset, one
showing shot organization and one showing CDP (ILINE-XLINE)
organization. You can open the 2D Viewer and navigate around the
dataset viewing CDP gathers.
Random access response is very fast for display purposes (fast enough
for a human to look at various Frames), but may be quite slow as a batch
process due to random seek-and-read from disk.
Low fold and poor signal to noise data may require some special options
in order to optimize parameter selection. The following exercise
demonstrates a method to combine multiple CDP gathers into a single
ensemble (a supergather) in order to increase fold and offset
distribution. The resulting ensemble can then be used to pick a post
normal moveout mute.
2. Parameterize the JDI menu as shown below. The three key points
are to 1) use the ILINE-XLINE-SEQNO sort map, 2) indicate the
range of inline and crossline values to use, and 3) indicate that
supergathers are wanted. We will create supergathers comprised of
3 inlines and 3 crosslines at the selected locations. In this case, we
have chosen inlines 100 and 200 at crosslines 100, 200 and 300.
Notice that the bottom part of the menu is not shown, as the default
values are appropriate for this flow.
7. Execute the flow. When the display comes up, select the Pick
Editor icon and choose Top Mute and add a new table name. We
recommend a very descriptive name like “top mute - post-NMO”.
8. Select AOFFSET as the interpolation key, click OK, then pick your
top mute.
4. The first JavaSeis Data Output will write a CDP gather (inline by
crossline) organized dataset. Add a new dataset named 11 IL-XL
gathers preprocessed, then click the Create button in the JDO
menu and verify the framework that appears at the bottom of the
menu. The data context created by Inline Merge Sort is used to
define this Framework. We must sort the data to perform stacking.
7. Include the SeisSpace (green) module Ensemble Stack and use the
default menu values. The stacking process reduces the data from a
4D context to a 3D context, and automatically passes this new
context to be used in creating the Framework of the output dataset
via the JDO menu.
NOTE: The data coming down the flow is organized as frames of CDPs
with various traces in each frame, but it is more precise to say they are
frames of crosslines within volumes of inlines. We stack the traces
within each gather (or Frame or ensemble), reducing each frame to have
a single trace. The Ensemble Stack module automatically resets the
Crossline range from the Frame axis down to the Trace axis, and resets
the Inline range from the Volume axis to the Frame axis. This produces
a 3D context that is used for our 3D stack dataset framework.
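As a conceptual Python sketch only (not the Ensemble Stack source), stacking a frame amounts to averaging its live traces into a single output trace:

import numpy as np

def stack_frame(frame):
    # frame: 2D array with one row per trace; dead traces are all zeros
    live = np.abs(frame).sum(axis=1) > 0  # identify the live traces
    if not live.any():
        return np.zeros(frame.shape[1])   # an all-dead frame stacks to a dead trace
    return frame[live].mean(axis=0)       # one stacked trace per frame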
9. Add a new dataset named 11 Stack - initial via the JDO menu, then
click the Create button and check the framework. Notice that it
automatically defines a 3D framework of Inline, Crossline and
Time.
NOTE: If the data context coming down the flow has problems, your
output dataset framework might not be correct. Check the framework
carefully before executing.
11. Select the middle icon of the three job submit icons to open the Job
Submit dialog. Select Hosts and localhost and 1 Exec per
node. When the job finishes, submit it a second time using 2 Execs
per node and compare run times. If you have time and access to
multiple nodes, submit the job using various combinations of nodes
and execs per node to learn a bit about performance.
In this exercise we will use various display tools to display inlines from
the stack volume.
1. Open the Fold Map for the initial stack dataset. Notice that the
Volume axis (annotated at the bottom of the display) indicates that
this dataset consists of only one Volume (1-1,1). This should make
sense because it is stack data for a 3D project and has a 3D
framework, so naturally it is a simple “volume” of seismic data.
Inline numbers are annotated along the vertical axis. Each inline is a
“Frame” of traces within this single volume.
Notice that the number of traces varies among the inlines. The “fold”
varies from one inline to the next because there are different numbers of
“live” crosslines on each inline; the zero-fold crosslines do not have a
trace. The shape of the project is irregular, which we already know from
looking at CDP fold with DBTools back in the geometry chapter.
2. Open the 2D Viewer from the Fold Map and get familiar with the
stack volume. Change display parameters to suit your own
preferences.
3. As you move through the inlines, notice that the number of traces
may vary from inline to inline but the data are always displayed in
the same amount of space in the tool. This may be distracting for
some people, especially if you use the movie animation option. An
option for solving this problem will be shown later.
Seismic Compare
In the exercise where you used SeismicCompare for the first time in this
Line/Subproject, a new flow named _seiscomp_prepro was
automatically added to your list of flows. Look in the Folders view
under your list of Flows. If you do not see _seiscomp_prepro, use the
refresh option.
3. Click MB3 on the 11 Initial stack dataset and move the cursor onto
SeismicCompare on the pulldown menu. Let the cursor sit for a
second or two until the “tool tip” pops up. Notice the option to use
Cntl-MB1 to open the SeismicCompare Launcher menu.
Any other datasets that you “drag” onto the display will have this same
preprocessing applied. Also, if you select multiple datasets (using MB1,
Shft-MB1 or Cntl-MB1 in the Datasets list, not in the Folders view), all
data will have this preprocessing applied.
Notice the range of options on the Actions menu for turning the
preprocessing on or off.
You may prefer to keep things simple and return to the SeismicCompare
Launcher menu and choose No for preprocessing. Then you can
launch SeismicCompare directly (bypassing the launcher menu), and
not have to worry about what preprocessing might be unintentionally
applied.
4. Include Pad 3D Stack Volume to fill out the edges of the stack
volume with dead traces. The menu default values will pad each
inline to have 390 traces (full range of crossline values), which will
allow the animation mode of the 2D Viewer to show spatially
meaningful comparisons.
7. Open the Fold Map for this dataset and notice that the entire map is
blue, indicating the same fold for every Frame, confirming the
inlines have been padded to contain a trace for each crossline.
8. Open the 2D Viewer and compare this dataset with the unpadded
stack. Experiment with display options you have not used earlier.
3D Viewer Introduction
2. The following menu will appear. Change the menu items from the defaults as shown.
The following window should appear showing the status of the caching
procedure. When it reaches 100% the 3D Viewer will appear with the
data in view.
NOTE: The time to load the data and open the display will vary
according to the size of the dataset and your machine’s speed.
In the initial menu you may choose Cache off. The 3D Viewer will open
very quickly, and the response of the tool will be very fast moving along
the INLINE (Frame or “fast”) axis. However, the response may be very
slow for probes on the CROSSLINE (Trace) axis or for TIME slices.
Data “caching” for the 3D Viewer reformats the data for much faster
loading and access when in the 3D Viewer tool, notably for crossline
(Trace axis) and time slice (Sample axis) probes.
• Click and drag MB1 on the data to tilt and rotate the volume.
• Click MB1 on a probe (a data slice) to select it. The selected probe
has a thin white border around the edge. Hold Cntl-MB1 to
change data along the axis of the highlighted probe. (hold Cntl key
and MB1 simultaneously and drag the cursor along the data axis)
NOTE: When “cache” is being used, the icon bar of the viewer will have
a green background. When “cache” is NOT being used, the icon bar will
have a red background.
The following exercise applies F-XY Decon to the initial stack. In this
situation we simply want another volume that looks different from the
original. These two stack volumes will then be used to demonstrate
some additional capabilities in Seismic Compare.
The F-XY Decon tool belongs to a group of tools that use the
Distributed Array. Distributed Array tools operate on volumes of data
which allows use of true 3D algorithms. The size of data volumes may
be quite large, requiring the memory of many nodes of a cluster. Even if
a data volume could fit in the memory of a single node, it is simpler to
design and maintain these 3D tools to always use the Distributed Array.
As you will see, the distributed array tool is “sandwiched” between the
Load Distributed Array and Unload Distributed Array tools. These
tools handle the data management associated with having a logical data
volume distributed across many individual nodes.
IMPORTANT: Near the bottom of the JDI menu select ILINE_NO for
the Parallel distribution axis. (The default is VOLUME.) The
distributed array should be loaded “by Frame” for this process.
4. Be sure to select the “green” SeisSpace tool F-XY Decon. Set the
menu item Length of operator time window (ms) to a value of 500.
The default values of 200 for the length and 100 for the window taper
are not suitable for the 8 msec sampling of this dataset. The warning you
may see is related to 100 not being evenly divisible by 8.
7. Select the middle Job Submit icon to open the Submit Job dialog
shown below. Select Hosts, localhost and 2 Execs per node.
By using 2 Execs, the job will run considerably faster. There are
additional implications in memory use (Xmx setting) and how the
distributed array is utilized, but this is beyond the course level.
1. After the F-XY Decon job finishes, select Datasets in the Folders
view to show the list of datasets in the Tab view of the Navigator.
Select dataset 11 - Initial stack with MB1 and select 13 F-XY
decon on initial stack with Cntl-MB1, then click MB3 and choose
Seismic Compare.
2. The tool opens with the left panel selected -- that is, it has a red
border around it. Click on the Math Operation icon (third from the
bottom) on the Seismic Compare tool to bring up the following
menu.
3. Click on Set in the upper right of the Data Math menu to assign the
selected panel’s dataset as View 1 for the math operation.
4. Click MB1 on the data of the right panel to make it the selected
panel. It will now have the red border.
5. Click on Set for View 2 in the Data Math menu. The default math
operation is “subtraction”.
8. Use MB1 to drag the F-XY data (middle panel) to the left so that it
is on top of the Initial Stack data. These two datasets now share the
same space. Use the up or down arrow keys to swap between
these two views. The difference section is still on the right panel.
Notice at the bottom of the panel that it shows [ 1 / 2 ] or [ 2 / 2 ] in
front of the dataset name, confirming that there are two datasets in
that panel.
9. Click on the Unstack Views icon (double plus sign) to put the
datasets into separate panels. You should now be back to three data
panels.
10. Drag the Initial Stack to the right so it is on top of the F-XY panel,
then drag the Difference panel toward the left so that all three
datasets are in the same panel. Use the up and down arrow keys to
swap between the datasets.
11. You can still utilize the frequency spectrum feature, the processing
feature, etc., according to what you want to see and do with the
datasets.
There are several options for displaying crosslines. The method you
choose depends on how you want to view and interact with the crossline
data. The main options are:
• Create an XLINE_NO-ILINE_NO sortmap and access the crossline
Frames via the Fold Map and 2D Viewer.
• Read the dataset (get all) and sort with Inline Merge Sort and
write a crossline organized dataset.
• Transpose the dataset (Trace and Frame axes) and write a crossline
organized dataset.
One further option exists. You could build a flow with JDI using the
sortmap and Trace Display. This “batch job” approach is the least
flexible. A main advantage of JavaSeis datasets is the ability to display
directly from the data rather than displaying via an executed flow.
Notice the default allowance of 1024 megabytes for the Java Heap
Memory. This is an allowance, not a reservation/allocation. From the
dataset Properties, you can see that this stack dataset is about 150
megabytes at 16-bit on disk, so it will be roughly 300 megabytes at 32-
bit samples in memory. The transpose operation requires double this, so
the total memory that is used for this operation on this dataset is about
600 megabytes.
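The arithmetic behind that estimate is simple enough to write down, using the numbers quoted above:

size_on_disk_mb = 150                  # 16-bit samples on disk
in_memory_mb = size_on_disk_mb * 2     # 32-bit samples in memory -> about 300 MB
transpose_total_mb = in_memory_mb * 2  # input plus transposed copy -> about 600 MB
print(transpose_total_mb, "MB needed, within the 1024 MB Java heap allowance")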
You can check the memory usage for a process with the “top” command
in an Xterm or Konsole window.
It will take a few seconds for the transpose operation to happen for this
dataset, perhaps 5-20 seconds, depending on system speed and activity.
You will need to decide for your own datasets whether the
SeismicCompare transpose option is adequate. This will depend on the
size of the dataset and the memory available on the node where your
Navigator is running. You may prefer to produce a new dataset in the
preferred orientation. Options for this are explained below.
3. Open the Fold Map for the 11 Stack - initial dataset. On the upper
right of the Fold Map, click on <NONE> for the pulldown menu
and select your XLINE_NO-ILINE_NO sortmap. This opens
another Fold Map that accesses the Frames of crosslines.
The advantages of this method are that it is quite fast to create a sortmap
for stack data and you do not write another copy of the dataset, so you
save some disk space.
The disadvantage is that you cannot use Seismic Compare via a sortmap
and performance may be slower than you want.
1. Add a new flow with the name 14-a Crosslines via Sortmap.
Include only the modules JDI and JDO. Parameterize the JDI menu
as shown below. Use the defaults for parameters not shown (further
down in the menu).
2. Add a new dataset name (choose a name you like) via the JDO
menu, click on the Create button. Confirm that the Framework
makes sense, then execute the job.
3. When the job finishes, use the Fold Map > 2D Viewer, or use
Seismic Compare to display the crossline dataset.
The advantage of this approach is that you can use Seismic Compare as
well as the 2D Viewer (which allows you to choose an alternate
Sortmap).
The disadvantage is that you have another copy of the dataset using disk
space.
In this exercise be sure to input the F-XY Decon dataset. This way you
will have two crossline organized datasets to work with in the Seismic
Compare tool.
1. Build the following flow using the name 14-b Crosslines via IMS.
3. The output dataset name 14b F-XY Decon - XL sort in the JDO
menu is similar to the input dataset name. We simply added “XL
sort” to the name to indicate that the dataset is organized in frames
of crosslines.
4. Execute the flow. When finished, you can open the Fold Map and
use the 2D Viewer or use Seismic Compare along with the crossline
stack volume from the previous exercise.
The advantage of using Inline Merge Sort is that you do not have to
create a sortmap -- you simply build and run this flow. Inline Merge Sort
should run much faster than doing a sorted read via a sortmap.
The disadvantages are that it might take multiple nodes for a very big
stack volume to run quickly, plus you end up with an extra copy of the
dataset on disk.
3. Add the tool General 3D Transpose Macro and select the T132 -
Trace and Frame option. This tool is a “macro” composed of
LDA-v2, Transpose-v2 and UDA-v2. The purpose is to simplify the
procedure for doing a transpose operation on a 3D dataset.
4. Add a new dataset name (choose a name you like) via the JDO
menu, click on the Create button. Confirm that the Framework
makes sense.
6. Execute the job. When it finishes, use the Fold Map > 2D Viewer,
or use Seismic Compare to display the crossline dataset.
There are two options for performing the transpose to time (or depth)
slices. You can use the transpose option in SeismicCompare in the same
manner as shown previously for displaying crosslines. Alternatively,
you can run a batch job to perform the transpose and output the
transposed time slices as a dataset.
As with the Crossline transpose, this will take a few seconds to happen
for this dataset, perhaps 5-20 seconds, depending on system speed and
activity.
You will need to decide for your own datasets whether the
SeismicCompare transpose option is adequate for your needs. This will
depend on the size of dataset and memory available on the node where
your Navigator is running. You may prefer to produce a new dataset in
the preferred orientation. Options for this are explained below.
3. Add a meaningful output dataset name for JDO, then execute the
flow.
NOTE: If you want filtering, scaling, etc., applied to the data, you
should include these processes before performing the transpose. You
cannot apply such processes in a meaningful way to data that is already
converted to time slices.
4. Open the Properties for the time slice dataset and see what the
Framework looks like. It should look like this:
6. Modify the Time Slice Transpose flow to input the FXY Decon
dataset and create a new output dataset. Run the flow.
7. Drag a dataset and “drop” it on an existing panel to put the dataset in
that same panel, or “drop” it at the bottom (in the title area) to put the
dataset in its own panel. See the example below.
[Figure: drag-and-drop onto a panel puts the data on the same panel; drag-and-drop at the bottom title area puts the data in its own panel]
8. Play with the display options. Exit when you are comfortable with
the features discussed above.
In this exercise you will pick a horizon (HOR) table to use in the residual
statics calculation. If importing an existing horizon, be sure you
understand the datum reference of the horizon and parameterize any tool
that uses the horizon so that the data and horizon have the same
reference. Typically, an interpretation horizon will be at the final datum
while the horizon used for residual statics will be at the floating datum
(N_DATUM).
1. Edit the earlier flow 12 Inline display prep. Comment out the
JavaSeis Data Output and add a Trace Display (blue ProMAX tool).
2. In the JDI menu select Arbitrary Subset for the Trace read option,
and enter 1,25-300(25),308 for the ILINE_NO arbitrary selection
list.
5. In the Trace Display tool select Picking > Other Horizons... at the
top of the display.
6. Type in a name for the horizon, such as “about 1000 msec”, and
click OK. In this exercise, start picking the horizon on any event
near 1000 msec.
NOTE ABOUT THIS HORIZON: You do not have to pick the event
perfectly. Your picked horizon should follow the structural trend, and
does not have to follow any particular peak or trough. The horizon will be
the center of the window of data that will be correlated to find
differential time picks for each prestack trace. The time picks are then
decomposed to generate shot and receiver statics. There are many
calculations that happen in getting to the resulting statics. The precision
of the horizon is inconsequential in getting to this end result.
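To make the decomposition step concrete, here is a toy Python sketch (with made-up picks) of the surface-consistent idea: each differential time pick is modeled as a shot static plus a receiver static, and the statics are estimated by least squares. Production residual statics programs are considerably more elaborate than this.

import numpy as np

picks = [(0, 0, 5.0), (0, 1, 7.0), (1, 0, 4.0), (1, 1, 6.0)]  # (shot, receiver, ms)
n_shot, n_rcvr = 2, 2
A = np.zeros((len(picks), n_shot + n_rcvr))
t = np.zeros(len(picks))
for row, (i, j, pick) in enumerate(picks):
    A[row, i] = 1.0            # shot static term
    A[row, n_shot + j] = 1.0   # receiver static term
    t[row] = pick
sol, *_ = np.linalg.lstsq(A, t, rcond=None)  # minimum-norm least squares solution
shot_statics, rcvr_statics = sol[:n_shot], sol[n_shot:]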
7. Pick a horizon at about 1000 ms. Keep in mind that this is a shallow
gate when considering the post-NMO top mute. Only about half to
two-thirds of the offsets will contribute to the statics estimate for
this dataset. You can see this easily on Inline 1 above where the data
comes from only a single shot. Refer to the gather where you
picked the top mute in Chapter 10, at about page 16.
Picking the horizon may be difficult on some lines due to low signal
to noise. Use your best judgement to approximate the structure in
these areas. Picking exact peaks or troughs is not important. The
horizon simply defines the window where correlations are calculated
and what structure will be removed prior to correlating the prestack
data. In general you want this horizon to be relatively smooth.
9. Save your picks occasionally. Select File > Exit/Stop Flow after
picking the horizon on all of the displayed lines.
4. Your instructor will explain the various parameter options. The key
choices made for this exercise are:
• default is build internal model from the input gathers; this exercise
uses an external model (stack dataset) for computing correlations
• default is write statics to trace headers; this exercise writes statics to
the database
• default is 3 iterations of static estimation; this exercise uses 1
iteration to save time in the class -- we don’t need a great result
• default maximum static to estimate is 50 msec; this exercise uses a
maximum static of 16 to run the job faster
• default maximum offset is 20000; this exercise uses 10000
because the gate is shallow and the top mute leaves no live data on
the long offsets (this is normal for seismic data)
• default is to use HOR (horizon) table, so select the HOR table that
you picked.
• the correlation will be done on a window 500 msec wide centered
on the horizon times.
• NMO must be applied to the trace data and the default is Yes to
apply block NMO internally in the program. Select the best
available velocity table.
5. Execute the flow. It may take a little while for the job to finish.
6. When the job finishes, review the summary information in the job
log.
Notice that most of the static values are near 0 ms, with only a small
range of outliers. If your data showed large outlier static values, you
may want to view these particular shots in trace display to determine
if there is something wrong with these shots, or if the static value
seems to be legitimate.
The range of static values that your job produces may differ from these
results.
In this exercise you will apply residual statics, create stacks, and
compare them with the brute stack.
1. Add a new flow 17 Residual statics stack, and include the modules
as shown below:
3. In the menu for Apply Residual Statics, select the residual shot
and receiver statics from the database as shown here. This indicates
which database attributes to read and apply to the data:
As long as the residual statics are small (which they typically are), there
is negligible distortion of the NMO curve and we are safe in our choice
of computational efficiency and application of only one interpolation
filter.
If you choose to apply residual statics AFTER you have applied NMO,
then you must include the tool Apply Fractional Statics to ensure the
entire static shift is fully applied.
8. Use the Seismic Compare tool to compare the residual statics stack
with the initial stack. Consider using the option in the initial menu
for Seismic Compare to perform a transpose in order to view
crosslines or time slices.
Use any of the display tools and methods that you want more practice in
using.
Velocity Analysis is the most notable part of a conventional processing sequence that still has
more functionality with ProMAX format datasets than with JavaSeis. This is related to the inter-
process communication between the Volume Viewer/Editor tool and the Velocity Analysis tool
with its interactive data access feature tied to ProMAX Disk Data Input.
The JavaSeis Data Input tool accommodates parameterization for forming supergathers that are
the typical input for velocity analysis. The data used in the velocity analysis workflow is typically
a much smaller subset of the total project, and this data is used only for this analysis step.
Therefore, using ProMAX format datasets works well and allows continued use of several
existing ProMAX tools.
Prior to analyzing the velocities, we will precompute data for input to Velocity Analysis.
Precomputing improves the interactive performance of Velocity Analysis.
NOTE: If you choose to process with ProMAX datasets and not use
JavaSeis, then you should use the 3D Supergather Select module to
select the traces for supergathers. Optionally, supergathers can be
generated using the 2D or 3D Supergather Formation* modules, but
these modules use considerable amounts of memory and their
performance may be less than desirable. Refer to the respective module
documentation for further details.
Supergathers are generated so that the data used for picking velocity
functions is well sampled in offset and provides sufficient signal-to-
noise ratio. Commonly, we gather traces from a span of adjacent inlines
and crosslines into one supergather for each velocity analysis location.
The “span” may be a value of 1 or larger in either inline or crossline.
[Figure: map of analysis locations plotted by inline number (vertical axis) versus crossline number (horizontal axis)]
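As a small Python illustration of the span idea, this sketch lists the (inline, crossline) gathers that a 3 by 3 supergather would collect around a hypothetical analysis location:

def supergather_cells(center_il, center_xl, il_span=3, xl_span=3):
    h_il, h_xl = il_span // 2, xl_span // 2
    return [(il, xl)
            for il in range(center_il - h_il, center_il + h_il + 1)
            for xl in range(center_xl - h_xl, center_xl + h_xl + 1)]

print(supergather_cells(100, 100))  # nine (inline, crossline) gathers for one location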
Include a band pass filter and an AGC. The defaults for both
modules are acceptable, but you may change them if you like.
You may use either the blue ProMAX Trace Display or the green
SeisSpace Trace Display tool for this exercise. These two tools have
very similar general capabilities, but each has unique features that
you may prefer for particular situations.
Use the Header plot option to plot the trace offsets above the
supergathers. You should look for linearity of the offset distribution.
NOTE: The stack strip will be composed of “real” CDP stacks, and
if the supergather contains more than one inline, you may see
discontinuities in the stack “structure” if there is significant geologic
dip in the data. When there is significant crossline dip, you should
consider using only one inline for the supergathers, otherwise you
may see “stair steps” that are visually distracting.
Use 82.5 for the offset of the first bin center and 165 for the offset
bin size. This controls the re-binning and sub-stacking of the gather
traces that are used for the semblance calculations and the reference
gather that is seen in the interactive analysis tool.
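The re-binning arithmetic implied by those two values can be sketched as follows; with a first bin center of 82.5 and a bin size of 165, the bin edges fall at 0, 165, 330, and so on. This is an illustration of the arithmetic, not the tool's code.

first_center, bin_size = 82.5, 165.0

def offset_bin(offset):
    index = int((offset - (first_center - bin_size / 2.0)) // bin_size)
    center = first_center + index * bin_size
    return index, center

print(offset_bin(1000.0))  # -> (6, 1072.5): offsets from 990 to 1155 share this bin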
Velocity Analysis
1. You may begin editing the flow while the job is running. Toggle all
modules “off” and add Disk Data Input and Velocity Analysis
modules.
2. In Disk Data Input, select the dataset from the Velocity Analysis
Precompute execution of the flow. Select Sort for Trace read
option. This opens the option to say Yes to Interactive data access
(IDA). IDA provides the communication between the Velocity
Analysis tool and the Volume Viewer/Editor tool.
[Menu callouts: one table is the “new” table that this job writes into; the other is your existing velocity table.]
The default menu for Velocity Analysis shows only the major or
most commonly changed parameters. There are a large number of
parameters that are exposed by selecting Yes for Set semblance
scaling and autosnap parameters. These parameters can be
changed after the display appears by selecting the View pulldown
menu on the Velocity Analysis display. These parameters are all
used by the tool, even if they are hidden. The default settings will
work fine for this exercise.
Selecting Yes to the parameter Set which items are visible shows a
large number of additional options for the display. The defaults are
fine for the exercise, but you can change any of these parameters via
the View pulldown on the VA tool.
4. One item many people wish to see is the interval velocity in the
semblance panel. Select Yes to Set which items are visible and then
select Yes to Display interval velocity functions (the fourth item
down the list).
5. After the precompute flow has completed, execute the edited flow
and begin picking velocities in the Velocity Analysis display.
Add a pick with MB1, and delete the nearest pick with MB2. As
you pick velocities on the semblance plot, the picks are also
displayed on the variable velocity stack strips and the interval
velocity plot is modified. You may also pick velocities on the stack
strips.
6. On the VA tool, select Gather > Apply NMO to see the current
velocity picks applied to the gather. You may also choose Gather >
Animate NMO, then animate the gather by dragging MB3 left and
right on a pick in the semblance panel.
Note the CDP, ILN, and XLN values that appear in the upper left
hand corner of the display. These provide the center CDP value and
the inline and crossline numbers of the current velocity analysis
location.
7. After you pick the first location and move to the second location,
you may want to overlay the function that you just picked as an
additional guide or reference. You can do this by clicking on View
> Object Visibility, then select Previous CDPs (orange). This will
display the function from the nearest CDP less than the current
CDP. There are several options to overlay functions that may be
useful.
As you pick velocities along a line using the Velocity Analysis tool, you
may want to QC your new velocity field. This can be accomplished by
simultaneously viewing a color isovelocity display of the entire velocity
volume. The tool used for this is a standalone process called the Volume
Viewer/Editor, and should be executed while you are running Velocity
Analysis, as outlined below.
In addition to letting you see the velocity field as you are updating it with
new picks and functions, the Volume Viewer/Editor tool can
communicate with the Velocity Analysis tool by telling it what location
you want to see and edit. The easiest way to do this is by selecting a
location in the Map view of the Volume Viewer/Editor display. You will
see how this is done further in the exercise.
Make sure you input the same velocity volume (table) that you are
currently using in Velocity Analysis.
Also, make sure you select Yes to Interact with other processes
using PD? This will enable the PD (pointing dispatcher) to
communicate with the Velocity Analysis tool that is already running.
If you have not yet saved any velocities in Velocity Analysis, the
table has no values and therefore the Volume Viewer/Editor
windows will be all blue and the velocity range is 0.0 (meaningless).
It will be easier to exit VVE, pick and save a function in VA, then
execute the flow again to start VVE with a meaningful velocity and
color range.
The Map window displays a time slice through the current velocity
volume at the position of the heavy, gray line that appears across the
Cross Section window. You can change the time slice by activating
the “Select a horizontal slice” icon in the Cross Section window and
clicking MB1 at the desired time in the Cross Section window. The
Map window shows the full inline and crossline range of your 3D
survey.
5. From the Cross Section window, click View > Volume Display.
Locations available to the Velocity Analysis tool that are not yet picked are shown as a green plus sign.
When you are finished picking this new analysis location, click on
the “Process next ensemble” icon again. This will not only move you
to the next analysis location, but will automatically send the velocity
picks just made to the Volume Viewer/Editor displays.
8. In either the Map window or the Cross Section window, click on the
PD icon to activate the function.
Click MB1 in either VVE window to make the nearest available data
appear in the Velocity Analysis tool. You can move around
anywhere in the VVE displays to select a location to pick or edit in
the VA tool.
Remember, you may either use the bow-and-arrow icon to send the
picks from Velocity Analysis to the Volume Viewer/Editor* displays
for QC before moving to the next analysis location, or you may
move directly to the next location and your previous picks will be
automatically sent to the Volume Viewer/Editor* displays.
10. To finish picking in Velocity Analysis, click on the File > Exit/Stop
Flow pull down menu in the Velocity Analysis and the File > Exit
pull down in the Volume Viewer/Editor*.
In this exercise, we will build a CDP Taper flow and apply the tapers to
the stack data and QC the results on the traces and in the database.
Use Seismic Compare to compare your initial stack with your final
stack.
In most cases you will want to apply some amplitude tapering to the
edge traces of a 3D stack volume prior to 3D migration. Tapering
options exist in most of the migration programs, but they are based on
inline and crossline number ranges, and do not comprehend the often
irregular shape of live data in a stack dataset. In cases where the live data
line length varies, you may still end up with amplitude discontinuities
from line to line or crossline to crossline. The CDP Taper program
computes amplitude scalars that follow the shape of live data (in a
map view sense) based on user specified number of inlines and
crosslines.
For 3D, this tool scans the CDP fold over a moving rectangular array of
user defined size, computing top and bottom taper numbers for the
center CDP in the array.
[Figure: moving array examples with the center CDP on the edge and with a dead CDP in the corner]
The actual value used at a trace is the taper value * the amplitude.
If there are no zero fold CDPs in the array, the taper value is 1.
If there are zero fold CDPs in the array and the center CDP has non-zero
fold, the taper number is calculated as a value between 0 and 1.
If the center CDP is on a dataset edge, the fold of the center CDP is zero,
or the number of zero fold CDPs is greater than half the array area, the
taper number is 0.
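The manual does not reproduce the intermediate taper formula, so the Python sketch below simply assumes the taper is the live proportion of CDPs in the array; treat that assumption as a placeholder for the program's actual weighting. The two limiting rules match the description above.

import numpy as np

def taper_value(fold_array, center_is_edge):
    # fold_array: 2D window of CDP fold values centered on the target CDP
    zero = (fold_array == 0)
    center = fold_array[fold_array.shape[0] // 2, fold_array.shape[1] // 2]
    if not zero.any():
        return 1.0  # array fully inside live data
    if center_is_edge or center == 0 or zero.sum() > fold_array.size / 2:
        return 0.0  # on the dataset edge, dead, or mostly dead array
    return 1.0 - zero.sum() / fold_array.size  # assumed proportional taper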
The computed taper values are written to the CDP database and applied
to the dataset. Optionally, you may select only to apply existing taper
values already saved in the database.
In this exercise, we calculate and apply the tapers to the stack data and
QC the results on the traces and in the database.
1. Add a new flow 20 Apply CDP taper and include three modules:
2. Select the Final Stack as input to JDI and default all other menu
parameters. The CDP Taper module will be writing to the database
so the flow cannot be run in parallel.
3. Set the CDP Taper menu parameters as follows. The CDP bin size
is square, so we will use the same number of crosslines and inlines
in the operators to keep the scaling shape symmetric in space. The
scaling will cover a larger area at deeper times.
6. Compare the input and output datasets using Seismic Compare. The
differencing feature may be interesting.
NOTE:
Notice that the first and last lines are completely dead after the CDP Taper. This is
the expected behavior. For this reason you may elect to pad the CDP grid by one
CDP in all directions, since CDP Taper will kill any trace on the first and last inline
and crossline of the project.
The CDP Taper process writes two sets of numbers to the CDP database.
A value is output for the top taper for each CDP and another value for
the bottom taper. We can use the database display tools to visualize how
the taper varies in space.
2. Select View > Predefined > CDP fold map as a reference display.
In the histogram click Cntl-MB1 on the leftmost bin to turn off the
zero fold locations. This shows the outline of the area used in the taper
calculations.
3. Select the CDP tab and then View > 2D Matrix. Select CDP_X,
CDP_Y, TOPTAPER, TOPTAPER respectively for the horizontal,
vertical, color and histogram.
5. Use Cntl-MB1 to turn off the zero value scalars for both plots.
Here you can clearly see the original zero fold CDPs in the fold plot
and you can see the traces which have been assigned a taper scalar
of zero.
A typical use of this tool is to analyze velocities for anomalous points that you may want to edit.
In particular, bad velocities are frequently created when converting stacking velocities to interval
velocities. This tool ensures that a reasonable velocity field is being passed to migration.
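The conversion in question is the classic Dix equation. This Python sketch (with made-up input pairs) shows why small irregularities in the RMS field can produce anomalous interval velocities: the term under the square root is a difference of large numbers and can become unreasonably large, or even negative, between closely spaced times.

import math

def dix_interval(t1, v1, t2, v2):
    # t in ms, v in ft/s; interval velocity between times t1 and t2
    num = v2 * v2 * t2 - v1 * v1 * t1
    return math.sqrt(num / (t2 - t1))

print(dix_interval(1000.0, 5000.0, 1200.0, 5200.0))  # about 6100 ft/s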
The following figures are included to help guide you through the
tool.
Icon Bar
• Zoom: Enables zooming of the velocity field.
• Move: Move the view forward and back or up and down. Also used to
flip to an inline view when in a crossline view and vice-versa.
• Mark: Width of the zone used to mark nearby velocity functions on the axis
of an inline or crossline view.
3D Table Triangulation
The above time slice view shows the triangulation used for spatial
interpolation by ProMAX tables. After values in a table are interpolated
vertically in time or depth, they are interpolated spatially using the 3
vertices of the triangle that encloses the location to interpolate. The
triangulation of the function locations is defined via the Delaunay
approach that produces the most equilateral triangles possible.
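For readers who want to experiment with this concept outside ProMAX, here is a short Python sketch using SciPy's Delaunay triangulation and barycentric weights; the function locations and values are made up, and ProMAX performs its own triangulation internally.

import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
values = np.array([5000.0, 5100.0, 5050.0, 5200.0])  # e.g. velocity at each location

tri = Delaunay(points)
p = np.array([[40.0, 30.0]])             # location to interpolate
s = int(tri.find_simplex(p)[0])          # index of the enclosing triangle
T = tri.transform[s]                     # affine map to barycentric coordinates
b = T[:2].dot(p[0] - T[2])
weights = np.append(b, 1.0 - b.sum())    # three barycentric weights that sum to 1
print(values[tri.simplices[s]].dot(weights))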
If you did not complete the velocity field picking you may use the
original field that we imported from the ASCII file.
We will output two tables from this program. The smoothed RMS
field will be used to make a smoothed interval velocity in time field
in preparation for FK Migration and Phase Shift 3D Migration.
The screen will adjust to have two windows. On the left is the
velocity contour and on the right is the velocity function edit
window.
Editing velocities
The Edit velocity function window will contain the function nearest
to your mouse location. The right hand window shows the location
of the selected function’s control points with red circles. The mouse
pointing help at the bottom of the screen guides your mouse
motions.
MB1: Selects the nearest velocity function for editing. This function
will appear in the right window in red. As you move your mouse, the
blue function will still reflect the function nearest to your mouse
location. In this way, you can compare two functions. To freeze a
blue function you can use MB2. Move your mouse to the right
window and activate the Edit Function Icon. This lets you add/move/
delete the red function locations marked by the circles. Use the
mouse button helps at the bottom of the screen as a guide.
MB3: Delete all points at a function location, and hence delete the
function.
When you press UPDATE with your new velocity function, you will
see its effect on the entire velocity field. If you don’t like your
changes, use the Modify/Undo button to remove the old function.
2. Select a function to edit with MB1, and then activate the “Picking”
icon.
3. Use MB1 to add a bogus value to the velocity function and then
press update. Note the anomaly around 1300 msec below.
5. Use the Rotate icon along with the mouse button helps to move
from line to line and to change the display among inlines, crosslines
and time slices.
The first two entries ask about the sampling of the new smoothed
field. We can enter values that are the same as our input field.
The time sampling is up to the user and how complex the velocity field
is as a function of time. Our field is fairly well behaved with no
inversions and a relatively linear increase as a function of time. We can
resample our field at 200 msec intervals without any problems.
2. Click OK.
4. If the smoother was too harsh you can use the Modify > Undo last
change pull down, reset the parameters and repeat the process until
satisfied.
5. Save this velocity field to disk using the File > Save table to disk
pull down menu.
2. Review some inlines, crosslines and time slices after the conversion
and see if any additional smoothing or editing is required. You may
want to smooth the volume again using the same parameters as
before, but increasing the time smoother to 500 msec.
3. Use the File > Save table to disk and exit pull down menu to save
this table to disk and exit the program.
and output table type is Interval velocity in time (VIT). Enter the
existing table that is referenced to processing datum, and add a new
table name that will be the table at final datum. Choose Yes to
Adjust velocities to the final datum. Be sure to use a descriptive
name for the output table so you know it is the table at Final Datum.
2. You may elect to view the input and output fields using the 3D
Velocity Viewer/ Editor* or the Volume Viewer/Editor. The
elevations are quite flat across this project, so there is very little
change evident in the output velocities. The velocities are shifted by
a maximum of about 5 milliseconds, which is virtually impossible
to see in the display.
Velocities are picked and applied at the processing datum, and then the
traces are static shifted to the final datum.
Even for small datum shifts, the velocities may change significantly, and
the effect is most significant at shallow times. On this project the
processing datum to final datum shift is only about 5 milliseconds, but
this changes the velocities at 1000 msec from about 5000 to about 5125 ft/sec.
This is about a 2.5% change, which can be quite significant for
migration. Bigger datum shifts cause much more dramatic velocity
changes.
3. Turn off the first Velocity Manipulation and add a new Velocity
Manipulation menu to your flow and parameterize it as shown
here:
4. The input will be your Vrms table which you smoothed with the 3D
Velocity Viewer/ Editor* tool. The output will be a temporary VIT
interval velocity table. You are urged to use the Smoothed
gradients option for the Dix conversion. We use the default of
sampling the output function every 30 milliseconds.
8. You may elect to view the input and output fields using the 3D
Velocity Viewer/ Editor* or the Volume Viewer/Editor. The
elevations are quite flat across this project, so there is very little
change evident in the output velocities.
You now have velocity tables properly prepared for use in the migration
exercises that follow.
There is not time in this course to provide instruction on rigorous velocity model building or
velocity updating.
Ideally, input data has perfect offset distribution that allows each offset
bin to be single fold with no gaps or holes in coverage. The number of
offset bins would be equal to the CDP fold. Conventional data collection
methods do not accommodate this “perfect world”, especially for land
data.
1. Open DBTools, and generate the following 2D matrix from the TRC
domain:
2. Select one offset range near 1000 ft. from the histogram.
3. Select a single offset range near 12,000 ft. from the histogram.
Notice that this offset range is missing from many of the CDPs.
NOTE: If you search the Processes list, you may find a very old module
with the name Prestack Time Migration. This is a 2D “poor man’s”
method that includes NMO, DMO and poststack migration of common-
offset stack sections. This is NOT considered an acceptable method by
most interpreters, and we suggest not using this module.
This section does not include the depth migration tools in the Landmark
Depth Imaging product, which are “add-on” licenses after a ProMAX/
SeisSpace 3D license. LDI includes a common azimuth migration, one-
way shot profile migration, and a two-way reverse time shot migration.
The ProMAX (blue) version of this algorithm has the exact name
“Prestack Kirchhoff 3D Time Mig.”. The identical geophysical
algorithm is used in the SeisSpace (green) tool with the exact name
“3D Prestack Kirchhoff Time Mig.” This SeisSpace tool can be run in
parallel with JavaSeis Data Input and Output.
2. In JavaSeis Data Input, input all your shot organized data with pre-
processing applied. The 3D Prestack Kirchhoff Time Migration
tool will accept data in any sort order, although there are
performance considerations that we won’t address here because we
are going to run rather small jobs.
Header Statics moves the traces from floating to final datum using
header word FNL_STAT.
[Menu callouts: click Yes to populate the coordinates below; click to show the sub-menu parameters.]
9. Below is the full menu after populating the sub-menu (the Image
Set). We show all parameters for the sub-menu further down and
explain a bit about them.
[Menu callouts: Number of CPUs is hardware dependent; see the full parameter details of the Image Set sub-menu below.]
10. The first main menu item Number of CPUs refers to the number of
threads (migration sub-processes) to spawn on the machine where
you will run the program. On a 4-processor machine, you should
use three threads for the migration, because the main job exec will
keep one processor busy. If you submit the job to run on multiple
nodes, this value is the number of threads on each node.
We set this job to image inline 100 and inline 200 with crosslines 1
to 390. All input traces that contribute to those image locations will
be migrated onto the image, so this will be a true 3D migration
because we have chosen No restriction for the Migration
Direction.
The Image Gather Type is set to Full Offset Image Traces. This
choice produces a stacked dataset using the full range of input
offsets.
12. Click on Yes to Show Advanced Features near the bottom of the
menu.
14. Add a dataset name for JavaSeis Data Output, then click on the
Create button and check the Framework.
15. Execute the flow. The job should run in about three to four minutes
on a reasonable 4-processor machine.
16. Compare the 3D Prestack Kirchhoff Time Mig output with previous
stacks using the Seismic Compare tool.
NOTE: This job reads all the prestack traces, but it only outputs the
imaged data of Inlines 100 and 200. Depending on the aperture and
geometry of the data, any input trace may contribute to the output image.
The Framework of this PSTM Stack dataset is the same as your previous
stack datasets. This allows you to easily compare these datasets with
SeismicCompare. Be aware that if you display any inlines except 100
and 200, there is no data in the migrated dataset, so SeismicCompare
may show all zeroes (no data) for that panel.
2. On the left side of the menu (near the middle) click on Add to add a
new image set. Comment out the first image set with MB3:
NOTE: You can scroll down in the sub-menu and enter a more
descriptive name in Replace Set Name. This will help you remember the
primary parameterization choices for the different Image Sets.
All sub-menu parameters are the same as for the stack output except
for the Image Gather Type which is set to Sou-Rec Offset Limited
Gathers. This choice is the familiar “offset gather” at each CDP as
the output. We chose a range of offsets for output that seems
reasonable for the project.
5. Make sure you change the name of the output dataset and click
the Create button.
6. Execute the flow. The job should run in about five to eight minutes
elapsed time, depending on machine type.
9. You can see from the relatively flat events that the migration
velocity was accurate.
10. Build a new flow, 24a - Stack of PSTM gathers, that will stack the
migrated gathers using Ensemble Stack. You should have all the
examples needed from previous flows.
11. Compare the stacked output directly from the migration to the stack
of the migrated gather traces.
The objective of this chapter is to make the student aware of the variety of modules available, and
to produce a migrated volume as the end product of the processing sequence of this class.
[Table: summary of migration algorithms, with columns Migration Name, Type, Domain, Velocity, V(x,y), V(t/z), Dip, and Rel. Time]
Also, the stacked data must be sorted with the primary sort of inline and
the secondary sort of crossline. Use the Pad 3D Stack Volume process to
pad the stacked data using ILINE_NO as the primary sort. The padded
traces should be sorted with the primary sort of inline and the secondary
sort of crossline.
With all 3D Migrations, you should be aware of the potential need for
extended scratch space. How much scratch space a particular migration
will use may be determined in the View file. When running 3D
Migrations in parallel, certain conventions should be followed for
naming scratch space on these machines. Refer to the Extended Scratch
Space section in the System Administration online help for a complete
description of the extended scratch space setup and requirements.
Stolt FK Migration
Stolt migration is computationally efficient, but has difficulty imaging
steep dips in areas where there are large horizontal and vertical velocity
variations. The algorithm uses Stolt’s (1978) stretching technique to
compensate for horizontal and vertical velocity variations, although it
does not accurately handle strong variations in either direction. The F-K
process requires RMS velocities as input and migrates common offset or
stacked data. It is the fastest migration algorithm in ProMAX.
To reduce run times for this algorithm you may specify a maximum dip
of either 30 or 50 degrees, rather than the default of 70 degrees. Run
times are dependent upon the maximum frequency for migration, so
choose this value accordingly. A further option to enhance performance
is to select the Split option, for two-pass migration, instead of the Full
3D option, for one-pass migration.
Higher maximum dip angles have longer run times. The primary
advantages of this approach are efficiency and good handling of
vertically-variant velocities and moderate dips, and fair handling of
spatial velocity variations. Trace padding should be specified to reduce
wrap-around effects in the frequency domain. Values in the range of 30
to 50 percent are generally adequate for normal amplitude-balanced
datasets.
This exercise uses the module Stolt or Phase Shift 3D Mig, and as the
name implies, you may choose in the menu which algorithm to use. The
runtime is about the same for either method. The example
parameterization shown is for the Phase Shift method, but you may run
the Stolt option if you like. There may be time to run both for
comparison purposes.
2. In JavaSeis Data Input, input the final stack dataset with CDP
taper. You can use the default values for all other menu parameters.
Choose a frequency range that is reasonable for this dataset. The job
will run faster with a smaller range of frequencies.
6. Set the top taper to 100 ms and the bottom taper to 100 ms.
7. Select Yes to Reapply trace mutes so that the original mute times
are applied/retained and any trace that was dead on input is dead on
output.
11. When complete, you have a new stack volume. You can compare
this volume to previous volumes using Seismic Compare.
Many of the steps in this sequence are the same as what was done in the earlier “Geometry from
Extraction” exercise.
In this exercise you will import three SPS files that normally accompany
modern recording systems. The files contain:
• the “S” (source) file
• the “R” (receiver) file
• the “X” (relational, or cross-reference) file
You will load these files to fill the SIN and SRF and PAT (Patterns)
spreadsheets, and continue with interactive binning.
Project Specifications:
• This project has a rolling multiple cable swath shooting geometry.
Begin by adding a new project. Make sure you do not work in an existing
project, which could ruin your previous work.
6. From the Format pulldown menu, open a list of saved formats and
choose STANDARD SHELL SPS Land 3D.
Also note that, if desired, the coordinates can be altered using the
Math OP and Op Value columns.
While the import is running, you will see a variety of Status windows.
Eventually you will see a “Successfully Completed” window.
There are still two more files to read. We have read the “R” file but still
need to read the “S” and “X” files.
9. Use the File > Open pull down menu from the UKOOA File
Import window to access the “salt3d_sps.s” file.
10. Choose to Apply the format and Overwrite all the data.
11. Use the File > Open pull down menu from the UKOOA File
Import window to access the “salt3d_sps.x” file.
12. Choose to Apply the format and Overwrite all the data.
13. Quit from each of the column definition windows and select
File > Exit from the main import window.
FFID values are specific to shot locations and the FFID attribute is held
in the SIN Order of the database. However, SPS format maintains FFID
in the relational file, and these data are imported to the PAT order of the
database.
If you want to import FFID from SPS, you must follow these steps to
read specific parts of the SPS relational file into the SIN spreadsheet.
This is not required to complete the geometry assignment correctly, but
many people rely on the FFID attribute as a necessary addressing
mechanism for their data.
4. Select Format. Create a new format name like “sps - ffid import”,
click OK.
5. Fill in the column formats for LINE to 15-29, Station to 30-37 and
FFID to 8-11. Click on Save if you will need this format again.
6. Click Apply.
• Leave the azimuth at the default value for now, you will enter the
correct value later.
NOTE: The Assignment mode is set to the third option, Matching line and station
numbers in the SIN and PAT spreadsheets. This mode is generally reserved for
SPS input where every shot gets a separate pattern defined for it.
NOTE: The cross domain plots (MB1 and MB2) only work after the
first binning step (assign midpoints) is completed.
4. Go back to the Setup window and enter 87.5° as the nominal survey
azimuth.
• Computes the SIN and SRF for each trace and populates the
TRC OPF.
Shot Spread QC
1. Open the Receiver Spreadsheet and generate a basemap using
the View > View All > Basemap pull down menu.
2. Use the Cross Domain Contribution (Double Fold) icon MB1 and
MB2 functions to view which receivers have been defined to be
live for each shot and also to see which shots contribute to each
receiver. You should observe a split spread of eight cables that rolls
on and off the spread at the ends of the survey.
1. Select to “Bin midpoints” and click OK. You should get the
following window:
3. Click Calc Dim, which computes the origin of the grid and the
Maximum X and Y dimensions.
3. Click on Grid Open and select the grid name that you saved
from the Calc Dim operation.
Because of the density of the display, zooming will help you view and QC
the results.
You may elect to alter the grid by using any of the interactive grid
editing icons if desired. (There should be no need to alter the grid).
6. Select File > Exit from the main spreadsheet menu to exit the
Geometry Spreadsheet.
This tool is optional, as the same steps can be completed using the
Geometry Spreadsheet. The value of the CDP Binning tool is primarily
for very large projects where it is more convenient to perform the work
as a batch job that can run on a compute node.
The only module needed is CDP Binning*, and the only menu
parameter is the selection of the binning grid name that you saved
in the previous section.
This process will perform the CDP binning and Finalization steps in
a batch job instead of interactively using the spreadsheet.
2. Once the Binning is complete you can generate the QC plots using
the Database. Some example plots are listed below:
• View 2D Matrix
SIN:X_COORD:Y_COORD:NCHANS:NCHANS
Check for shots with an unusually high or low number of receivers
(channels).
• View 3D Crossplot
SRF:X_COORD:Y_COORD:ELEV:ELEV:ELEV
QC elevations assigned to receivers. You can generate a similar
display for shots.