z/OS
DFSMS Using Data Sets
SC26-7410-05
Note
Before using this information and the product it supports, be sure to read the general information under “Notices” on page
645.
Contents v
Provision of a Resource Pool . . . . . 207
  Building a Resource Pool: BLDVRP . . . . . 207
  Connecting a Data Set to a Resource Pool: OPEN . . . . . 211
  Deleting a Resource Pool Using the DLVRP Macro . . . . . 211
Management of I/O Buffers for Shared Resources . . . . . 212
  Deferring Write Requests . . . . . 212
  Relating Deferred Requests by Transaction ID . . . . . 213
  Writing Buffers Whose Writing is Deferred: WRTBFR . . . . . 213
  Accessing a Control Interval with Shared Resources . . . . . 215
Restrictions and Guidelines for Shared Resources . . . . . 216

Chapter 14. Using VSAM Record-Level Sharing . . . . . 219
Controlling Access to VSAM Data Sets . . . . . 219
Accessing Data Sets Using DFSMStvs and VSAM Record-Level Sharing . . . . . 219
  Record-Level Sharing CF Caching . . . . . 220
  Using VSAM RLS with CICS . . . . . 222
  Non-CICS Use of VSAM RLS . . . . . 225
| Using 64-Bit Addressable Data Buffers . . . . . 225
  Read Sharing of Recoverable Data Sets . . . . . 226
  Read-Sharing Integrity across KSDS CI and CA Splits . . . . . 227
  Read and Write Sharing of Nonrecoverable Data Sets . . . . . 227
  Using Non-RLS Access to VSAM Data Sets . . . . . 227
| RLS Access Rules . . . . . 228
  Comparing RLS Access and Non-RLS Access . . . . . 228
  Requesting VSAM RLS Run-Mode . . . . . 231
  Using VSAM RLS Read Integrity Options . . . . . 231
Using VSAM RLS with ESDS . . . . . 232
Specifying Read Integrity . . . . . 233
Specifying a Timeout Value for Lock Requests . . . . . 233
| Index Trap . . . . . 234

Chapter 15. Checking VSAM Key-Sequenced Data Set Clusters for Structural Errors . . . . . 235
EXAMINE Command . . . . . 235
  Types of Data Sets . . . . . 235
  EXAMINE Users . . . . . 235
How to Run EXAMINE . . . . . 236
  Deciding to Run INDEXTEST, DATATEST, or Both Tests . . . . . 236
  Skipping DATATEST on Major INDEXTEST Errors . . . . . 236
  Examining a User Catalog . . . . . 236
  Understanding Message Hierarchy . . . . . 237
  Controlling Message Printout . . . . . 237
Samples of Output from EXAMINE Runs . . . . . 238
  INDEXTEST and DATATEST Tests of an Error-Free Data Set . . . . . 238
  INDEXTEST and DATATEST Tests of a Data Set with a Structural Error . . . . . 238
  INDEXTEST and DATATEST Tests of a Data Set with a Duplicate Key Error . . . . . 239

Chapter 16. Coding VSAM User-Written Exit Routines . . . . . 241
Guidelines for Coding Exit Routines . . . . . 241
  Programming Guidelines . . . . . 242
  Multiple Request Parameter Lists or Data Sets . . . . . 243
  Return to a Main Program . . . . . 243
IGW8PNRU Routine for Batch Override . . . . . 244
  Register Contents . . . . . 244
  Programming Considerations . . . . . 244
EODAD Exit Routine to Process End of Data . . . . . 245
  Register Contents . . . . . 245
  Programming Considerations . . . . . 246
EXCEPTIONEXIT Exit Routine . . . . . 246
  Register Contents . . . . . 246
  Programming Considerations . . . . . 246
JRNAD Exit Routine to Journalize Transactions . . . . . 247
  Register Contents . . . . . 247
  Programming Considerations . . . . . 247
LERAD Exit Routine to Analyze Logical Errors . . . . . 253
  Register Contents . . . . . 254
  Programming Considerations . . . . . 254
RLSWAIT Exit Routine . . . . . 254
  Register Contents . . . . . 255
  Request Environment . . . . . 255
SYNAD Exit Routine to Analyze Physical Errors . . . . . 256
  Register Contents . . . . . 256
  Programming Considerations . . . . . 256
  Example of a SYNAD User-Written Exit Routine . . . . . 257
UPAD Exit Routine for User Processing . . . . . 258
  Register Contents . . . . . 259
  Programming Considerations . . . . . 260
User-Security-Verification Routine . . . . . 261

Chapter 17. Using 31-Bit Addressing Mode with VSAM . . . . . 263
VSAM Options . . . . . 263

Chapter 18. Using Job Control Language for VSAM . . . . . 265
Using JCL Statements and Keywords . . . . . 265
  Data Set Name . . . . . 265
  Disposition . . . . . 265
Creating VSAM Data Sets with JCL . . . . . 266
  Temporary VSAM Data Sets . . . . . 269
  Examples Using JCL to Allocate VSAM Data Sets . . . . . 270
Retrieving an Existing VSAM Data Set . . . . . 273
  Migration Consideration . . . . . 273
  Keywords Used to Process VSAM Data Sets . . . . . 273

Chapter 19. Processing Indexes of Key-Sequenced Data Sets . . . . . 275
Access to a Key-Sequenced Data Set Index . . . . . 275
  Access to an Index with GETIX and PUTIX . . . . . 275
  Access to the Index Component Alone . . . . . 275
Prime Index . . . . . 276
Index Levels . . . . . 277
Format of an Index Record . . . . . 279
  Header Portion . . . . . 279
SYNADAF—Perform SYNAD Analysis Function . . . . . 368
SYNADRLS—Release SYNADAF Message and Save Areas . . . . . 369
Device Support Facilities (ICKDSF): Diagnosing I/O Problems . . . . . 369
Limitations with Using SRB or Cross-Memory Mode . . . . . 369

Chapter 23. Sharing Non-VSAM Data Sets . . . . . 371
Enhanced Data Integrity for Shared Sequential Data Sets . . . . . 374
  Setting Up the Enhanced Data Integrity Function . . . . . 374
  Synchronizing the Enhanced Data Integrity Function on Multiple Systems . . . . . 376
  Using the START IFGEDI Command . . . . . 376
  Bypassing the Enhanced Data Integrity Function for Applications . . . . . 376
  Diagnosing Data Integrity Warnings and Violations . . . . . 377
PDSEs . . . . . 379
Direct Data Sets (BDAM) . . . . . 380
Factors to Consider When Opening and Closing Data Sets . . . . . 381
Control of Checkpoint Data Sets on Shared DASD Volumes . . . . . 381
System Use of Search Direct for Input Operations . . . . . 383

Chapter 24. Spooling and Scheduling Data Sets . . . . . 385
Job Entry Subsystem . . . . . 385
SYSIN Data Set . . . . . 386
SYSOUT Data Set . . . . . 386

Chapter 25. Processing Sequential Data Sets . . . . . 389
Creating a Sequential Data Set . . . . . 389
Retrieving a Sequential Data Set . . . . . 390
Concatenating Data Sets Sequentially . . . . . 391
  Concatenating Like Data Sets . . . . . 392
  Concatenating Unlike Data Sets . . . . . 396
Modifying Sequential Data Sets . . . . . 398
  Updating in Place . . . . . 398
  Using Overlapped Operations . . . . . 398
  Extending a Data Set . . . . . 399
Achieving Device Independence . . . . . 399
  Device-Dependent Macros . . . . . 400
  DCB and DCBE Subparameters . . . . . 401
Improving Performance for Sequential Data Sets . . . . . 401
  Limitations on Using Chained Scheduling with Non-DASD Data Sets . . . . . 402
  DASD and Tape Performance . . . . . 403
Determining the Length of a Block when Reading with BSAM, BPAM, or BDAM . . . . . 403
Writing a Short Format-FB Block with BSAM or BPAM . . . . . 405
Using Hiperbatch . . . . . 406
Processing Extended-Format Sequential Data Sets . . . . . 406
  Characteristics of Extended-Format Data Sets . . . . . 406
  Allocating Extended-Format Data Sets . . . . . 407
  Allocating Compressed-Format Data Sets . . . . . 408
  Opening and Closing Extended-Format Data Sets . . . . . 409
  Reading, Writing, and Updating Extended-Format Data Sets Using BSAM and QSAM . . . . . 410
  Concatenating Extended-Format Data Sets with Other Data Sets . . . . . 410
  Extending Striped Sequential Data Sets . . . . . 410
  Migrating to Extended-Format Data Sets . . . . . 410
| Processing Large Format Data Sets . . . . . 411
|   Characteristics of Large Format Data Sets . . . . . 412
|   Allocating Large Format Data Sets . . . . . 412
|   Opening and Closing Large Format Data Sets . . . . . 413
|   Migrating to Large Format Data Sets . . . . . 413

Chapter 26. Processing a Partitioned Data Set (PDS) . . . . . 415
Structure of a PDS . . . . . 415
PDS Directory . . . . . 416
Allocating Space for a PDS . . . . . 419
  Calculating Space . . . . . 419
  Allocating Space with SPACE and AVGREC . . . . . 420
Creating a PDS . . . . . 420
  Creating a PDS Member with BSAM or QSAM . . . . . 421
  Converting PDSs . . . . . 421
  Copying a PDS or Member to Another Data Set . . . . . 421
  Adding Members . . . . . 422
Processing a Member of a PDS . . . . . 424
  BLDL—Construct a Directory Entry List . . . . . 424
  DESERV . . . . . 425
  FIND—Position to the Starting Address of a Member . . . . . 428
  STOW—Update the Directory . . . . . 429
Retrieving a Member of a PDS . . . . . 430
Modifying a PDS . . . . . 434
  Updating in Place . . . . . 434
  Rewriting a Member . . . . . 437
Concatenating PDSs . . . . . 437
  Sequential Concatenation . . . . . 437
  Partitioned Concatenation . . . . . 437
Reading a PDS Directory Sequentially . . . . . 438

Chapter 27. Processing a Partitioned Data Set Extended (PDSE) . . . . . 439
Advantages of PDSEs . . . . . 439
PDSE and PDS Similarities . . . . . 441
PDSE and PDS Differences . . . . . 441
Structure of a PDSE . . . . . 441
  PDSE Logical Block Size . . . . . 442
  Reuse of Space . . . . . 442
  Directory Structure . . . . . 443
  Relative Track Addresses (TTR) . . . . . 443
Processing PDSE Records . . . . . 444
  Using BLKSIZE with PDSEs . . . . . 445
  Using KEYLEN with PDSEs . . . . . 445
  Reblocking PDSE Records . . . . . 445
  Processing Short Blocks . . . . . 446
  Processing SAM Null Segments . . . . . 447
Using the NOTE Macro to Return the Relative Address of a Block . . . . . 515
Using the POINT Macro to Position to a Block . . . . . 516
Using the SYNCDEV Macro to Synchronize Data . . . . . 517

Chapter 31. Using Non-VSAM User-Written Exit Routines . . . . . 519
General Guidance . . . . . 519
  Programming Considerations . . . . . 520
  Status Information Following an Input/Output Operation . . . . . 520
EODAD End-of-Data-Set Exit Routine . . . . . 527
  Register Contents . . . . . 527
  Programming Considerations . . . . . 527
SYNAD Synchronous Error Routine Exit . . . . . 528
  Register Contents . . . . . 531
  Programming Considerations . . . . . 533
DCB Exit List . . . . . 535
  Register Contents for Exits from EXLST . . . . . 537
  Serialization . . . . . 538
Allocation Retrieval List . . . . . 538
  Programming Conventions . . . . . 538
  Restrictions . . . . . 538
DCB ABEND Exit . . . . . 539
  Recovery Requirements . . . . . 541
  DCB Abend Installation Exit . . . . . 543
DCB OPEN Exit . . . . . 543
  Calls to DCB OPEN Exit for Sequential Concatenation . . . . . 543
  Installation DCB OPEN Exit . . . . . 544
Defer Nonstandard Input Trailer Label Exit List Entry . . . . . 544
Block Count Unequal Exit . . . . . 544
EOV Exit for Sequential Data Sets . . . . . 545
FCB Image Exit . . . . . 546
JFCB Exit . . . . . 547
JFCBE Exit . . . . . 548
Open/Close/EOV Standard User Label Exit . . . . . 549
Open/EOV Nonspecific Tape Volume Mount Exit . . . . . 553
Open/EOV Volume Security and Verification Exit . . . . . 556
QSAM Parallel Input Exit . . . . . 558
User Totaling for BSAM and QSAM . . . . . 558

Appendix A. Using Direct Access Labels . . . . . 561
Direct Access Storage Device Architecture . . . . . 561
Volume Label Group . . . . . 562
Data Set Control Block (DSCB) . . . . . 564
User Label Groups . . . . . 564

Appendix B. Using the Double-Byte Character Set (DBCS) . . . . . 567
DBCS Character Support . . . . . 567
Record Length When Using DBCS Characters . . . . . 567
  Fixed-Length Records . . . . . 567
  Variable-Length Records . . . . . 568

Appendix C. Processing Direct Data Sets . . . . . 569
Using the Basic Direct Access Method (BDAM) . . . . . 569
Processing a Direct Data Set Sequentially . . . . . 570
Organizing a Direct Data Set . . . . . 570
  By Range of Keys . . . . . 570
  By Number of Records . . . . . 570
  With Indirect Addressing . . . . . 571
Creating a Direct Data Set . . . . . 571
  Restrictions in Creating a Direct Data Set Using QSAM . . . . . 571
  With Direct Addressing with Keys . . . . . 571
  With BDAM to Allocate a VIO Data Set . . . . . 572
Referring to a Record . . . . . 573
  Record Addressing . . . . . 573
  Extended Search . . . . . 573
  Exclusive Control for Updating . . . . . 574
  Feedback Option . . . . . 574
Adding or Updating Records . . . . . 574
  Format-F with Keys . . . . . 574
  Format-F without Keys . . . . . 575
  Format-V or Format-U with Keys . . . . . 575
  Format-V or Format-U without Keys . . . . . 575
  Tape-to-Disk Add—Direct Data Set . . . . . 576
  Tape-to-Disk Update—Direct Data Set . . . . . 577
  With User Labels . . . . . 577
Sharing DCBs . . . . . 578

| Appendix D. Using the Indexed Sequential Access Method . . . . . 579
Using the Basic Indexed Sequential Access Method (BISAM) . . . . . 579
Using the Queued Indexed Sequential Access Method (QISAM) . . . . . 579
Processing ISAM Data Sets . . . . . 580
Organizing Data Sets . . . . . 580
  Prime Area . . . . . 582
  Index Areas . . . . . 582
  Overflow Areas . . . . . 584
Creating an ISAM Data Set . . . . . 584
  One-Step Method . . . . . 584
  Full-Track-Index Write Option . . . . . 585
  Multiple-Step Method . . . . . 586
  Resume Load . . . . . 587
Allocating Space . . . . . 587
  Prime Data Area . . . . . 589
  Specifying a Separate Index Area . . . . . 590
  Specifying an Independent Overflow Area . . . . . 590
  Specifying a Prime Area and Overflow Area . . . . . 590
Calculating Space Requirements . . . . . 590
  Step 1. Number of Tracks Required . . . . . 590
  Step 2. Overflow Tracks Required . . . . . 591
  Step 3. Index Entries Per Track . . . . . 591
  Step 4. Determine Unused Space . . . . . 592
  Step 5. Calculate Tracks for Prime Data Records . . . . . 592
  Step 6. Cylinders Required . . . . . 593
  Step 7. Space for Cylinder Indexes and Track Indexes . . . . . 593
  Step 8. Space for Master Indexes . . . . . 593
  Summary of Indexed Sequential Space Requirements Calculations . . . . . 594
Retrieving and Updating . . . . . 595
  Sequential Retrieval and Update . . . . . 595
xii z/OS V1R7.0 DFSMS Using Data Sets
Figures
1. DASD Volume Track Formats . . . . . 9
2. REPRO Encipher and Decipher Operations . . . . . 65
3. VSAM Logical Record Retrieval . . . . . 73
4. Control Interval Format . . . . . 75
5. Record Definition Fields of Control Intervals . . . . . 76
6. Data Set with Nonspanned Records . . . . . 77
7. Data Set with Spanned Records . . . . . 78
8. Entry-Sequenced Data Set . . . . . 80
9. Example of RBAs of an Entry-Sequenced Data Set . . . . . 80
10. Record of a Key-Sequenced Data Set . . . . . 82
11. Inserting Records into a Key-Sequenced Data Set . . . . . 83
12. Inserting a Logical Record into a CI . . . . . 84
13. Fixed-length Relative-Record Data Set . . . . . 86
14. Control Interval Size . . . . . 89
15. Primary and Secondary Space Allocations for Striped Data Sets . . . . . 90
16. Control Interval in a Control Area . . . . . 91
17. Layering (Four-Stripe Data Set) . . . . . 92
18. Alternate Index Structure for a Key-Sequenced Data Set . . . . . 98
19. Alternate Index Structure for an Entry-Sequenced Data Set . . . . . 99
20. VSAM Macro Relationships . . . . . 154
21. Skeleton VSAM Program . . . . . 155
22. Control Interval Size, Physical Track Size, and Track Capacity . . . . . 159
23. Determining Free Space . . . . . 164
24. Scheduling Buffers for Direct Access . . . . . 174
25. General Format of a Control Interval . . . . . 181
26. Format of Control Information for Nonspanned Records . . . . . 184
27. Format of Control Information for Spanned Records . . . . . 185
28. Exclusive Control Conflict Resolution . . . . . 194
29. Relationship Between the Base Cluster and the Alternate Index . . . . . 196
30. VSAM RLS address and data spaces and requestor address spaces . . . . . 220
31. CICS VSAM non-RLS access . . . . . 223
32. CICS VSAM RLS . . . . . 223
33. Example of a JRNAD exit . . . . . 249
34. Example of a SYNAD exit routine . . . . . 258
35. Relation of Index Entry to Data Control Interval . . . . . 276
36. Relation of Index Entry to Data Control Interval . . . . . 277
37. Levels of a Prime Index . . . . . 278
38. General Format of an Index Record . . . . . 279
39. Format of the Index Entry Portion of an Index Record . . . . . 282
40. Format of an Index Record . . . . . 282
41. Example of Key Compression . . . . . 285
42. Control Interval Split and Index Update . . . . . 286
43. Fixed-Length Records . . . . . 294
44. Nonspanned, Format-V Records . . . . . 296
45. Spanned Format-VS Records (Sequential Access Method) . . . . . 298
46. Spanned Format-V Records for Direct Data Sets . . . . . 301
47. Undefined-Length Records . . . . . 302
48. Fixed-Length Records for ISO/ANSI Tapes . . . . . 305
49. Nonspanned Format-D Records for ISO/ANSI Tapes As Seen by the Program . . . . . 308
50. Spanned Variable-Length (Format-DS) Records for ISO/ANSI Tapes As Seen by the Program . . . . . 309
51. Reading a Sequential Data Set . . . . . 321
52. Reentrant—Above the 16 MB Line . . . . . 322
53. Sources and Sequence of Operations for Completing the DCB . . . . . 324
54. Opening Three Data Sets at the Same Time . . . . . 327
55. Changing a Field in the DCB . . . . . 337
56. Closing Three Data Sets at the Same Time . . . . . 338
57. Record Processed when LEAVE or REREAD is Specified for CLOSE TYPE=T . . . . . 339
58. Constructing a Buffer Pool from a Static Storage Area . . . . . 351
59. Constructing a Buffer Pool Using GETPOOL and FREEPOOL . . . . . 351
60. Simple Buffering with MACRF=GL and MACRF=PM . . . . . 353
61. Simple Buffering with MACRF=GM and MACRF=PM . . . . . 354
62. Simple Buffering with MACRF=GL and MACRF=PL . . . . . 354
63. Simple Buffering with MACRF=GL and MACRF=PM-UPDAT Mode . . . . . 355
64. Parallel Processing of Three Data Sets . . . . . 367
65. JCL, Macros, and Procedures Required to Share a Data Set Using Multiple DCBs . . . . . 372
66. Macros and Procedures Required to Share a Data Set Using a Single DCB . . . . . 373
67. Creating a Sequential Data Set—Move Mode, Simple Buffering . . . . . 390
68. Retrieving a Sequential Data Set—Locate Mode, Simple Buffering . . . . . 391
69. Like Concatenation Read through BSAM . . . . . 396
70. Reissuing a READ or GET for Unlike Concatenated Data Sets . . . . . 397
71. One Method of Determining the Length of a Record when Using BSAM to Read Undefined-Length or Blocked Records . . . . . 405
72. A Partitioned Data Set (PDS) . . . . . 416
73. A PDS Directory Block . . . . . 416
74. A PDS Directory Entry . . . . . 417
75. Creating One Member of a PDS . . . . . 421
76. Creating Members of a PDS Using STOW . . . . . 423
77. BLDL List Format . . . . . 425
78. DESERV GET by NAME_LIST Control Block Structure . . . . . 426
For information about the accessibility features of z/OS, for users who have a
physical disability, see Appendix G, “Accessibility,” on page 643.
You should also understand how to use access method services commands,
catalogs, and storage administration, which the following documents describe.
Access method services commands
    z/OS DFSMS Access Method Services for Catalogs describes the access
    method services commands used to process virtual storage access
    method (VSAM) data sets.
Catalogs
    z/OS DFSMS Managing Catalogs describes how to create master and
    user catalogs.
Storage administration
    z/OS DFSMSdfp Storage Administration Reference and z/OS DFSMS
    Implementing System-Managed Storage describe storage administration.
Macros
    z/OS DFSMS Macro Instructions for Data Sets describes the macros
    used to process VSAM and non-VSAM data sets.
Referenced Documents
For a complete list of DFSMS documents and related z/OS documents referenced
by this document, see the z/OS Information Roadmap. You can obtain a softcopy
version of this document and other DFSMS documents from sources listed here.
You can use LookAt from the following locations to find IBM message
explanations for z/OS elements and features, z/VM®, and VSE:
v The Internet. You can access IBM message explanations directly from the LookAt
Web site at http://www.ibm.com/eserver/zseries/zos/bkserv/lookat/.
v Your z/OS TSO/E host system. You can install code on your z/OS or z/OS.e
systems to access IBM message explanations, using LookAt from a TSO/E
command line (for example, TSO/E prompt, ISPF, or z/OS UNIX System
Services running OMVS).
v Your Windows® workstation. You can install code to access IBM message
explanations on the z/OS Collection (SK3T-4269), using LookAt from a Windows
DOS command line.
v Your wireless handheld device. You can use the LookAt Mobile Edition with a
handheld device that has wireless access and an Internet browser.
You can obtain code to install LookAt on your host system or Windows
workstation from a disk on your z/OS Collection (SK3T-4269), or from the LookAt
Web site (click Download, and select the platform, release, collection, and location
that suit your needs). More information is available in the LOOKAT.ME files
available during the download process.
You might notice changes in the style and structure of some content in this
document—for example, more specific headings for notes, such as Tip and
Requirement. The changes are ongoing improvements to the consistency and
retrievability of information in DFSMS documents.
New Information
This edition includes the following new enhancements:
v Large format sequential data sets. For more information, see “Processing Large
Format Data Sets” on page 411.
v Basic format data sets (sequential data sets which are neither large format nor
extended format).
v VSAM extent constraint relief. For more information, see “Data Set Size” on
page 73 and “Using VSAM Extents” on page 110.
v VSAM RLS 64-bit addressing for data buffers. For more information, see “Using
64-Bit Addressable Data Buffers” on page 225.
v Information about the IHAEXLST mapping macro has been added to “DCB Exit
List” on page 535.
v Information about how VSAM RLS OPENs are allowed has been added to “RLS
Access Rules” on page 228, and information about index record checks has been
added to “Index Trap” on page 234.
Changed Information
The following information changed in this edition:
v Volume assignment considerations for PDSEs in a sysplex. See “Choosing
Volumes for PDSEs in a Sysplex” on page 473 for more information.
v Information about ISAM has been changed to indicate that ISAM is no longer
supported, and programs and data sets must be converted to VSAM or use the
ISAM interface to VSAM. For more information, see “Indexed Sequential Access
Method” on page 5, Appendix D, “Using the Indexed Sequential Access
Method,” on page 579, and Appendix E, “Using ISAM Programs with VSAM
Data Sets,” on page 611.
v Recommendations on using IEBCOPY to copy between PDSEs and PDS data sets
have been added to “Copying a PDSE or Member to Another Data Set” on page
454.
Deleted Information
Most information about ISAM data sets has been deleted from this edition.
New Information
This edition includes the following new enhancements:
v Restartable PDSE address space. For more information, see “PDSE Address
Spaces” on page 479.
Changed Information
The following information changed in this edition:
v Corrected information about allocating space for a linear data set in “Linear
Data Sets” on page 110.
v Added information about using entry-sequenced data sets (ESDSs) with VSAM
record-level sharing to “Using VSAM RLS with ESDS” on page 232.
v Added information about using 64-bit real storage to “Constructing a Buffer
Pool” on page 348.
v Added information about using partitioned data sets (PDSs) as generation data
sets (GDS) to “Data Set Organization of Generation Data Sets” on page 502.
v Updated information about coded character set identifiers (CCSID) in
Appendix F, “Converting Character Sets,” on page 625.
v This book has been enabled for z/OS LibraryCenter advanced searches by
command name.
New Information
This edition includes the following new enhancements:
v You can specify a maximum file sequence number up to 65 535 for a data set on
a tape volume.
v The JOBCAT and STEPCAT DD statements are now disabled by default.
v When the name-hiding function is in effect, you can retrieve the names of data
sets only if you have read access to the data sets or VTOC.
v VSAM automatically determines the resources required to upgrade VSAM
alternate indexes.
v Unrelated messages do not appear between the lines of a multiple-line VSAM
message, so that the operator can interpret the information more easily.
v The system consolidates adjacent extents for VSAM data sets when extending
data on the same volume.
v You can activate the enhanced data integrity function to prevent users from
concurrently opening a shared sequential data set for output or update
processing.
v You can use extended-format sequential data sets with a maximum of 59 stripes.
v You can use the basic partitioned access method (BPAM) to read z/OS UNIX
files.
v Users can specify whether to reclaim generation data sets (GDSs) automatically.
Moved Information
The following information has moved to a new location in this document:
v The information on using magnetic tape volumes is in “Magnetic Tape Volumes”
on page 11.
v The information on hierarchical file system (HFS) data sets is now in Chapter 28,
“Processing z/OS UNIX Files,” on page 481.
New Information
This edition includes the following new information:
v Data set naming conventions
v Virtual input/output (VIO) limit
v Real addresses greater than 2 GB available for all VSAM data sets
v Caching all or some of the VSAM record level sharing (RLS) data in a coupling
facility (CF) cache structure
v VSAM RLS system-managed duplexing rebuild process, and validity checking
for a user-managed rebuild or alter process
v Dynamic volume count for space constraint relief when you store data sets on
DASD volumes
v A summary of the effects of specifying all extents (ALX) or maximum
contiguous extents (MXIG) for virtual input/output (VIO) data sets
Changed Information
The following information changed in this edition:
v Allocation of HFS data sets requiring directory block size
v Requirements for the ENCIPHER and DECIPHER functions of the REPRO
command
v Relative byte address (RBA) in an entry-sequenced data set
v Processing techniques for VSAM system-managed buffering
Topic Location
Data Storage and Management 3
Access Methods 4
Direct Access Storage Device (DASD) Volumes 8
Magnetic Tape Volumes 11
Data Management Macros 15
Data Set Processing 16
Distributed Data Management (DDM) Attributes 21
Virtual I/O for Temporary Data Sets 21
Data Set Names 22
Catalogs and Volume Table of Contents 23
A data set is a collection of logically related data and can be a source program, a
library of macros, or a file of data records used by a processing program. Data
records are the basic unit of information used by a processing program. By placing
your data into volumes of organized data sets, you can save and process the data.
You can also print the contents of a data set or display the contents on a terminal.
Exception: z/OS UNIX files are different from the typical data set because they are
byte oriented rather than record oriented.
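The difference between record orientation and byte orientation can be sketched with a small illustrative model. This is plain Python, not a z/OS interface; the sample data and the record length (LRECL) value are invented for the illustration:

```python
# Conceptual model only: the same 12 bytes viewed as a byte stream
# (UNIX file style) versus fixed-length records (data set style).

def read_bytes(stream: bytes, offset: int, count: int) -> bytes:
    """Byte-oriented view: any offset and length are valid."""
    return stream[offset:offset + count]

def read_record(stream: bytes, lrecl: int, record_number: int) -> bytes:
    """Record-oriented view: data is addressed in fixed-length units (LRECL)."""
    start = record_number * lrecl
    return stream[start:start + lrecl]

data = b"AAAABBBBCCCC"           # 12 bytes; as records: LRECL=4, 3 records

print(read_bytes(data, 2, 5))    # crosses record boundaries freely: b'AABBB'
print(read_record(data, 4, 1))   # record 1 as a whole unit: b'BBBB'
```

A record-oriented program always sees whole records; a byte-oriented program sees an undifferentiated stream and must impose any structure itself.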
Each block of data on a DASD volume has a distinct location and a unique
address, making it possible to find any record without extensive searching. You can
store and retrieve records either directly or sequentially. Use DASD volumes for
storing data and executable programs, including the operating system itself, and
for temporary working storage. You can use one DASD volume for many different
data sets, and reallocate or reuse space on the volume.
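As a rough illustration of why unique block addresses permit direct retrieval, the following sketch models a volume as a mapping from addresses to block contents. The addresses and contents are invented and do not reflect real DASD geometry:

```python
# Toy illustration (not actual DASD addressing): each block has a unique
# address, so a record can be fetched directly instead of by scanning.

volume = {                       # address -> block contents
    (0, 1): "payroll header",    # (track, block) serves as the unique address
    (0, 2): "employee 1047",
    (3, 5): "employee 2210",
}

def read_direct(addr):
    """Direct retrieval: one lookup by address, no searching."""
    return volume[addr]

def read_sequential():
    """Sequential retrieval: visit the blocks in address order."""
    for addr in sorted(volume):
        yield volume[addr]

print(read_direct((3, 5)))           # prints "employee 2210"
print(next(read_sequential()))       # prints "payroll header"
```

The point of the model is only that direct access costs one lookup regardless of where the block sits, while sequential access walks the volume in order.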
Data management is the part of the operating system that organizes, identifies,
stores, catalogs, and retrieves all the information (including programs) that your
installation uses. Data management does these main tasks:
v Sets aside (allocates) space on DASD volumes.
v Automatically retrieves cataloged data sets by name.
The data sets allocated through SMS are called system-managed data sets or
SMS-managed data sets. For information about allocating system-managed data
sets, see Chapter 2, “Using the Storage Management Subsystem,” on page 27. If
you are a storage administrator, also see z/OS DFSMSdfp Storage Administration
Reference for information about using SMS.
Access Methods
An access method defines the technique that is used to store and retrieve data.
Access methods have their own data set structures to organize data, macros to
define and process data sets, and utility programs to process data sets.
Access methods are identified primarily by the data set organization. For example,
use the basic sequential access method (BSAM) or queued sequential access
method (QSAM) with sequential data sets. However, there are times when an
access method identified with one organization can be used to process a data set
organized in a different manner. For example, a sequential data set (not
extended-format data set) created using BSAM can be processed by the basic direct
access method (BDAM), and vice versa. Another example is UNIX files, which you
can process using BSAM, QSAM, basic partitioned access method (BPAM), or
virtual storage access method (VSAM).
Optionally, BDAM uses hardware keys. Hardware keys are less efficient than the
optional software keys in the virtual storage access method (VSAM).
The following describes some of the characteristics of PDSs, PDSEs, and UNIX
files:
Partitioned data set
PDSs can have any type of sequential records.
Partitioned data set extended
A PDSE has a different internal storage format than a PDS, which
gives PDSEs improved usability characteristics. You can use a
PDSE in place of most PDSs, but you cannot use a PDSE for
certain system data sets.
z/OS UNIX files
UNIX files are byte streams and do not contain records. BPAM
converts the bytes in UNIX files to records. You can use BPAM to
read but not write to UNIX files. BPAM access is like BSAM access.
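The byte-to-record conversion that BPAM performs for UNIX files can be pictured with a simplified model. BPAM's actual conversion rules are more involved; this sketch merely assumes newline-delimited text as an illustrative convention:

```python
# Simplified model of the idea (not BPAM itself): a UNIX file is a byte
# stream with no inherent record structure, so an access method must
# impose one. Here newline bytes delimit the simulated records.

def bytes_to_records(stream: bytes) -> list:
    """Split a byte stream into records at newline boundaries."""
    return [line for line in stream.split(b"\n") if line]

unix_file = b"first record\nsecond record\nthird record\n"
for rec in bytes_to_records(unix_file):
    print(rec)     # each iteration yields one simulated record
```

The program above sees three records even though the underlying file is just 41 bytes with no record boundaries of its own.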
Data-in-Virtual (DIV)
| The data-in-virtual (DIV) macro provides access to VSAM linear data sets. For
| more information, see z/OS MVS Programming: Assembler Services Guide. You can
| also use window services to access linear data sets, as described in that book.
| You cannot create, open, copy, convert, or dump indexed sequential (ISAM) data
| sets. You can delete or rename them. You can use an earlier release of z/OS to
| convert them to VSAM data sets.
| Important: Do not use ISAM. You should convert all indexed sequential data sets
| to VSAM data sets. See Appendix D, “Using the Indexed Sequential Access
| Method,” on page 579.
Any type of VSAM data set can be in extended format. Extended-format data sets
have a different internal storage format than data sets that are not extended. This
storage format gives extended-format data sets additional usability characteristics
and possibly better performance due to striping. You can choose for an
extended-format key-sequenced data set to be in the compressed format.
Extended-format data sets must be SMS managed. You cannot use an
extended-format data set for certain system data sets.
You can use the following types of UNIX files with the access methods:
v Regular files, including files accessed through Network File System (NFS),
temporary file system (TFS), HFS, or zSeries file system (zFS)
v Character special files
v First-in-first-out (FIFO) special files
v Symbolic links
Restriction: You cannot use the following types of UNIX files with the access
methods:
v UNIX directories, except indirectly through BPAM
v External links
Files can reside on other systems. The access method user can use NFS to access
them.
Restriction: You cannot process VSAM data sets with non-VSAM access methods,
although you can use DIV macros to access linear data sets. You cannot process
non-VSAM data sets except for UNIX files with VSAM.
See z/OS TSO/E Command Reference for information about using BSAM and QSAM
to read from and write to a TSO/E terminal in line mode.
DASD Labels
The operating system uses groups of labels to identify DASD volumes and the
data sets they contain. Application programs generally do not use these labels
directly. DASD volumes must use standard labels. Standard labels include a
volume label, a data set label for each data set, and optional user labels. A volume
label, stored at track 0 of cylinder 0, identifies each DASD volume.
A utility program initializes each DASD volume before it is used on the system.
The initialization program generates the volume label and builds the volume table
of contents (VTOC). The VTOC is a structure that contains the data set labels.
See Appendix A, “Using Direct Access Labels,” on page 561 for information about
direct access labels.
Track Format
Information is recorded on all DASD volumes in a standard format. This format is
called count-key data (CKD) or extended count-key data (ECKD).
Each track contains a record 0 (also called track descriptor record or capacity
record) and data records. Historically, S/390 hardware manuals and software
manuals have used inconsistent terminology to refer to units of data written on
DASD volumes. Hardware manuals call them records. Software manuals call them
blocks and use “record” for something else. The DASD sections of this document
use both terms as appropriate. Software records are described in Chapter 6,
“Organizing VSAM Data Sets,” on page 73 and Chapter 20, “Selecting Record
Formats for Non-VSAM Data Sets,” on page 293.
The process of grouping records into blocks is called blocking. The extraction of
records from blocks is called unblocking. Blocking or unblocking might be done by
the application program or the operating system. In z/OS UNIX, blocking means
suspension of program execution.
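To make the blocking concept concrete, here is a minimal Python sketch (illustrative only; the names `block_records` and `unblock_records` are hypothetical and not part of any access method API) that groups fixed-length logical records into blocks and extracts them again, much as QSAM does automatically for RECFM=FB data sets:

```python
def block_records(records, lrecl, blksize):
    """Group fixed-length logical records into blocks (RECFM=FB style).

    blksize must be a multiple of lrecl; the last block may be short.
    """
    if blksize % lrecl:
        raise ValueError("BLKSIZE must be a multiple of LRECL for RECFM=FB")
    per_block = blksize // lrecl
    blocks = []
    for i in range(0, len(records), per_block):
        blocks.append(b"".join(records[i:i + per_block]))
    return blocks


def unblock_records(blocks, lrecl):
    """Extract fixed-length logical records from blocks (unblocking)."""
    records = []
    for block in blocks:
        for i in range(0, len(block), lrecl):
            records.append(block[i:i + lrecl])
    return records
```

For example, five 10-byte records blocked with BLKSIZE=30 yield one full 30-byte block and one short 20-byte block.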
| Under certain conditions, BDAM uses the data area of record zero to contain
| information about the number of empty bytes following the last user record on the
| track. This is called the track descriptor record.
Figure 1 shows the two different data formats, count-data and count-key-data, only
one of which can be used for a particular data set. An exception is PDSs that are
not PDSEs. The directory blocks are in count-key-data format, and the member
blocks normally are in count-data format.
Count-Data Format: Records are formatted without keys. The key length is 0. The
count area contains 8 bytes that identify the location of the block by cylinder,
head, and record numbers, and give its data length.
Count-Key-Data Format: The blocks are written with hardware keys. The key area
(1 - 255 bytes) contains a record key that specifies the data record, such as the part
number, account number, sequence number, or some other identifier.
In data sets, only BDAM, BSAM, EXCP, and PDS directories use blocks with
hardware keys. Outside data sets, the VTOC and the volume label contain
hardware keys.
Tip: The use of hardware keys is less efficient than the use of software keys (which
VSAM uses).
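As an illustration of the count area described above, the following Python sketch packs the 8-byte field. It assumes the conventional CKD layout of cylinder, head, and record numbers followed by key length and data length, with big-endian halfwords; the function name is hypothetical:

```python
import struct

# CC (cylinder), HH (head), R (record), key length, data length -> 8 bytes
COUNT_AREA = struct.Struct(">HHBBH")


def pack_count_area(cc, hh, r, key_len, data_len):
    """Build the 8-byte count area for a block. In count-data format
    the key length is 0; in count-key-data format it is 1 - 255."""
    return COUNT_AREA.pack(cc, hh, r, key_len, data_len)
```

A count-data block of 80 data bytes at cylinder 1, head 2, record 3 would carry a key length of 0 in this field.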
Track Overflow
The operating system no longer supports the track overflow feature. The system
ignores any request for it.
Actual Addresses
When the system returns the actual address of a block on a direct access volume to
your program, it is in the form MBBCCHHR, in which the characters represent the
following values:
M 1-byte binary number specifying the relative extent number. Each extent is a set
of consecutive tracks allocated for the data set.
BBCCHH Three 2-byte binary numbers specifying the cell (bin), cylinder, and head
number for the block (its track address). The cylinder and head numbers are
recorded in the count area for each block. All DASDs require that the bin
number (BB) be zero.
R 1-byte binary number specifying the relative block number on the track. The
block number is also recorded in the count area.
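The MBBCCHHR layout can be sketched in Python. This is illustrative only; the helper names are hypothetical, and big-endian byte order is assumed:

```python
import struct

# M (extent), BB (bin), CC (cylinder), HH (head), R (record) -> 8 bytes
MBBCCHHR = struct.Struct(">BHHHB")


def pack_actual_address(m, cc, hh, r, bb=0):
    """Build an 8-byte MBBCCHHR actual address. The bin number (BB)
    must be zero on all DASDs, as noted above."""
    if bb != 0:
        raise ValueError("bin number (BB) must be zero")
    return MBBCCHHR.pack(m, bb, cc, hh, r)


def unpack_actual_address(addr):
    """Split an 8-byte MBBCCHHR address into its fields."""
    m, bb, cc, hh, r = MBBCCHHR.unpack(addr)
    return {"extent": m, "bin": bb, "cylinder": cc, "head": hh, "record": r}
```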
If your program stores actual addresses in your data set, and you refer to those
addresses, the data set must be treated as unmovable. Data sets that are
unmovable cannot reside on system-managed volumes.
If you store actual addresses in another data set, those addresses become nonvalid
if the first data set is moved or migrated. Although you can mark the data set with
the unmovable attribute in DSORG, that prevents the data set from being SMS
managed.
Relative Addresses
| BDAM, BSAM, and BPAM optionally use relative addresses to identify blocks in
| the data set.
BSAM and BPAM relative addresses are relative to the data set on the current
volume. BDAM relative addresses are relative to the data set and go across all
volumes.
| BDAM relative block addresses. The relative block address is a 3-byte binary
| number that shows the position of the block, starting from the first block of the
| data set. Allocation of noncontiguous sets of blocks does not affect the number. The
| first block of a data set always has a relative block address of 0.
| BDAM, BSAM, and BPAM Relative Track Addresses. With BSAM you can use
| relative track addresses in basic or large format data sets. With BPAM you can use
| relative track addresses in PDSs. The relative track address has the form TTR or
| TTTR:
|| TT or TTT An unsigned two-byte or three-byte binary number specifying the position of the
| track starting from the first track allocated for the data set. It always is two bytes
| with BDAM and it is two bytes with BSAM and BPAM when you do not specify
| the BLOCKTOKENSIZE=LARGE parameter on the DCBE macro. It is three bytes
| with BSAM and BPAM when you specify the BLOCKTOKENSIZE=LARGE
| parameter on the DCBE macro. The value for the first track is 0. Allocation of
| noncontiguous sets of tracks does not affect the relative track number.
| R 1-byte binary number specifying the number of the block starting from the first
| block on the track TT or TTT. The R value for the first block of data on a track is
| 1.
|
| With some devices, such as the IBM 3380 Model K, a data set can contain more
than 32 767 tracks. Therefore, assembler halfword instructions could result in
non-valid data being processed.
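The TTR and TTTR forms can be sketched in Python. Using unsigned arithmetic sidesteps the signed-halfword limit just noted; the helper names are hypothetical:

```python
def pack_ttr(track, record, blocktokensize_large=False):
    """Build a relative track address: TTR (3 bytes), or TTTR (4 bytes
    when BLOCKTOKENSIZE=LARGE is in effect). The track number is
    0-based; the first block of data on a track is record 1."""
    tt_len = 3 if blocktokensize_large else 2
    if track >= 1 << (8 * tt_len):
        raise ValueError("track number does not fit in the TT/TTT field")
    return track.to_bytes(tt_len, "big") + bytes([record])


def unpack_ttr(token):
    """Split a TTR or TTTR token into (track, record)."""
    return int.from_bytes(token[:-1], "big"), token[-1]
```

Treating TT as an unsigned two-byte value allows track numbers up to 65 535; tracks beyond that need the three-byte TTT form.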
A multistriped data set appears to the user as a single logical volume.
Therefore, for a multistriped data set, the relative block number (RBN) is
relative to the beginning of the data set and incorporates all stripes.
Relative Track Addresses for PDSEs. For PDSEs, the relative track addresses
(TTRs) do not represent the actual track and record location. Instead, the TTRs are
tokens that define the record’s position within the data set. See “Relative Track
Addresses (TTR)” on page 443 for a description of TTRs for PDSE members and
| blocks. Whether the value is three bytes or four bytes depends on whether you
| specify BLOCKTOKENSIZE=LARGE on the DCBE macro.
Relative Track Addresses for UNIX files. For UNIX files, the relative track
addresses (TTRs) do not represent the actual track and record location. Instead, the
TTRs are tokens that define a BPAM logical connection to a UNIX member or the
| record’s position within the file. Whether the value is three bytes or four bytes
| depends on whether you specify BLOCKTOKENSIZE=LARGE on the DCBE
| macro.
Because data sets on magnetic tape devices must be organized sequentially, the
procedure for allocating space is different from allocating space on DASD. All data
sets that are stored on a given magnetic tape volume must be recorded in the same
density. See z/OS DFSMS Using Magnetic Tapes for information about magnetic tape
volume labels and tape processing.
Related reading: For information about nonstandard label processing routines, see
z/OS DFSMS Installation Exits.
Restriction: The ISO/ANSI (AL) labeled tapes do not allow a file sequence
number greater than 9999.
Related reading: For additional information about using file sequence numbers,
see z/OS DFSMS Using the New Functions and z/OS DFSMS Using Magnetic Tapes.
You can specify the file sequence number in one of the following ways:
v Code the file sequence number as the first value of the LABEL keyword on the
DD statement, or use the DYNALLOC macro for dynamic allocation.
v Catalog each data set using the appropriate file sequence number and volume
serial number. Issue the OPEN macro because the catalog provides the file
sequence number.
OPEN uses the file sequence number from the catalog if you do not specify it on
the DD statement or dynamic allocation.
v You can use the RDJFCB macro to read the job file control block (JFCB), set the
file sequence number in the JFCB, and issue the OPEN, TYPE=J macro for a new
or uncataloged data set. The maximum file sequence number is 65 535. This
method overrides other sources of the file sequence number.
Related reading: For more information on the OPEN macro, see z/OS DFSMS
Macro Instructions for Data Sets. For more information on the RDJFCB and OPEN,
TYPE=J macros, see z/OS DFSMSdfp Advanced Services. For more information on
IEHPROGM, see z/OS DFSMSdfp Utilities.
Example:
//* STEP05
//* Create a tape data set with a file sequence number of 10 011.
//* Update the file sequence number (FSN) in JFCB using OPEN TYPE=J macro.
//*--------------------------------------------------------------------
//STEP05 EXEC ASMHCLG
//C.SYSIN DD *
. . .
         L     6,=F'10011'              CREATE FSN 10011
         RDJFCB (DCBAD)                 READ JFCB
         STCM  6,B'0011',JFCBFLSQ       STORE NEW FSN IN JFCB
         OPEN  (DCBAD,(OUTPUT)),TYPE=J  CREATE FILE
         PUT   DCBAD,RECORD             WRITE RECORD
         CLOSE DCBAD                    CLOSE FILE
. . .
DCBAD    DCB   DDNAME=DD1,DSORG=PS,EXLST=LSTA,MACRF=PM,LRECL=80,RECFM=FB
LSTA     DS    0F                       RDJFCB EXIT LIST
         DC    X'87'                    CODE FOR JFCB
         DC    AL3(JFCBAREA)            POINTER TO JFCB AREA
JFCBAREA DS    XL176                    JFCB AREA
         IEFJFCBN                       DEFINE THE JFCB FIELDS
RECORD   DC    CL80'RECORD10011'        RECORD AREA
         END
//* JCL FOR ALLOCATING TAPE DATA SET
//DD1 DD DSN=DATASET1,UNIT=TAPE,VOL=SER=TAPE01,DISP=(NEW,CATLG),
// LABEL=(1,SL)
Result: The output displays information about the new tape data set with a file
sequence number of 10 011:
IEC205I DD1,OCEFS005,G.STEP05,FILESEQ=10011, COMPLETE VOLUME LIST,
DSN=DS10011,VOLS=TAPE01,TOTALBLOCKS=1
Example:
//* STEP06
//* Create files 1 through 10 010 on a single volume.
//*--------------------------------------------------------------
//STEP06 EXEC ASMHCLG
//C.SYSIN DD *
. . .
         L     6,=F'10010'              CREATE 10 010 FILES
         LA    5,1                      START AT FILE 1 AND DS1
         RDJFCB (DCBAD)                 READ JFCB
         MVC   JFCBAREA(44),=CL44'DS'   DSNAME IS 'DSfsn' WHERE
Result: This excerpt from the output shows information about the tape data set
with a file sequence number of 9999:
IEC205I DD1,OCEFS001,G.STEP06,FILESEQ=09999, COMPLETE VOLUME LIST,
DSN=DS09999,VOLS=TAPE01,TOTALBLOCKS=1
If you want to catalog or pass data sets that reside on unlabeled volumes, specify
the volume serial numbers for the required volumes. Specifying the volume serial
numbers ensures that data sets residing on multiple volumes are not cataloged or
passed with duplicate volume serial numbers. Retrieving such data sets can give
unpredictable errors.
When a program writes data on a nonstandard labeled tape, the installation must
supply routines to process labels and tape marks and to position the tape. If you
want the system to retrieve a data set, the installation routine that creates
nonstandard labels must write tape marks. Otherwise, tape marks are not required
after nonstandard labels because installation routines manage positioning of the
tape volumes.
Notes:
1. The data-in-virtual (DIV) macro, which is used to access a linear data set, is
described in z/OS MVS Programming: Assembler Services Guide.
2. PDSs and PDSEs are both partitioned organization data sets.
3. BSAM and QSAM cannot be used to create or modify user data in directory
entries.
4. Refers to fixed-length and variable-length RRDSs.
5. Sequential data sets and extended-format data sets are both sequential
organization data sets.
6. A UNIX file can be in any type of z/OS UNIX file system such as HFS, NFS,
TFS, or zFS.
7. When you access a UNIX file with BSAM or QSAM, the file is simulated as a
single-volume sequential data set.
8. When you access a UNIX directory and its files with BPAM, they are simulated
as if they were a PDS or PDSE. One or more directories (with separate DD
statements) can be in a concatenation with real PDSs and PDSEs.
9. When you access a UNIX file with VSAM, the file is simulated as an ESDS.
Data sets can also be organized as PDSE program libraries. PDSE program libraries
can be accessed with BSAM, QSAM, or the program management binder. The first
member written in a PDSE library determines the library type, either program or
data.
Related reading: For information about dynamic allocation, see z/OS MVS
Programming: Authorized Assembler Services Guide.
You can use any of the following methods to allocate a data set.
ALLOCATE Command
You can issue the ALLOCATE command either through access method services or
TSO/E to define VSAM and non-VSAM data sets.
JCL
All data sets can be defined directly through JCL.
Related reading: For information about access method services commands see
z/OS DFSMS Access Method Services for Catalogs. For information about TSO
commands, see z/OS TSO/E Command Reference. For information about using JCL,
see z/OS MVS JCL Reference and z/OS MVS JCL User’s Guide.
BSAM, QSAM, BPAM, and VSAM convert between record-oriented data and
byte-stream oriented data that is stored in UNIX files.
You can use 31-bit addressing mode to access these areas above 16 MB. See Chapter 17, “Using
31-Bit Addressing Mode with VSAM,” on page 263.
The BSAM, BPAM, QSAM, and BDAM access methods let you create certain data
areas, buffers, certain user exits, and some control blocks in virtual storage above
the 16 MB line if you run the macro in 31-bit mode. See z/OS DFSMS Macro
Instructions for Data Sets.
//ddname DD DSN=LIBNAME(MEMNAME),...
v You can use BSAM or QSAM macros to add or retrieve UNIX files. The OPEN
and CLOSE macros handle data set positioning and directory maintenance. Code
the DSORG=PS parameter in the DCB macro, and the DDNAME parameter of
the JCL DD statement with a complete path and filename as follows:
//ddname DD PATH=’/dir1/dir2/file’, ...
You can then use BPAM to read files as if they were members of a PDS or PDSE.
v When you create a PDS, the SPACE parameter defines the size of the data set
and its directory so the system can allocate data set space. For a PDS, the SPACE
parameter preformats the directory. The specification of SPACE for a PDSE is
different from the specification for a PDS. See “Allocating Space for a PDSE” on
page 447.
v You can use the STOW macro to add, delete, change, or replace a member name
or alias in the PDS or PDSE directory, or clear a PDSE directory. You can also
use the STOW macro to delete all the members of a PDSE. However, you cannot
use the STOW macro to delete all the members of a PDS. For program libraries,
you cannot use STOW to add or replace a member name or alias in the
directory.
v You can read multiple members of PDSs, PDSEs, or UNIX directories by passing
a list of members to BLDL; then use the FIND macro to position to a member
before processing it.
v You can code a DCBE and use 31-bit addressing for BPAM.
v PDSs, PDSEs, members, and UNIX files cannot use sequential data striping. See
Chapter 26, “Processing a Partitioned Data Set (PDS),” on page 415 and
Chapter 27, “Processing a Partitioned Data Set Extended (PDSE),” on page 439.
Also see z/OS DFSMS Macro Instructions for Data Sets for information about
coding the DCB (BPAM) and DCBE macros.
BSAM Processing
When you use BSAM to process a sequential data set and members of a PDS or
PDSE, the following rules apply:
v BSAM can read a member of a PDSE program library, but not write the member.
v The application program must block and unblock its own input and output
records. BSAM only reads and writes data blocks.
v The application program must manage its own input and output buffers. It must
give BSAM a buffer address with the READ macro, and it must fill its own
output buffer before issuing the WRITE macro.
v The application program must synchronize its own I/O operations by issuing a
CHECK macro for each READ and WRITE macro issued.
v BSAM lets you process blocks in a nonsequential order by repositioning with the
NOTE and POINT macros.
v You can read and write direct access storage device record keys with BSAM.
PDSEs and extended-format data sets are an exception.
QSAM Processing
When you use QSAM to process a sequential data set and members of a PDS or
PDSE, the following rules apply:
v QSAM processes all record formats except blocks with keys.
v QSAM blocks and unblocks records for you automatically.
v QSAM manages all aspects of I/O buffering for you automatically. The GET
macro retrieves the next sequential logical record from the input buffer. The PUT
macro places the next sequential logical record in the output buffer.
v QSAM gives you three transmittal modes: move, locate, and data. The three
modes give you greater flexibility managing buffers and moving data.
Programs can access the information in UNIX files through z/OS UNIX system
calls or through standard z/OS access methods and macro instructions. To identify
and access a data file, specify the path leading to it.
You can access a UNIX file through BSAM or QSAM (DCB DSORG=PS), BPAM
(DSORG=PO), or VSAM (simulated as an ESDS) by specifying PATH=pathname in
the JCL data definition (DD) statement, SVC 99, or TSO ALLOCATE command.
BSAM, QSAM, BPAM, and VSAM use the following types of UNIX files:
v Regular files
v Character special files
v First-in-first-out (FIFO) special files
v Hard or soft links
v Named pipes
BSAM, QSAM, and VSAM do not support the following types of UNIX files:
v Directories, except indirectly through BPAM, which does not support direct reading of the directory
v External links
Data can reside on a system other than the one the user program is running on
without using shared DASD. The other system can be z/OS or non-z/OS. NFS
transports the data.
Because the system stores UNIX files in a byte stream, UNIX files cannot simulate
all the characteristics of sequential data sets, partitioned data sets, or ESDSs.
Certain macros and services have incompatibilities or restrictions when they
process UNIX files. For example:
v Data set labels and unit control blocks (UCBs) do not exist for UNIX files. Any
service that relies on a DSCB or UCB for information might not work with these
files.
v With traditional MVS data sets, other than VSAM linear data sets, the system
maintains record boundaries. That is not true with byte-stream files such as
UNIX files.
Related Reading: For more information about the following topics, see:
v Chapter 28, “Processing z/OS UNIX Files,” on page 481
v “Simulated VSAM Access to UNIX files” on page 81
v For information on coding the DCB and DCBE macros for BSAM, QSAM,
BPAM, and EXCP, see z/OS DFSMS Macro Instructions for Data Sets.
Guideline: Do not use the EXCP and XDAP macros to access data. These macros
cannot be used to process PDSEs, extended-format data sets, VSAM data sets,
UNIX files, dummy data sets, TSO/E terminals, spooled data sets, or OAM objects.
The use of EXCP, EXCPVR, and XDAP requires detailed knowledge of channel
programs, error recovery, and physical data format. Use BSAM, QSAM, BPAM, or
VSAM instead of the EXCP and XDAP macros to access data.
Distributed file manager creates and associates DDM attributes with data sets. The
DDM attributes describe the characteristics of the data set, such as the file size
class and last access date. The end user can determine whether a specific data set
has associated DDM attributes by using the ISMF Data Set List panel and the
IDCAMS DCOLLECT command.
Distributed file manager also provides the ability to process data sets along with
their associated attributes. Any DDM attributes associated with a data set cannot
be propagated with the data set unless DFSMShsm uses DFSMSdss as its data
mover. See z/OS DFSMS DFM Guide and Reference for information about the DDM
file attributes.
A VIO data set appears to the application program to occupy one unshared virtual
(simulated) direct access storage volume. This simulated volume is like a real
direct access storage volume except for the number of tracks and cylinders. A VIO
data set can occupy up to 65 535 tracks even if the device being simulated does not
have that many tracks.
A VIO data set always occupies a single extent (area) on the simulated device. The
size of the extent is equal to the primary space amount plus 15 times the
secondary amount (VIO data size = primary space + (15 × secondary space)). An
easy way to specify the largest possible VIO data set in JCL is SPACE=(TRK,65535).
You can set this limit lower. Specifying ALX (all extents) or MXIG (maximum
contiguous extents) on the SPACE parameter results in the largest extent allowed
on the simulated device, which can be less than 65 535 tracks.
Do not allocate a VIO data set with zero space. Failure to allocate space to a VIO
data set will cause unpredictable results when reading or writing.
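The extent formula above can be sketched in Python (illustrative only; the function name is hypothetical):

```python
VIO_MAX_TRACKS = 65535


def vio_data_set_size(primary, secondary):
    """Maximum VIO data set size in tracks: the single simulated extent
    covers the primary amount plus 15 secondary amounts, and can never
    exceed 65 535 tracks."""
    if primary <= 0:
        raise ValueError("do not allocate a VIO data set with zero space")
    return min(primary + 15 * secondary, VIO_MAX_TRACKS)
```

For example, SPACE=(TRK,(100,50)) yields at most 100 + 15 × 50 = 850 tracks, while any request working out to more than 65 535 tracks is capped at that limit.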
A summary of the effects of ALX or MXIG with VIO data sets follows.
A data set name can be a single name segment or a series of up to twenty-two
joined name segments. Each name segment represents a level of qualification. For
example, the data set name DEPT58.SMITH.DATA3 is composed of three name segments.
The first name on the left is called the high-level qualifier; the last is the
low-level qualifier.
Data set names must not exceed 44 characters, including all name segments and
periods.
See “Naming a Cluster” on page 104 and “Naming an Alternate Index” on page
119 for examples of naming a VSAM data set.
Restriction: The use of name segments longer than 8 characters would produce
unpredictable results.
You should use only the low-level qualifier GxxxxVyy, in which xxxx and yy are
numbers, in the names of generation data sets. Define a data set with GxxxxVyy as
the low-level qualifier of non-generation data sets only if a generation data group
with the same base name does not exist. However, IBM recommends that you
restrict GxxxxVyy qualifiers to generation data sets, to avoid confusing generation
data sets with other types of non-VSAM data sets.
For example, the following names are not valid data set names:
v A name that contains a segment longer than 8 characters (HLQ.ABCDEFGHI.XYZ)
v A name that contains two successive periods (HLQ..ABC)
v A name that ends with a period (HLQ.ABC.)
v A name that contains a segment that does not start with an alphabetic or
national character (HLQ.123.XYZ)
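The naming rules above can be sketched as a Python validity check. This is illustrative only; the function name is hypothetical, and the allowance for hyphens within segments reflects common data set naming rules rather than this section's text:

```python
import re

# One segment: 1 - 8 characters, starting with an alphabetic (A-Z) or
# national ($, #, @) character; remaining characters alphanumeric,
# national, or hyphen.
_SEGMENT = re.compile(r"[A-Z$#@][A-Z0-9$#@-]{0,7}\Z")


def is_valid_dsname(name):
    """Check a data set name against the rules above: at most 44
    characters overall, and every period-delimited segment valid.
    Empty segments cover the '..' and trailing-period cases."""
    if not name or len(name) > 44:
        return False
    return all(_SEGMENT.match(seg) for seg in name.upper().split("."))
```

Each of the invalid examples listed above fails this check, while DEPT58.SMITH.DATA3 passes.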
VTOC
The VTOC lists the data sets that reside on its volume, along with information
about the location and size of each data set, and other data set attributes. See z/OS
DFSMSdfp Advanced Services for information about the VTOC structure.
(Application programmers usually do not need to know the contents of the VTOC.)
Also see Appendix A, “Using Direct Access Labels,” on page 561.
Catalogs
A catalog describes data set attributes and indicates the volumes on which a data
set is located. Data sets can be cataloged, uncataloged, or recataloged. All
system-managed DASD data sets are cataloged automatically in a catalog.
Cataloging of data sets on magnetic tape is not required, but it usually
simplifies users' jobs. All data sets can be cataloged in a catalog.
Non-VSAM data sets can also be cataloged through the catalog management
macros (CATALOG and CAMLST). An existing data set can be cataloged through
the access method services DEFINE RECATALOG command.
Access method services is also used to establish aliases for data set names and to
connect user catalogs to the master catalog. See z/OS DFSMS Managing Catalogs for
information about using catalog management macros.
The APIs that access data set names include the following:
OBTAIN macro Reads a data set control block (DSCB) from a
VTOC.
CVAF macros Read a VTOC and VTOC index. These macros are
CVAFDIR, CVAFDSM, CVAFFILT, CVAFSEQ, and
CVAFTST.
RDJFCB macro You can use the RDJFCB macro to learn the name
of a data set and the volume serial number of a
VSAM data set. You also can use RDJFCB with the
OPEN TYPE=J macro to read a VTOC. When you
use the RDJFCB macro, use a DCB and the exit list
for the DCB because using an ACB and VSAM exit
list would not work.
OPEN TYPE=J macro Can be used to open and read a VTOC. This macro
supplies a job file control block (JFCB), which
represents the information in the DD statement.
VSAM does not support OPEN TYPE=J.
LOCATE macro Locates and extracts information from catalogs.
Catalog search interface Locates and extracts information from catalogs. For
more information, see z/OS DFSMS Managing
Catalogs.
Related reading: For more information on these macros, see z/OS DFSMSdfp
Advanced Services.
Topic Location
Using Automatic Class Selection Routines 29
Allocating Data Sets 30
When you allocate or define a data set to use SMS, you specify your data set
requirements by using a data class, a storage class, and a management class.
Typically, you do not need to specify these classes because a storage administrator
has set up automatic class selection (ACS) routines to determine which classes to
use for a data set.
The Storage Management Subsystem (SMS) can manage tape data sets on native
volumes in a tape library and on the logical volumes in a Virtual Tape Server
(VTS). DFSMSrmm provides some services for the stacked volumes contained in a
Virtual Tape Server. See z/OS DFSMSrmm Implementation and Customization Guide.
v SMS must be active when you allocate a new data set to be SMS managed.
v Job steps in which a JOBCAT or STEPCAT DD statement is used cannot use
system-managed data sets.
v Your storage administrator must be aware that ACS routines are used for data
sets created with distributed file manager (DFM). These data sets must be
system managed. If the storage class ACS routine does not assign a storage class,
distributed file manager deletes the just-created data set, because distributed file
manager does not create non-system-managed data sets. Distributed file
manager does, however, access non-system-managed data sets.
Table 3 lists the storage management functions and products you can use with
system-managed and non-system-managed data sets. For details, see z/OS
DFSMSdfp Storage Administration Reference.
Table 3. Data Set Activity for Non-System-Managed and System-Managed Data Sets

Activity                      Non-System-Managed Data        System-Managed Data

Allocation
  Data placement              JCL, storage pools             ACS, storage groups
  Allocation control          Software user installation     ACS
                              exits
  Allocation authorization,   RACF3, JCL, IDCAMS,            RACF3, data class, JCL,
  definition                  TSO/E, DYNALLOC                IDCAMS, TSO/E, DYNALLOC

Access
  Access authorization        RACF3                          RACF3
  Read/write performance,     Manual placement, JCL,         Management and storage
  availability                DFSMSdss1, DFSMShsm2           class
  Access method access to     Not available                  JCL (PATH=)
  UNIX byte stream

Space Management
  Backup                      DFSMShsm2, DFSMSdss1,          Management class
                              utilities
  Expiration                  JCL                            Management class
  Release unused space        DFSMSdss1, JCL                 Management class, JCL
  Deletion                    DFSMShsm2, JCL, utilities      Management class, JCL
  Migration                   DFSMShsm2                      Data and management class,
                                                             JCL
Notes:
1. DFSMSdss: Moves data (dump, restore, copy, and move) between volumes on DASD
devices, manages space, and converts data sets or volumes to SMS control. See z/OS
DFSMSdss Storage Administration Guide for information about using DFSMSdss.
2. DFSMShsm: Manages space, migrates data, and backs up data through SMS classes and
groups. See z/OS DFSMShsm Managing Your Own Data for information about using
DFSMShsm.
3. RACF: Controls access to data sets and use of system facilities.
v Unmovable data sets (DSORG is xxU) except when set by a checkpoint function
v Data sets with absolute track allocations (ABSTR value for SPACE parameter on
DD statement)
v Tape data sets
v Spooled data sets
Direct data sets (BDAM) can be system-managed, but if a program uses OPTCD=A,
the program might become dependent on where the data set is on the disk. For
example, the program might record the cylinder and head numbers in a data set.
Such a data set should not be migrated or moved. You can specify a management
class that prevents automatic migration.
You can use a storage class and a management class only with system-managed
data sets. You can use a data class for data sets that are either system managed or
not system managed, and for data sets on either DASD or tape volumes. SMS can
manage tape data sets on physical volumes in a tape library and on the logical
volumes in a Virtual Tape Server (VTS). DFSMSrmm provides some services for
the stacked volumes contained in a Virtual Tape Server (see z/OS DFSMSrmm
Implementation and Customization Guide). Your storage administrator defines the data
classes, storage classes, and management classes your installation will use. Your
storage administrator provides a description of each named class, including when
to use the class.
Using a data class, you can easily allocate data sets without specifying all of the
data set attributes normally required. Your storage administrator can define
standard data set attributes and use them to create data classes, for use when you
allocate your data set. For example, your storage administrator might define a data
class for data sets whose names end in LIST and OUTLIST because they have
similar allocation attributes. The ACS routines can then be used to assign this data
class, if the data set names end in LIST or OUTLIST.
See z/OS DFSMS Access Method Services for Catalogs (ALLOCATE and DEFINE CLUSTER
command sections) for information about the attributes that can be assigned
through the SMS class parameters, and examples of defining data sets.
Another way to allocate a data set without specifying all of the data set attributes
normally required is to model the data set after an existing data set. You can do
this by referring to the existing data set in the DD statement for the new data set,
using the JCL keywords LIKE or REFDD. See z/OS MVS JCL Reference and z/OS
MVS JCL User’s Guide.
| In this book we often use the term “creation” to refer to the first meaning of
| “allocation” although creation often includes the meaning of writing data into the
| data set.
To allocate a new data set on DASD, you can use any of the following methods:
v JCL DD statements. See z/OS MVS JCL Reference.
v Access method services ALLOCATE command or DEFINE command. See z/OS
DFSMS Access Method Services for Catalogs for the syntax and more examples.
v TSO ALLOCATE command. See z/OS TSO/E Command Reference for the syntax
and more examples.
v DYNALLOC macro using the SVC 99 parameter list. See z/OS MVS
Programming: Authorized Assembler Services Guide.
To update an existing data set, specify a DISP value of OLD, MOD, or SHR. Do
not use DISP=SHR while updating a sequential data set unless you have some
other means of serialization because you might lose data.
To share a data set during access, specify a DISP value of SHR. If a DISP value of
NEW, OLD, or MOD is specified, the data set cannot be shared.
Tip: If SMS is active and a new data set is a type that SMS can manage, you
cannot determine from the JCL alone whether the data set will be
system-managed, because an ACS routine can assign a storage class to any data set.
Related reading: For more information, see “Using HFS Data Sets” on page 483.
You can request the name of the data class, storage class, and management class in
the JCL DD statement. However, in most cases, the ACS routines pick the classes
needed for the data set.
When first allocated, the PDSE is neither a program library nor a data library. If the
first member written, by either the binder or by IEBCOPY, is a program object, the
library becomes a program library and remains such until the last member has
been deleted. If the first member written is not a program object, then the PDSE
becomes a data library. Program objects and other types of data cannot be mixed in
the same PDSE library.
| Allocating a Large Format Data Set. Large format data sets are sequential data
| sets that can grow beyond 65 535 tracks (4369 cylinders) per volume. Large format
| data sets can be system-managed or not. You can allocate a large format data set
| using the DSNTYPE=LARGE parameter on the DD statement, dynamic allocation
| (SVC 99), TSO/E ALLOCATE or the access method services ALLOCATE command,
| or the data class.
| Allocating a Basic Format Data Set. Basic format data sets are sequential data sets
| that are specified as neither extended format nor large format. Basic format data
| sets have a size limit of 65 535 tracks (4369 cylinders) per volume. Basic format
| data sets can be system-managed or not. You can allocate a basic format data set
| using the DSNTYPE=BASIC parameter on the DD statement, dynamic allocation
| (SVC 99), TSO/E ALLOCATE or the access method services ALLOCATE command,
| or the data class. If no DSNTYPE value is specified from any of these sources, then
| its default is BASIC.
| Note: The data class cannot contain a DSNTYPE of BASIC; leave DSNTYPE blank
| to get BASIC as the default value.
Allocating a VSAM Data Set. See Chapter 18, “Using Job Control Language for
VSAM,” on page 265 for information about allocating VSAM data sets using JCL.
Allocating a PDSE
The following example shows the ALLOCATE command used with the DSNTYPE
keyword to create a PDSE. DSNTYPE(LIBRARY) indicates the data set being
allocated is a PDSE.
//ALLOC EXEC PGM=IDCAMS,DYNAMNBR=1
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
ALLOC -
DSNAME(XMP.ALLOCATE.EXAMPLE1) -
NEW -
STORCLAS(SC06) -
MGMTCLAS(MC06) -
DSNTYPE(LIBRARY)
/*
The following parameters conclude a second ALLOCATE example, which allocates
a sequential data set by specifying its attributes directly rather than through SMS
classes:
BLKSIZE(1000) -
LRECL(100) -
DSORG(PS) -
UNIT(3380) -
VOL(338002) -
RECFM(F,B)
/*
With the TSO/E ALLOCATE command, you do not have to specify the user ID,
GOLD, as an explicit qualifier, because TSO adds your user ID as the high-level
qualifier. Because the BLKSIZE parameter is omitted, the system determines a
block size that optimizes space usage.
The following example allocates a new VSAM entry-sequenced data set with a
logical record length of 80 and a block size of 8192, on two tracks. To allocate a VSAM
data set, specify the RECORG keyword on the ALLOCATE command. RECORG is
mutually exclusive with DSORG and with RECFM. To allocate a key-sequenced
data set, you also must specify the KEYLEN parameter. RECORG specifies the type
of data set you want.
ALLOC DA(EX2.DATA) RECORG(ES) SPACE(2,0) TRACKS LRECL(80)
BLKSIZE(8192) NEW
Topic Location
Specification of Space Requirements 35
Maximum Data Set Size 37
Primary and Secondary Space Allocation without the Guaranteed Space Attribute 38
Allocation of Data Sets with the Guaranteed Space Attribute 40
Allocation of Data Sets with the Space Constraint Relief Attributes 41
Extension to Another DASD Volume 42
Multiple Volume Considerations for Sequential Data Sets 44
Additional Information on Space Allocation 44
The system can use a data class if SMS is active even if the data set is not SMS
managed. For system-managed data sets, the system selects the volumes.
Therefore, you do not need to specify a volume when you define your data set.
If you specify your space request by average record length, space allocation is
independent of device type. Device independence is especially important to
system-managed storage.
Blocks
When the amount of space required is expressed in blocks, you must specify the
number and average length of the blocks within the data set, as in this example:
// DD SPACE=(300,(5000,100)), ...
From this information, the operating system estimates and allocates the number of
tracks required.
The system uses this block length value only to calculate space. This value does
not have to be the same as the BLKSIZE value. If the data set is extended format,
the system adds 32 to this value when calculating space.
Recommendation: For sequential and partitioned data sets, let the system calculate
the block size instead of requesting space by average block length. See
“System-Determined Block Size” on page 329.
If the average block length of the real data does not match the value coded here,
the system might allocate much too little or much too much space.
The AVGREC keyword determines the scale applied to the primary and secondary
space quantities:
U—Use a scale of 1
K—Use a scale of 1024
M—Use a scale of 1048576
When the AVGREC keyword is specified, the values specified for primary and
secondary quantities in the SPACE keyword are multiplied by the scale and those
new values will be used in the space allocation. For example, the following request
results in the primary and secondary quantities being multiplied by 1024:
// DD SPACE=(80,(20,2)),AVGREC=K, ...
From this information, the operating system estimates and allocates the number of
tracks required using one of the following block lengths, in the order indicated:
1. 4096, if the data set is a PDSE.
2. The BLKSIZE parameter on the DD statement or the BLKSIZE subparameter of
the DCB parameter on the DD.
3. The system determined block size, if available.
4. A default value of 4096.
For an extended-format data set, the operating system uses a value 32 larger than
the above block size. The primary and secondary space are divided by the block
length to determine the number of blocks requested. The operating system
determines how many blocks of the block length can be written on one track of the
device. The primary and secondary space in blocks is then divided by the number
of blocks per track to obtain a track value, as shown in the examples below. These
examples assume a block length of 23200. Two blocks of block length 23200 can be
written on a 3380 device:
(1.6MB / 23200) / 2 = 36 = primary space in tracks
(160KB / 23200) / 2 = 4 = secondary space in tracks
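The track estimate described above can be sketched as a short calculation. The rounding points here are assumptions, chosen so that the result matches the 3380 figures in the example; the actual system algorithm may round differently at intermediate steps.

```python
import math

def space_in_tracks(avg_len, quantity, scale, calc_block_len, blocks_per_track,
                    extended_format=False):
    """Estimate the track allocation for a SPACE/AVGREC request, following
    the scheme described above. For an extended-format data set the system
    adds 32 to the block length used in the calculation."""
    if extended_format:
        calc_block_len += 32
    total_bytes = avg_len * quantity * scale   # e.g. AVGREC=K gives scale 1024
    blocks = total_bytes / calc_block_len      # space expressed in blocks
    return math.ceil(blocks / blocks_per_track)

# SPACE=(80,(20,2)),AVGREC=K with a calculation block length of 23200
# and two blocks per track on a 3380:
primary = space_in_tracks(80, 20, 1024, 23200, 2)    # 36 tracks
secondary = space_in_tracks(80, 2, 1024, 23200, 2)   # 4 tracks
```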
Tracks or Cylinders
The following example shows the amount of space required in tracks or cylinders:
// DD SPACE=(TRK,(100,5)), ...
// DD SPACE=(CYL,(3,1)), ...
Absolute Track
If the data set contains location-dependent information in the form of an actual
track address (such as MBBCCHHR or CCHHR), you can request space in the number of
tracks and the beginning address. In this example, 500 tracks are required, beginning
at relative track 15, which is cylinder 1, track 0:
// DD SPACE=(ABSTR,(500,15)),UNIT=3380, ...
Restriction: Data sets that request space by absolute track are not eligible to be
system managed and they interfere with DASD space management done by the
space management products and the storage administrator. Avoid using absolute
track allocation.
Data sets that are not limited to 65 535 total tracks allocated on any one volume
are:
| v Large format sequential
v Extended-format sequential
v UNIX files
v PDSE
v VSAM
If a virtual input-output (VIO) data set is to be SMS managed, the VIO maximum
size is 2 000 000 KB, as defined in the Storage Group VIO Maxsize parameter.
| A multivolume direct (BDAM) data set is limited to 255 extents across all volumes.
| The system currently does not enforce this limit when creating the data set.
| Using extended addressability, the size limit for a VSAM data set is determined by
| either:
| v Control interval size multiplied by 4 GB
| v The volume size multiplied by 59.
| A control interval size of 4 KB yields a maximum data set size of 16 TB, while a
| control interval size of 32 KB yields a maximum data set size of 128 TB. A control
| interval size of 4 KB is preferred by many applications for performance reasons.
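The arithmetic behind these limits can be checked directly: the maximum is the control interval size multiplied by 4 G (2**32) control intervals.

```python
def max_dataset_bytes(ci_size_bytes):
    """Maximum size of a VSAM data set with extended addressability:
    control interval size multiplied by 4 G control intervals."""
    return ci_size_bytes * (2 ** 32)

TB = 2 ** 40
assert max_dataset_bytes(4 * 1024) // TB == 16     # 4 KB CI  -> 16 TB
assert max_dataset_bytes(32 * 1024) // TB == 128   # 32 KB CI -> 128 TB
```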
| No increase in processing time is expected for extended format data sets that grow
| beyond 4 GB. To use extended addressability, the data set must be:
| v SMS-managed
| v Defined as extended format.
Data sets allocated in extended format gain the added benefits of improved error
detection when writing to DASD, as well as a more efficient and functionally
complete interface to the I/O subsystem.
Table 4 shows how stripes for an extended-format sequential data set are different
from stripes for an extended-format VSAM data set.
Table 4. Differences Between Stripes in Sequential and VSAM Data Sets
Sequential extended-format striped data sets:
v The data set can have a maximum of 59 stripes.
v Each stripe must reside on one volume and cannot be extended to another
volume.
v After the system fills a track, it writes the following blocks on a track in the
next stripe.
v You can use the BSAM and QSAM access methods.
VSAM extended-format striped data sets:
v The data set can have a maximum of 16 stripes.
v Each stripe can reside on one or more volumes. There is no advantage to
increasing the number of stripes for VSAM to be able to acquire additional
space. When extending a stripe to a new volume, the system derives the
amount of the first space allocated according to the Additional Volume Amount
in the data class. This space is derived from the primary or secondary space.
The default value is the primary space amount.
v After the system writes a control interval (CI), it writes the next CI on a track in
the next stripe. A CI cannot span stripes.
v You can use the VSAM access method.
4. The system will attempt to allocate new space only on the last volume. On
that volume secondary amounts continue to be allocated until the volume is
out of space or the data set extent limit is reached.
The amount of preallocated space for VSAM striped data is limited to 16 volumes.
Allocations and extends to new volumes proceed normally until space cannot be
obtained by normal means.
The system performs space constraint relief in two situations: when a new data set
is allocated and when a data set is extended to a new volume. During EOV
processing, space constraint relief affects the primary or secondary allocation
amount for VSAM data sets, or the secondary allocation amount for non-VSAM
data sets. During CREATE processing, the primary quantity might be reduced for
both non-VSAM and VSAM data sets.
Exception: The system does not use space constraint relief when data sets are
extended on the same volume.
The allocation fails as before if either or both methods 1 and 2 are not successful.
Recommendation: You can specify 0% in the data class for this parameter so space
is not reduced.
SMS removes the 5-extent-at-a-time limit. (For example, sequential data sets can
have a maximum of 16 extents.) Without this change, the system tries to satisfy
your primary or secondary space request with no more than five extents. If you
request a large amount of space or the space is fragmented, the system might need
more than five extents.
Restriction: VSAM and non-VSAM multistriped data sets do not support space
constraint relief. However, single-striped VSAM and non-VSAM data sets use
space constraint relief.
| Note: After a multivolume data set is unable to extend on the current volume and
| the data set is extended to a new volume, then all previous volumes can no
| longer be selected for future extensions.
In example 2, although the catalog contains only five candidate volumes, the data
set can be extended to 11 candidate volumes, including the primary volume.
Topic Location
Using REPRO for Backup and Recovery 46
Using EXPORT and IMPORT for Backup and Recovery of VSAM Data Sets 47
Writing a Program for Backup and Recovery 48
Using Concurrent Copy for Backup and Recovery 49
Updating a Data Set After Recovery 49
Synchronizing Catalog and VSAM Data Set Information During Recovery 49
It is important to establish backup and recovery procedures for data sets so you
can replace a destroyed or damaged data set with its backup copy. Generally data
administrators set up automated procedures for backup so you do not have to be
concerned with doing it yourself. SMS facilitates this automation by means of
management class.
There are several methods of backing up and recovering VSAM and non-VSAM
data sets:
v Using Data Facility Storage Management Subsystem Hierarchical Storage
Manager (DFSMShsm™). You can use DFSMShsm only if DSS and DFSMShsm
are installed on your system and your data sets are cataloged in a catalog. For
information about using DFSMShsm backup and recovery, see z/OS DFSMShsm
Managing Your Own Data.
v Using the access method services REPRO command.
v Using the Data Facility Storage Management Subsystem Data Set Services
(DFSMSdss™) DUMP and RESTORE commands. You can use DSS if it is
installed on your system and your data sets are cataloged in a catalog. For
uncataloged data sets, DSS provides full volume, and physical or logical data set
dump functions. For compressed extended format data sets, DFSMShsm
processes the compressed data sets using DFSMSdss as the data mover. When
using DFSMSdss for logical dump/restore with VSAM compressed data sets, the
target data set allocation must be consistent with the source data set allocation.
For DFSMShsm, a VSAM extended format data set that is migrated or backed up
is recalled or recovered only as an extended format data set. For
information about using DFSMSdss, see z/OS DFSMSdss Storage Administration
Reference.
v Writing your own program for backup and recovery.
v For VSAM data sets, using the access method services EXPORT and IMPORT
commands.
v For PDSs, using the IEBCOPY utility.
v Using concurrent copy to take an instantaneous copy. You can use concurrent
copy if your data set resides on DASD attached to IBM storage controls that
support the concurrent copy function.
Each of these methods of backup and recovery has its advantages. You need to
decide the best method for the particular data you want to back up. For the
requirements and processes of archiving, backing up, and recovering data sets
using DFSMShsm, DSS, or ISMF, see z/OS DFSMShsm Managing Your Own Data,
which also contains information on disaster recovery.
Using REPRO for backup and recovery has the following advantages:
v Backup copy is accessible. The backup copy obtained by using REPRO is
accessible for processing. It can be a VSAM data set or a sequential data set.
v Type of data set can be changed. The backup copy obtained by using REPRO
can be a different type of data set than the original. For example, you could back
up a VSAM key-sequenced data set by copying it to a VSAM entry-sequenced
data set. A compressed VSAM key-sequenced data set cannot be copied to a
VSAM entry-sequenced data set using REPRO. The data component of a
compressed key-sequenced data set cannot be accessed by itself.
v Key-sequenced data set or variable-length RRDS is reorganized. Using REPRO
for backup results in data reorganization and the recreation of an index for a
key-sequenced data set or variable-length RRDS. The data records are
rearranged physically in ascending key sequence and free-space quantities are
restored. (Control interval and control area splits can have placed the records
physically out of order.) When a key-sequenced data set is reorganized, absolute
references using the relative byte address (RBA) are no longer valid.
If you are accessing a data set using RLS, see Chapter 14, “Using VSAM
Record-Level Sharing,” on page 219.
REPRO provides you with several options for creating backup copies and using
them for data set recovery. The following are suggested ways to use REPRO:
1. Use REPRO to copy the data set to a data set with a different name.
Either change your references to the original copy or delete the original and
rename the copy.
2. Create a backup copy on another catalog, then use the backup copy to replace
the original.
v Define a data set on another catalog, and use REPRO to copy the original
data set into the new data set you have defined.
v You can leave the backup copy in the catalog it was copied to when you
want to replace the original with the backup copy. Then, change the JCL
statements to reflect the name of the catalog that contains the backup copy.
3. Create a copy of a nonreusable VSAM data set on the same catalog, then delete
the original data set, define a new data set, and load the backup copy into the
newly defined data set.
v To create a backup copy, define a data set, and use REPRO to copy the
original data set into the newly defined data set. If you define the backup
data set on the same catalog as the original data set or if the data set is SMS
managed, the backup data set must have a different name.
v To recover the data set, use the DELETE command to delete the original data
set if it still exists. Next, redefine the data set using the DEFINE command,
then restore it with the backup copy using the REPRO command.
4. Create a copy of a reusable VSAM data set, then load the backup copy into the
original data set. When using REPRO, the REUSE attribute permits repeated
backups to the same VSAM reusable target data set.
v To create a backup copy, define a data set, and use REPRO to copy the
original reusable data set into the newly defined data set.
v To recover the data set, load the backup copy into the original reusable data
set.
5. Create a backup copy of a data set, then merge the backup copy with the
damaged data set. When using REPRO, the REPLACE parameter lets you
merge a backup copy into the damaged data set. You cannot use the REPLACE
parameter with entry-sequenced data sets, because records are always added to
the end of an entry-sequenced data set.
v To create a backup copy, define a data set, and use REPRO to copy the
original data set into the newly defined data set.
v To recover the data set, use the REPRO command with the REPLACE
parameter to merge the backup copy with the destroyed data set. With a
key-sequenced data set, each source record whose key matches a target
record’s key replaces the target record. Otherwise, the source record is
inserted into its appropriate place in the target cluster. With a fixed-length or
variable-length RRDS, each source record, whose relative record number
identifies a data record in the target data set, replaces the target record.
Otherwise, the source record is inserted into the empty slot its relative record
number identifies. When only part of a data set is damaged, you can replace
only the records in the damaged part of the data set. The REPRO command
lets you specify a location to begin copying and a location to end copying.
6. If the index of a key-sequenced data set or variable-length RRDS becomes
damaged, follow this procedure to rebuild the index and recover the data set.
This does not apply to a compressed key-sequenced data set. It is not possible
to REPRO just the data component of a compressed key-sequenced data set.
v Use REPRO to copy the data component only. Sort the data.
v Use REPRO with the REPLACE parameter to copy the cluster and rebuild
the index.
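The REPLACE merge rule for a key-sequenced data set can be modeled as a keyed merge. This is an illustrative sketch of the semantics described in item 5 above, not the REPRO implementation; records are represented as a dictionary keyed by the KSDS key.

```python
def repro_replace_merge(target, backup):
    """Model of REPRO REPLACE semantics for a KSDS: each backup record
    whose key matches a target record replaces it; other backup records
    are inserted in their appropriate key positions."""
    merged = dict(target)   # existing records, keyed by the KSDS key
    merged.update(backup)   # matching keys replaced, new keys inserted
    return dict(sorted(merged.items()))

damaged = {1: "old-A", 2: "old-B", 4: "old-D"}
backup = {2: "new-B", 3: "new-C"}
# -> {1: "old-A", 2: "new-B", 3: "new-C", 4: "old-D"}
```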
Restrictions:
1. Do not use JOBCAT or STEPCAT DD statements for system-managed data sets.
The JOBCAT or STEPCAT DD statement fails if it references a system-managed
catalog, or if the data set that is being searched is system managed. Also, you
must connect all referenced catalogs to the system master catalog.
2. JOBCAT and STEPCAT DD statements are disabled by default. For information
on enabling JOBCAT and STEPCAT DD statements, see z/OS DFSMS Managing
Catalogs.
Using EXPORT and IMPORT for Backup and Recovery of VSAM Data
Sets
Using EXPORT/IMPORT for backup and recovery has the following advantages:
v Key-sequenced data set or variable-length RRDS is reorganized. Using
EXPORT for backup results in data reorganization and the recreation of an index
for a key-sequenced data set or variable-length RRDS. The data records are
rearranged physically in ascending key sequence and free-space quantities are
balanced. (Control interval and control area splits can have placed the records
physically out of order.) When a key-sequenced data set is reorganized, absolute
references using the RBA are no longer valid.
Most catalog information is exported along with the data set, easing the problem
of redefinition. The backup copy contains all of the information necessary to
redefine the VSAM cluster or alternate index when you IMPORT the copy.
You can export entry-sequenced or linear data set base clusters in control interval
mode by specifying the CIMODE parameter. When CIMODE is forced for a linear
data set, a RECORDMODE specification is overridden.
Use the IMPORT command to totally replace a VSAM cluster whose backup copy
was built using the EXPORT command. The IMPORT command uses the backup
copy to replace the cluster’s contents and catalog information.
IMPORT will not propagate distributed data management (DDM) attributes if you
specify the INTOEMPTY parameter. Distributed file manager (DFM) will
reestablish the DDM attributes when the imported data set is first accessed.
Compressed data must not be considered portable. IMPORT will not propagate
extended format or compression information if the user specifies the INTOEMPTY
parameter.
When writing your own program for backup, you can use the JRNAD exit
routine to copy the record you are going to update and write it to a different
data set. When you return to VSAM, VSAM completes the requested update. If
something goes wrong, you have a backup copy. See “JRNAD Exit Routine to
Journalize Transactions” on page 247.
DFSMShsm can use concurrent copy to copy its own control data sets and journal.
Running concurrent copy (like any copy or backup) during off-peak hours results
in better system throughput.
Related reading: For information about using concurrent copy, see z/OS DFSMSdss
Storage Administration Guide.
Backing up the data sets in a user catalog lets you recover from damage to the
catalog. You can import the backup copy of a data set whose entry is lost or you
can redefine the entry and reload the backup copy.
For information about backing up and recovering a catalog, see z/OS DFSMS
Managing Catalogs and z/OS DFSMShsm Managing Your Own Data.
When the last CLOSE for a VSAM data set completes successfully, VSAM turns off
the open-for-output indicator. If the data set is opened for input, however, VSAM
leaves the open-for-output indicator on. It is the successful CLOSE after an OPEN
for output that causes the open-for-output indicator to turn off. Before you use any
data set that was not successfully closed, determine the status of the data in the
data set. Turning off the open-for-output indicator in the catalog does not make the
data set error free.
You can also use the IDCAMS VERIFY command to verify a VSAM data set. When
you issue this command, IDCAMS opens the VSAM data set for output, issues a
VSAM VERIFY macro call, and closes the data set. The IDCAMS VERIFY
command and the verification by VSAM OPEN are the same. Neither changes the
data in the verified data set.
The catalog will be updated from the verified information in the VSAM control
blocks when the VSAM data set which was opened for output is successfully
closed.
The actual VSAM control-block fields that get updated depend on the type of data
set being verified. VSAM control block fields that can be updated include “High
used RBA/CI” for the data set, “High key RBA/CI”, “number of index levels”,
and “RBA/CI of the first sequence set record”.
The VERIFY command should be used following a system failure that caused a
component opened for update processing to be improperly closed. Clusters,
alternate indexes, entry-sequenced data sets, and catalogs can be verified. Paths
over an alternate index and linear data sets cannot be verified. Paths defined
directly over a base cluster can be verified. The VERIFY macro will perform no
function when VSAM RLS is being used. VSAM RLS is responsible for maintaining
data set information in a shared environment.
You should issue the VERIFY command every time you open a VSAM cluster that
is shared across systems. For information about using VERIFY with clusters that
are shared, see “Cross-System Sharing” on page 200.
Duplicate data in a key-sequenced data set, the least likely error to occur, can
result from a failure during a control interval or control area split. To reduce the
number of splits, specify free space for both control intervals and control areas. If
the failure occurred before the index was updated, the insert is lost, no duplicate
exists, and the data set is usable.
If the failure occurred between updating the index and writing the updated control
interval into secondary storage, some data is duplicated. However, you can access
both versions of the data by using addressed processing. If you want the current
version, use REPRO to copy it to a temporary data set and again to copy it back to
a new key-sequenced data set. If you have an exported copy of the data, use the
IMPORT command to obtain a reorganized data set without duplicate data.
If the index is replicated and the error occurred between the write operations for
the index control intervals, but the output was not affected, both versions of the
data can be retrieved. The sequence of operations for a control area split is similar
to that for a control interval split. To recover the data, use the REPRO or IMPORT
command in the same way as for the failure described in the previous paragraph.
Use the journal exit (JRNAD) to determine control interval and control area splits
and the RBA range affected.
You cannot use VERIFY to correct catalog records for a key-sequenced data set, or
a fixed-length or variable-length RRDS after load-mode failure. An entry-sequenced
data set defined with the RECOVERY attribute can be verified after a create (load)
mode failure; however, you cannot run VERIFY against an empty data set or a
linear data set. Any attempt to do either will result in a VSAM logical error. For
information about VSAM issuing the implicit VERIFY command, see “Opening a
Data Set” on page 137.
The following are some of the tasks that you can perform with CICSVR:
v Perform complete recovery to restore and recover lost or damaged VSAM data
sets that were updated by CICS and batch applications.
v Perform logging for batch applications.
v Recover groups of VSAM data sets.
v Process backup-while-open (BWO) VSAM data sets.
v Automate the creation and submission of recovery jobs using an ISPF dialog
interface.
v Use Change Accumulation to consolidate log records and reduce the amount of
time required to recover a VSAM data set.
v Use Selective Forward Recovery to control which log records get applied to the
VSAM data set when you recover it.
Related reading: For more information, see IBM CICS VSAM Recovery
Implementation Guide.
The system ignores password protection for SMS-managed data sets. See “Data Set
Password Protection” on page 55.
If neither a discrete profile nor a generic profile protects a data set, password
protection is in effect.
Related reading: For more information about RACF, see z/OS Security Server RACF
Security Administrator’s Guide.
RACF and password protection can coexist for the same VSAM data set. The
RACF authorization levels of alter, control, update, and read correspond to the
VSAM password levels of master, control, update, and read.
To have password protection take effect for a non-system-managed data set, the
catalog that contains the data set must be either RACF protected or password
protected, and the data set itself must not be defined to RACF. Although
passwords are not supported for an RACF-protected data set, they can still provide
protection if the data set is moved to a system that does not have RACF protection.
Note: VSAM OPEN routines bypass RACF security checking if the program
issuing OPEN is in supervisor state or protection key 0.
Profiles that automatic data set protection (ADSP) processing defines during a data
set define operation are cluster profiles only.
Multivolume data sets. To protect multivolume non-VSAM DASD and tape data
sets, you must define each volume of the data set to RACF as part of the same
volume set.
v When an RACF-protected data set is opened for output and extended to a new
volume, the new volume is automatically defined to RACF as part of the same
volume set.
v When a multivolume physical-sequential data set is opened for output, and any
of the data set’s volumes are defined to RACF, either each subsequent volume
must be RACF-protected as part of the same volume set, or the data set must
not yet exist on the volume.
v The system automatically defines all volumes of an extended sequential data set
to RACF when the space is allocated.
v When an RACF-protected multivolume tape data set is opened for output, either
each subsequent volume must be RACF-protected as part of the same volume
set, or the tape volume must not yet be defined to RACF.
v If the first volume opened is not RACF protected, no subsequent volume can be
RACF protected. If a multivolume data set is opened for input (or a
nonphysical-sequential data set is opened for output), no such consistency check
is performed when subsequent volumes are accessed.
Tape data sets. You can use RACF to provide access control to tape volumes that
have no labels (NL), IBM standard labels (SL), ISO/ANSI standard labels (AL), or
tape volumes referred to with bypass label processing (BLP).
RACF protection of tape data sets is provided on a volume basis or on a data set
basis. A tape volume is defined to RACF explicitly by use of the RACF command
language, or automatically. A tape data set is defined to RACF whenever a data set
is opened for OUTPUT, OUTIN, or OUTINX and RACF tape data set protection is
active, or when the data set is the first file in a sequence. All data sets on a tape
volume are RACF protected if the volume is RACF protected.
If a data set is defined to RACF and is password protected, access to the data set is
authorized only through RACF. If a tape volume is defined to RACF and the data
sets on the tape volume are password protected, access to any of the data sets is
authorized only through RACF. Tape volume protection is activated by issuing the
RACF command SETROPTS CLASSACT(TAPEVOL). Tape data set name
protection is activated by issuing the RACF command SETROPTS
CLASSACT(TAPEDSN). Data set password protection is bypassed. The system
ignores data set password protection for system-managed DASD data sets.
ISO/ANSI Version 3 and Version 4 installation exits that run under RACF will
receive control during ISO/ANSI volume label processing. Control goes to the
RACHECK preprocessing and postprocessing installation exits. The same
IECIEPRM exit parameter list passed to ISO/ANSI installation exits is passed to
the RACF installation exits if the accessibility code is any alphabetic character from
A through Z.
Related reading: For more information about these exits, see z/OS DFSMS
Installation Exits.
The following are examples of passwords required for defining, listing, and
deleting non-system-managed catalog entries:
v Defining a non-system-managed data set in a password-protected catalog
requires the catalog’s update (or higher) password.
v Listing, altering, or deleting a data set’s catalog entry requires the appropriate
password of either the catalog or the data set. However, if the catalog, but not
the data set, is protected, no password is needed to list, alter, or delete the data
set’s catalog entry.
OPEN and CLOSE operations on a data set can be authorized by the password
pointed to by the PASSWD parameter of the ACB macro. For information about
the password level required for each type of operation, see z/OS DFSMS Macro
Instructions for Data Sets.
Each higher-level password allows all operations permitted by lower levels. Any
level can be null (not specified), but if a low-level password is specified, the
DEFINE and ALTER commands give the higher passwords the value of the highest
password specified. For example, if only a read-level password is specified, the
read-level becomes the update-, control-, and master-level password as well. If you
specify a read password and a control password, the control password value
becomes the master-level password as well. However, in this case, the update-level
password is null because the value of the read-level password is not given to
higher passwords.
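For example, the following DEFINE (all names and password values are hypothetical) specifies only read- and control-level passwords. Following the rules above, CTLPW1 also becomes the master-level password, and the update-level password remains null:

```
DEFINE CLUSTER (NAME(EXAMPLE.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(100 200) -
    VOLUMES(VSER01) -
    CYLINDERS(1 1) -
    READPW(READPW1) -
    CONTROLPW(CTLPW1))
```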
Catalogs are themselves VSAM data sets, and can have passwords. For some
operations (for example, listing all the catalog’s entries with their passwords or
deleting catalog entries), the catalog’s passwords can be used instead of the entry’s
passwords. If the master catalog is protected, the update- or higher-level password
is required when defining a user catalog, because all user catalogs have an entry in
the master catalog. When deleting a protected user catalog, the user catalog’s
master password must be specified.
Some access method services operations might involve more than one password
authorization. For example, importing a data set involves defining the data set and
loading records into it. If the catalog into which the data set is being imported is
password protected, its update-level (or higher-level) password is required for the
definition; if the data set is password protected, its update-level (or higher-level)
password is required for the load. The IMPORT command lets you specify the
password of the catalog; the password, if any, of the data set being imported is
obtained by the commands from the exported data.
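For example (data set, catalog, and password names are hypothetical), an IMPORT command might supply the catalog's update-level password directly; the data set's own password, if any, comes from the exported data:

```
IMPORT INDATASET(EXAMPLE.EXPORTED.COPY) -
       OUTDATASET(EXAMPLE.KSDS) -
       CATALOG(USERCAT4/UPDPW1)
```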
Password-Protection Precautions
When you use protection commands for a non-system-managed catalog or for a
data set, you need to observe certain password-protection precautions, which the
following lists describe.
For a Catalog. Observe the following precautions when you use protection
commands for a non-system-managed catalog:
v To create a non-system-managed catalog entry using the DEFINE command, the
update-level or higher-level password of the catalog is required.
v To modify a catalog entry using the ALTER command, the master password of
the entry, or the master password of the catalog that contains the entry, is
required. However, if the entry to be modified is a non-VSAM or generation
data group entry, the update-level password of the catalog is sufficient.
v To gain access to passwords in a catalog (for example, to list or change
passwords), specify the master-level password of either the entry or the catalog.
A master-level password must be specified with the DEFINE command to model
an entry’s passwords.
v To delete a protected data set entry from a catalog, specify the master-level
password of the entry or the master-level password of the catalog containing the
entry. However, if the entry in a catalog describes a VSAM data space, the
update-level password of the catalog is sufficient.
v To delete a non-VSAM, generation data group, or alias entry, the update-level
password of the catalog is sufficient.
v To list catalog entries with the read-level passwords, specify the read password
of the entry or the catalog’s read-level password. However, entries without
passwords can be listed without specifying the catalog’s read-level password.
v To list the passwords associated with a catalog entry, specify the master
password of the entry or the catalog’s master password.
To avoid unnecessary prompts, specify the catalog’s password, which permits
access to all entries the operation affects. A catalog’s master-level password lets
you refer to all catalog entries. However, a protected cluster cannot be processed
with the catalog’s master password.
Specification of a password where none is required is always ignored.
For a Data Set. Observe the following precautions when you use protection
commands for a data set:
v To access a VSAM data set using its cluster name instead of data or index
names, specify the proper level password for the cluster even if the data or
index passwords are null.
v To access a VSAM data set using its data or index name instead of its cluster
name, specify the proper data or index password. However, if cluster passwords
are defined, the master password of the cluster can be specified instead of the
data or index password.
v Null means no password was specified. If a cluster has only null passwords, you
can access the data set using the cluster name without specifying passwords, even
if the data and index entries of the cluster have defined passwords. Using null
passwords permits unrestricted access to the VSAM cluster but protects against
unauthorized modification of the data or index as separate components.
Password Prompting
Computer operators and TSO/E terminal users can supply a correct password
when a processing program does not give the correct one as it tries to open a
password-protected data set. When the data set is defined, use the CODE
parameter to specify a code instead of the data set name to prompt the operator or
terminal user for a password. The prompting code keeps your data secure by not
permitting the operator or terminal user to know both the name of the data set
and its password.
A data set’s code is used for prompting for any operation against a
password-protected data set. The catalog code is used for prompting when the
catalog is opened as a data set, when an attempt is made to locate catalog entries
that describe the catalog, and when an entry is to be defined in the catalog.
If you do not specify a prompting code, VSAM identifies the job for which a
password is needed with the JOBNAME and DSNAME for background jobs or
with the DSNAME alone for foreground (TSO/E) jobs.
When you define a data set, use the ATTEMPTS parameter to specify the number
of times the computer operator or terminal user is permitted to give the password
when a processing program is trying to open a data set.
If you are logged on to TSO/E, VSAM tries the logon password before prompting
at your terminal. Using the TSO/E logon password counts as one attempt.
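For example (names, code, and password values are hypothetical), the following DEFINE assigns a master password, a prompting code, and a limit of two password attempts:

```
DEFINE CLUSTER (NAME(EXAMPLE.PAYROLL.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(100 200) -
    VOLUMES(VSER01) -
    CYLINDERS(1 1) -
    MASTERPW(MSTPW1) -
    CODE(PAYCODE) -
    ATTEMPTS(2))
```

When a program opens the data set without the correct password, the operator or terminal user is prompted with the code PAYCODE rather than with the data set name.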
The system ignores data set password protection for system-managed data sets.
Assigning a Password
Use the PROTECT macro or the IEHPROGM PROTECT command to assign a
password to the non-VSAM data set. See z/OS DFSMSdfp Advanced Services and
z/OS DFSMSdfp Utilities.
Two levels of protection options for your data set are available. Specify these
options in the LABEL field of a DD statement with the parameter PASSWORD or
NOPWREAD. See z/OS MVS JCL Reference.
v Password protection (specified by the PASSWORD parameter) makes a data set
unavailable for all types of processing until a correct password is entered by the
system operator, or for a TSO/E job by the TSO/E user.
v No-password-read protection (specified by the NOPWREAD parameter) makes a
data set available for input without a password, but requires that the password
be entered for output or delete operations.
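For example (data set name and volume serial are hypothetical), the following DD statement requests password protection for a new tape data set; substituting NOPWREAD in the same position would permit reading without a password:

```
//TAPEOUT  DD DSNAME=EXAMPLE.TAPEDS,UNIT=TAPE,
//            VOLUME=SER=TAPE01,DISP=(NEW,KEEP),
//            LABEL=(1,SL,PASSWORD)
```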
The system sets the data set security indicator either in the standard header label 1,
as shown in z/OS DFSMS Using Magnetic Tapes, or in the data set control block
(DSCB). After you have requested security protection for magnetic tapes, you
cannot remove it with JCL unless you overwrite the protected data set.
For a data set on direct access storage devices, place the data set under protection
when you enter its password in the PASSWORD data set. Use the PROTECT macro
or the IEHPROGM utility program to add, change, or delete an entry in the
PASSWORD data set. With either method, the system updates the DSCB
of the data set to reflect its protected status. Therefore, you do not need to use JCL
whenever you add, change, or remove security protection for a data set on direct
access storage devices. For information about maintaining the PASSWORD data
set, including the PROTECT macro, see z/OS DFSMSdfp Advanced Services. For
information about the IEHPROGM utility, see z/OS DFSMSdfp Utilities.
User-Security-Verification Routine
Besides password protection, VSAM lets you protect data by specifying a program
that verifies a user’s authorization. “User-Security-Verification Routine” on page
261 describes specific requirements. To use this additional protection, specify the
name of your authorization routine in the AUTHORIZATION parameter of the
DEFINE or ALTER command.
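For example (the data set name, routine name, and password value are hypothetical), the following ALTER command names a user-security-verification routine for an existing cluster:

```
ALTER EXAMPLE.KSDS/MSTPW1 -
    AUTHORIZATION(SECVRTN)
```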
If a password exists for the type of operation you are performing, you must
specify the password, either in the command or in response to prompting. VSAM
calls the user-security-verification routine only after it verifies the password. VSAM
bypasses this routine whenever you specify a correct master password, whether or
not the operation requires the master password.
| The objective of the erase-on-scratch function is to ensure that none of the data on
| the released tracks can be read by any host software.
To have the system erase sensitive data with RACF, the system programmer can
start the erase feature with the RACF SETROPTS command. This feature controls
the erasure of DASD space when it is released. Space release occurs when you
delete a data set or release part of a data set. SETROPTS selects one of the
following methods for erasing the space:
v The system erases all released space.
v The system erases space only in data sets that have a security level greater than
or equal to a certain level.
v The system erases space in a data set only if its RACF data set profile specifies
the ERASE option.
v The system never erases space.
If the ERASE option is set in the RACF profile, you cannot override the option by
specifying NOERASE in access methods services commands.
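The four SETROPTS choices above correspond to commands like the following (the security level name CONFID is hypothetical):

```
SETROPTS ERASE(ALL)
SETROPTS ERASE(SECLEVEL(CONFID))
SETROPTS ERASE
SETROPTS NOERASE
```

In order, these request erasure of all released space; erasure for data sets at or above the security level CONFID; erasure only when the RACF data set profile specifies the ERASE option; and no erasure.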
disk space more efficiently. DDSR has the side effect of usually erasing released
tracks, even if you do not request the ERASE option. DDSR is faster than the
erase-on-scratch function on other types of disks. Without erase-on-scratch,
however, DDSR is less secure. The erasure might not complete before data set
deletion or space release. After a successful erasure, your data remains physically
on disk, in a compressed form, but is not accessible by any software.
All access method services load modules are contained in SYS1.LINKLIB, and the
root segment load module (IDCAMS) is link edited with the SETCODE AC(1)
attribute.
APF authorization is established at the job step level. If, during the execution of an
APF-authorized job step, a load request is satisfied from an unauthorized library,
the task is abnormally terminated. It is the installation’s responsibility to ensure
that a load request cannot be satisfied from an unauthorized library during access
method services processing.
The following situations could cause the invalidation of APF authorization for
access method services:
v An access method services module is loaded from an unauthorized library.
v Access method services is invoked by an unauthorized application program or
an unauthorized terminal monitor program (TMP). Because APF authorization is
established at the job-step task level, access method services is not authorized in
this case.
The system programmer must enter the names of those access method services
commands that require APF authorization to run under TSO/E in the authorized
command list.
If the above functions are required and access method services is invoked from an
application program or TSO/E terminal monitor program, the invoking program
must be authorized.
For information about authorizing for TSO/E and ISPF, see z/OS DFSMSdfp Storage
Administration Reference.
When you use the REPRO ENCIPHER command, you can specify whether to use
the Programmed Cryptographic Facility or Integrated Cryptographic Service
Facility (ICSF) to manage the cryptographic keys, depending on which
cryptographic facility is running as a started task. You can use the REPRO
ENCIPHER and REPRO DECIPHER to perform simple encryption and decryption
of sensitive data. The data remains protected until you use the REPRO DECIPHER
option to decipher it with the correct key. If you also have cryptographic hardware
and RACF, you can use these REPRO commands with ICSF to perform more
sophisticated encryption and decryption.
Related reading: For information on using the REPRO command to encrypt and
decrypt data, see z/OS DFSMS Access Method Services for Catalogs. For information
on using ICSF, see z/OS Cryptographic Services ICSF Overview.
You can use the REPRO command to copy a plaintext (not enciphered) data set to
another data set in enciphered form. Enciphering converts data to an unintelligible
form called a ciphertext. You can then store the enciphered data set offline or send
it to a remote location. When desired, you can bring back the enciphered data set
online and use the REPRO command to recover the plaintext from the ciphertext
by copying the enciphered data set to another data set in plaintext (deciphered)
form.
Enciphering and deciphering are based on an 8-byte binary value called the key.
Using the REPRO DECIPHER option, you can either decipher the data on the
system that it was enciphered on, or decipher the data on another system that has
the required key to decipher the data.
The input data set for the decipher operation must be an enciphered copy of a data
set produced by REPRO. The output data set for the encipher operation can only
be a VSAM entry-sequenced, linear, or sequential data set. The target (output) data
set of both an encipher and a decipher operation must be empty. If the target data
set is a VSAM data set that has been defined with the reusable attribute, use the
REUSE parameter of REPRO to reset it to empty.
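For example (data set names and the key value are hypothetical; see z/OS DFSMS Access Method Services for Catalogs for the full subparameter syntax), a pair of REPRO commands might encipher a data set under a privately managed key and later decipher it:

```
REPRO INDATASET(EXAMPLE.PAY.CLEAR) -
      OUTDATASET(EXAMPLE.PAY.CIPHER) -
      ENCIPHER(PRIVATEKEY DATAKEYVALUE(X'A1B2C3D4E5F60708'))

REPRO INDATASET(EXAMPLE.PAY.CIPHER) -
      OUTDATASET(EXAMPLE.PAY.RESTORED) REUSE -
      DECIPHER(DATAKEYVALUE(X'A1B2C3D4E5F60708'))
```

Because PRIVATEKEY is specified, no key-encrypting key is used; you are responsible for keeping the 8-byte data encrypting key secure.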
For both REPRO ENCIPHER and REPRO DECIPHER, if the input data set
(INDATASET) is system managed, the output data set (OUTDATASET) can be
either system managed or not system managed, and must be cataloged.
Figure 2 is a graphic representation of the input and output data sets involved in
REPRO ENCIPHER and DECIPHER operations.
Figure 2. In a REPRO ENCIPHER operation, the source data set contains plaintext
and the target data set contains ciphertext; in a REPRO DECIPHER operation, the
source data set contains ciphertext and the target data set contains plaintext.
When you encipher a data set, specify any of the delimiter parameters available
with the REPRO command (SKIP, COUNT, FROMADDRESS, FROMKEY,
FROMNUMBER, TOADDRESS, TOKEY, TONUMBER) that are appropriate to the
data set being enciphered. However, you cannot specify delimiter parameters when
deciphering a data set. If DECIPHER is specified together with any REPRO
delimiter parameter, your REPRO command terminates with a message.
When the REPRO command copies and enciphers a data set, it precedes the
enciphered data records with one or more records of clear header data. The header
data preceding the enciphered data contains information necessary for the
deciphering of the enciphered data, such as:
v Number of header records
v Number of records to be ciphered as a unit
v Key verification data
v Enciphered data encrypting keys
Tip: If the output data set for the encipher operation is a compressed format data
set, little or no space is saved, because enciphered data does not compress well.
Allow additional space for the output data set if the input data set is in
compressed format.
You can use the Programmed Cryptographic Facility or ICSF to install the
secondary key-encrypting keys. If you are using the Programmed Cryptographic
Facility, use the Programmed Cryptographic Facility key generator utility to set up
the key pairs.
If you are using ICSF, use the Key Generation Utility Program (KGUP) to set up
the key pairs on both the encrypting and decrypting systems.
The key generator utility generates the key-encrypting keys you request and stores
the keys, in enciphered form, in the cryptographic key data set (CKDS). It lists the
external name of each secondary key and the plaintext form of the secondary key.
If the secondary encrypting key is to be used on a system other than the system on
which the keys were generated, the utility must also be run on the other system to
define the same plaintext key-encrypting keys. The plaintext key-encrypting keys
can be defined in the CKDS of the other system with different key names. If you
want to manage your own private keys, no key-encrypting keys are used to
encipher the data encrypting key; it is your responsibility to ensure the secure
nature of your private data encrypting key.
Related reading: For more information on setting up keys with KGUP, see z/OS
Cryptographic Services ICSF Administrator’s Guide.
v Code COMPAT(YES) in the ICSF installation options. This option enables
REPRO to invoke the Programmed Cryptographic Facility macros on ICSF.
v If you are migrating from PCF to ICSF, convert the Programmed Cryptographic
Facility CKDS to ICSF format. New ICSF users do not need to perform this
conversion.
v Whether you are using ICSF or the Programmed Cryptographic Facility, you
must start it before executing the REPRO command.
Topic Location
VSAM Data Formats 73
VSAM Data Striping 89
Selection of VSAM Data Set Types 78
Extended-Format VSAM Data Sets 88
Access to Records in a VSAM Data Set 94
Access to Records through Alternate Indexes 97
Data Compression 100
Logical records of VSAM data sets are stored differently from logical records in
non-VSAM data sets. VSAM stores records in control intervals. A control interval is
a continuous area of direct access storage that VSAM uses to store data records
and control information that describes the records. Whenever a record is retrieved
from direct access storage, the entire control interval containing the record is read
into a VSAM I/O buffer in virtual storage. The desired record is transferred from
the VSAM buffer to a user-defined buffer or work area. Figure 3 shows how a
logical record is retrieved from direct access storage.
Figure 3. Retrieval of a logical record. A control interval (CI) containing records
R1, R2, and R3 is read over the I/O path from DASD storage into an I/O buffer in
virtual storage; the requested record (R2) is then moved to the work area.
| can be expanded to 123 extents per volume. In addition to the limit of 123 extents
| per volume, these are the other limits on the number of extents for a VSAM data
| set:
| v If non-SMS-managed, then up to 255 extents per component.
| v If SMS-managed, then the following are true:
| – If not striped and without the extent constraint removal parameter in the data
| class, then up to 255 extents per component.
| – If striped and without the extent constraint removal parameter in the data
| class, then up to 255 extents per stripe.
| – If the extent constraint removal parameter in the data class is set to a value of
| Y, then the number of extents is limited by the number of volumes for the
| data set.
| VSAM attempts to extend a data set when appropriate. Each attempt to extend the
| data set might result in up to five extents.
Related reading: For information about space allocation for VSAM data sets, see
“Allocating Space for VSAM Data Sets” on page 108.
Control Intervals
The size of control intervals can vary from one VSAM data set to another, but all
the control intervals within the data portion of a particular data set must be the
same length. Use the access method services DEFINE command and let VSAM
select the size of a control interval for a data set, or request a particular control
interval size. For information about selecting the best control interval size, see
“Optimizing Control Interval Size” on page 157.
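For example (names and values are hypothetical), the following DEFINE requests a particular data control interval size rather than letting VSAM choose one:

```
DEFINE CLUSTER (NAME(EXAMPLE.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(100 200) -
    VOLUMES(VSER01) -
    CYLINDERS(5 1) -
    CONTROLINTERVALSIZE(4096))
```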
In a linear data set all of the control interval bytes are data bytes. There is no
imbedded control information.
CIDFs are 4 bytes long, and contain the amount and location of free space. RDFs
are 3 bytes long, and describe the length of records and how many adjacent
records are of the same length.
If two or more adjacent records have the same length, only two RDFs are used for
this group. One RDF gives the length of each record, and the other gives the
number of consecutive records of the same length. Figure 5 shows RDFs for records
of the same and different lengths:
Figure 5. RDFs for records of the same length and of different lengths. In control
interval 1 (control interval size of 512 bytes, 160-byte records), only 2 RDFs are
needed because all the records are the same length. When adjacent records have
different lengths (for example, records of 80, 80, 80, 100, 93, and 63 bytes), records
of differing lengths each require their own RDF. FS = free space.
information similar to the record prefix for describing the segment. The length of
the record prefix for nonspanned records is 3 bytes, and the length for spanned
records is 5 bytes.
The stored record format has no effect on the data seen by the user as a result of a
VSAM GET request. In addition, no special processing is required to place the
record in the data set in a compressed format.
The presence of the record prefix does result in several incompatibilities that can
affect the definition of the key-sequenced data set or access to the records in the
key-sequenced data set. When a VSAM data set is in compressed format, VSAM
must be used to extract and expand each record to obtain data that is usable. If a
method other than VSAM is used to process a compressed data set and the
method does not recognize the record prefix, the end result is unpredictable and
could result in loss of data. See “Compressed Data” on page 93.
Control Areas
The control intervals in a VSAM data set are grouped together into fixed-length
contiguous areas of direct access storage called control areas. A VSAM data set is
actually composed of one or more control areas. The number of control intervals in
a control area is fixed by VSAM.
The maximum size of a control area is one cylinder, and the minimum size is one
track of DASD storage. When you specify the amount of space to be allocated to a
data set, you implicitly define the control area size. For information about defining
an alternate index, see “Defining Alternate Indexes” on page 119. For information
about optimizing control area size, see “Optimizing Control Area Size” on page
161.
Spanned Records
Sometimes a record is larger than the control interval size used for a particular
data set. In VSAM, you do not need to break apart or reformat such records,
because you can specify spanned records when defining a data set. The SPANNED
parameter permits a record to extend across or span control interval boundaries.
Spanned records might reduce the amount of DASD space required for a data set
when data records vary significantly in length, or when the average record length
is large compared to the CI size. The following figures show the use of spanned
records for more efficient use of space.
Figure 7 contains a data set with the same space requirements as in Figure 6, but
one that permits spanned records.
The control interval size is reduced to 4096 bytes. When the record to be stored is
larger than the control interval size, the record is spanned between control
intervals. In Figure 7, control interval 1 contains a 2000-byte record. Control
| intervals 2, 3, and 4 together contain one 10 000-byte record. Control interval 5
| contains a 2000-byte record. By changing control interval size and permitting
spanned records, you can store the three records in 20 480 bytes, reducing the
amount of storage needed by 10 240 bytes.
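A definition corresponding to this example might look like the following (names and values are hypothetical): a 4096-byte control interval with SPANNED permits the 10 000-byte maximum record to span control intervals:

```
DEFINE CLUSTER (NAME(EXAMPLE.ESDS) -
    NONINDEXED -
    RECORDSIZE(2000 10000) -
    VOLUMES(VSER01) -
    CYLINDERS(1 1) -
    CONTROLINTERVALSIZE(4096) -
    SPANNED)
```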
v Do you want to use access method services utilities with an IBM DB2® cluster?
Entry-sequenced data sets are best for the following kinds of applications:
v Applications that require sequential access only. It is better to use
entry-sequenced data sets or variable-length RRDSs for sequential access,
because they support variable-length records and can be expanded as records
are added.
v Online applications that need to use an existing entry-sequenced data set. If you
want to use an entry-sequenced data set in an online application, load the data
set sequentially by a batch program and access the data set directly by the
relative byte address (RBA).
Key-sequenced data sets are best for the following kinds of applications:
v Applications that require that each record have a key field.
v Applications that require both direct and sequential access.
v Applications that use high-level languages that do not support RBA use.
v Online applications, which usually use key-sequenced data sets.
v Applications that access the data by an alternate index.
v Applications for which ease of programming matters. Using direct access,
key-sequenced data sets are easier to program than fixed-length RRDSs.
v Applications that use compressed data.
Linear data sets, although rarely used, are best for the following kinds of
applications:
v Specialized applications that store data in linear data sets
v Data-in-virtual (DIV)
Relative-record data sets are best for the following kinds of applications:
v Applications that require direct access only.
v Applications in which there is a one-to-one correspondence between records and
relative record numbers. For example, you could assign numeric keys to records
sequentially, starting with the value 1. Then, you could access an RRDS both
sequentially and directly by key.
v Applications that need fast retrieval with minimal storage. Fixed-length RRDSs
use less storage and are usually faster at retrieving records than key-sequenced
data sets or variable-length RRDSs.
v Applications whose records vary in length, which should use a variable-length
RRDS.
v COBOL applications, which can use variable-length RRDSs.
Figure: in an entry-sequenced data set, records R1, R2, and R3 are stored in entry
sequence; new records, such as R4 and R5, are added only at the end of the data set.
Records are added only at the end of the data set. Existing records cannot be
deleted. If you want to delete a record, you must flag that record as inactive. As
far as VSAM is concerned, the record is not deleted. Records can be updated, but
they cannot be lengthened. To change the length of a record in an entry-sequenced
data set, you must store it either at the end of the data set (as a new record) or in
the place of a record of the same length that you have flagged as inactive or that is
no longer required.
When a record is loaded or added, VSAM indicates its relative byte address (RBA).
The RBA is the offset of this logical record from the beginning of the data set. The
first record in a data set has an RBA of 0. The value of the RBA for the second and
subsequent records depends on whether the file is spanned and on the control
interval size chosen for the file, either manually or automatically. In general, it is
not possible to predict the RBA of each record, except for the case of fixed-length
records and a known control interval size. For a more detailed description of the
internal format of VSAM files, see “VSAM Data Formats” on page 73.
Record          R1   R2   R3   R4   R5
Record length   98   56   60   70   70
Table 5 lists the operations and types of access for processing entry-sequenced data
sets.
Table 5. Entry-Sequenced Data Set Processing
Operation             Sequential Access                        Direct Access
Loading the data set  Yes                                      No
Adding records        Yes; space after the last record is      No
                      used for adding records
Retrieving records    Yes (returned in entry sequence)         Yes (by RBA)
Updating records      Yes, but you cannot change the           Yes (by RBA), but you cannot change
                      record length                            the record length
Deleting records      No; records cannot be deleted, but       No; records cannot be deleted, but
                      you can reuse the space for a record     you can reuse the space for a record
                      of the same length                       of the same length
When you use simulated VSAM, the application program sees the UNIX file as if it
were an ESDS.
Because the system does not actually store UNIX files as ESDSs, the system cannot
simulate all the characteristics of an ESDS. Certain macros and services have
incompatibilities or restrictions when dealing with UNIX files.
Related reading: For information about VSAM interfaces and UNIX files, see
Chapter 28, “Processing z/OS UNIX Files,” on page 481 and z/OS DFSMS Macro
Instructions for Data Sets.
| When a file is accessed as binary, the length of each record is returned in the RPL
| as the largest possible record, except, possibly, the last record. The length of the
| last record is whatever remains after the previous GET or READ macro.
When a file is accessed as text, if any record in the file consists of zero bytes (that
is, a text delimiter is followed by another text delimiter), the record returned
consists of one blank. If any record is longer than the length of the buffer, it results
in an error return code for GET (for an ACB).
v To specify the maximum record size, code the LRECL keyword on the JCL DD
statement, SVC 99, or TSO ALLOCATE. If not specified, the default is 32 767.
v On return from a synchronous PUT or a CHECK associated with an
asynchronous PUT, it is not guaranteed that data written has been synchronized
to the output device. To ensure data synchronization, use ENDREQ, CLOSE, or
CLOSE TYPE=T.
v There is no CI (control interval) access (MACRF=CNV).
The following services and utilities do not support UNIX files. Unless stated
otherwise, these services and utilities return an error or unpredictable value when
issued for a UNIX file:
v IDCAMS—ALTER, DEFINE, DELETE, DIAGNOSE, EXAMINE, EXPORT,
IMPORT, LISTCAT, and VERIFY
v OBTAIN, SCRATCH, RENAME, TRKCALC, and PARTREL macros
These macros require a DSCB or UCB. z/OS UNIX files do not have DSCBs or
valid UCBs.
Guideline: ISPF Browse/Edit does not support UNIX files, but you can use the
OBROWSE command.
Key Field
The key must be in the same position in each record, the key data must be
contiguous, and each record’s key must be unique. For spanned records, the key
must be in the first record segment. After it is specified, the value of the key
cannot be altered, but the entire record can be erased or deleted. For compressed
data sets, the key itself and any data before the key are not compressed.
When a new record is added to the data set, it is inserted in its collating sequence
by key, as shown in Figure 11.
Table 6 lists the operations and types of access for processing key-sequenced data
sets.
Table 6. Key-Sequenced Data Set Processing
Operation             Sequential Access                  Direct or Skip-Sequential Access
Loading the data set  Yes                                No
Adding records        Yes (records must be written       Yes (records are added
                      in key sequence)                   randomly by key)
Retrieving records    Yes (records are returned in       Yes (by key)
                      key sequence)
Updating records      Yes                                Yes
Deleting records      Yes                                Yes
Free Space
When a key-sequenced data set is defined, unused space can be scattered
throughout the data set to permit records to be inserted or lengthened. The unused
space is called free space. When a new record is added to a control interval (CI) or
an existing record is lengthened, subsequent records are moved into the following
free space to make room for the new or lengthened record. Conversely, when a
record is deleted or shortened, the space given up is reclaimed as free space for
later use. When you define your data set, use the FREESPACE parameter to specify
what percentage of each CI is to be set aside as free space when the data set is
initially loaded.
Within each CA, reserve free space by using free CIs. If you have free space in
your CA, it is easier to avoid splitting your control area when you want to insert
additional records or lengthen existing records. When you define your data set,
specify what percentage of the control area is to be set aside as free space, using
the FREESPACE parameter.
For information about specifying the optimal amount of CI and CA free space, see
“Optimizing Free Space Distribution” on page 162.
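For example (names and values are hypothetical), the following DEFINE reserves 20% of each control interval, and 10% of the control intervals in each control area, as free space when the data set is loaded:

```
DEFINE CLUSTER (NAME(EXAMPLE.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(100 200) -
    VOLUMES(VSER01) -
    CYLINDERS(5 1) -
    FREESPACE(20 10))
```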
pointers to each CI within a single CA. Each entry contains a compressed value
representing the highest key that can be contained within that control interval. The
value stored for the control interval containing records with the highest key in that
control area represents the highest record-key value that can be contained in that
control area. Once all the records are deleted from any single control interval, the
current high-key value is no longer associated with that control interval’s entry in
the sequence set record. It becomes a “free” control interval in which records
containing any key within the range of keys for that control area can be inserted.
This is called a CI reclaim.
However, this does not apply to the last empty control interval within the
control area. In that case, the high-key value for that control interval is
maintained and becomes the highest key for any record that can be inserted into
that control area. There is no reclaim capability for control areas comparable
to that provided for control intervals. As a normal result, a data set can
continue to grow in size even though much of it is empty. This occurs when
applications continually add records with keys in ascending sequence, while the
same or another application deletes old records after they have been processed.
During deletion processing, the high-key value associated with each CA is
maintained, so only records that fall within that high-key range are eligible
for insertion into that control area.

Because the record keys are always increasing, no additional records qualify
for insertion into these empty control areas. The result is a data set in which
the majority of the space is occupied by empty control intervals. When such a
condition is detected, the only way to reclaim the space is to rebuild the data
set: make a logical copy of the data set, delete the old data set, and reload
it from the logical copy.
Two logical records are stored in the first control interval shown in Figure 12. Each
logical record has a key (11 and 14). The second control interval shows what
happens when you insert a logical record with a key of 12.
When a record is deleted, the procedure is reversed, and the space occupied by the
logical record and corresponding RDF is reclaimed as free space.
Prime Index
A key-sequenced data set always has a prime index that relates key values to the
relative locations of the logical records in a data set. The prime index, or simply
index, has two uses in locating:
v The collating position when inserting records
v Records for retrieval
When initially loading a data set, records must be presented to VSAM in key
sequence. The index for a key-sequenced data set is built automatically by VSAM
as the data set is loaded with records.
When a data control interval is completely loaded with logical records, free space,
and control information, VSAM makes an entry in the index. The entry consists of
the highest possible key in the data control interval and a pointer to the beginning
of that control interval.
Key Compression
The key in an index entry is stored by VSAM in a compressed form. Compressing
the key eliminates from the front and back of a key those bytes that are not
necessary to distinguish it from the adjacent keys. Compression helps achieve a
smaller index by reducing the size of keys in index entries. VSAM automatically
does key compression in any key-sequenced data set. It is independent of whether
the data set is in compressed format.
Related reading: For information about using data-in-virtual (DIV), see z/OS MVS
Programming: Assembler Services Guide.
Each slot has a unique relative record number, and the slots are sequenced by
ascending relative record number. Each record occupies a slot and is stored and
retrieved by the relative record number of that slot. The position of a data record is
fixed; its relative record number cannot change. A fixed-length RRDS cannot have
a prime index or an alternate index.
Because the slot can either contain data or be empty, a data record can be inserted
or deleted without affecting the position of other data records in the fixed-length
RRDS. The record definition field (RDF) shows whether the slot is occupied or
empty. Free space is not provided in a fixed-length RRDS because the entire data
set is divided into fixed-length slots.
In a fixed-length RRDS, each control interval contains the same number of slots.
The number of slots is determined by the control interval size and the record
length. Figure 13 shows the structure of a fixed-length RRDS after adding a few
records. Each slot has a relative record number and an RDF. Table 7 shows the
access options available for RRDS processing.
Table 7 lists the operations and types of access for processing fixed-length RRDSs.
Table 7. RRDS Processing

Operation             Sequential Access               Direct or Skip-Sequential Access
Loading the data set  Yes                             Yes
Adding records        Yes (empty slots are used)      Yes (empty slots are used)
Retrieving records    Yes                             Yes (by relative record
                                                      number)
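As an illustration (the names and space values are invented for this sketch), a fixed-length RRDS is defined with the NUMBERED parameter and equal average and maximum record lengths in RECORDSIZE:

```
/* Illustrative names and space values */
DEFINE CLUSTER -
    (NAME(EXAMPLE.RRDS1) -
    NUMBERED -
    RECORDSIZE(100 100) -
    TRACKS(10 2) -
    VOLUMES(VSER01))
```

Because the average and maximum record lengths are equal, VSAM builds a fixed-length RRDS.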
You must load the variable-length RRDS sequentially in ascending relative record
number order. To define a variable-length RRDS, specify NUMBERED and
RECORDSIZE. The average record length and maximum record length in
RECORDSIZE must be different.
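For example (the names and space values are illustrative), the following command defines a variable-length RRDS; because the average record length (100) differs from the maximum (200), the data set is variable-length:

```
/* Illustrative names and space values */
DEFINE CLUSTER -
    (NAME(EXAMPLE.VRRDS1) -
    NUMBERED -
    RECORDSIZE(100 200) -
    TRACKS(10 2) -
    VOLUMES(VSER01))
```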
Free space is used for inserting and lengthening variable-length RRDS records.
When a record is deleted or shortened, the space given up is reclaimed as free
space for later use. When you define your data set, use the FREESPACE parameter
to specify what percentage of each control interval and control area is to be set
aside as free space when the data set is initially loaded. “Insertion of a Logical
Record in a CI” on page 84 shows how free space is used to insert and delete a
logical record.
Table 9. Comparison of ESDS, KSDS, Fixed-Length RRDS, Variable-Length RRDS, and Linear Data Sets

Record order
  ESDS: Records are in order as they are entered.
  KSDS: Records are in collating sequence by key field.
  Fixed-length RRDS: Records are in relative record number order.
  Variable-length RRDS: Records are in relative record number order.
  Linear: No processing at record level.

Direct access
  ESDS: Direct access by RBA.
  KSDS: Direct access by key or by RBA.
  Fixed-length RRDS: Direct access by relative record number.
  Variable-length RRDS: Direct access by relative record number.
  Linear: Access with data-in-virtual (DIV).

Alternate indexes
  ESDS: Permitted (see note 1).
  KSDS: Permitted.
  Fixed-length RRDS: Not permitted.
  Variable-length RRDS: Not permitted.
  Linear: Not permitted.

Record address stability
  ESDS: A record’s RBA cannot change.
  KSDS: A record’s RBA can change.
  Fixed-length RRDS: A record’s relative record number cannot change.
  Variable-length RRDS: A record’s relative record number cannot change.
  Linear: No processing at record level.

Adding and lengthening records
  ESDS: Space at the end of the data set is used for adding records.
  KSDS: Free space is used for inserting and lengthening records.
  Fixed-length RRDS: Empty slots in the data set are used for adding records.
  Variable-length RRDS: Free space is used for inserting and lengthening records.
  Linear: No processing at record level.

Deleting records
  ESDS: A record cannot be deleted, but you can reuse its space for a record of
  the same length (see note 1).
  KSDS: Space given up by a deleted or shortened record becomes free space.
  Fixed-length RRDS: A slot given up by a deleted record can be reused.
  Variable-length RRDS: Space given up by a deleted or shortened record becomes
  free space.
  Linear: No processing at record level.

Spanned records
  ESDS: Permitted.
  KSDS: Permitted.
  Fixed-length RRDS: Not permitted.
  Variable-length RRDS: Not permitted.
  Linear: Not permitted.

Extended format
  ESDS: Permitted (see note 1).
  KSDS: Extended format or compression permitted.
  Fixed-length RRDS: Permitted.
  Variable-length RRDS: Permitted.
  Linear: Permitted.

Note:
1. Not supported for HFS data sets.
VSAM data sets must also be in extended format to be eligible for the following
advanced functions:
v Partial space release (PARTREL)
v Candidate volume space
v System-managed buffering (SMB)
An extended-format data set for VSAM can be allocated for key-sequenced data
sets, entry-sequenced data sets, variable-length or fixed-length relative-record data
sets, and linear data sets.
Certain types of data sets are excluded. The following data
sets cannot have an extended format:
v Catalogs
v Other system data sets
v Temporary data sets
When a data set is allocated as an extended format data set, the data and index are
extended format. Any alternate indexes related to an extended format cluster are
also extended format.
If a data set is allocated as an extended-format data set, 32 bytes (X’20’) are
added to each physical block. Consequently, when the control interval size is
calculated or explicitly specified, this physical block overhead can increase
the amount of space actually needed for the data set. Figure 14 shows the
percentage increase in space for the affected control interval sizes; other
control interval sizes do not result in an increase in the space needed.
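Extended format is requested through the SMS data class. As a sketch, assuming your storage administrator has defined a data class (EXTKSDS is an invented name) whose data set name type attribute requests extended format, the definition might look like this:

```
/* EXTKSDS is an assumed data class that requests extended format */
DEFINE CLUSTER -
    (NAME(EXAMPLE.EXT.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(100 200) -
    MEGABYTES(10 5) -
    DATACLASS(EXTKSDS))
```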
A striped data set has tracks (for the sequential access method) or CIs (for
VSAM) that are spread across multiple devices. This format allows a single
application request for records in multiple tracks or CIs to be satisfied by
concurrent I/O requests to multiple volumes. The result is improved performance
for sequential data access, because data is transferred to the application at a
rate greater than any single I/O path can provide. The set of I/O operations
scheduled to multiple devices to satisfy a single application request is
referred to as an I/O packet.
VSAM data striping applies only to data sets that are defined with more than one
stripe. A data set with one stripe is in extended format but is not considered
to be a striped data set.
Figure 15. Primary and Secondary Space Allocations for Striped Data Sets
Figure 16 shows examples of the CIs within a control area (CA) on multiple
volumes for a four-stripe VSAM data set.
Space Allocation for Striped VSAM Data Sets: The general rules discussed for
striped extended format data sets apply to striped VSAM data sets. When the
system allocates space for a striped extended-format data set, the system divides
the primary amount among the volumes. If it does not divide evenly, the system
rounds up the amount. For extended-format data sets, when the primary space on
any volume is full, the system allocates space on that volume. The amount is the
secondary amount divided by the number of stripes. If the secondary amount does
not divide evenly, the system rounds up the amount.
Some additional considerations apply to the control area (CA) for VSAM. All
allocations must be rounded to a CA boundary. The number of stripes influences
the size of the control area, resulting in some differences in allocation quantity
required to meet the stripe count and CA requirements. The following section on
CA size considerations discusses this in more detail.
Data set extension occurs as described for striped data set extends: the system
divides the secondary amount by the stripe count and allocates the result to
each stripe. This occurs in all cases, including a data set with the guaranteed
space attribute from the associated storage class (SC), as well as extension to
a new layer.
Restriction: Volume High Used RBA statistics do not apply for multistriped VSAM
data sets. The high-use RBA is kept on the volume for the first stripe because the
value is the same for all stripes.
Extensions occur by stripe and can occur on the same volume or on a new volume,
using the primary-space amount when a secondary-space amount of zero is
specified.
Increased Number of Extents: A striped VSAM data set can have 255 extents per
stripe in the data component. Only the data component is striped. The index
component of a striped VSAM data set has a limit of 255 extents, regardless of
striping. Because a striped VSAM data set can have a maximum of 16 stripes, a
striped data component can have a maximum of 4080 extents.
| Starting in z/OS V1R7, the 255-extent per stripe limit is removed if the extent
| constraint removal parameter in the data class is set to Y (yes). The default value is
| N (no), to enforce the 255-extent limit. This limit should be enforced if the data set
| may be shared with a pre-V1R7 system.
Allocation Restrictions: The Space Constraint Relief attribute is not considered
for striped data sets, because data striping already provides its intended
functions:
v Spreading the data across volumes (a basic implementation technique for any
data that is striped).
v Relief from the five-extent limit (provided for all allocations of VSAM
striped data, regardless of the specification).
Control Area Size Calculation: The control area (CA) size for striped VSAM data
is a factor of the stripe count. A VSAM data set can be striped up to a count
of 16. The minimum size for an allocation is a single track, and the maximum CA
size is a cylinder. Traditionally, based on 3390 geometry, that meant a maximum
CA size of 15 tracks. For a striped VSAM data set, the maximum CA size must
accommodate the maximum stripe count (16), so the maximum CA becomes 16 tracks.
The required allocation quantity becomes a factor of both the user-specified
amount and the stripe count. As an example, take a specification for space of
TRACKS(1 1), with the following results:
v For nonstriped, traditional VSAM, the control area size is one track, with a
resulting primary and secondary allocation quantity of 1 track.
v For a striped data set with the maximum stripe count of 16, the control area
size is 16 tracks, with a resulting primary and secondary quantity of 16 tracks.
Processing Considerations for Striped Data Sets: The basic restrictions associated
with data sets in the extended format also apply to striped data sets.
For the alternate index, neither the data nor the index will be striped.
Compressed Data
To use compression, a data set must be in extended format. Only extended-format
key-sequenced data sets can be compressed. Compressed data records have a
slightly different format from logical records in a data set that does not hold
compressed data. This results in several incompatibilities that can affect the
definition of the data set or access to records in the data set:
v The maximum record length for nonspanned data sets is three bytes less than
the maximum record length of data sets that do not contain compressed data
(this length is CISIZE−10).
v The relative byte address (RBA) of another record, or the address of the next
record in a buffer, cannot be determined using the length of the current record
or the length of the record provided to VSAM.
v The length of the stored record can change when updating a record without any
length change.
v The key and any data in front of the key will not be compressed. Data sets with
large key lengths and RKP data lengths might not be good candidates for
compression.
v Only the data component of the base cluster is eligible for compression.
Alternate indexes are not eligible for compression.
v The global shared resources (GSR) option is not permitted for compressed data
sets.
In addition to these incompatibilities, the data set must meet certain requirements
to permit compression at the time it is allocated:
v The data set must have a primary allocation of at least 5 MB, or 8 MB if no
secondary allocation is specified.
v The maximum record length specified must be at least key offset plus key length
plus forty bytes.
v Compressed data sets must be SMS managed. The mechanism for requesting
compression for VSAM data sets is through the SMS data class
COMPACTION=Y parameter.
Spanned record data sets require the key offset plus the key length to be less than
or equal to the control interval size minus fifteen. These specifications regarding
the key apply to alternate keys as well as primary keys.
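As a sketch of the COMPACTION=Y mechanism described above, assume the storage administrator has defined an SMS data class (the name COMPKSDS is invented) that specifies both extended format and COMPACTION=Y; a definition that satisfies the allocation and record-length requirements listed above might look like this:

```
/* COMPKSDS is an assumed data class: extended format, COMPACTION=Y */
DEFINE CLUSTER -
    (NAME(EXAMPLE.COMP.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(200 400) -
    MEGABYTES(10 5) -
    DATACLASS(COMPKSDS))
```

Here the 10 MB primary allocation exceeds the 5 MB minimum, and the maximum record length of 400 is well above key offset (0) plus key length (8) plus 40.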
Compressed data sets cannot be accessed using control interval (CI) processing
except for VERIFY and VERIFY REFRESH processing and may not be opened for
improved control interval (ICI) processing. A compressed data set can be created
using the LIKE keyword and not just using a data class.
All types of VSAM data sets, including linear, can be accessed by control interval
access, but this is used only for very specific applications. CI mode processing is
not permitted when accessing a compressed data set. The data set can be opened
for CI mode processing to permit VERIFY and VERIFY REFRESH processing only.
Control interval access is described in Chapter 11, “Processing Control Intervals,”
on page 179.
To access a record directly from an entry-sequenced data set, you must supply the
RBA for the record as a search argument. For information about obtaining the
RBA, see “Entry-Sequenced Data Sets” on page 79.
Keyed-Sequential Access
Sequential access is used to load a key-sequenced data set and to retrieve, update,
add, and delete records in an existing data set. When you specify sequential as the
mode of access, VSAM uses the index to access data records in ascending or
descending sequence by key. When retrieving records, you do not need to specify
key values because VSAM automatically obtains the next logical record in
sequence.
Sequential processing can be started anywhere within the data set. While
positioning is not always required (for example, the first use of a data set starts
with the first record), it is best to specify positioning using one of the following
methods:
v Use the POINT macro.
v Issue a direct request with note string positioning (NSP), and change the request
parameter list with the MODCB macro from “direct” to “sequential” or “skip
sequential”.
v Use MODCB to change the request parameter list to last record (LRD), backward
(BWD), and direct NSP; then change the RPL to SEQ, BWD, and SEQ.
Sequential access enables you to avoid searching the index more than once.
Sequential is faster than direct for accessing multiple data records in ascending key
order.
Keyed-Direct Access
Direct access is used to retrieve, update, delete, and add records. When direct
processing is used, VSAM searches the index from the highest-level index-set
record down to the sequence set for each record to be accessed. Searches for
single records with random keys are usually faster with direct processing. You
must supply a key value for each record to be processed.
For retrieval processing, either supply the full key or a generic key. The generic
key is the high-order portion of the full key. For example, you might want to
retrieve all records whose keys begin with the generic key AB, regardless of the
full key value. Direct access lets you avoid retrieving the entire data set
sequentially to process a small percentage of the total number of records.
Skip-Sequential Access
Skip-sequential access is used to retrieve, update, delete, and add records. When
skip-sequential is specified as the mode of access, VSAM retrieves selected records,
but in ascending sequence of key values. Skip-sequential processing lets you avoid
retrieving a data set or records in the following inefficient ways:
v Entire data set sequentially to process a small percentage of the total number of
records
v Desired records directly, which would cause the prime index to be searched
from the top to the bottom level for each record
Addressed Access
Another way of accessing a key-sequenced data set is addressed access, using the
RBA of a logical record as a search argument. If you use addressed access to
process key-sequenced data, you should be aware that RBAs might change when a
control interval split occurs or when records are added, deleted, or changed in size.
With compressed data sets, the RBAs for compressed records are not predictable.
Therefore, access by address is not suggested for normal use.
The following family of window services for accessing linear data sets is described
in z/OS MVS Programming: Assembler Services Guide and z/OS MVS Programming:
Assembler Services Reference ABE-HSP:
v CSRIDAC -- Request or Terminate Access to a Data Object
v CSRVIEW -- View an Object
v CSREVW -- View an Object and Sequentially Access It
v CSRREFR -- Refresh an Object
v CSRSCOT -- Save Object Changes in a Scroll Area
v CSRSAVE -- Save Changes Made to a Permanent Object
Related reading: For information about using data-in-virtual (DIV), see z/OS MVS
Programming: Assembler Services Guide.
Keyed-Sequential Access
Sequential processing of a fixed-length RRDS is the same as sequential processing
of an entry-sequenced data set. Empty slots are automatically skipped by VSAM.
Skip-Sequential Access
Skip-sequential processing is treated like direct requests, except that VSAM
maintains a pointer to the record it just retrieved. When retrieving subsequent
records, the search begins from the pointer, rather than from the beginning of the
data set. Records must be retrieved in ascending sequence.
Keyed-Direct Access
A fixed-length RRDS can be processed directly by supplying the relative record
number as a key. VSAM converts the relative record number to an RBA and
determines the control interval containing the requested record. If a record in a slot
flagged as empty is requested, a no-record-found condition is returned. You cannot
use an RBA value to request a record in a fixed-length RRDS.
Keyed-Sequential Access
Sequential processing of a variable-length RRDS is the same as for an
entry-sequenced data set. On retrieval, relative record numbers that do not exist
are skipped. On insert, if no relative record number is supplied, VSAM uses the
next available relative record number.
Skip-Sequential Access
Skip-sequential processing is used to retrieve, update, delete, and add
variable-length RRDS records. Records must be retrieved in ascending sequence.
Keyed-Direct Access
A variable-length RRDS can be processed directly by supplying the relative record
number as a key. If you want to store a record in a specific relative record position,
use direct processing and assign the desired relative record number. VSAM uses
the relative record number to locate the control interval containing the requested
record. You cannot use an RBA value to request a record in a variable-length
RRDS.
Unlike a primary key, which must be unique, the key of an alternate index can
refer to more than one record in the base cluster. An alternate-key value that points
to more than one record is nonunique. If the alternate key points to only one
record, the pointer is unique.
Alternate indexes are not supported for linear data sets, RRDS, or reusable data
sets (data sets defined with the REUSE attribute). For information about defining
and building alternate indexes, see “Defining Alternate Indexes” on page 119.
Each alternate-index record contains an alternate key and one or more pointers
to data in the base cluster. For an entry-sequenced base cluster, the pointers
are RBA values. For a key-sequenced base cluster, the pointers are primary-key
values.
Each record in the data component of an alternate index is of variable length and
contains header data, the alternate key, and at least one pointer to a base data
record. Header data is fixed length and provides the following information:
v Whether the alternate index data record contains primary keys or RBA pointers
v Whether the alternate index data record contains unique or nonunique keys
v The length of each pointer
v The length of the alternate key
v The number of pointers
If you ask to access records with the alternate key of BEN, VSAM does the
following:
1. VSAM scans the index component of the alternate index, looking for a value
greater than or equal to BEN.
2. The entry FRED points VSAM to a data control interval in the alternate index.
3. VSAM scans the alternate index data control interval looking for an entry that
matches the search argument, BEN.
4. When located, the entry BEN has an associated key, 21. The key, 21, points
VSAM to the index component of the base cluster.
5. VSAM scans the index component for an entry greater than or equal to the
search argument, 21.
6. The index entry, 38, points VSAM to a data control interval in the base cluster.
The record with a key of 21 is passed to the application program.
RBAs are always written as fullword binary integers.
[Figure: an alternate index and its entry-sequenced base cluster. The index
component of the alternate index contains the entries FRED and TOM. An
alternate-index data control interval contains the entries BEN 400, BILL 000,
and FRED 140 540 940, followed by free space and control information. A
base-cluster data control interval contains the records MIKE 12 and
TOM 10 41 54, followed by free space and control information.]
If you ask to access records with the alternate key of BEN, VSAM does the
following:
1. VSAM scans the index component of the alternate index, looking for a value
greater than or equal to BEN.
2. The entry FRED points VSAM to a data control interval in the alternate index.
3. VSAM scans the alternate index data control interval looking for an entry that
matches the search argument, BEN.
4. When located, the entry BEN has an associated pointer, 400, that points to an
RBA in the base cluster.
5. VSAM retrieves the record with an RBA of X'400' from the base cluster.
A search for a given alternate key reads all the base cluster records containing this
alternate key. For example, Figure 18 on page 98 and Figure 19 on page 99 show
that one salesman has several customers. For the key-sequenced data set, several
primary-key pointers (customer numbers) are in the alternate-index data record.
There is one for each occurrence of the alternate key (salesman’s name) in the base
data set. For the entry-sequenced data set, several RBA pointers are in the alternate
index data record. There is one for each occurrence of the alternate key (salesman’s
name) in the base data set. The pointers are ordered by arrival time.
Before a base cluster can be accessed through an alternate index, a path must be
defined. A path provides a way to gain access to the base data through a specific
alternate index. To define a path use the access method services command DEFINE
PATH.
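For example (the path and alternate index names are illustrative), a path is defined as follows:

```
/* Illustrative names */
DEFINE PATH -
    (NAME(EXAMPLE.PATH1) -
    PATHENTRY(EXAMPLE.AIX1))
```

Opening EXAMPLE.PATH1 then gives access to the base cluster records through the alternate index EXAMPLE.AIX1.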
Data Compression
When deciding whether to compress data, consider the following guidelines and
rules:
v Compress when an existing data set is approaching the 4 gigabyte VSAM size
limit or when you have capacity constraints
v Only SMS-managed data is eligible for compression
v The data set must be an extended format key-sequenced data set
v Control interval access is not permitted.
Any program other than DFSMSdss or REPRO that does physical data copy/move
operations or direct input/output to DASD against data sets holding data in
compressed format can compromise data integrity. Such programs must be modified
to access the data using VSAM keyed access, which permits expansion of the
compressed data.
Topic Location
Using Cluster Names for Data and Index Components 104
Defining a Data Set with Access Method Services 104
Defining a Data Set with JCL 113
Loading a VSAM Data Set 113
Copying and Merging Data Sets 117
Defining Alternate Indexes 119
Defining a Page Space 123
Checking for Problems in Catalogs and Data Sets 124
Deleting Data Sets 125
This chapter explains how to define VSAM data sets. Other chapters provide
examples and related information:
v For an example of defining a VSAM data set, see Chapter 8, “Defining and
Manipulating VSAM Data Sets: Examples,” on page 127.
v For examples of defining VSAM data sets, see z/OS DFSMS Access Method
Services for Catalogs.
v For information about defining a data set using RLS, see “Locking” on page 229.
VSAM data sets are defined using access method services commands, JCL, or
dynamic allocation. A summary of defining a VSAM data set follows:
1. VSAM data sets must be cataloged. If you want to use a new catalog, use
access method services commands to create a catalog. The procedure for
defining a catalog is described in z/OS DFSMS Managing Catalogs.
2. Define a VSAM data set in a catalog using the TSO ALLOCATE command, the
access method services ALLOCATE or DEFINE CLUSTER command, dynamic
allocation, or JCL. Before you can define a VSAM data set with dynamic
allocation or JCL, SMS must be active on your system. Dynamic allocation and
JCL do not support most of the DEFINE options available with access method
services.
3. Load the data set with either the access method services REPRO command or
your own loading program.
4. Optionally, define any alternate indexes and relate them to the base cluster. Use
the access method services DEFINE ALTERNATEINDEX, DEFINE PATH, and
BLDINDEX commands to do this.
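As a sketch of step 4 (all data set names, key values, and space values are illustrative), an alternate index is defined and related to its base cluster, a path is defined, and the alternate index is built:

```
/* Illustrative names, key values, and space values */
DEFINE ALTERNATEINDEX -
    (NAME(EXAMPLE.AIX1) -
    RELATE(EXAMPLE.KSDS1) -
    KEYS(10 20) -
    NONUNIQUEKEY -
    UPGRADE -
    TRACKS(10 2) -
    VOLUMES(VSER01))
DEFINE PATH -
    (NAME(EXAMPLE.PATH1) -
    PATHENTRY(EXAMPLE.AIX1))
BLDINDEX INDATASET(EXAMPLE.KSDS1) -
    OUTDATASET(EXAMPLE.AIX1)
```

KEYS(10 20) names an assumed alternate key of length 10 at offset 20 in the base records; NONUNIQUEKEY allows the alternate key to point to more than one base record, and UPGRADE keeps the alternate index synchronized with changes to the base cluster.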
After any of these steps, you can use the access method services LISTCAT and
PRINT commands to verify what has been defined, loaded, or processed. The
LISTCAT and PRINT commands are useful for identifying and correcting
problems.
If you use DEFINE CLUSTER, attributes of the data and index components can be
specified separately from attributes of the cluster.
v If attributes are specified for the cluster and not the data and index components,
the attributes of the cluster (except for password and USVR security attributes)
apply to the components.
v If an attribute that applies to the data or index component is specified for both
the cluster and the component, the component specification overrides the
cluster’s specification.
If you use ALLOCATE, attributes can be specified only at the cluster level.
Naming a Cluster
You specify a name for the cluster when defining it. Usually, the cluster name is
given as the dsname in JCL. A cluster name that contains more than 8 characters
must be segmented by periods; 1 to 8 characters can be specified between periods.
A name with a single segment is called an unqualified name. A name with more
than 1 segment is called a qualified name. Each segment of a qualified name is
called a qualifier.
You can, optionally, name the components of a cluster. Naming the data
component of an entry-sequenced cluster or a linear data set, or the data and index
components of a key-sequenced cluster, makes it easier to process the components
individually.
If you do not explicitly specify a data or index component name when defining a
VSAM data set or alternate index, VSAM generates a name. Also, when you define
a user catalog, VSAM generates only an index name for the user catalog (the name
of the user catalog is also the data component name). VSAM uses the following
format to generate names for both system-managed and non-system-managed data
sets:
1. If the last qualifier of the name is CLUSTER, replace the last qualifier with DATA
for the data component and INDEX for the index component.
After a name is generated, VSAM searches the catalog to ensure that the name is
unique. If a duplicate name is found, VSAM continues generating new names
using the format outlined in 4 until a unique one is produced.
z/OS DFSMS Access Method Services for Catalogs describes the order in which the
system selects one of the available catalogs to contain the to-be-defined
catalog entry. When you define an object, ensure that the catalog the system
selects is the catalog in which you want the object entered.
Data set name duplication is not prevented when a user catalog is imported into a
system. No check is made to determine if the imported catalog contains an entry
name that already exists in another catalog in the system.
Temporary system-managed VSAM data sets do not require that you specify a data
set name. If you specify a data set name, it must begin with & or &&:
DSNAME(&CLUSTER)
See “Examples of Defining Temporary VSAM Data Sets” on page 130 for
information about using the ALLOCATE command to define a temporary
system-managed VSAM data set. See “Temporary VSAM Data Sets” on page 269
for information about restrictions on using temporary data sets.
If the Storage Management Subsystem (SMS) is active, and you are defining a
system-managed cluster, you can explicitly specify the data class, management
class, and storage class parameters and take advantage of attributes defined by
your storage administrator. You can also implicitly specify the SMS classes by
taking the system determined defaults if such defaults have been established by
your storage administrator. The SMS classes are assigned only at the cluster level.
You cannot specify them at the data or index level.
If SMS is active and you are defining a non-system-managed cluster, you can also
explicitly specify the data class or take the data class default if one is available.
Management class and storage class are not supported for non-system-managed
data sets.
If you are defining a non-system-managed data set and you do not specify the
data class, you must explicitly specify all necessary descriptive, performance,
security, and integrity information through other access method services
parameters. Most of these parameters can be specified for the data component, the
index component, or both. Specify information for the entire cluster with the
CLUSTER parameter. Specify information for only the data component with the
DATA parameter and for only the index component with the INDEX parameter.
See “Using Access Method Services Parameters” for an explanation of the types of
descriptive, performance, security, and integrity information specified using these
parameters.
Both the data class and some other access method services parameters can be used
to specify values to the same parameter, for example, the control interval size. The
system uses the following order of precedence, or filtering, to determine which
parameter value to assign.
1. Explicitly specified DEFINE command parameters
2. Modeled attributes (assigned by specifying the MODEL parameter on the
DEFINE command)
3. Data class attributes
4. DEFINE command parameter defaults
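For example, in the following DEFINE CLUSTER command (the data set and data
class names are illustrative), the explicitly specified CONTROLINTERVALSIZE
overrides any control interval size supplied by the data class, while attributes
not coded on the command are still taken from the data class or from the DEFINE
defaults:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.KSDS1) -
     DATACLASS(DCEXAMP) -
     CONTROLINTERVALSIZE(4096))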
Descriptive Parameters
The following access method services parameters provide descriptive information:
Performance Parameters
The following access method services parameters provide performance
information. All these performance options are discussed in Chapter 10,
“Optimizing VSAM Performance,” on page 157.
v CONTROLINTERVALSIZE parameter—Specifies the control interval size for
VSAM to use (instead of letting VSAM calculate the size).
The size of the control interval must be large enough to hold a data record of
the maximum size specified in the RECORDSIZE parameter unless the data set
was defined with the SPANNED parameter.
Specify the CONTROLINTERVALSIZE parameter for data sets that use shared
resource buffering, so you know what control interval size to code on the
BLDVRP macro.
v SPANNED parameter—Specifies whether records can span control intervals. The
SPANNED parameter is not permitted for fixed-length and variable-length
RRDSs, and linear data sets.
v SPEED|RECOVERY parameter—Specifies whether to preformat control areas
during initial loading of a data set. See “Using a Program to Load a Data Set”
on page 115.
v VOLUMES parameter for the index component—Specifies whether to place the
cluster’s index on a separate volume from data.
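For example, the following DEFINE CLUSTER command (data set names and volume
serial numbers are illustrative) specifies a control interval size for the data
component and places the index component on a separate volume from the data:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.KSDS2) -
     KEYS(8 0) -
     RECORDSIZE(100 200) -
     CYLINDERS(10 1) -
     INDEXED) -
    DATA (VOLUMES(VSER01) -
     CONTROLINTERVALSIZE(8192)) -
    INDEX (VOLUMES(VSER02))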
Restriction: Defining a new KEYRANGE data set is no longer supported. For more
information about converting key-range data sets, see the z/OS DFSMShsm
Implementation and Customization Guide.
You can specify space allocation at the cluster or alternate-index level, at the data
level only, or at both the data and index levels. It is best to allocate space at the
cluster or data levels. VSAM allocates space as follows:
v If allocation is specified at the cluster or alternate index level only, the
amount needed for the index is subtracted from the specified amount. The
remainder of the specified amount is assigned to data.
v If allocation is specified at the data level only, the specified amount is
assigned to data. The amount needed for the index is in addition to the
specified amount.
v If allocation is specified at both the data and index levels, the specified
data amount is assigned to data and the specified index amount is assigned to
the index.
v If secondary allocation is specified at the data level, secondary allocation
must be specified at the index level or the cluster level.
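For example, the following command (names are illustrative) requests space at
the data level only, so the ten cylinders are assigned entirely to the data
component and VSAM allocates additional space for the index:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.KSDS3) -
     VOLUMES(VSER01) -
     KEYS(8 0) -
     RECORDSIZE(100 200) -
     INDEXED) -
    DATA (CYLINDERS(10 1))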
VSAM acquires space in increments of control areas. The control area size
generally is based on primary and secondary space allocations. See “Optimizing
Control Area Size” on page 161 for information about optimizing control area size.
Partial Release
Partial release is used to release unused space from the end of an extended format
data set and is specified through SMS management class or by the JCL RLSE
subparameter. All space after the high used RBA is released on a CA boundary up
to the high allocated RBA. If the high used RBA is not on a CA boundary, the high
used amount is rounded to the next CA boundary. Partial release restrictions
include:
v Partial release processing is supported only for extended format data sets.
v Only the data component of the VSAM cluster is eligible for partial release.
v Alternate indexes opened for path or upgrade processing are not eligible for
partial release. The data component of an alternate index when opened as
cluster could be eligible for partial release.
v Partial release processing is not supported for temporary close.
v Partial release processing is not supported for data sets defined with guaranteed
space.
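As a sketch, one way to request partial release through JCL (the data set name
is illustrative; partial release can also be requested through the SMS
management class) is the RLSE subparameter of the SPACE parameter on the DD
statement for an existing extended format data set:

  //VSAMDD  DD DSNAME=EXAMPLE.KSDS,DISP=OLD,
  //           SPACE=(CYL,(0,0),RLSE)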
VSAM checks the smaller of primary and secondary space values against the
specified device’s cylinder size. If the smaller quantity is greater than or equal to
the device’s cylinder size, the control area is set equal to the cylinder size. If the
smaller quantity is less than the device’s cylinder size, the size of the control area
is set equal to the smaller space quantity. The minimum control area size is one
track. See “Optimizing Control Area Size” on page 161 for information about
creating small control areas.
See “Using Index Options” on page 177 for information about index options.
When you define a linear data set, you can specify a control interval size of 4096 to
32 768 bytes in increments of 4096 bytes. If not an integer multiple of 4096, the
control interval size is rounded up to the next 4096 increment. The system chooses
the best physical record size to use the track size geometry. For example, if you
specify CISIZE(16384), the block size is set to 16 384. If the specified
BUFFERSPACE is greater than 8192 bytes, it is decremented to a multiple of 4096.
If BUFFERSPACE is less than 8192, access method services issues a message and
fails the command.
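For example, the following command (name and volume are illustrative) defines a
linear data set with a 16 384-byte control interval size:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.LDS1) -
     VOLUMES(VSER01) -
     CYLINDERS(5 1) -
     LINEAR -
     CONTROLINTERVALSIZE(16384))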
For nonstriped VSAM data sets, you can specify in the SMS data class parameter
whether to use primary or secondary allocation amounts when extending to a new
volume. You can expand the space for a nonstriped VSAM component to 255
extents. For SMS-managed VSAM data sets, this extent limit is removed, and the
theoretical limit is the maximum number of volumes (59), times 123 extents per
volume, or 7257 extents.
You can expand the space for a striped VSAM component to 255 times the number
of stripes. The VSAM limit of 255 extents is still enforced for any
non-SMS-managed data set. The system reserves the last four extents for extending
a component when the system cannot allocate the last extent in one piece.
For both guaranteed and nonguaranteed space allocations, when you allocate space
for your data set, you can specify both a primary and a secondary allocation.
Guaranteed and nonguaranteed space allocation work similarly until the system
extends the data set to a new volume. The difference is that the guaranteed space
data set uses the “candidate with space” amount that is already allocated on that
volume.
With guaranteed space allocations, the primary allocation is allocated on the first
volume as “PRIME” and all of the other guaranteed space volumes as “candidate
with space”. When all of the space on the primary volume is used, the system gets
space on the primary volume using the secondary amount. When no more space
can be allocated on the primary volume, the system uses the “candidate with
space” amount on the next volume. Subsequent extends again use the secondary
amounts to allocate space until the volume is full. Then the system uses the
“candidate with space” amount on the next volume, and so forth.
Example: The old extent begins on cylinder 6, track 0, and ends on cylinder 9,
track 14, and the new extent begins on cylinder 10, track 0, and ends on cylinder
12, track 14. The two extents are combined into one extent beginning on cylinder 6,
track 0, and ending on cylinder 12, track 14. Instead of two extents, there is only
one extent. Because VSAM combines the two extents, it does not increment the
extent count, which reduces the number of extents.
Example: You allocate a VSAM data set with CYLINDERS(3 1). The data set
initially gets three cylinders and an additional cylinder every time the data set is
extended. Suppose you extend this data set five times. If none of the extents are
adjacent, the LISTCAT output shows allocations of cylinders 3,1,1,1,1,1, or a total of
eight cylinders.
Results: Depending on which extents are adjacent, the LISTCAT output might
show allocations of cylinders 5,1,1,1, or cylinders 3,5, or cylinders 3,2,3, as follows:
v For the 5,1,1,1 example, only the first three extents are adjacent.
v For the 3,5 example, the first and second extents are not adjacent, but the
second through sixth extents are adjacent.
v For the 3,2,3 example, the first and second extent are not adjacent, the second
and third extents are adjacent, the third and fourth extents are not adjacent, and
the last three extents are adjacent.
All types of SMS-managed VSAM data sets (KSDS, ESDS, RRDS, VRRDS, and
LDS) use extent consolidation.
Restriction: VSAM does not support extent consolidation for the following types of
data sets:
v Key-range data sets
v System data sets such as page spaces
v Catalogs
v VVDSs
v Non-system managed data sets
v Imbedded or replicated indexes
v VSAM data sets that you access using record-level sharing
example shows how to calculate the size of the data component for a
key-sequenced data set. The following are assumed for the calculations:
The value (1024 – 10) is the control interval length minus 10 bytes for two RDFs
and one CIDF. The record size is 200 bytes. On an IBM 3390, 33 physical blocks
of 1024 bytes each can be stored on one track. The value (33 × 15) is the number
of physical blocks per track multiplied by the number of data tracks per cylinder.
You cannot use ALTER to change a fixed-length RRDS into a variable-length RRDS,
or vice versa.
Allocate all of the partitions in a single IEFBR14 job step using JCL. If an adequate
number of volumes exist in the storage groups, and the volumes are not above the
allocation threshold, the SMS allocation algorithms with SRM will ensure each
partition is allocated on a separate volume.
Related reading: See Chapter 18, “Using Job Control Language for VSAM,” on
page 265 for information about the JCL keywords used to define a VSAM data set.
See z/OS MVS JCL Reference and z/OS MVS JCL User’s Guide for information about
JCL keywords and the use of JCL.
With entry-sequenced or key-sequenced data sets, or RRDSs, you can load all the
records either in one job or in several jobs. If you use multiple jobs to load records
into a data set, VSAM stores the records from subsequent jobs in the same manner
that it stored records from preceding jobs, extending the data set as required.
When records are to be stored in key sequence, index entries are created and
loaded into an index component as data control intervals and control areas are
filled. Free space is left as indicated in the cluster definition in the catalog.
VSAM data sets must be cataloged. Sequential and indexed sequential data sets
need not be cataloged. Sequential data sets that are system managed must be
cataloged.
The only way to specify the DSORG parameter is to use the DD statement. The
DCB parameters RECFM, BLKSIZE, and LRECL can be supplied using the DSCB
or header label of a standard labeled tape, or by the DD statement. The system can
determine the optimum block size.
If you use REPRO to copy to a sequential data set, you do not need to supply a
block size because the system determines the block size when it opens the data set.
You can optionally supply a BLKSIZE value using JCL or when you define the
output data set.
If you are loading a VSAM data set into a sequential data set, you must remember
that the 3-byte VSAM record definition field (RDF) is not included in the VSAM
record length. When REPRO attempts to copy a VSAM record whose length is
more than the non-VSAM LRECL−4, a recoverable error occurs and the record is
not copied. (Each non-VSAM record has a four-byte prefix that is included in the
length. Thus, the length of each VSAM variable-length record is four bytes less
than the length of the non-VSAM record.)
Access method services does not support records greater than 32 760 bytes for
non-VSAM data sets (LRECL=X is not supported). If the logical record length of a
non-VSAM input data set is greater than 32 760 bytes, or if a VSAM data set
defined with a record length greater than 32 760 is to be copied to a sequential
data set, the REPRO command terminates with an error message.
used as input. The records in the output data set must have a record length
defined that includes the extended length caused by the key string. To copy
“dummy” indexed-sequential records (with X'FF' in the first byte), specify the
DUMMY option in the ENVIRONMENT parameter.
Related reading: For information about physical and logical errors, see z/OS
DFSMS Macro Instructions for Data Sets.
VSAM uses the high-used RBA field to determine whether a data set is empty. An
implicit verify can update the high-used RBA. Immediately after definition of a
data set, the high-used RBA value is zero. An empty data set cannot be verified.
The terms create mode, load mode, and initial data set load are synonyms for the
process of inserting records into an empty VSAM data set. To start loading an
empty VSAM data set, call the VSAM OPEN macro. Following a successful open,
the load continues while records are added and concludes when the data set is
closed.
Restriction: If an entry-sequenced data set fails to load, you cannot open it.
If the design of your application calls for direct processing during load mode, you
can avoid this restriction by following these steps:
1. Open the empty data set for load mode processing.
2. Sequentially write one or more records, which could be dummy records.
3. Close the data set to terminate load mode processing.
4. Reopen the data set for normal processing. You can now resume loading or do
direct processing. When using this method to load a VSAM data set, be
cautious about specifying partial release. Once the data set is closed, partial
release will attempt to release all space not used.
For information about using user-written exit routines when loading records into a
data set, see Chapter 16, “Coding VSAM User-Written Exit Routines,” on page 241.
During load mode, each control area can be preformatted as records are loaded
into it. Preformatting is useful for recovery if an error occurs during loading.
However, performance is better during initial data set load without preformatting.
The RECOVERY parameter of the access method services DEFINE command is
used to indicate that VSAM is to preformat control areas during load mode. For
a fixed-length RRDS, a control area in which a record is inserted during load
mode is always preformatted, even if SPEED is specified. With RECOVERY, all
control areas are preformatted.
Preformatting clears all previous information from the direct access storage area
and writes end-of-file indicators. For VSAM, an end-of-file indicator consists of a
control interval with a CIDF equal to zeros.
v For an entry-sequenced data set, VSAM writes an end-of-file indicator in every
control interval in the control area.
v For a key-sequenced data set, VSAM writes an end-of-file indicator in the first
control interval in the control area following the preformatted control area. (The
preformatted control area contains free control intervals.)
v For a fixed-length RRDS, VSAM writes an end-of-file indicator in the first
control interval in the control area following the preformatted control area. All
RDFs in an empty preformatted control interval are marked “slot empty”.
The SPEED parameter does not preformat the data control areas. It writes an
end-of-file indicator only after the last record is loaded. Performance is better
if you use the SPEED parameter, especially for extended format data sets, which
can use system-managed buffering to optimize the number of data buffers for load
mode processing. You can use SPEED with the REPRO command when loading a new
data set for reorganization or recovery. If an error
occurs that prevents loading from continuing, you cannot identify the last
successfully loaded record and you might have to reload the records from the
beginning. For a key-sequenced data set, the SPEED parameter only affects the
data component.
Rule: Remember that, if you specify SPEED, it will be in effect for load mode
processing. After load mode processing, RECOVERY will be in effect, regardless of
the DEFINE specification.
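For example, the following command (names are illustrative) defines a
key-sequenced data set with SPEED, so that control areas are not preformatted
during the initial load:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.KSDS4) -
     VOLUMES(VSER01) -
     KEYS(8 0) -
     RECORDSIZE(100 200) -
     CYLINDERS(10 1) -
     INDEXED -
     SPEED)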
A data set that is not reusable can be loaded only once. After the data set is
loaded, it can be read and written to, and the data in it can be modified. However,
the only way to remove the set of data is to use the access method services
command DELETE, which deletes the entire data set. If you want to use the data
set again, define it with the access method services command DEFINE, by JCL, or
by dynamic allocation.
Instead of using the DELETE - DEFINE sequence, you can specify the REUSE
parameter in the DEFINE CLUSTER|ALTERNATEINDEX command. The REUSE
parameter lets you treat a filled data set as if it were empty and load it again and
again regardless of its previous contents.
A reusable data set can be a KSDS, an ESDS, an LDS, or a RRDS that resides on
one or more volumes. A reusable base cluster cannot have an alternate index, and
it cannot be associated with key ranges. When a reusable data set is opened with
the reset option, it cannot be shared with other jobs.
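For example, the following command (names are illustrative) defines a reusable
entry-sequenced data set; a program that opens it with MACRF=RST in the ACB can
reload it from the beginning:

  DEFINE CLUSTER -
    (NAME(EXAMPLE.ESDS1) -
     VOLUMES(VSER01) -
     CYLINDERS(5 1) -
     NONINDEXED -
     REUSE)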
VSAM uses a high-used relative byte address (RBA) field to determine if a data set
is empty or not. Immediately after you define a data set, the high-used RBA value
is zero. After loading and closing the data set, the high-used RBA is equal to the
offset of the last byte in the data set. In a reusable data set, you can reset
this high-used RBA field to zero by specifying MACRF=RST in the ACB at OPEN.
VSAM can then use the reusable data set like a newly defined data set.
For compressed format data sets, in addition to the high-used RBA field being
reset to zero for MACRF=RST, OPEN resets the compressed and uncompressed
data set sizes to zero. The system does not reset the compression dictionary token
and reuses it to compress the new data. Because the dictionary token is derived
from previous data, this action could affect the compression ratio depending on the
nature of the new data.
For information about accessing a data set using RLS, see Chapter 14, “Using
VSAM Record-Level Sharing,” on page 219.
Because data is copied as single logical records in either key order or physical
order, automatic reorganization can take place as follows:
v Physical relocation of logical records
v Alteration of a record’s physical position within the data set
v Redistribution of free space throughout the data set
v Reconstruction of the VSAM indexes
If you are copying to or from a sequential data set that is not cataloged, you must
include the appropriate volume and unit parameters on your DD statements. For
more information about these parameters see “Using REPRO to Copy a VSAM
Data Set” on page 114.
Table 10 describes how the data from the input data set is added to the output data
set when the output data set is an empty or nonempty entry-sequenced, sequential,
key-sequenced, or linear data set, or fixed-length or variable-length RRDS.
Table 10. Adding Data to Various Types of Output Data Sets

Entry sequenced
  Empty: Loads the new data set in sequential order.
  Nonempty: Adds records in sequential order to the end of the data set.

Sequential
  Empty: Loads the new data set in sequential order.
  Nonempty: Adds records in sequential order to the end of the data set.

Key sequenced
  Empty: Loads the new data set in key sequence and builds an index.
  Nonempty: Merges records by key and updates the index. Unless the REPLACE
  option is specified, records whose keys duplicate a key in the output data
  set are lost.

Linear
  Empty: Loads the new linear data set in relative byte order.
  Nonempty: Adds data to control intervals in sequential order to the end of
  the data set.

Fixed-length RRDS
  Empty: Loads a new data set in relative record sequence, beginning with
  relative record number 1.
  Nonempty: Records from another fixed-length or variable-length RRDS are
  merged, keeping their old record numbers. Unless the REPLACE option is
  specified, a new record whose number duplicates an existing record number is
  lost. Records from any other type of organization cannot be copied into a
  nonempty fixed-length RRDS.

Variable-length RRDS
  Empty: Loads a new data set in relative record sequence, beginning with
  relative record number 1.
  Nonempty: Records from another fixed-length or variable-length RRDS are
  merged, keeping their old record numbers. Unless the REPLACE option is
  specified, a new record whose number duplicates an existing record number is
  lost. Records from any other type of organization cannot be copied into a
  nonempty variable-length RRDS.
Except for data class, attributes of the alternate index’s data and index components
can be specified separately from the attributes of the whole alternate index. If
attributes are specified for the whole alternate index and not for the data and
index components, these attributes (except for password and USVR security
attributes) apply to the components as well. If the attributes are specified for the
components, they override any attributes specified for the entire alternate index.
The performance options and the security and integrity information for the
alternate index are the same as that for the cluster. See “Using Access Method
Services Parameters” on page 106.
example, you would not be able to support as many nonunique keys as you would
if the maximum RECORDSIZE value were 5000.
Access method services opens the base cluster to read the data records sequentially,
sorts the information obtained from the data records, and builds the alternate
index data records.
The base cluster’s data records are read and information is extracted to form the
key-pointer pair:
v When the base cluster is entry sequenced, the alternate-key value and the data
record’s RBA form the key-pointer pair.
v When the base cluster is key sequenced, the alternate-key value and the
primary-key value of the data set record form the key-pointer pair.
After the key-pointer pairs are sorted into ascending alternate key order, access
method services builds alternate index records for key-pointer pairs. When all
alternate index records are built and loaded into the alternate index, the alternate
index and its base cluster are closed.
Related reading: For information about calculating the amount of virtual storage
required to sort records, using the BLDINDEX command, and the catalog search
order, see z/OS DFSMS Access Method Services for Catalogs.
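For example, the following BLDINDEX command (names are illustrative) reads the
base cluster records, sorts the key-pointer pairs, and loads the alternate
index:

  BLDINDEX -
    INDATASET(EXAMPLE.BASE.KSDS) -
    OUTDATASET(EXAMPLE.AIX)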
You can maintain your own alternate indexes or have VSAM maintain them. When
the alternate index is defined with the UPGRADE attribute of the DEFINE
command, VSAM updates the alternate index whenever there is a change to the
associated base cluster. VSAM opens all upgrade alternate indexes for a base
cluster whenever the base cluster is opened for output. If you are using control
interval processing, you cannot use UPGRADE. See Chapter 11, “Processing
Control Intervals,” on page 179.
You can define a maximum of 125 alternate indexes in a base cluster with the
UPGRADE attribute.
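For example, the following command (names, key position, and space amounts are
illustrative) defines an alternate index with the UPGRADE attribute so that
VSAM maintains it whenever the base cluster changes:

  DEFINE ALTERNATEINDEX -
    (NAME(EXAMPLE.AIX) -
     RELATE(EXAMPLE.BASE.KSDS) -
     VOLUMES(VSER01) -
     KEYS(10 5) -
     RECORDSIZE(40 200) -
     CYLINDERS(2 1) -
     UPGRADE)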
All the alternate indexes of a given base cluster that have the UPGRADE attribute
belong to the upgrade set. The upgrade set is updated whenever a base data
record is inserted, erased, or updated. The upgrading is part of a request and
VSAM completes it before returning control to your program. If upgrade
processing is interrupted because of a machine or program error so that a record is
missing from the base cluster but its pointer still exists in the alternate index,
record management will synchronize the alternate index with the base cluster by
letting you reinsert the missing base record. However, if the pointer is missing
from the alternate index, that is, the alternate index does not reflect all the base
cluster data records, you must rebuild your alternate index to resolve this
discrepancy.
Note that when you use SHAREOPTIONS 2, 3, and 4, you must continue to ensure
read/write integrity when issuing concurrent requests (such as GETs and PUTs) on
the base cluster and its associated alternate indexes. Failure to ensure read/write
integrity might temporarily cause “No Record Found” or “No Associated Base
Record” errors for a GET request. You can bypass such errors by reissuing the GET
request, but it is best to prevent the errors by ensuring read/write integrity.
If you specify NOUPGRADE in the DEFINE command when the alternate index is
defined, insertions, deletions, and changes made to the base cluster will not be
reflected in the associated alternate index.
When a path is opened for update, the base cluster and all the alternate indexes in
the upgrade set are allocated. If updating the alternate indexes is unnecessary, you
can specify NOUPDATE in the DEFINE PATH command and only the base cluster
is allocated. In that case, VSAM does not automatically upgrade the alternate
index. If two paths are opened with MACRF=DSN specified in the ACB macro, the
NOUPDATE specification of one can be nullified if the other path is opened with
UPDATE specified.
Defining a Path
After an alternate index is defined, you need to establish the relationship between
an alternate index and its base cluster, using the access method services command,
DEFINE PATH. You must name the path and can also give it a password. The path
name refers to the base cluster/alternate index pair. When you access the data set
through the path, you must specify the path name in the DSNAME parameter in
the JCL.
When your program opens a path for processing, both the alternate index and its
base cluster are opened. When data in a key-sequenced base cluster is read or
written using the path’s alternate index, keyed processing is used. RBA processing
is permitted only for reading or writing an entry-sequenced data set’s base cluster.
Related reading: See z/OS DFSMS Access Method Services for Catalogs for
information about using the DEFINE PATH command.
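For example, the following command (names are illustrative) defines a path that
relates an alternate index to its base cluster:

  DEFINE PATH -
    (NAME(EXAMPLE.PATH) -
     PATHENTRY(EXAMPLE.AIX))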
A page space has a maximum size equal to 16 777 215 slots (records). However, the
actual usable page space is much less because it has a size limit of 4 GB.
The considerations for defining a page space are much like those for defining a
cluster. The DEFINE PAGESPACE command has many of the same parameters as
the DEFINE CLUSTER command, so the information you must supply for a page
space is similar to what you would specify for a cluster. A page space data set
cannot be in extended format. For a 3390 DASD, the maximum size of a page
space that you can specify on the DEFINE PAGESPACE with CYLINDERS is 5 825.
You can define a page space in a user catalog, then move the catalog to a new
system, and establish it as the system’s master catalog. For page spaces to be
system managed, they must be cataloged, and you must let the system determine
which catalog to use. Page spaces also cannot be duplicate data sets. The system
cannot use a page space if its entry is in a user catalog.
When you issue a DEFINE PAGESPACE command, the system creates an entry in
the catalog for the page space, then preformats the page space. If an error occurs
during the preformatting process (for example, an I/O error or an allocation error),
the page space’s entry remains in the catalog even though no space for it exists.
Issue a DELETE command to remove the page space’s catalog entry before you
redefine the page space.
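As an illustration (the name, volume, and space amount are hypothetical), a page
space might be defined as follows:

  DEFINE PAGESPACE -
    (NAME(SYS1.PAGE.EXAMPLE) -
     CYLINDERS(100) -
     VOLUME(PAGE01))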
Each page space is represented by two entries in the catalog: a cluster entry and a
data entry. (A page space is an entry-sequenced cluster.) Both of these entries
should be password protected if the page space is password protected.
The system recognizes a page space if it is defined as a system data set at system
initialization time or if it is named in SYS1.PARMLIB. To be used as a page space,
it must be defined in a master catalog.
Recommendations:
1. When you define page spaces during system initialization, use the ALTER
command to add passwords to each entry because passwords cannot be
specified during system initialization. The passwords you specify with the
DEFINE PAGESPACE command are put in both the page space’s cluster entry
and its data entry. Unless you ensure that the catalog containing the page space
entry is either password protected or RACF protected, a user can list the
catalog’s contents and find out each entry’s passwords.
2. Passwords are ignored for system-managed data sets. For these, you must have
RACF alter authority.
Related reading:
v For information about using the DEFINE PAGESPACE parameter to define the
page size, see z/OS DFSMS Access Method Services for Catalogs.
v For details on specifying information for a data set, especially for
system-managed data sets, see “Specifying Cluster Information” on page 106
and “Using Access Method Services Parameters” on page 106.
v For information about how VSAM handles duplicate data sets, see “Duplicate
Data Set Names” on page 105.
You can also use the access method services REPRO command to copy a data set
to an output device. For more information about REPRO see “Copying and
Merging Data Sets” on page 117.
The access method services VERIFY command provides a means of checking and
restoring end-of-data-set values after system failure.
The access method services EXAMINE command lets the user analyze and report
on the structural inconsistencies of key-sequenced data set clusters. The EXAMINE
command is described in Chapter 15, “Checking VSAM Key-Sequenced Data Set
Clusters for Structural Errors,” on page 235.
Related reading: For more information about VERIFY, see “Using VERIFY to
Process Improperly Closed Data Sets” on page 50. For information about using the
DIAGNOSE command to indicate the presence of nonvalid data or relationships in
the BCS and VVDS, see z/OS DFSMS Managing Catalogs.
The listing can be customized by limiting the number of entries, and the
information about each entry, that is printed.
You can obtain the same list while using the interactive storage management
facility (ISMF) by issuing the CATLIST line operator on the Data Set List panel.
The list is placed into a data set, which you can view immediately after issuing the
request.
Related reading: See z/OS DFSMS Using the Interactive Storage Management Facility
for information about the CATLIST line operator.
Entry-sequenced and linear data sets are printed in physical sequential order.
Key-sequenced data sets can be printed in key order or in physical-sequential
order. Fixed-length or variable-length RRDSs are printed in relative record number
sequence. A base cluster can be printed in alternate key sequence by specifying a
path name as the data set name for the cluster.
Only the data content of logical records is printed. System-defined control fields
are not printed. Each record printed is identified by one of the following:
v The relative byte address (RBA) for entry-sequenced data sets.
v The key for indexed-sequential and key-sequenced data sets, and for alternate
indexes
v The record number for fixed-length or variable-length RRDSs.
Related reading: See z/OS MVS Programming: Authorized Assembler Services Guide
for information about program authorization. See “Authorized Program Facility
and Access Method Services” on page 62 for information about using the PRINT
command to print a catalog.
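For example, the following command (name is illustrative) prints a
key-sequenced data set in key order, formatting the records as characters:

  PRINT INDATASET(EXAMPLE.KSDS) CHARACTER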
Restriction: If the system finds four logical and/or physical errors while
attempting to read the input, printing ends abnormally.
Use the ERASE parameter if you want to erase the components of a cluster or
alternate index when deleting it. ERASE overwrites the data set. Use the
NOSCRATCH parameter if you do not want the data set entry (DSCB) removed
from the VTOC. NOSCRATCH nullifies an ERASE parameter on the same DELETE
command.
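For example, the following command (name is illustrative) deletes a cluster and
overwrites the space occupied by its components:

  DELETE EXAMPLE.KSDS CLUSTER ERASE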
Use access method services to delete a VSAM cluster or a path that has
associated alternate indexes defined with NOUPGRADE. If you instead perform
the delete using JCL by specifying a DD statement with DISP=(OLD,DELETE), not
all of the volumes necessary to delete the alternate index are allocated, and
the delete operation fails with an error message when the job step ends.
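For example, a minimal IDCAMS job that deletes the cluster and overwrites its components might look like the following sketch. The data set name EXAMPL1.KSDS is carried over from the examples in this chapter; adjust names for your installation.

```jcl
//DELSTEP  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
    DELETE EXAMPL1.KSDS -
           CLUSTER -
           ERASE
/*
```

ERASE overwrites the data components before the catalog entry is removed; specifying NOSCRATCH instead would leave the DSCB in the VTOC and nullify ERASE on the same DELETE command.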
Topic Location
Example of Defining a VSAM Data Set 128
Examples of Defining Temporary VSAM Data Sets 130
Examples of Defining Alternate Indexes and Paths 131
The following set of examples contains a wide range of functions available through
access method services commands that let you define:
v VSAM data sets
v Temporary VSAM data sets
v Alternate indexes and paths
See z/OS DFSMS Access Method Services for Catalogs for examples of the other
functions available through access method services.
IF LASTCC = 0 THEN -
DEFINE CLUSTER(NAME (EXAMPL1.KSDS) VOLUMES(VSER05)) -
DATA (KILOBYTES (50 5))
/*
//STEP2 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//AMSDUMP DD SYSOUT=*
//INDSET4 DD DSNAME=SOURCE.DATA,DISP=OLD,
// VOL=SER=VSER02,UNIT=3380
//SYSIN DD *
IF LASTCC = 0 THEN -
LISTCAT ENTRIES(EXAMPL1.KSDS)
IF LASTCC = 0 THEN -
PRINT INDATASET(EXAMPL1.KSDS)
/*
The following access method services commands are used in this example:
See Chapter 18, “Using Job Control Language for VSAM,” on page 265 for
examples of creating VSAM data sets through JCL. See z/OS DFSMS Access Method
Services for Catalogs for more details and examples of these or other access method
services commands.
The first DEFINE command defines a user catalog named USERCATX. The
USERCATALOG keyword specifies that a user catalog is to be defined. The
command’s parameters follow.
The REPRO command here loads the VSAM key-sequenced data set named
EXAMPL1.KSDS from an existing data set called SOURCE.DATA (that is described
by the INDSET4 DD statement). The command’s parameters are:
INFILE Identifies the data set containing the source data. The ddname of the DD
statement for this data set must match the name specified on this
parameter.
OUTDATASET Identifies the name of the data set to be loaded. Access method services
dynamically allocates the data set. The data set is cataloged in the
master catalog.
If the REPRO operation is successful, the data set’s catalog entry is listed, and the
contents of the data set just loaded are printed.
LISTCAT Lists catalog entries. The ENTRIES parameter identifies the names of the
entries to be listed.
PRINT Prints the contents of a data set. The INDATASET parameter is required
and identifies the name of the data set to be printed. Access method
services dynamically allocates the data set. The data set is cataloged in
the master catalog. No password is required because the cluster
component is not password protected.
ALLOC -
DSNAME(&CLUSTER) -
NEW -
RECORG(ES) -
SPACE(1,10) -
AVGREC(M) -
LRECL(256) -
STORCLAS(TEMP)
/*
DSNAME Specifies the data set name. If you specify a data set name for a
system-managed temporary data set, it must begin with & or &&. The
DSNAME parameter is optional for temporary data sets only. If you do
not specify a DSNAME, the system generates a qualified data set name
for the temporary data set.
NEW Specifies that a new data set is created in this job step.
RECORG Specifies a VSAM entry-sequenced data set.
SPACE Specifies an average record length of 1 and a primary quantity of 10.
AVGREC Specifies that the primary quantity specified on the SPACE keyword
represents the number of records in units of 1,048,576 (the M, or megabyte, multiplier).
LRECL Specifies a record length of 256 bytes.
STORCLAS Specifies a storage class for the temporary data set. The STORCLAS
keyword is optional. If you do not specify STORCLAS for the new data
set and your storage administrator has provided an ACS routine, the
ACS routine can select a storage class.
JCL Statements
The IDCUT1 and IDCUT2 DD statements describe the DSNAMES and a volume
containing data space made available to BLDINDEX for defining and using two
sort work data sets in the event an external sort is performed. The data space is
not used by BLDINDEX if enough virtual storage is available to perform an
internal sort.
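A sketch of such DD statements follows; the data set names, volume serial, and unit are illustrative only, and AMP='AMORG' identifies the sort work data sets as VSAM data spaces.

```jcl
//IDCUT1   DD DSNAME=SORT.WORK.ONE,DISP=OLD,
//            AMP='AMORG',VOL=SER=VSER03,UNIT=3380
//IDCUT2   DD DSNAME=SORT.WORK.TWO,DISP=OLD,
//            AMP='AMORG',VOL=SER=VSER03,UNIT=3380
```

If BLDINDEX completes its sort in virtual storage, these statements are simply not used.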
Commands
The first DEFINE command defines a VSAM alternate index over the base cluster
EXAMPL1.KSDS.
The second DEFINE command defines a path over the alternate index. After the
alternate index is built, opening with the path name causes processing of the base
cluster through the alternate index.
NAME The NAME parameter is required and names the object being defined.
PATHENTRY The PATHENTRY parameter is required and specifies the name of the
alternate index over which the path is defined and its master password.
READPW Specifies a read password for the path; it is propagated to the
master-password level.
CATALOG The CATALOG parameter is required, because the master catalog is
password protected. It specifies the name of the master catalog and its
update or master password that is required for defining in a protected
catalog.
The BLDINDEX command builds an alternate index. Assume that enough virtual
storage is available to perform an internal sort. However, DD statements with the
default ddnames of IDCUT1 and IDCUT2 are provided for two external sort work
data sets if the assumption is incorrect and an external sort must be performed.
INDATASET The INDATASET parameter identifies the base cluster. Access method
services dynamically allocates the base cluster. The base cluster’s cluster
entry is not password protected even though its data and index
components are.
OUTDATASET The OUTDATASET parameter identifies the alternate index. Access
method services dynamically allocates the alternate index. The update-
or higher-level password of the alternate index is required.
CATALOG The CATALOG parameter specifies the name of the master catalog. If it
is necessary for BLDINDEX to use external sort work data sets, they will
be defined in and deleted from the master catalog. The master password
permits these actions.
The PRINT command causes the base cluster to be printed using the alternate key,
using the path defined to create this relationship. The INDATASET parameter
identifies the path object. Access method services dynamically allocates the path.
The read password of the path is required.
Topic Location
Creating an Access Method Control Block 136
Creating an Exit List 136
Opening a Data Set 137
Creating a Request Parameter List 138
Manipulating the Contents of Control Blocks 140
Requesting Access to a Data Set 141
Closing Data Sets 151
Operating in SRB or Cross-Memory Mode 152
Using VSAM Macros in Programs 153
To process VSAM data sets, you use VSAM macros. You can use the following
procedure for processing a VSAM data set to read, update, add, or delete data:
1. Create an access method control block to identify the data set to be opened
using the ACB or GENCB macro.
2. Create an exit list to specify the optional exit routines that you supply, using
the EXLST or GENCB macro.
3. Optionally, create a resource pool, using the BLDVRP macro. (See Chapter 13,
“Sharing Resources Among VSAM Data Sets,” on page 207.)
4. Connect your program to the data set you want to process, using the OPEN
macro.
5. Create a request parameter list to define your request for access, using the RPL
or GENCB macro.
6. Manipulate the control block contents using the GENCB, TESTCB, MODCB and
SHOWCB macros.
7. Request access to the data set, using one or more of the VSAM request macros
(GET, PUT, POINT, ERASE, CHECK, and ENDREQ).
8. Disconnect your program from the data set, using the CLOSE macro.
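The steps above can be compressed into an assembler sketch. The ddname, labels, and routine names are illustrative, and assembler continuation conventions are omitted for brevity:

```hlasm
DATAACB  ACB   DDNAME=INDSET,MACRF=(KEY,SEQ,IN),EXLST=EXITS   steps 1-2
EXITS    EXLST EODAD=EODRTN                     step 2: exit list
LIST     RPL   ACB=DATAACB,AREA=WORK,AREALEN=256,OPTCD=(KEY,SEQ)
         OPEN  (DATAACB)          step 4: connect to the data set
LOOP     GET   RPL=LIST           step 7: retrieve the next record
         B     LOOP
EODRTN   CLOSE (DATAACB)          step 8: disconnect at end of data
```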
The virtual resource pool for all components of the clusters or alternate indexes
must be successfully built before any open is issued to use the resource pool;
otherwise, the results might be unpredictable or performance problems might
occur.
For information about the syntax of each macro, and for coded examples of the
macros, see z/OS DFSMS Macro Instructions for Data Sets.
The ACB, RPL, and EXLST are created by the caller of VSAM. When storage is
obtained for these blocks, virtual storage management assigns the PSW key of the
requestor to the subpool storage. An authorized task can change its PSW key. Since
VSAM record management runs in the protect key of its caller, such a change
might make previously acquired control blocks unusable because the storage key
of the subpool containing these control blocks no longer matches the VSAM
caller’s key.
Include the following information in your ACB for OPEN to prepare the kind of
processing your program requires:
v The address of an exit list for your exit routines. Use the EXLST macro to
construct the list.
v If you are processing concurrent requests, the number of requests (STRNO)
defined for processing the data set. For more information about concurrent
requests see “Making Concurrent Requests” on page 149.
v The size of the I/O buffer virtual storage space and/or the number of I/O
buffers that you are supplying for VSAM to process data and index records.
v The password required for the type of processing desired. Passwords are not
supported for system-managed data sets. You must have RACF authorization for
the type of operation to be performed.
v The processing options that you plan to use:
– Keyed, addressed, or control interval, or a combination
– Sequential, direct, or skip sequential access, or a combination
– Retrieval, storage, or update (including deletion), or a combination
– Shared or nonshared resources.
v The address and length of an area for error messages from VSAM.
v If using RLS, see Chapter 14, “Using VSAM Record-Level Sharing,” on page 219.
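An ACB supplying this kind of information might be coded as in the following sketch; the ddname, string number, buffer counts, and processing options are assumptions for illustration, and continuation conventions are omitted:

```hlasm
DATAACB  ACB   AM=VSAM,            VSAM access method control block
               DDNAME=INDSET,      DD statement describing the data set
               EXLST=EXITS,        address of the exit list
               STRNO=2,            two concurrent requests
               BUFND=4,BUFNI=3,    data and index I/O buffers
               MACRF=(KEY,DIR,SEQ,OUT)  processing options
```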
You can use the ACB macro to build an access method control block when the
program is assembled, or the GENCB macro to build a control block when the
program is run. See “Manipulating the Contents of Control Blocks” on page 140 for
information about the advantages and disadvantages of using GENCB.
The EXLST macro is coordinated with the EXLST parameter of an ACB or GENCB
macro used to generate an ACB. To use the exit list, you must code the EXLST
parameter in the ACB.
You can use the EXLST macro to build an exit list when the program is assembled,
or the GENCB macro to build an exit list when the program is run. For
information about the advantages and disadvantages of using GENCB see
“Manipulating the Contents of Control Blocks” on page 140.
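For example, an exit list supplying end-of-data and error-analysis routines might be coded as follows, with the ACB pointing at it through its EXLST parameter. The routine names and ddname are illustrative:

```hlasm
EXITS    EXLST AM=VSAM,EODAD=ENDDATA,LERAD=LOGICERR,SYNAD=PHYSERR
DATAACB  ACB   AM=VSAM,DDNAME=INDSET,EXLST=EXITS
```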
v An error during OPEN can cause a component that is open for update
processing to close improperly, leaving on the open-for-output indicator. When
VSAM detects an open-for-output indicator, it issues an implicit VERIFY
command and a message that indicates whether the VERIFY command was
successful.
If a subsequent OPEN is issued for update, VSAM turns off the open-for-output
indicator at CLOSE. If the data set was open for input, however, VSAM leaves
on the open-for-output indicator.
v OPEN checks the password your program specified in the ACB PASSWD parameter
against the appropriate password (if any) in the catalog definition of the data.
The system does not support passwords for system-managed data sets. A
password of one level authorizes you to do everything that a password of a
lower level authorizes. You must have RACF authorization for the operation.
The password requirement depends on the kind of access that is specified in the
access method control block:
– Full access lets you perform all operations (retrieve, update, insert, and
delete) on a data set on any associated index or catalog record. The master
password lets you delete or alter the catalog entry for the data set or catalog
it protects.
– Control-interval update access requires the control password or RACF control
authority. The control lets you use control-interval access to retrieve, update,
insert, or delete records in the data set it protects. For information about the
use of control-interval access, see Chapter 11, “Processing Control Intervals,”
on page 179.
Control-interval read access requires only the read password or RACF read
authority, which lets you examine control intervals in the data set it protects.
The read password or RACF read authority does not let you add, change, or
delete records.
– Update access requires the update password, which lets you retrieve, update,
insert, or delete records in the data set it protects.
– Read access requires the read password, which lets you examine records in the
data set it protects. The read password does not permit you to add, change,
or delete records.
Note: RACF protection supersedes password protection for a data set. RACF
checking is bypassed for a caller that is in supervisor state or key 0. For
more information on password and RACF protection, see Chapter 5,
“Protecting Data Sets,” on page 53.
You can use the RPL macro to generate a request parameter list (RPL) when your
program is assembled, or the GENCB macro to build a request parameter list when
your program is run. For information about the advantages and disadvantages of
using GENCB, see “Manipulating the Contents of Control Blocks” on page 140.
When you define your request, specify only the processing options appropriate for
that particular request. Parameters not required for a request are ignored. For
example, if you switch from direct to sequential retrieval with a request parameter
list, you do not have to zero out the address of the field containing the search
argument (ARG=address).
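For example, you can use MODCB to switch an RPL from direct to sequential retrieval at run time; the leftover search-argument address is simply ignored for the sequential request. Labels are illustrative:

```hlasm
         MODCB RPL=LIST,OPTCD=(KEY,SEQ)   switch to keyed sequential
         LTR   15,15                      MODCB return code in register 15
         BNZ   MODCBERR                   handle a MODCB failure
         GET   RPL=LIST                   ARG=SRCHKEY still set; ignored
```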
You can chain request parameter lists together to define a series of actions for a
single GET or PUT. For example, each parameter list in the chain could contain a
unique search argument and point to a unique work area. A single GET macro
would retrieve a record for each request parameter list in the chain. All RPLs in a
chain must refer to the same ACB.
Each request parameter list in a chain should have the same OPTCD
subparameters. Having different subparameters can cause logical errors. You
cannot chain request parameter lists for updating or deleting records—only for
retrieving records or storing new records. You cannot process records in the I/O
buffer with chained request parameter lists. (RPL OPTCD=UPD and RPL
OPTCD=LOC are not valid for a chained request parameter list.)
When you are using chained RPLs, if an error occurs anywhere in the chain, the
RPLs following the one in error are made available without being processed and
are posted complete with a feedback code of zero.
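Two chained request parameter lists might be coded as in the following sketch; each RPL points to its own search argument and work area, both refer to the same ACB, and the names are illustrative. Continuation conventions are omitted:

```hlasm
FIRST    RPL   AM=VSAM,ACB=DATAACB,NXTRPL=SECOND,
               ARG=KEY1,AREA=WORK1,AREALEN=100,OPTCD=(KEY,DIR)
SECOND   RPL   AM=VSAM,ACB=DATAACB,
               ARG=KEY2,AREA=WORK2,AREALEN=100,OPTCD=(KEY,DIR)
         GET   RPL=FIRST        one GET retrieves a record for each RPL
```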
The GENCB, MODCB, TESTCB, and SHOWCB macros build a parameter list that
describes, in codes, the actions indicated by the parameters you specify. The
parameter list is passed to VSAM to take the indicated actions. An error can occur
if you specify the parameters incorrectly.
You can use the WAREA parameter to provide an area of storage in which to
generate the control block. This work area is limited to 65,535 bytes (X'FFFF'). If you do
not provide storage when you generate control blocks, the ACB, RPL, and EXLST
reside below 16 MB unless LOC=ANY is specified.
After issuing a TESTCB macro, examine the PSW condition code. If the TESTCB is
not successful, register 15 contains an error code and VSAM passes control to an
error routine, if one has been specified. For a keyword specified as an option or a
name, you test for an equal or unequal comparison; for a keyword specified as an
address or a number, you test for an equal, unequal, high, low, not-high, or
not-low condition.
VSAM compares A to B, where A is the contents of the field and B is the value to
compare. A low condition means, for example, A is lower than B — that is, the
value in the control block is lower than the value you specified. If you specify a
list of option codes for a keyword (for example, MACRF=(ADR,DIR)), each of
them must equal the corresponding value in the control block for you to get an
equal condition.
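For example, to test whether an ACB has been opened successfully, you can code a TESTCB and branch on the resulting condition code; the ERET routine and branch targets are illustrative:

```hlasm
         TESTCB ACB=DATAACB,OFLAGS=OPEN,ERET=TESTFAIL
         BE    ISOPEN            equal condition: the ACB is open
         B     NOTOPEN           unequal: the ACB is not open
```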
Some of the fields can be tested at any time; others, only after a data set is opened.
The ones that can be tested only after a data set is opened can, for a key-sequenced
data set, pertain either to the data or to the index, as specified in the OBJECT
parameter.
You can display fields using the SHOWCB macro at the same time you test the
fields.
With addressed access of a key-sequenced data set, VSAM does not insert or add
new records.
Sequential Insertion. If the new record belongs after the last record of the control
interval and the record contains free space, the new record is inserted into the
existing control interval. If the control interval does not contain sufficient free
space, the new record is inserted into a new control interval without a true split.
If the new record does not belong at the end of the control interval and there is
free space in the control interval, it is placed in sequence into the existing control
interval. If adequate free space does not exist in the control interval, a control
interval split occurs at the point of insertion. The new record is inserted into the
original control interval and the following records are inserted into a new control
interval.
Mass sequential insertion observes control interval and control area free space
specifications when the new records are a logical extension of the control interval
or control area (that is, when the new records are added beyond the highest key or
relative record number used in the control interval or control area).
When several groups of records in sequence are to be mass inserted, each group
can be preceded by a POINT with RPL OPTCD=KGE to establish positioning. KGE
specifies that the key you provide for a search argument must be equal to the key
or relative record number of a record.
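Positioning for one group before mass sequential insertion might be sketched as follows; labels are illustrative and continuation conventions are omitted:

```hlasm
GRPRPL   RPL   ACB=DATAACB,ARG=GRPKEY,
               AREA=WORK,AREALEN=100,OPTCD=(KEY,SEQ,KGE)
         POINT RPL=GRPRPL         establish position at the group's key
*        then issue sequential PUTs for each record in the group
         PUT   RPL=GRPRPL
```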
Direct Insertion—CI Split. If the control interval has enough available space, the
record is inserted. If the control interval does not have enough space to hold the
record, the entire CI is split, unless the record has the highest key in the data set.
A record with the highest key is always placed in a new, empty CI and is not
counted as a CI split.
If the insertion is to the end of the control interval, the record is placed in a new
control interval.
As for a fixed-length RRDS, you can insert records into a variable-length RRDS
either sequentially or directly.
Retrieving Records
The GET macro is used to retrieve records. To retrieve records for update, use the
GET macro with the PUT macro. When you retrieve records either sequentially or
directly, VSAM returns the length of the retrieved record to the RECLEN field of
the RPL.
Sequential Retrieval
Records can be retrieved sequentially using keyed access or addressed access.
Keyed Sequential Retrieval. The first time your program accesses a data set for
keyed sequential access (RPL OPTCD=(KEY,SEQ)), VSAM is positioned at the first
record in the data set in key sequence if and only if the following are true:
1. Nonshared resources are being used.
2. There have not been any previous requests against the file.
If VSAM picks a string that has been used previously, this implicit positioning does
not occur. Therefore, with concurrent or multiple RPLs, it is best to issue your
own POINTs to establish positioning and prevent logic errors.
With shared resources, you must always use a POINT macro to establish position.
A GET macro can then retrieve the record. Certain direct requests can also hold
position. See Table 11 on page 147 for details on when positioning is retained or
released. VSAM checks positioning when processing modes are changed between
requests.
If, after positioning, you issue a direct request through the same request parameter
list, VSAM drops positioning unless NSP or UPD was specified in the RPL OPTCD
parameter.
When a POINT is followed by a VSAM GET/PUT request, both the POINT and
the subsequent request must be in the same processing mode. For example, a
POINT with RPL OPTCD=(KEY,SEQ,FWD) must be followed by GET/PUT with
RPL OPTCD=(KEY,SEQ,FWD); otherwise, the GET/PUT request is rejected.
For skip-sequential retrieval, you must indicate the key of the next record to be
retrieved. VSAM skips to the next record’s index entry by using horizontal pointers
in the sequence set to find the appropriate sequence-set index record and scan its
entries. The key of the next record to be retrieved must always be higher in
sequence than the key of the preceding record retrieved.
Direct Retrieval
Records can also be retrieved directly using keyed access or addressed access.
Keyed Direct Retrieval. Keyed direct retrieval from a key-sequenced data set does
not depend on prior positioning. VSAM searches the index from the highest level
down to the sequence
set to retrieve a record. Specify the record to be retrieved by supplying one of the
following:
v The exact key of the record
v An approximate key, less than or equal to the key field of the record
v A generic key
You can use an approximate specification when you do not know the exact key. If
a record actually has the key specified, VSAM retrieves it. Otherwise, it retrieves
the record with the next higher key. Generic key specification for direct processing
causes VSAM to retrieve the first record having that generic key. If you want to
retrieve all the records with the generic key, specify RPL OPTCD=NSP in your
direct request. That causes VSAM to position itself at the next record in key
sequence. Then retrieve the remaining records sequentially.
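A sketch of generic-key direct retrieval followed by a switch to sequential retrieval follows; the labels and the 3-byte generic key length are assumptions, and continuation conventions are omitted:

```hlasm
GENRPL   RPL   ACB=DATAACB,ARG=GENKEY,KEYLEN=3,
               AREA=WORK,AREALEN=256,OPTCD=(KEY,DIR,NSP,GEN)
         GET   RPL=GENRPL         first record with the generic key
         MODCB RPL=GENRPL,OPTCD=(KEY,SEQ)
NEXT     GET   RPL=GENRPL         following records in key sequence
```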
A fixed-length RRDS has no index. VSAM takes the number of the record to be
retrieved and calculates the control interval that contains it and its position within
the control interval.
Updating Records
The GET and PUT macros are used to update records. A GET for update retrieves
the record and the following PUT for update stores the record the GET retrieved.
When you update a record in a key-sequenced data set, you cannot alter the
primary-key field.
Do not process only the data component if you plan to update the data set.
Always open the cluster when updating a key-sequenced data set.
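The GET-for-update and PUT-for-update pair might be coded as in the following sketch (labels illustrative, continuation conventions omitted):

```hlasm
UPDRPL   RPL   ACB=DATAACB,ARG=SRCHKEY,
               AREA=WORK,AREALEN=256,OPTCD=(KEY,DIR,UPD)
         GET   RPL=UPDRPL         retrieve the record for update
*        modify the record in WORK, but not the primary-key field
         PUT   RPL=UPDRPL         store the updated record back
```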
Deleting Records
After a GET for update retrieves a record, an ERASE macro can delete the record.
The ERASE macro can be used only with a key-sequenced data set or a
fixed-length or variable-length RRDS. When you delete a record in a
key-sequenced data set or variable-length RRDS, the record is physically erased.
The space the record occupied is then available as free space.
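Assuming an RPL such as UPDRPL defined with OPTCD=(KEY,DIR,UPD) (an illustrative name), the retrieve-then-delete sequence might be sketched as:

```hlasm
         GET   RPL=UPDRPL         retrieve the record for update
         ERASE RPL=UPDRPL         delete the record just retrieved
```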
You can erase a record from the base cluster of a path only if the base cluster is a
key-sequenced data set. If the alternate index is in the upgrade set in which
UPGRADE was specified when the alternate index was defined, it is modified
automatically when you erase a record. If the alternate key of the erased record is
unique, the alternate index data record with that alternate key is also deleted.
When you erase a record from a fixed-length RRDS, the record is set to binary
zeros and the control information for the record is updated to indicate an empty
slot. Reuse the slot by inserting another record of the same length into it.
With an entry-sequenced data set, you are responsible for marking a record you
consider to be deleted. As far as VSAM is concerned, the record is not deleted.
Reuse the space occupied by a record marked as deleted by retrieving the record
for update and storing in its place a new record of the same length.
Note:
1. A sequential GET request for new control intervals releases the previous buffer.
2. The ENDREQ macro and the ERASE macro with RPL OPTCD=DIR release data
buffers and positioning.
3. Certain options that retain positioning and buffers on normal completion
cannot do so if the request fails with an error code. See z/OS DFSMS Macro
Instructions for Data Sets to determine if positioning is maintained if a logical
error occurs.
The following operation uses but immediately releases a buffer and does not retain
positioning:
GET RPL OPTCD=(DIR,NUP,MVE)
If you are doing multiple string update processing, you must consider VSAM
lookaside processing and the rules surrounding exclusive use. Lookaside means
VSAM checks its buffers to see if the control interval is already present when
requesting an index or data control interval.
For GET-for-update requests, the buffer is obtained in exclusive control and read
from the device to obtain the latest copy of the data. If the buffer is already in exclusive
control of another string, the request fails with an exclusive control feedback code.
If you are using shared resources, the request can be queued, or can return an
exclusive control error.
If you are using nonshared resources, VSAM does not queue requests that have
exclusive control conflicts, and you are required to clear the conflict. If a conflict is
found, VSAM returns a logical error return code, and you must stop activity and
clear the conflict. If the RPL that caused the conflict holds exclusive control of a
control interval from a previous request, issue an ENDREQ for it before you attempt
to clear the problem. Clear the conflict in one of three ways:
v Queue until the RPL holding exclusive control of the control interval releases
that control, then reissue the request.
v Issue an ENDREQ against the RPL holding exclusive control to force it to release
control immediately.
v Use shared resources and issue MRKBFR MARK=RLS.
Note: If the RPL includes a correctly specified MSGAREA and MSGLEN, the
address of the RPL holding exclusive control is provided in the first word of the
MSGAREA. The RPL field, RPLDDDD, contains the RBA of the requested control
interval.
Strings (sometimes called place holders) are like cursors: each represents a position
in the data set, much as holding your finger in a book keeps your place. The
same ACB is used for all requests, and the data set needs to be opened only once.
This means, for example, you could be processing a data set sequentially using one
RPL, and at the same time, using another RPL, directly access selected records
from the same data set.
Keep in mind, though, that strings are not “owned” by the RPL any longer than
the request holds its position. Once a request gives up its position (for example,
with an ENDREQ), that string is free to be used by another request and must be
repositioned in the data set by the user.
For each request, a string defines the set of control blocks for the exclusive use of
one request. For example, if you use three RPLs, you should specify three strings.
If the number of strings you specify is not sufficient, and you are using NSR, the
operating system dynamically extends the number of strings as needed by the
concurrent requests for the ACB. Strings allocated by dynamic string addition are
not necessarily in contiguous storage.
Dynamic string addition does not occur with LSR and GSR. Instead, you get a
logic error if you have more requests than available strings.
The maximum number of strings that can be defined or added by the system is
255. Therefore, the maximum number of concurrent requests holding position in
one data set at any one time is 255.
When you use direct or skip-sequential access to process a path, a record from the
base data set is returned according to the alternate key you specified in the
argument field of the RPL macro. If the alternate key is not unique, the record first
entered with that alternate key is returned and a feedback code (duplicate key) is
set in the RPL. To retrieve the remaining records with the same alternate key,
specify RPL OPTCD=NSP when retrieving the first record with a direct request,
and switch to sequential processing.
You can insert and update data records in the base cluster using a path if:
v The PUT request does not result in nonunique alternate keys in an alternate
index (defined with the UNIQUEKEY attribute). However, if a nonunique
alternate key is generated and the NONUNIQUEKEY attribute is specified,
updating can occur.
v You do not change the key of reference between the time the record was
retrieved for update and the PUT is issued.
v You do not change the primary key.
When the alternate index is in the upgrade set, the alternate index is modified
automatically by inserting or updating a data record in the base cluster. If the
updating of the alternate index results in an alternate index record with no
pointers to the base cluster, the alternate-index record is erased.
Rule: When you use SHAREOPTIONS 2, 3, and 4, you must continue to ensure
read/write integrity when issuing concurrent requests (such as GETs and PUTs) on
the base cluster and its associated alternate indexes. Failure to ensure read/write
integrity might temporarily cause “No Record Found” or “No Associated Base
Record” errors for a GET request. Bypass such errors by reissuing the GET request,
but it is best to prevent the errors by ensuring read/write integrity.
Once the request is completed, CHECK releases control to the next instruction in
your program, and frees up the RPL for use by another request.
Ending a Request
Suppose you determine that you do not want to complete a request that you
initiated. For example, suppose you determine during the processing immediately
following a GET that you do not want the record you just requested. You can use
the ENDREQ macro to cancel the request. Using the ENDREQ macro has the
following advantages:
v Avoids checking an unwanted asynchronous request.
v Writes any unwritten data or index buffers in use by the string.
v Cancels the VSAM positioning on the data set for the RPL.
Recommendation: If you issue the ENDREQ macro, it is important that you check
the ENDREQ return code to make sure it completed successfully. If ENDREQ for
an asynchronous request does not complete successfully, you must issue the
CHECK macro. The data set cannot be closed until all asynchronous requests
successfully complete either ENDREQ or CHECK. ENDREQ waits for the target
RPL to post, so do not issue it in an attempt to end a hung request.
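The cancel-and-verify sequence might be coded as in the following sketch; the RPL and branch labels are illustrative:

```hlasm
         GET   RPL=ASYRPL         asynchronous request (OPTCD=ASY)
*        decide the record just requested is not wanted after all
         ENDREQ RPL=ASYRPL        cancel the request, release position
         LTR   15,15              check the ENDREQ return code
         BNZ   DOCHECK            if unsuccessful, issue CHECK RPL=ASYRPL
```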
If a record management error occurs while CLOSE is flushing buffers, the data
set’s catalog information is not updated. The catalog cannot properly reflect the
data set’s status and the index cannot accurately reflect some of the data records. If
the program enters an abnormal termination routine (ABEND), all open data sets
are closed. The VSAM CLOSE invoked by ABEND does not update the data set’s
catalog information, it does not complete outstanding I/O requests, and buffers are
not flushed. The catalog cannot properly reflect the cluster’s status, and the index
cannot accurately reference some of the data records. Use the access method
services VERIFY command to correct catalog information. The use of VERIFY is
described in “Using VERIFY to Process Improperly Closed Data Sets” on page 50.
If a VSAM data set is closed and CLOSE TYPE=T is not specified, you must
reopen the data set before performing any additional processing on it.
When you issue a temporary or a permanent CLOSE macro, VSAM updates the
data set’s catalog records. If your program ends with an abnormal end (ABEND)
without closing a VSAM data set the data set’s catalog records are not updated,
and contain inaccurate statistics.
Restriction: The following close options are ignored for VSAM data sets:
v FREE=CLOSE JCL parameter
v FREE=CLOSE requested through dynamic allocation, DALCLOSE
VSAM does not synchronize cross-memory mode requests. For non-ICI processing,
the RPL must specify WAITX, and a UPAD exit (user processing exit routine) must
be provided in an exit list to handle the wait and post processing for cross-memory
requests; otherwise a VSAM error code is returned.
For cross-memory mode requests, VSAM does not do wait processing when a
UPAD for wait returns to VSAM. For non-cross-memory task mode, however, if
the UPAD taken for wait returns with ECB not posted, VSAM issues a WAIT
supervisor call instruction (SVC). For either mode, when a UPAD taken for post
processing returns, VSAM assumes the ECB has been marked complete and does
not do post processing.
SRB mode does not require UPAD. If a UPAD is provided for an SRB mode
request, it is taken only for I/O wait and resource wait processing.
VSAM sets an RPL return code to indicate that you must change processing mode so that you are
running under a task control block (TCB) in the address space in which the data
set was opened. You cannot be in cross-memory mode. Then reissue the request to
permit the SVC to be issued by VSAM. The requirement for VSAM to issue an
SVC is kept to a minimum. Areas identified as requiring a TCB not in
cross-memory mode are EXCEPTIONEXIT, loaded exits, EOV (end-of-volume),
dynamic string addition, and alternate index processing.
See Chapter 16, “Coding VSAM User-Written Exit Routines,” on page 241 for more
information.
//ddname DD DSNAME=dsname,DISP={OLD|SHR}
    [,OPTCD=({DIR|SEQ|SKP},...)]
    ...
    [,BUFND=number]
    [,BUFNI=number]
    [,BUFSP=number]
    [,MACRF=([DIR][,SEQ][,SKP],
             [IN][,OUT],
             [NRS][,RST],...)]
    [,STRNO=number]
    [,PASSWD=address]
    [,EXLST=address]
    ...
Figure 21 on page 155 is a skeleton program that shows the relationship of VSAM
macros to each other and to the rest of the program.
START CSECT
SAVE(14,12) Standard entry code
.
B INIT Branch around file specs
Topic Location
Optimizing Control Interval Size 157
Optimizing Control Area Size 161
Optimizing Free Space Distribution 162
Using Index Options 177
Obtaining Diagnostic Information 178
Migrating from the Mass Storage System 178
Using Hiperbatch 178
Most of the options are specified in the access method services DEFINE command
when a data set is defined. Sometimes options can be specified in the ACB and
GENCB macros and in the DD AMP parameter.
Control interval size affects record processing speed and storage requirements in
the following ways:
v Buffer space. Data sets with large control interval sizes require more buffer
space in virtual storage. For information about how much buffer space is
required, see “Determining I/O Buffer Space for Nonshared Resource” on page
166.
v I/O operations. Data sets with large control interval sizes require fewer I/O
operations to bring a given number of records into virtual storage; fewer index
records must be read. It is best to use large control interval sizes for sequential
and skip-sequential access. Large control intervals are not beneficial for keyed
direct processing of a key-sequenced data set or variable-length RRDS.
v Free space. Free space is used more efficiently (fewer control interval splits and
less wasted space) as control interval size increases relative to data record size.
For more information about efficient use of free space, see “Optimizing Free
Space Distribution” on page 162.
The valid control interval sizes and block sizes for the data or index component
are from 512 to 8192 bytes in increments of 512 bytes, and from 8 KB to 32 KB in
increments of 2 KB. When you choose a CI size that is not a multiple of 512 or
2048, VSAM chooses the next higher multiple. For a linear data set, the size
specified is rounded up to 4096 if specified as 4096 or less. It is rounded to the
next higher multiple of 4096 if specified as greater than 4096.
The block size of the index component is always equal to the control interval size.
However, the block size for the data component and index components might
differ.
Example: Valid control interval sizes are 512, 1024, 1536, 2048, 2560, 3072, 3584,
4096, ... 8192, 10 240, 12 288, and so on.
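The rounding rules above can be sketched in Python (a hypothetical helper, not
part of VSAM; it assumes the 32 KB component maximum stated earlier):

```python
def round_ci_size(requested, linear=False):
    """Round a requested size up to the next valid VSAM control
    interval size: multiples of 512 up to 8192 bytes, then multiples
    of 2048 up to 32768 bytes.  For a linear data set, sizes round to
    multiples of 4096, with a minimum of 4096."""
    if linear:
        return max(4096, -(-requested // 4096) * 4096)  # ceiling to 4 KB
    if requested <= 8192:
        return max(512, -(-requested // 512) * 512)     # ceiling to 512
    return min(-(-requested // 2048) * 2048, 32768)     # ceiling to 2 KB

print(round_ci_size(2000))               # a 2000-byte request becomes 2048
print(round_ci_size(9000))               # 9000 rounds up to 10240
print(round_ci_size(3000, linear=True))  # linear data sets round to 4096
```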
Unless the data set was defined with the SPANNED attribute, the control interval
must be large enough to hold a data record of the maximum size specified in the
RECORDSIZE parameter. Because the minimum amount of control information in
a control interval is 7 bytes, a control interval is normally at least 7 bytes larger
than the largest record in the component. For compressed data sets, a control
interval is at least 10 bytes larger than the largest record after it is compressed.
This allows for the control information and record prefix. Since the length of a
particular record is hard to predict and since the records might not compress, it is
best to assume that the largest record is not compressed. If the control interval size
you specify is not large enough to hold the maximum size record, VSAM increases
the control interval size to a multiple of the minimum physical block size. The
control interval size VSAM provides is large enough to contain the record plus the
overhead.
For a variable-length RRDS, a control interval is at least 11 bytes larger than the
largest record.
The use of the SPANNED parameter removes this constraint by permitting data
records to be continued across control intervals. The maximum record size is then
equal to the number of control intervals per control area multiplied by control
interval size minus 10. The use of the SPANNED parameter places certain
restrictions on the processing options that can be used with a data set. For
example, records of a data set with the SPANNED parameter cannot be read or
written in locate mode. For more information about spanned records see “Spanned
Records” on page 77.
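These sizing rules can be sketched as follows (hypothetical helper names; the
10 bytes of control information per control interval for spanned records is the
figure cited above):

```python
def min_ci_for_record(max_record, compressed=False, var_rrds=False):
    """Smallest CI content needed for a nonspanned record: the record
    plus 7 bytes of control information, 10 bytes if the data set is
    compressed, or 11 bytes for a variable-length RRDS."""
    overhead = 11 if var_rrds else 10 if compressed else 7
    return max_record + overhead

def max_spanned_record(ci_per_ca, ci_size):
    """Largest spanned record: the control-area size minus 10 bytes of
    control information in each of its control intervals."""
    return ci_per_ca * (ci_size - 10)

print(min_ci_for_record(4089))          # 4096: fits exactly in a 4 KB CI
print(max_spanned_record(180, 4096))    # 735480 bytes for a 180-CI CA
```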
Figure 22. Control Interval Size, Physical Track Size, and Track Capacity. The
figure shows a track as a sequence of physical blocks (PB).
The information about a track is divided into physical blocks. Control interval size
must be a whole number of physical blocks. Control intervals can span tracks.
However, poor performance results if a control interval spans a cylinder boundary,
because the read/write head must move between cylinders.
The physical block size is always selected by VSAM. VSAM chooses the largest
physical block size that exactly divides into the control interval size. The block size
is also based on device characteristics.
If you specify free space for a key-sequenced data set or variable-length RRDS, the
system determines the number of bytes to be reserved for free space. For example,
if control interval size is 4096, and the percentage of free space in a control interval
has been defined as 20%, 819 bytes are reserved. Free space calculations drop the
fractional value and use only the whole number.
To find out what values are actually set in a defined data set, issue the access
method services LISTCAT command.
You might need a larger CI than the size that VSAM calculated, depending on the
allocation unit, the data CI size, the key length, and the key content as it affects
compression. (It is rare to have the entire key represented in the index, because of
key compression.) If the keys for the data set do not compress according to the
estimated ratio (3:1), the index CI size that VSAM calculated might be too small,
resulting in the inability to address CIs in one or more CAs. This results in
allocated space that is unusable in the data set. After the first define (DEFINE), a
catalog listing (LISTCAT) shows the number of control intervals in a control area
and the key length of the data set.
You can use the number of control intervals and the key length to estimate the size
of index record necessary to avoid a control area split, which occurs when the
index control interval size is too small. To make a general estimate of the index
control interval size needed, multiply one half of the key length (KEYLEN) by the
number of data control intervals per control area (DATA CI/CA):
(KEYLEN/2) * DATA CI/CA ≤ INDEX CISIZE
The use of a 2:1 ratio rather than 3:1, which VSAM uses, allows for some of the
additional overhead factors in the actual algorithm for determining the CI size.
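The estimate can be coded directly (a hypothetical helper; the rounding to a
valid CI size follows the rules given earlier in this chapter):

```python
def estimate_index_ci(keylen, data_ci_per_ca):
    """Estimate the index CI size needed to address every data CI in a
    control area, using the manual's 2:1 key-compression assumption."""
    needed = (keylen // 2) * data_ci_per_ca
    if needed <= 8192:                       # valid sizes: 512-byte steps
        return max(512, -(-needed // 512) * 512)
    return -(-needed // 2048) * 2048         # then 2 KB steps up to 32 KB

print(estimate_index_ci(16, 180))   # (16/2) * 180 = 1440 -> 1536
```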
the size of the index control interval is increased, if possible. If the size cannot
be increased, VSAM decreases the number of control intervals in the control
area.
2. Specifies maximum record size as 2560 and data control interval size as 2560,
and have no spanned records. VSAM adjusts the data control interval size to
3072 to permit space for control information in the data control interval.
3. Specifies buffer space as 4K, index control interval size as 512, and data control
interval size as 2K. VSAM decreases the data control interval to 1536. Buffer
space must include space for two data control intervals and one index control
interval at DEFINE time. For more information about buffer space requirements
see “Determining I/O Buffer Space for Nonshared Resource” on page 166.
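Example 3 can be checked with a short sketch (hypothetical helper; it assumes
data CI sizes below 8 KB, which step in multiples of 512):

```python
def max_data_ci_for_bufsp(bufsp, index_ci):
    """Largest data CI size (a multiple of 512, under 8 KB) such that
    two data CIs plus one index CI fit in the DEFINE-time buffer space."""
    limit = (bufsp - index_ci) // 2      # space available per data CI
    return (limit // 512) * 512          # round down to a valid CI size

print(max_data_ci_for_bufsp(4096, 512))  # 4 KB buffer space -> 1536
```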
The following examples show how the control-area size is generally determined by
the primary and secondary allocation amount. The index control-interval size and
buffer space can also affect the control-area size. The following examples are based
on the assumption that the index CI size is large enough to handle all the data CIs
in the CA and the buffer space is large enough not to affect the CI sizes:
v CYLINDERS(5,10)—Results in a 1-cylinder control-area size.
v KILOBYTES(100,50)—The system determines the control area based on 50 KB,
resulting in a 1-track control-area size.
v RECORDS(2000,5)—Assuming 10 records would fit on a track, results in a
1-track control-area size.
v TRACKS(100,3)—Results in a 3-track control-area size.
v TRACKS(3,100)—Results in a 3-track control-area size.
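A rough sketch consistent with these examples (hypothetical helper; as noted
above, the real algorithm also weighs index CI size, buffer space, and device
geometry, so treat this only as an illustration of the pattern):

```python
def estimate_ca_tracks(primary_tracks, secondary_tracks, tracks_per_cyl=15):
    """Control-area size suggested by the examples: the smaller of the
    primary and secondary amounts (ignoring a zero secondary), at least
    one track and at most one cylinder (15 tracks on a 3390)."""
    basis = min(primary_tracks, secondary_tracks) if secondary_tracks \
        else primary_tracks
    return max(1, min(basis, tracks_per_cyl))

print(estimate_ca_tracks(100, 3))   # TRACKS(100,3) -> 3-track CA
print(estimate_ca_tracks(75, 150))  # CYLINDERS(5,10) -> capped at 1 cylinder
```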
A spanned record cannot be larger than the size of a control area minus the size of
the control information (10 bytes per control interval). Therefore, do not specify a
primary or secondary allocation that is not large enough to contain the largest
spanned record.
Note: If space is allocated in kilobytes, megabytes, or records, the system sets the
control area size equal to multiples of the minimum number of tracks or cylinders
required to contain the specified kilobytes, megabytes, or records. Space is not
assigned in units of bytes or records.
If the control area is smaller than a cylinder, its size will be an integral multiple of
tracks, and it can span cylinders. However, a control area can never span an extent
of a data set, which is always composed of a whole number of control areas. For
more information about allocating space for a data set, see “Allocating Space for
VSAM Data Sets” on page 108.
The amount of free space you need depends on the number and location of records
to be inserted, lengthened, or deleted. Too much free space can result in:
v Increased number of index levels, which affects run times for direct processing.
v More direct access storage required to contain the data set.
v More I/O operations required to sequentially process the same number of
records.
Too little free space can result in an excessive number of control interval and
control area splits. These splits are time consuming, and have the following
additional effects:
v More time is required for sequential processing because the data set is not in
physical sequence.
v More seek time is required during processing because of control area splits.
Use LISTCAT or the ACB JRNAD exit to monitor control area splits. See “JRNAD
Exit Routine to Journalize Transactions” on page 247.
When splits become frequent, reorganize the data set using REPRO or EXPORT.
Reorganization creates a smaller, more efficient data set with fewer control
intervals. However, reorganizing a data set is time consuming.
Figure: One 4096-byte control interval containing records R1 through R6 (500
bytes each, ending at byte 3000), free space up to the threshold at byte 3267, and
control information at the end of the control interval.
For this data set, each control interval is 4096 bytes. In each control interval, 10
bytes are reserved for control information. Because control interval free space is
specified as 20%, 819 bytes are reserved as free space (4096 × 0.20 = 819.2; the
fractional value is dropped). The free space threshold is at byte 3267. The space
between the threshold and
the control information is reserved as free space.
Because the records loaded in the data set are 500-byte records, there is not enough
space for another record between byte 3000 and the free space threshold at byte
3267. These 267 bytes of unused space are also used as free space. This leaves 1086
bytes of free space; enough to insert two 500-byte records. Only 86 bytes are left
unusable.
When you specify free space, ensure that the percentages of free space you specify
yield full records and full control intervals with a minimum amount of unusable
space.
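The arithmetic in this example can be traced step by step (a hypothetical helper
reproducing the worked numbers above):

```python
def ci_free_space(ci_size, fspc_pct, record_len, ctrl_info=10):
    """Trace the free-space arithmetic for one control interval."""
    reserved = ci_size * fspc_pct // 100           # 4096 * 20% -> 819
    threshold = ci_size - reserved - ctrl_info     # byte 3267
    loaded = threshold // record_len               # 6 records fit on load
    free = ci_size - ctrl_info - loaded * record_len   # 1086 bytes free
    inserts = free // record_len                   # room for 2 inserts
    unusable = free - inserts * record_len         # 86 bytes unusable
    return reserved, threshold, free, inserts, unusable

print(ci_free_space(4096, 20, 500))   # (819, 3267, 1086, 2, 86)
```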
No additions. If no records will be added and if record sizes will not be changed,
there is no need for free space.
Few additions. If few records will be added to the data set, consider a free space
specification of (0 0). When records are added, new control areas are created to
provide room for additional insertions.
If the few records to be added are fairly evenly distributed, control interval free
space should be equal to the percentage of records to be added. (FSPC (nn 0),
where nn equals the percentage of records to be added.)
Mass insertion. If you are inserting a group of sequential records, take full
advantage of mass insertion by using the ALTER command to change free space to
(0 0) after the data set is loaded. For more information about mass insertion see
“Inserting and Adding Records” on page 142.
Additions to a specific part of the data set. If new records will be added to only a
specific part of the data set, load those parts where additions will not occur with a
free space of (0 0). Then, alter the specification to (n n) and load those parts of the
data set that will receive additions. The example in “Altering the Free Space
Specification When Loading a Data Set” demonstrates this.
Assume that a large key-sequenced data set is to contain records with keys from 1
through 300 000. It is expected to have no inserts in key range 1 through 100 000,
some inserts in key range 100 001 through 200 000, and heavy inserts in key range
200 001 through 300 000.
SYSTEM Force system-managed buffering and let the system determine the buffering
technique based on the ACB MACRF and storage-class specification.
USER Bypass system-managed buffering.
SO System-managed buffering with sequential optimization.
SW System-managed buffering weighted for sequential processing.
DO System-managed buffering with direct optimization.
DW System-managed buffering weighted for direct optimization.
VSAM must always have sufficient space available to process the data set as
directed by the specified processing options.
Optionally, specify RMODE31 in your JCL DD AMP parameter to override any
RMODE31 values specified when the ACB was created. If you do not
specify RMODE31 in the JCL AMP parameter and ACCBIAS=SYSTEM, the default
value, RMODE31=BUFF, is in effect. If you attempt to reference the VSAM buffers
directly (as in LOCATE mode), your program must run in 31-bit addressing mode.
If your program runs in 24-bit addressing mode and you need to access VSAM
buffers directly, code RMODE31=CB or RMODE31=NONE in the JCL AMP
parameter.
Related reading: For more information, see z/OS HCD User’s Guide.
Use the following dynamic allocation options when allocating a VSAM data set to
use uncaptured UCBs above the 16 MB line and reduce storage usage:
v XTIOT option (S99TIOEX)—This option requires that your program be APF
authorized, in supervisor state, or in a system key.
v NOCAPTURE option (S99ACUCB)—Specify this option to use 4-byte actual
UCB addresses. This option does not require your program to be authorized.
v DSAB-above-the-line option (S99DSABA)—Specify this option to place the data
set association control block (DSAB) above the 16 MB line. You must use this
option with S99TIOEX.
Related reading: For more information, see z/OS MVS Programming: Authorized
Assembler Services Guide.
To indicate that VSAM is to use SMB, specify either of the following options:
v Specify the ACCBIAS subparameter of the JCL DD statement AMP parameter
and an appropriate value for record access bias.
v Specify Record Access Bias in the data class and an application processing
option in the ACB.
For system-managed buffering (SMB), the data set must use both of the following
options:
v Storage Management Subsystem (SMS) storage
v Extended format (DSNTYPE=ext in the data class)
JCL takes precedence over the specification in the data class. You must specify
NSR. SMB either weights or optimizes buffer handling toward sequential or direct
processing.
To optimize your extended format data sets, use the ACCBIAS subparameter of the
AMP parameter along with related subparameters SMBVSP, SMBDFR, and
SMBHWT. You can also use these subparameters with Record Access
Bias=SYSTEM in the data class. These subparameters are only for Direct Optimized
processing.
Processing Techniques
The information in this section is for planning purposes only. It is not absolute or
exact regarding storage requirements. You should use it only as a guideline for
estimating storage requirements. Individual observations might vary depending on
specific implementations and processing.
You can choose or specify any of the four processing techniques that SMB
implements:
v Direct Optimized (DO)
v Sequential Optimized (SO)
v Direct Weighted (DW)
v Sequential Weighted (SW)
The following three options, SMBVSP, SMBDFR, and SMBHWT, are only for
processing with the Direct Optimized technique.
v SMBVSP. This option specifies the amount of virtual storage to obtain for
buffers when opening the data set. You can specify virtual buffer size in
kilobytes, from 1K to 2048000K, or in megabytes, from 1M to 2048M. This value
is the total amount of virtual storage that you can address in a single address
space. This value does not specify storage that the system or the access method
requires.
v SMBDFR. This option lets you defer writing buffers to the medium either until
the buffer is required for a different request or until the data set is closed.
CLOSE TYPE=T does not write the buffers to the medium when the system uses
LSR processing for direct optimization. Defaults for deferred write processing
depend upon the SHAREOPTIONS values, which you specify when you define
the data set. The default for SHAREOPTIONS (1,3) and (2,3) is deferred write.
The default for SHAREOPTIONS (3,3), (4,3), and (x, 4) is nondeferred write. If
the user specifies a value for SMBDFR, this value always takes precedence over
any defaults.
v SMBHWT. This option permits the specification of a whole decimal value from
1-99 for allocating Hiperspace buffers. The allocation is based on a multiple of
the number of virtual buffers that have been allocated.
The default technique is based on the application’s ACB MACRF=(DIR,SEQ,SKP)
specification. Specification of the following values in the
associated storage class (SC) also influences the default technique:
v Direct millisecond response
v Direct bias
v Sequential millisecond response
v Sequential bias
You can specify the technique externally by using the ACCBIAS subparameter of
the AMP= parameter. The system invokes the function only during data set OPEN
processing. After SMB makes the initial decisions during that process, it has no
further involvement.
Table 12 is a guideline showing what access bias SMB chooses for certain
parameter specifications.

Table 12. SMB access bias guidelines: BIAS selection based on ACB MACRF= and
Storage Class MSR/BIAS

                        MSR/BIAS Value Specified in Storage Class
MACRF Options           SEQ      DIR      Both     None
DIR                     DW       DO       DO       DO
SEQ (the default)       SO       SW       SO       SO
SKP                     DW       DW       DW       DW
(SEQ,SKP)               SO       SW       SW       SW
(DIR,SEQ), (DIR,SKP),
  or (DIR,SEQ,SKP)      SW       DW       DW       DW

Abbreviations used in this table:
v DO = Direct Optimized
v DW = Direct Weighted
v SO = Sequential Optimized
v SW = Sequential Weighted.
Note: This table can be used only as a guideline to show what access bias SMB
will choose when ACCBIAS=SYSTEM is specified in the JCL AMP parameter, or
when RECORD_ACCESS_BIAS=SYSTEM is specified in the data class. There are
exceptions in determining the actual access bias. Other factors that can
influence the decision are the amount of storage available, whether the
component is an alternate index (AIX) or base component, and whether DSN or
DDN sharing is in effect.
If ACCBIAS=DO is explicitly requested on the JCL AMP parameter, SMB might
default to DW if there is not enough storage. To avoid this situation, use
either of two techniques:
1. Allocate more storage for the job.
2. Specify SMBVSP=xx on the JCL to limit the amount of storage SMB uses for
   DO. For details, see the description of the SMBVSP option earlier in this
   section.
The two techniques are Create Optimized (CO) and Create Recovery Optimized
(CR).
Direct Optimized (DO) Guidelines. DO can require the most additional processor
virtual storage, because a local shared resources (LSR) pool is created for each
data set opened with this technique in a single application program. The size of
the data set is a major factor in the processor virtual storage requirement for
buffering, because the size of the pool is based on the actual data set size at
the time the pool is created. This means that the processor virtual storage
requirement increases with each OPEN after records have been added and the data
set has been extended beyond its previous size.
Separate pools are built for the data and index components, if applicable, for
each data set. A single pool cannot be shared by multiple data sets. However,
DSN sharing and DDN sharing are supported. The index pool is sized to
accommodate all records in the index component. The data pool is sized to
accommodate approximately 20% of the user records in the data set. As discussed
previously, this size can change based on data set growth. A maximum pool size
for the data component is identified. These buffers are acquired above the 16
MB line unless overridden by the use of the RMODE31 parameter on the
AMP= parameter.
The SMBVSP parameter on the AMP= parameter can be used to restrict the size of
the pool that is built for the data component. The size of the pool for the
index records cannot be overridden. The SMBHWT parameter can be used to provide
buffering in Hiperspace in combination with virtual buffers for the data
component. The value of this parameter is used as a multiplier of the virtual
buffer space for Hiperspace buffers. This can reduce the size required for an
application region, but has implications for processor cycle requirements: all
application requests must orient to a virtual buffer address, so if the
required data is in a Hiperspace buffer, the data must be moved to a virtual
buffer after “stealing” a virtual buffer and moving that buffer to a least
recently used (LRU) Hiperspace buffer.
If the optimum amount of storage required for this option is not available, SMB
will reduce the number of buffers and retry the request. For data, SMB will make
two attempts, with a reduced amount and a minimum amount. For an index, SMB
reduces the amount of storage only once, to the minimum amount. If all attempts fail,
the DW technique is used. The system issues an IEC161I message to advise that
this has happened. In addition, SMF type-64 records indicate whether a reduced or
minimum amount of resource is being used for a data pool and whether DW is
used. For more information, see z/OS MVS System Management Facilities (SMF).
Restrictions on the Use of Direct Optimized (DO). The Direct Optimized (DO)
technique is selected only if the ACB specifies just the MACRF=(DIR) option for
accessing the data set. If SEQ or SKP is specified, either in combination with
DIR or independently, DO is not selected. This selection can be overridden by
specifying ACCBIAS=DO on the AMP= parameter of the associated DD statement.
There are some restrictions on the use of the Direct Optimized (DO) technique:
1. The application must position the data set to the beginning for any
   sequential processing. This assumes that the first retrieval establishes
   position at that point of the data set.
2. Applications that use multiple strings can hang if position is held while
   other requests process. An example of this is an application that has one
   request doing sequential GETs while another request does PUTs.
Sequential Optimized (SO) Guidelines. This technique provides the most efficient
buffers for sequential application processing such as data set backup. The size of
the data set is not a factor in the processor virtual storage that is required for
buffering. The buffering implementation (NSR) specified by the application will not
be changed for this technique. Approximately 500K of processor virtual storage for
buffers, defaulted to above 16 MB, is required for this technique.
Direct Weighted (DW) Guidelines. The size of the data set is a minor factor in
the storage that is required for buffering. This technique does not change the
buffering implementation that the application specified (NSR). This technique
requires approximately 100K of processor storage for buffers, with the default
above 16 MB.
Sequential Weighted (SW) Guidelines. The size of the data set is a minor factor
in the amount of processor virtual storage that buffering requires. This
technique does not change the buffering implementation that the application
specified (NSR). This technique requires approximately 100K of processor
virtual storage for buffers, with the default above 16 MB.
Create Optimized (CO) Guidelines. This is the most efficient technique, as far as
physical I/Os to the data component, for loading a VSAM data set. It only applies
when the data set is in initial load status and when defined with the SPEED
option. The system invokes it internally, with no user control other than the
specification of RECORD ACCESS BIAS in the data class or an AMP=(ACCBIAS=)
value of SYSTEM.
The size of the data set is not a factor in the amount of storage that buffering
requires. This technique does not change the buffering implementation that the
application specified (NSR). This technique requires a maximum of approximately
2 MB of processor virtual storage for buffers, with the default above 16 MB.
Create Recovery Optimized (CR) Guidelines. The system uses this technique
when a data set defined with the RECOVERY option is in initial load status. The
system invokes CR internally, with no user control other than the specification of
RECORD ACCESS BIAS in the data class or an AMP=(ACCBIAS=) value of
SYSTEM.
The size of the data set is not a factor in the amount of storage that buffering
requires. This technique does not change the buffering implementation that the
application specified (NSR). This technique requires a maximum of approximately
1 MB of processor virtual storage for buffers, with the default above 16 MB.
The storage for buffers for SMB techniques is obtained above 16 MB. If the
application runs as AMODE=RMODE=24 and issues locate-mode requests (RPL
OPTCD=(,LOC)), the AMP= parameter must specify RMODE31=NONE for data
sets that use SMB.
SMB might not be the answer to all application program buffering requirements.
The main purpose of SMB is to improve performance buffering options for batch
application processing, beyond the options that the standard defaults provide. In
the case of many large data sets and apparently random access to records, it might
be better to implement a technique within the application program to share a
common resource pool. The application program designer might know the access
technique for the data set, but SMB cannot predict it. In such applications, it would
be better to let the application program designer define the size and number of
buffers for each pool. This is not unlike the requirements of high-performance
database systems.
When processing a data set directly, VSAM reads only one data control interval at
a time. For output processing (PUT for update), VSAM immediately writes the
updated control interval, if OPTCD=NSP is not specified in the RPL macro.
Unused index buffers do not degrade performance, so you should always specify
an adequate number. For optimum performance, the number of index buffers
should at least equal the number of high-level index set control intervals plus one
per string to contain the entire high-level index set and one sequence set control
interval per string in virtual storage.
VSAM reads index buffers one at a time; if you use shared resources, you can
keep your entire index set in storage. Index buffers are loaded when the index
is referred to. When many index buffers are provided, index buffers are not
reused until a requested index control interval is not in storage. Note that
additional index buffers are not used for more than one sequence set buffer per
string unless shared resource pools are used. For large data sets, specify a
number of index buffers equal to the number of index levels.
VSAM keeps as many index-set records as the buffer space allows in virtual
storage. Ideally, the index would be small enough to permit the entire index set
to remain in virtual storage. Because the characteristics of the data set might
not allow a small index, you should be aware of how index I/O buffers are used
to determine how many to provide.
For straight sequential processing environments, start with four data buffers per
string. One buffer is used only for formatting control areas and splitting control
intervals and control areas. The other three are used to support the read-ahead
function, so that sequential control intervals are placed in buffers before any
records from the control interval are requested. By specifying enough data buffers,
you can access the same amount of data per I/O operation with small data control
intervals as with large data control intervals.
When SHAREOPTIONS 4 is specified for the data set, the read-ahead function can
be ineffective because the buffers are refreshed when each control interval is read.
Therefore, for SHAREOPTIONS 4, keeping data buffers at a minimum can actually
improve performance.
If you experience a performance problem waiting for input from the device, you
should specify more data buffers to improve your job’s run time. More data buffers
let you do more read-ahead processing. An excessive number of buffers, however,
can cause performance problems, because of excessive paging.
For mixed processing situations (sequential and direct), start with two data buffers
per string and increase BUFND to three per string, if paging is not a problem.
When processing the data set sequentially, VSAM reads ahead as buffers become
available. For output processing (PUT for update), VSAM does not immediately
write the updated control interval from the buffer unless a control interval split is
required. The POINT macro does not cause read-ahead processing unless RPL
OPTCD=SEQ is specified; POINT positions the data set for subsequent sequential
retrieval.
The BUFSP, BUFND, BUFNI, and STRNO parameters apply only to the path’s
alternate index when the base cluster is opened for processing with its alternate
index. The minimum number of buffers are allocated to the base cluster unless the
cluster’s BUFFERSPACE value (specified in the DEFINE command) or BSTRNO
value (specified in the ACB macro) permits more buffers. VSAM assumes direct
processing and extra buffers are allocated between data and index components
accordingly.
Two data buffers and one index buffer are always allocated for each alternate
index in the upgrade set. If the path’s alternate index is a member of the upgrade
set, the minimum buffer increase for each allocation is one for data buffers and one
for index buffers. Buffers are allocated to the alternate index as though it were a
key-sequenced data set. When a path is opened for output and the path alternate
index is in the upgrade set, specify ACB MACRF=DSN and the path alternate
index shares buffers with the upgrade alternate index.
Acquiring Buffers
Data and index buffers are acquired and allocated only when the data set is
opened. VSAM dynamically allocates buffers based on parameters in effect when
the program opens the data set. Parameters that influence the buffer allocation are
in the program’s ACB: MACRF=(IN|OUT, SEQ|SKP, DIR), STRNO=n, BUFSP=n,
BUFND=n, and BUFNI=n. Other parameters that influence buffer allocation are in
the DD statement’s AMP specification for BUFSP, BUFND, and BUFNI, and the
BUFFERSPACE value in the data set’s catalog record.
If you open a data set whose ACB includes MACRF=(SEQ,DIR), buffers are
allocated according to the rules for sequential processing. If the RPL is modified
later in the program, the buffers allocated when the data set was opened do not
change.
Data and index buffer allocation (BUFND and BUFNI) can be specified only by a
user who can modify the ACB parameters, or through the AMP parameter of the
DD statement. Any program can be assigned additional buffer space by modifying
the data set’s BUFFERSPACE value, or by specifying a larger BUFSP value with
the AMP parameter in the data set’s DD statement.
When a buffer’s contents are written, the buffer’s space is not released. The control
interval remains in storage until it is overwritten with a new control interval; if your
program refers to that control interval, VSAM does not have to reread it, because
VSAM checks whether the desired control interval is already in storage. When your
program processes records in a limited key range, you might increase throughput
by providing extra data buffers. Buffer space is released when the data set is closed.
You ensure virtual storage for index-set records by specifying enough virtual
storage for index I/O buffers when you begin to process a key-sequenced data set
or variable-length RRDS. VSAM keeps as many index-set records in virtual storage
as possible. Whenever an index record must be retrieved to locate a data record,
VSAM makes room for it by deleting the index record that VSAM judges to be
least useful under the prevailing circumstances. It is generally the index record that
belongs to the lowest index level or that has been used the least. VSAM does not
keep more than one sequence set index record per string unless shared resource
pools are used.
area. This reduces the number of control area splits. This option also keeps to a
minimum the number of index levels required, thereby reducing search time and
improving performance. However, this option can increase rotational delay and
data transfer time for the index-set control intervals. It also increases virtual
storage requirements for index records.
Using different volumes lets VSAM gain access to an index and to data at the same
time. Also, the smaller amount of space required for an index makes it economical
to use a faster storage device for it.
Using Hiperbatch
Hiperbatch™ is a VSAM extension designed to improve performance in specific
situations. It uses the data lookaside facility (DLF) services in MVS to provide an
alternate fast path method of making data available to many batch jobs. Through
Hiperbatch, applications can take advantage of the performance benefits of MVS
without changing existing application programs or the JCL used to run them. For
more information about using Hiperbatch, see MVS Hiperbatch Guide.
Topic Location
Access to a Control Interval 180
Structure of Control Information 181
User Buffering 186
Improved Control Interval Access 187
Control Blocks in Common (CBIC) Option 188
Control interval access gives you access to the contents of a control interval; keyed
access and addressed access give you access to individual data records.
Restriction: You cannot use control interval access to access a compressed data set.
The data set can be opened for control interval access to permit VERIFY and
VERIFY REFRESH processing only.
With control interval access, you have the option of letting VSAM manage I/O
buffers or managing them yourself (user buffering). With keyed and addressed
access, VSAM always manages I/O buffers. If you select user buffering, you have
the further option of using improved control interval access, which provides faster
processing than normal control interval access. With user buffering, only control
interval processing is permitted. See “Improved Control Interval Access” on page
187.
When using control interval processing, you are responsible for maintaining
alternate indexes. If you have specified keyed or addressed access (ACB
MACRF={KEY|ADR},...) and control interval access, then those requests for keyed
or addressed access (RPL OPTCD={KEY|ADR},...) cause VSAM to upgrade the
alternate indexes. Those requests specifying control interval access will not
upgrade the alternate indexes. You are responsible for upgrading them. Upgrading
an alternate index is described in “Maintaining Alternate Indexes” on page 121.
With NUB (no user buffering) and NCI (normal control interval access), specify in
the MACRF parameter that the data set is to be opened for keyed and addressed
access, and for control interval access. For example, MACRF=(CNV, KEY, SKP, DIR,
SEQ, NUB, NCI, OUT) is a valid combination of subparameters.
Usually, control interval access with no user buffering has the same freedoms and
limitations as keyed and addressed access have. Control interval access can be
synchronous or asynchronous, can have the contents of a control interval moved to
your work area (OPTCD=MVE) or left in VSAM’s I/O buffer (OPTCD=LOC), and
can be issued through a chain of request parameter lists (except with OPTCD=LOC
specified).
Except for ERASE, all the request macros (GET, PUT, POINT, CHECK, and
ENDREQ) can be used for normal control interval access. To update the contents of
a control interval, you must (with no user buffering) previously have retrieved the
contents for update. You cannot alter the contents of a control interval with
OPTCD=LOC specified.
Both direct and sequential access can be used with control interval access, but skip
sequential access may not. That is, specify OPTCD=(CNV,DIR) or (CNV,SEQ), but
not OPTCD=(CNV,SKP).
With sequential access, VSAM takes an EODAD exit when you try to retrieve the
control interval whose CIDF is filled with 0s or, if there is no such control interval,
when you try to retrieve a control interval beyond the last one. A control interval
with such a CIDF contains no data or unused space, and is used to represent the
software end-of-file. However, VSAM control interval processing does not prevent
you from using a direct GET or a POINT and a sequential GET to retrieve the
software end-of-file. The search argument for a direct request with control interval
access is the RBA of the control interval whose contents are desired.
The RPL (or GENCB) parameters AREA and AREALEN have the same use for
control interval access related to OPTCD=MVE or LOC as they do for keyed and
addressed access. With OPTCD=MVE, AREA gives the address of the area into
which VSAM moves the contents of a control interval. With OPTCD=LOC, AREA
gives the address of the area into which VSAM puts the address of the I/O buffer
containing the contents of the control interval.
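For illustration (the labels MASTER, CIBUF, and SRCHRBA are hypothetical), a direct move-mode control interval request might be set up as follows:

```
CIRPL    RPL   ACB=MASTER,OPTCD=(CNV,DIR,SYN,MVE),                     X
               AREA=CIBUF,AREALEN=512,ARG=SRCHRBA
         GET   RPL=CIRPL         Retrieve the CI whose RBA is in SRCHRBA
CIBUF    DS    CL512             Work area receiving the CI contents
SRCHRBA  DS    F                 RBA of the desired control interval
```

With OPTCD=LOC instead, AREA would name a 4-byte field into which VSAM places the address of the I/O buffer that contains the control interval.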
You can load an entry-sequenced data set with control interval access. If you open
an empty entry-sequenced data set, VSAM lets you use only sequential storage.
That is, issue only PUTs, with OPTCD=(CNV,SEQ,NUP). PUT with OPTCD=NUP
stores information in the next available control interval (at the end of the data set).
You cannot load or extend a data set with improved control interval access. VSAM
also prohibits you from extending a fixed-length or variable-length RRDS through
normal control interval access.
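The entry-sequenced load described above can be sketched as follows (the names are hypothetical): open the empty data set and issue only sequential no-update PUTs.

```
LOAD     ACB   DDNAME=OUTPUT,MACRF=(CNV,SEQ,OUT)
LODRPL   RPL   ACB=LOAD,OPTCD=(CNV,SEQ,NUP),AREA=CIBUF,AREALEN=512
         PUT   RPL=LODRPL        Stores CIBUF as the next available CI
```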
180 z/OS V1R7.0 DFSMS Using Data Sets
Processing Control Intervals
Note: A linear data set has no control information imbedded in the control
interval. All of the bytes in the control interval are data bytes; there are no CIDFs
or RDFs.
Figure 25 shows the relative positions of data, unused space, and control
information in a control interval.
For more information about the structure of a control interval, see “Control
Intervals” on page 74.
Control information consists of a CIDF (control interval definition field) and, for a
control interval containing at least one record, record slot, or record segment, one
or more RDFs (record definition fields). The CIDF and RDFs are ordered from right
to left. The format of the CIDF is the same even if the control interval consists of
multiple smaller physical records.
In an entry-sequenced data set, when there are unused control intervals beyond the
last one that contains data, the first of the unused control intervals contains a CIDF
filled with 0s. In a key-sequenced data set or an RRDS, the first control interval in
the first unused control area (if any) contains a CIDF filled with 0s. A CIDF filled
with 0s represents the software end-of-file.
An RDF is a 3-byte field that contains a 1-byte control field and a 2-byte binary
number, as the following table shows.
Offset  Length and Bit Pattern  Description
0(0)    1                       Control field:
          x... ..xx             Reserved.
          .x.. ....             Indicates whether there is (1) or is not (0) a paired
                                RDF to the left of this RDF.
          ..xx ....             Indicates whether the record spans control intervals:
                                00  No.
                                01  Yes; this is the first segment.
                                10  Yes; this is the last segment.
                                11  Yes; this is an intermediate segment.
          .... x...             Indicates what the 2-byte binary number gives:
                                0  The length of the record, segment, or slot
                                   described by this RDF.
                                1  The number of consecutive nonspanned records of
                                   the same length, or the update number of the
                                   segment of a spanned record.
          .... .x..             For a fixed-length RRDS, indicates whether the slot
                                described by this RDF does (0) or does not (1)
                                contain a record.
1(1)    2                       Binary number:
                                v When bit 4 of byte 0 is 0, gives the length of the
                                  record, segment, or slot described by this RDF.
                                v When bit 4 of byte 0 is 1 and bits 2 and 3 of byte
                                  0 are 0, gives the number of consecutive records
                                  of the same length.
                                v When bit 4 of byte 0 is 1 and bits 2 and 3 of byte
                                  0 are not 0, gives the update number of the
                                  segment described by this RDF.
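The bit assignments above can be expressed as a small decoder. This is an illustrative sketch only (the field names are invented here), assuming the 2-byte number is stored big-endian, as is conventional on the mainframe:

```python
def parse_rdf(rdf: bytes) -> dict:
    """Decode a 3-byte VSAM record definition field (RDF).

    Byte 0 is the control field; bits are numbered 0 (leftmost, 0x80)
    through 7. Bytes 1-2 hold the binary number the control field qualifies.
    """
    assert len(rdf) == 3
    ctl = rdf[0]
    return {
        "paired_rdf_left": bool(ctl & 0x40),           # bit 1
        "span_code": (ctl >> 4) & 0b11,                # bits 2-3: 0=nonspanned,
                                                       # 1=first, 2=last, 3=middle
        "number_is_count_or_update": bool(ctl & 0x08), # bit 4
        "rrds_slot_empty": bool(ctl & 0x04),           # bit 5 (fixed-length RRDS)
        "number": int.from_bytes(rdf[1:3], "big"),     # length, count, or update no.
    }

# A nonspanned 100-byte record: control byte 0, number = 100
print(parse_rdf(bytes([0x00, 0x00, 0x64])))
```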
Figure 26 on page 184 shows the contents of the CIDF and RDFs of a 512-byte
control interval containing nonspanned records of different lengths.
The four RDFs and the CIDF comprise 16 bytes of control information as follows:
v RDF4 describes the fifth record.
v RDF3 describes the fourth record.
v RDF2 and RDF1 describe the first three records.
v The first 2-byte field in the CIDF gives the total length of the five records, 8a,
which is the displacement from the beginning of the control interval to the free
space.
v The second 2-byte field gives the length of the free space, which is the length of
the control interval minus the total length of the records and the control
information: 512 minus 8a minus 16, or 496 minus 8a.
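The arithmetic above can be checked with a short sketch. The record lengths below are hypothetical values chosen so that a = 20 (the first three records share one length; the five records total 8a bytes):

```python
def cidf_fields(ci_size: int, record_lengths: list[int], num_rdfs: int) -> tuple[int, int]:
    """Compute the two 2-byte CIDF fields for a control interval holding
    nonspanned records: (offset to free space, free space length)."""
    control_info = 4 + 3 * num_rdfs   # 4-byte CIDF plus 3 bytes per RDF
    used = sum(record_lengths)        # records start at offset 0
    return used, ci_size - used - control_info

# Five records totaling 8a bytes, four RDFs, 512-byte control interval.
# With a = 20: offset = 160, free space = 496 - 160 = 336.
print(cidf_fields(512, [20, 20, 20, 60, 40], num_rdfs=4))  # -> (160, 336)
```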
Figure 27 shows contents of the CIDF and RDFs for a spanned record with a length
of 1306 bytes.
There are three 512-byte control intervals that contain the segments of the record.
The number “n” in RDF2 is the update number. Only the control interval that
contains the last segment of a spanned record can have free space. Each of the
other segments uses all but the last 10 bytes of a control interval.
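The segment arithmetic is easy to verify: each control interval holding a segment carries 10 bytes of control information (a 4-byte CIDF and two 3-byte RDFs), so every full segment occupies ci_size - 10 bytes. This sketch reproduces the 1306-byte example:

```python
def spanned_segments(record_len: int, ci_size: int) -> list[int]:
    """Split a spanned record into per-control-interval segment lengths.
    Each segment's control interval reserves 10 bytes of control
    information, so a full segment is ci_size - 10 bytes of data."""
    seg = ci_size - 10
    full, last = divmod(record_len, seg)
    return [seg] * full + ([last] if last else [])

print(spanned_segments(1306, 512))  # -> [502, 502, 302]
```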
In a key-sequenced data set, the control intervals might not be contiguous or in the
same order as the segments (for example, the RBA of the second segment can be
lower than the RBA of the first segment).
All the segments of a spanned record must be in the same control area. When a
control area does not have enough control intervals available for a spanned record,
the entire record is stored in a new control area.
Every control interval in a fixed-length RRDS contains the same number of slots
and the same number of RDFs; one for each slot. The first slot is described by the
rightmost RDF. The second slot is described by the next RDF to the left, and so on.
User Buffering
With control interval access, you have the option of user buffering. If you use the
user buffering option, you need to provide buffers in your own area of storage for
use by VSAM.
User buffering is required for improved control interval access (ICI) and for PUT
with OPTCD=NUP.
If you specify user buffering, you cannot specify KEY or ADR in the MACRF
parameter; you can only specify CNV. That is, you cannot intermix keyed and
addressed requests with requests for control interval access.
To use ICI, you have to specify user buffering (UBF), which provides the option of
specifying improved control interval access:
ACB MACRF=(CNV,UBF,ICI,...),...
You cannot load or extend a data set using ICI. Improved control interval
processing is not permitted for extended format data sets.
A processing program can achieve the best performance with improved control
interval access by combining it with SRB dispatching. SRB dispatching is described
in z/OS MVS Programming: Authorized Assembler Services Guide and “Operating in
SRB or Cross-Memory Mode” on page 152.
To release exclusive control after a GET for update, you must issue a PUT for
update, a GET without update, or a GET for update for a different control interval.
With improved control interval access, the following assumptions are in effect for
VSAM (with no checking):
v An RPL whose ACB has MACRF=ICI has OPTCD=(CNV, DIR, SYN).
v A PUT is for update (RPL OPTCD=UPD).
v Your buffer length (specified in RPL AREALEN=number) is correct.
Because VSAM does not check these parameters, you should debug your program
with ACB MACRF=NCI, then change to ICI.
With improved control interval access, VSAM does not take JRNAD exits and does
not keep statistics (which are normally available through SHOWCB).
You can use 64-bit real storage for all VSAM data sets, whether or not they are
extended-format data sets. You can obtain buffer storage from any real address
location available to the processor. The location can have a real address greater
than 2 gigabytes or can be in 31-bit real storage with a real address less than 2
gigabytes.
The CBIC option is invoked when a VSAM data set is opened. To invoke the CBIC
option, you set the CBIC flag (located at offset X'33' (ACBINFL2) in the ACB, bit 2
(ACBCBIC)) to one. When your program opens the ACB with the CBIC option set,
your program must be in supervisor state with a protect key from 0 to 7.
Otherwise, VSAM will not open the data set.
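A sketch of setting the flag before OPEN follows (the register choice and label are hypothetical; the X'20' mask assumes bit 2 is counted from the left, with bit 0 = X'80'):

```
         LA    2,MASTER          Address the ACB
         OI    X'33'(2),X'20'    Set ACBCBIC, bit 2 of ACBINFL2 at offset X'33'
         OPEN  (MASTER)          Must run in supervisor state, key 0-7
```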
If another address space accesses the data set’s control block structure in the CSA
through VSAM record management, the following conditions should be observed:
v An OPEN macro should not be issued against the data set.
v The ACB of the user who opened the data set with the CBIC option must be
used.
v CLOSE and temporary CLOSE cannot be issued for the data set (only the user
who opened the data set with the CBIC option can close the data set).
v The address space accessing the data set control block structure must have the
same storage protect key as the user who opened the data set with the CBIC
option.
v User exit routines should be accessible from all address spaces accessing the data
set with the CBIC option.
Topic Location
Subtask Sharing 192
Cross-Region Sharing 197
Cross-System Sharing 200
Control Block Update Facility (CBUF) 201
Techniques of Data Sharing 203
When you define VSAM data sets, you can specify how the data is to be shared
within a single system or among multiple systems that can have access to your
data and share the same direct access devices. Before you define the level of
sharing for a data set, you must evaluate the consequences of reading incorrect
data (a loss of read integrity) and writing incorrect data (a loss of write
integrity), situations that can result when one or more of the data set’s users do
not adhere to guidelines recommended for accessing shared data sets.
The extent to which you want your data sets to be shared depends on the
application. If your requirements are similar to those of a catalog, where there can
be many users on more than one system, more than one user should be permitted
to read and update the data set simultaneously. At the other end of the spectrum is
an application where high security and data integrity require that only one user at
a time have access to the data.
When your program issues a GET request, VSAM reads an entire control interval
into virtual storage (or obtains a copy of the data from a control interval already in
virtual storage). If your program modifies the control interval’s data, VSAM
ensures within a single control block structure that you have exclusive use of the
information in the control interval until it is written back to the data set. If the data
set is accessed by more than one program at a time, and more than one control
block structure contains buffers for the data set’s control intervals, VSAM cannot
ensure that your program has exclusive use of the data. You must obtain exclusive
control yourself, using facilities such as ENQ/RESERVE and DEQ.
Two ways to establish the extent of data set sharing are the data set disposition
specified in the JCL and the share options specified in the access method services
DEFINE or ALTER command. If the VSAM data set cannot be shared because of
the disposition specified in the JCL, a scheduler allocation failure occurs. If your
program attempts to open a data set that is in use and the share options specified
do not permit concurrent use of the data, the open fails, and a return code is set in
the ACB error field.
During load mode processing, you cannot share data sets. Share options are
overridden during load mode processing. When a shared data set is opened for
create or reset processing, your program has exclusive control of the data set
within your operating system.
You can use ENQ/DEQ to issue VSAM requests, but not to serialize the system
resources that VSAM uses.
Subtask Sharing
Subtask sharing is the ability to perform multiple OPENs to the same data set
within a task or from different subtasks in a single address space and still share a
single control block structure. Subtask sharing allows many logical views of the
data set while maintaining a single control block structure. With a single control
block structure, you can ensure that you have exclusive control of the buffer when
updating a data set.
If you share multiple control block structures within a task or address space,
VSAM treats this like cross-address space sharing. You must adhere to the
guidelines and restrictions specified in “Cross-Region Sharing” on page 197.
v Ddname sharing, with multiple ACBs pointing to a single DD statement. For
example:
OPEN ACB1,DDN=DD1
OPEN ACB2,DDN=DD1
v Data set name sharing, with multiple ACBs pointing to multiple DD statements
with different ddnames. The data set names are related with an ACB open
specification (MACRF=DSN). For example:
//DD1 DD DSN=ABC
//DD2 DD DSN=ABC
OPEN ACB1,DDN=DD1,MACRF=DSN
OPEN ACB2,DDN=DD2,MACRF=DSN
Multiple ACBs must be in the same address space, and they must be opening to
the same base cluster. The connection occurs independently of the path selected to
the base cluster. If the ATTACH macro is used to create a new task that will be
processing a shared data set, let the ATTACH keyword SZERO default to YES or
code SZERO=YES. This causes subpool 0 to be shared with the subtasks. For more
information about the ATTACH macro, see z/OS MVS Programming: Authorized
Assembler Services Reference ALE-DYN. This also applies when you are sharing
one ACB in a task or different subtasks. To ensure correct processing in the shared
environment, all VSAM requests should be issued in the same key as the job step
TCB key.
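For example (the entry point name is hypothetical), the subtask attach might be coded as:

```
         ATTACH EP=SUBTASK,SZERO=YES    Subpool 0 is shared with the subtask
```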
Alternatively, you can do this by changing the GENCB ACB macro in the
application program.
To test whether the new function is in effect, code the TESTCB ACB macro in the
application program.
The application program must be altered to handle the exclusive control error
return code. Register 15 will contain 8 and the RPLERRCD field will contain 20
(X'14'). The address of the RPL that owns the resource is placed in the first word in
the RPL error message area. The VSAM avoid LSR exclusive control wait option
cannot be changed after OPEN.
Spheres. A sphere is a VSAM cluster and its associated data sets. The cluster is
originally defined with the access method services ALLOCATE command, the
DEFINE CLUSTER command, or through JCL. The most common use of the sphere
is to open a single cluster. The base of the sphere is the cluster itself. When
opening a path (which is the relationship between an alternate index and base
cluster) the base of the sphere is again the base cluster. Opening the alternate index
as a data set results in the alternate index becoming the base of the sphere. In
Figure 29 on page 196, DSN is specified for each ACB, and output processing is
specified.
Figure 29. Relationship Between the Base Cluster and the Alternate Index. The
figure shows the sphere whose base is the cluster CLUSTER.REAL, with the path
CLUSTER.REAL.PATH, the alternate index CLUSTER.REAL.AIX (in the upgrade
set), and CLUSTER.ALIAS.
If you add a fourth statement, the base of the sphere changes, and multiple control
block structures are created for the alternate index CLUSTER.REAL.AIX:
4. OPEN ACB=(CLUSTER.REAL.AIX)
v Does not add to existing structure as the base of the sphere is not the same.
v SHAREOPTIONS are enforced for CLUSTER.REAL.AIX since multiple control block
structures exist.
Shared Subtasks
When processing multiple subtasks sharing a single control block, concurrent GET
and PUT requests are allowed. A control interval is protected for write operations
using an exclusive control facility provided in VSAM record management. Other
PUT requests to the same control interval are not allowed and a logical error is
returned to the user issuing the request macro. Depending on the selected buffer
option, nonshared (NSR) or shared (LSR/GSR) resources, GET requests to the same
control interval as the one being updated might or might not be allowed. Figure 28
on page 194 illustrates the exclusive control facility.
When a subtask issues OPEN to an ACB that will share a control block structure
that might have been used previously, issue the POINT macro to obtain the
position for the data set. In this case, do not assume that positioning is at the
beginning of the data set.
Cross-Region Sharing
The extent of data set sharing within one operating system depends on the data set
disposition and the cross-region share option specified when you define the data
set. Independent job steps or subtasks in an MVS system or multiple systems with
global resource serialization (GRS) can access a VSAM data set simultaneously. For
more information about GRS see z/OS MVS Planning: Global Resource Serialization.
To share a data set, each user must specify DISP=SHR in the data set’s DD
statement.
This option requires that the user’s program use ENQ/DEQ to maintain data
integrity while sharing the data set, including the OPEN and CLOSE processing.
User programs that ignore the write integrity guidelines can cause VSAM
program checks, lost or inaccessible records, uncorrectable data set failures, and
other unpredictable results. This option places responsibility on each user
sharing the data set.
v Cross-region SHAREOPTIONS 4: The data set can be fully shared by any
number of users, and buffers used for direct processing are refreshed for each
request. This setting does not allow any type of non-RLS access when the data
set is already open for RLS processing. With this option, as in SHAREOPTIONS
3, each user is responsible for maintaining both read and write integrity for the
data the program accesses. See the description of SHAREOPTIONS 3 for
ENQ/DEQ and warning information that applies equally to SHAREOPTIONS 4.
With options 3 and 4 you are responsible for maintaining both read and write
integrity for the data the program accesses. These options require your program to
use ENQ/DEQ to maintain data integrity while sharing the data set, including the
OPEN and CLOSE processing. User programs that ignore the write integrity
guidelines can cause VSAM program checks, lost or inaccessible records,
uncorrectable data set failures, and other unpredictable results. These options place
heavy responsibility on each user sharing the data set.
When your program requires that no updating from another control block structure
occur before it completes processing of the requested data record, your program
can issue an ENQ to obtain exclusive use of the VSAM data set. If your program
completes processing, it can relinquish control of the data set with a DEQ. If your
program is only reading data and not updating, it is probably a good practice to
serialize the updates and have the readers wait while an update is occurring. After
an update has completed its ENQ/DEQ bracket, the reader must determine the
required operations for control block refresh and buffer invalidation based on a
communication mechanism, or assume that everything is down-level and refresh
each request.
Protecting the cluster name with DISP processing and the components by VSAM
OPEN SHAREOPTIONS is the normally accepted procedure. When a shared data
set is opened with DISP=OLD, or is opened for reset processing (IDCAMS REUSE
command), or is empty, the data set is processed using SHAREOPTIONS 1 rules.
Scheduler disposition processing is the same for VSAM and non-VSAM data sets.
This is the first level of share protection.
interval back into the data set. When this occurs, your program has lost read
integrity. The control interval copy in your program’s buffer is no longer the
current copy.
The following should be considered when you are providing read integrity:
v Establish ENQ/DEQ procedures for all requests, read and write.
v Decide how to determine and invalidate buffers (index and/or data) that are
possibly down-level.
v Do not permit secondary allocation for an entry-sequenced data set or for a
fixed-length or variable-length RRDS. If you do allow secondary allocation you
should provide a communication mechanism to the read-only tasks that the
extents are increased, force a CLOSE, then issue another OPEN. Providing a
buffer refresh mechanism for index I/O will accommodate secondary allocations
for a key-sequenced data set.
v With an entry-sequenced data set or a fixed-length or variable-length RRDS, you
must also use the VERIFY macro before the GET macro to update possible
down-level control blocks.
v Generally, the loss of read integrity results in down-level data records and
erroneous no-record-found conditions.
The considerations that apply to read integrity also apply to write integrity. The
serialization for read could be done as a shared ENQ and for write as an exclusive
ENQ. You must ensure that all I/O is performed to DASD before dropping the
serialization mechanism (usually the DEQ).
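One possible shape for that serialization, with hypothetical qname and rname values (the rname here is 16 bytes long):

```
* Readers hold a shared ENQ; updaters hold an exclusive ENQ.
         ENQ   (QNAME,RNAME,S,16,SYSTEMS)    Shared: read requests
         ...
         ENQ   (QNAME,RNAME,E,16,SYSTEMS)    Exclusive: write requests
*        ... issue the VSAM request(s); ensure I/O reaches DASD ...
         DEQ   (QNAME,RNAME,16,SYSTEMS)      Release the serialization
QNAME    DC    CL8'MYQNAME'
RNAME    DC    C'MYDATA.DSNAME.XX'
```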
Cross-System Sharing
These share options allow you to specify SHAREOPTION 1 or 2 sharing rules with
SHAREOPTION 3 or 4 record management processing. Use either of the following
share options when you define a data set that must be accessed or updated by
more than one operating system simultaneously:
v Cross-system SHAREOPTION 3. The data set can be fully shared. With this
option, the access method uses the control block update facility (CBUF).
With this option, as in cross-region SHAREOPTIONS 3, each user is responsible
for maintaining both read and write integrity for the data the program accesses.
User programs that ignore write integrity guidelines can cause VSAM program
checks, uncorrectable data set failures, and other unpredictable results. This
option places heavy responsibility on each user sharing the data set. The
RESERVE and DEQ macros are required with this option to maintain data set
integrity.
v Cross-system SHAREOPTION 4. The data set can be fully shared, and buffers
used for direct processing are refreshed for each request.
This option requires that you use the RESERVE and DEQ macros to maintain
data integrity while sharing the data set. Output processing is limited to update
and/or add processing that does not change either the high-used RBA or the
RBA of the high key data control interval if DISP=SHR is specified. For
information about using RESERVE and DEQ, see z/OS MVS Programming:
Authorized Assembler Services Reference ALE-DYN and z/OS MVS Programming:
Authorized Assembler Services Reference LLA-SDU.
v Control area splits and the addition of a new high-key record for a new control
interval that results from a control interval split are not allowed; VSAM returns
a logical error to the user’s program if this condition should occur.
v The data and sequence-set control interval buffers are marked nonvalid
following an I/O operation to a direct access storage device.
Job steps of two or more systems can gain access to the same data set regardless of
the disposition specified in each step’s JCL. To get exclusive control of a volume, a
task in one system must issue a RESERVE macro. For other methods of obtaining
exclusive control using global resource serialization (GRS) see z/OS MVS Planning:
Global Resource Serialization.
CBUF eliminates the restriction that prohibits control area splits under cross-region
SHAREOPTION 4. Therefore, you do not need to restrict code to prevent control
area splits, or allow for the control area split error condition. The restriction to
prohibit control area splits for cross-systems SHAREOPTION 4 still exists.
CBUF processing is not provided if the data set has cross-system SHAREOPTION
4, but does not reside on shared DASD when it is opened. That is, the data set is
still processed as a cross-system SHAREOPTION 4 data set on shared DASD.
When a key-sequenced data set or variable-length RRDS has cross-system
SHAREOPTION 4, control area splits are prevented. Also, split of the control
interval containing the high key of a key range (or data set) is prevented. With
control interval access, adding a new control interval is prevented.
Table 13 on page 202 shows how the SHAREOPTIONS specified in the catalog and
the disposition specified on the DD statement interact to affect the type of
processing.
Cross-Region Sharing
To maintain write integrity for the data set, your program must ensure that there is
no conflicting activity against the data set until your program completes updating
the control interval. Conflicting activity can be divided into two categories:
1. A data set that is totally preformatted and the only write activity is
update-in-place.
In this case, the sharing problem is simplified by the fact that data cannot
change its position in the data set. The lock that must be held for any write
operation (GET/PUT RPL OPTCD=UPD) is the unit of transfer that is the control
Compare the calculated values. If they are equal, you are assured that the
control interval has not moved. If they are not equal, dequeue the resource
from step “c” and start over at step “a”.
g. Issue a PUT for the RPL that has the parameters OPTCD=(SYN,KEY,DIR,UPD).
This does not hold position in the buffer. You can do one of the following:
v Issue a GET for the RPL that has the parameters
OPTCD=(SYN,KEY,UPD,DIR),ARG=MYKEY. This will acquire position of the
buffer.
v Issue a PUT for the RPL that has the parameters
OPTCD=(SYN,KEY,DIR,NSP). This does hold position in the buffer.
h. Issue an ENDREQ. This forces I/O to DASD, drops the position, and causes
data buffer invalidation.
i. Dequeue MYDATA.DSNAME.RELCI.
2. A data set in which record additions and updates with length changes are
permitted.
In this case, the minimum locking unit is a control area to accommodate control
interval splits. A higher level lock must be held during operations involving a
control area split. The split activity must be serialized at a data set level. To
perform a multilevel locking procedure, you must be prepared to use the
information provided during VSAM JRNAD processing in your program. This
user exit is responsible for determining the level of data movement and
obtaining the appropriate locks.
Higher concurrency can be achieved by a hierarchy of locks. Based on the
particular condition, one or more of the locking hierarchies must be obtained.
Lock              Condition
Control interval  Updating a record in place or adding a record to a control
                  interval without causing a split.
Control area      Adding a record or updating a record with a length change,
                  causing a control interval split, but not a control area split.
Data set          Adding a record or updating a record with a length change,
                  causing a control area split.
Cross-System Sharing
With cross-system SHAREOPTIONS 3, you have the added responsibility of
passing the VSAM shared information (VSI) and invalidating data and/or index
buffers. This can be done by using an informational control record as the low key
or first record in the data set. The following information is required to accomplish
the necessary index record invalidation:
1. Number of data control interval splits and index updates for sequence set
invalidation
2. Number of data control area splits for index set invalidation
All data buffers should always be invalidated. See “Techniques of Data Sharing”
on page 203 for the required procedures for invalidating buffers. To perform
selective buffer invalidation, an internal knowledge of the VSAM control blocks is
required.
Your program must serialize the following types of requests (precede the request
with an ENQ and, when the request completes, issue a DEQ):
v All PUT requests.
v POINT, GET-direct-NSP, GET-skip, and GET-for-update requests that are
followed by a PUT-insert, PUT-update, or ERASE request.
v VERIFY requests. When VERIFY is run by VSAM, your program must have
exclusive control of the data set.
v Sequential GET requests.
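As a sketch of this serialization, assuming an RPL named UPDRPL already generated for update processing (the ENQ queue and resource names are illustrative, not names defined by VSAM), a GET-for-update followed by a PUT-update might be bracketed as follows:

```
         ENQ   (QNAME,RNAME,E,8,SYSTEMS)  Serialize the update sequence
         GET   RPL=UPDRPL                 GET for update (OPTCD=UPD)
*        ... modify the record ...
         PUT   RPL=UPDRPL                 Rewrite the record
         DEQ   (QNAME,RNAME,8,SYSTEMS)    Release the resource
QNAME    DC    CL8'MYQNAME'              Illustrative queue name
RNAME    DC    CL8'MYDSN'                Illustrative resource name
```

The SYSTEMS scope shown assumes GRS or an equivalent function is propagating the ENQ across systems, as described later in this topic.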
Similarly, the VSI on the receiving processor can be located. The VSI
level number must be incremented in the receiving VSI to inform the receiving
processor that the VSI has changed. To update the level number, assuming the
address of the VSI is in register 1:
LA 0,1 Place increment into register 0
AL 0,64(,1) Add level number to increment
ST 0,64(,1) Save new level number
If the data set can be shared between z/OS operating systems, a user’s program in
another system can concurrently access the data set. Before you open the data set
specifying DISP=OLD, it is your responsibility to protect across systems with
ENQ/DEQ using the UCB option. This protection is available with GRS or
equivalent functions.
Topic Location
Provision of a Resource Pool 207
Management of I/O Buffers for Shared Resources 212
Restrictions and Guidelines for Shared Resources 216
This chapter is intended to help you share resources among your VSAM data sets.
VSAM has a set of macros that lets you share I/O buffers and I/O-related control
blocks among many VSAM data sets. In VSAM, an I/O buffer is a virtual storage
area from which the contents of a control interval are read and written. Sharing
these resources optimizes their use, reducing the requirement for virtual storage
and therefore reducing paging of virtual storage.
Sharing these resources is not the same as sharing a data set itself (that is, sharing
among different tasks that independently open it). Data set sharing can be done
with or without sharing I/O buffers and I/O-related control blocks. For
information about data set sharing see Chapter 12, “Sharing VSAM Data Sets,” on
page 191.
There are also macros that let you manage I/O buffers for shared resources.
Sharing resources does not improve sequential processing. VSAM does not
automatically position itself at the beginning of a data set opened for sequential
access, because placeholders belong to the resource pool, not to individual data
sets. When you share resources for sequential access, positioning at the beginning
of a data set has to be specified explicitly with the POINT macro or the direct GET
macro with RPL OPTCD=NSP. You may not use a resource pool to load records
into an empty data set.
When you issue BLDVRP, you specify for the resource pool the size and number of
virtual address space buffers for each virtual buffer pool.
The use of Hiperspace buffers can reduce the amount of I/O to a direct access
storage device (DASD) by caching data in expanded storage. The data in a
Hiperspace buffer is preserved unless there is an expanded storage shortage and
the expanded storage that backs the Hiperspace buffer is reclaimed by the system.
VSAM invalidates a Hiperspace buffer when it is copied to a virtual address space
buffer and, conversely, invalidates a virtual address space buffer when it is copied
to a Hiperspace buffer. Therefore, at most one copy of the control interval
exists in virtual address space and Hiperspace. When a modified virtual address
space buffer is reclaimed, it is copied to Hiperspace and to DASD.
For the data pool or the separate index pool at OPEN time, a data set is assigned
the one buffer pool with buffers of the appropriate size—either the exact control
interval size requested, or the next larger size available.
You may have both a global resource pool and one or more local resource pools.
Tasks in an address space that have a local resource pool may use either the global
resource pool, under the restrictions described below, or the local resource pool.
There may be multiple buffer pools based on buffer size for each resource pool.
To share resources locally, a task in the address space issues BLDVRP TYPE=LSR,
DATA|INDEX. To share resources globally, a system task issues BLDVRP
TYPE=GSR. The program that issues BLDVRP TYPE=GSR must be in supervisor
state with key 0 - 7.
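As a hedged sketch of building a local resource pool, the buffer sizes, buffer counts, key length, and share pool identifier below are illustrative values only:

```
         BLDVRP BUFFERS=(512(4),1024(8)),KEYLEN=8,STRNO=4,             X
               TYPE=LSR,SHRPOOL=1,RMODE31=ALL
```

This requests a data pool (TYPE=LSR without INDEX) containing four 512-byte buffers and eight 1024-byte buffers; an index pool with the same SHRPOOL value would be built with a second BLDVRP specifying TYPE=LSR,INDEX after this one completes.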
You can share resources locally or globally, with the following restrictions:
v LSR (local shared resources). You can build up to 255 data resource pools and
255 index resource pools in one address space. Each resource pool must be built
individually. The data pool must exist before the index pool with the same share
pool identification can be built. The parameter lists for these multiple LSR pools
can reside above or below 16 MB. The BLDVRP macro RMODE31 parameter
indicates where VSAM is to obtain virtual storage when the LSR pool control
blocks and data buffers are built.
These resource pools are built with the BLDVRP macro TYPE=LSR and
DATA|INDEX specifications. Specifying MACRF=LSR on the ACB or
GENCB-ACB macros causes the data set to use the LSR pools built by the
BLDVRP macro. The DLVRP macro processes both the data and index resource
pools.
v GSR (global shared resources). All address spaces for a given protection key in
the system share one resource pool. Only one resource pool can be built for each
of the protection keys 0 - 7. With GSR, an access method control block and all
related request parameter lists, exit lists, data areas, and extent control blocks
must be in the common area of virtual storage with a protection key the same as
the resource pool. To get storage in the common area with that protection key,
issue the GETMAIN macro while in that key, for storage in subpool 241. If you
need to share a data set among address spaces, multiple systems, or both,
consider using record-level sharing (RLS) instead of GSR.
The separate index resource pools are not supported for GSR.
The Hiperspace buffers (specified in the BLDVRP macro) are not supported for
GSR.
Generate ACBs, RPLs, and EXLSTs with the GENCB macro: code the WAREA
and LENGTH parameters. The program that issues macros related to that global
resource pool must be in supervisor state with the same key. (The macros are
BLDVRP, CHECK, CLOSE, DLVRP, ENDREQ, ERASE, GENCB, GET, GETIX,
MODCB, MRKBFR, OPEN, POINT, PUT, PUTIX, SCHBFR, SHOWCB, TESTCB,
and WRTBFR. The SHOWCAT macro is not related to a resource pool, because a
program can issue this macro independently of an opened data set.)
For example, to find the control interval size using SHOWCB: open the data set for
nonshared resources processing, issue SHOWCB, close the ACB, issue BLDVRP,
open the ACB for LSR or GSR.
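The sequence just described might be coded as follows; the ACB and work area names are illustrative:

```
         OPEN  (DATACB)                 Open with NSR to learn the CI size
         SHOWCB ACB=DATACB,AREA=WORK,LENGTH=4,FIELDS=(CINV)
         CLOSE (DATACB)                 Close the NSR ACB
         BLDVRP BUFFERS=(4096(5)),KEYLEN=8,STRNO=4,TYPE=LSR,           X
               SHRPOOL=1               Build the pool using the CI size
         OPEN  (LSRACB)                 This ACB specifies MACRF=LSR
WORK     DS    F                        Receives the CINV value
```

The buffer size coded on BLDVRP (4096 here) is assumed to match the CINV value returned by SHOWCB.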
Tip: Because Hiperspace buffers are in expanded storage, you do not need to
consider their size and number when you calculate the size of the virtual resource
pool.
For each VSAM cluster that will share the virtual resource pool you are building,
follow this procedure:
1. Determine the number of concurrent requests you expect to process. The
number of concurrent requests represents STRNO for the cluster.
2. Specify BUFFERS=(SIZE(STRNO+1)) for the data component of the cluster.
v If the cluster is a key-sequenced cluster and the index CISZ (control interval
size) is the same as the data CISZ, change the specification to
BUFFERS=(SIZE((2 X STRNO)+1)).
v If the index CISZ is not the same as the data component CISZ, specify
BUFFERS=(dataCISZ(STRNO+1),indexCISZ(STRNO)).
Following this procedure provides the minimum number of buffers needed to
support concurrently active STRNO strings. An additional string is not
dynamically added to a shared resource pool. The calculation can be repeated for
each cluster which will share the resource pool, including associated alternate
index clusters and clusters in the associated alternate index upgrade sets.
In applications where a resource pool is shared by multiple data sets and not all
data set strings are active concurrently, fewer than the recommended number of buffers
may produce satisfactory results.
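As a worked example of the procedure above, suppose a key-sequenced cluster has a data CISZ of 4096, an index CISZ of 2048 (different from the data CISZ), and an expected STRNO of 3. The buffer specification would then be dataCISZ(STRNO+1) and indexCISZ(STRNO); the KEYLEN and SHRPOOL values below are illustrative:

```
         BLDVRP BUFFERS=(4096(4),2048(3)),KEYLEN=12,STRNO=3,           X
               TYPE=LSR,SHRPOOL=2
```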
If the specified number of buffers is not adequate, VSAM will return a logical error
indicating the out-of-buffer condition.
The SHOWCAT macro is described in z/OS DFSMS Macro Instructions for Data Sets.
The statistics cannot be used to redefine the resource pool while it is in use. You
have to make adjustments the next time you build it.
For buffer pool statistics, the keywords described below are specified in FIELDS.
These fields may be displayed only after the data set described by the ACB is
opened. Each field requires one fullword in the display work area:
Field Description
BFRFND The number of requests for retrieval that could be satisfied without an
I/O operation (the data was found in a buffer).
BUFRDS The number of reads to bring data into a buffer.
NUIW The number of nonuser-initiated writes (that VSAM was forced to do
because no buffers were available for reading the contents of a control
interval).
STRMAX The maximum number of placeholders currently active for the resource
pool (for all the buffer pools in it).
UIW The number of user-initiated writes (PUTs not deferred or WRTBFRs, see
“Deferring Write Requests” on page 212).
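A display of all five statistics fields might be requested as follows, assuming an open ACB named LSRACB; since each field occupies one fullword, the work area is 20 bytes:

```
         SHOWCB ACB=LSRACB,AREA=STATS,LENGTH=20,                       X
               FIELDS=(BFRFND,BUFRDS,NUIW,STRMAX,UIW)
STATS    DS    5F                      One fullword per field
```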
NSR, the default, indicates the data set does not use shared resources. LSR
indicates it uses the local resource pool. GSR indicates it uses the global resource
pool.
If the VSAM control blocks and data buffers reside above 16 MB, RMODE31=ALL
must be specified in the ACB before OPEN is issued. If the OPEN parameter list or
the VSAM ACB resides above 16 MB, the MODE=31 parameter of the OPEN macro
must also be coded.
When an ACB indicates LSR or GSR, VSAM ignores its BSTRNO, BUFNI, BUFND,
BUFSP, and STRNO parameters because VSAM will use the existing resource pool
for the resources associated with these parameters.
If more than one ACB is opened for LSR processing of the same data set, the LSR
pool identified by the SHRPOOL parameter for the first ACB will be used for all
subsequent ACBs.
For a data set described by an ACB with MACRF=GSR, the ACB and all related
RPLs, EXLSTs, ECBs, and data areas must be in the common area of virtual storage
with the same protection key as the resource pool.
If the DLVRP parameter list is to reside above 16 MB, the MODE=31 parameter
must be coded.
Deferring writes saves I/O operations when subsequent requests can be satisfied
by the data in the buffer pool. If you are going to update control intervals more
than once, data processing performance will be improved by deferring writes.
You indicate that writes are to be deferred by coding MACRF=DFR in the ACB,
along with MACRF=LSR or GSR.
ACB MACRF=({LSR|GSR},{DFR|NDF},...),...
VSAM notifies the processing program when an unmodified buffer has been found
for the current request and there will be no more unmodified buffers into which to
read the contents of a control interval for the next request. (VSAM will be forced to
write a buffer to make a buffer available for the next I/O request.) VSAM sets
register 15 to 0 and puts 12 (X'0C') in the feedback field of the RPL that defines the
PUT request detecting the condition.
VSAM also notifies the processing program when there are no buffers available to
be assigned to a placeholder for a request. This is a logical error (register 15
contains 8 unless an exit is taken to a LERAD routine). The feedback field in the
RPL contains 152 (X'98'). You may retry the request; it gets a buffer if one is freed.
TRANSID specifies a number from 0 to 31. The number 0, which is the default,
indicates that requests defined by the RPL are not associated with other requests. A
number from 1 to 31 relates the requests defined by this RPL to the requests
defined by other RPLs with the same transaction ID.
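For example, two RPLs for the same transaction might be generated with the same transaction ID, and their deferred writes later forced together with WRTBFR TYPE=TRN; the control block names and the TRANSID value are illustrative:

```
RPL1     RPL   ACB=LSRACB,TRANSID=5,OPTCD=(KEY,DIR,NUP)
RPL2     RPL   ACB=LSRACB,TRANSID=5,OPTCD=(KEY,DIR,NUP)
*        ... requests issued against RPL1 and RPL2 ...
         WRTBFR RPL=RPL1,TYPE=TRN      Write deferred buffers for TRANSID 5
```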
You can find out what transaction ID an RPL has by issuing SHOWCB or TESTCB.
SHOWCB FIELDS=([TRANSID],...),...
If the ACB to which the RPL is related has MACRF=GSR, the program issuing
SHOWCB or TESTCB must be in supervisor state with the same protection key as
the resource pool. With MACRF=GSR specified in the ACB to which the RPL is
related, a program check can occur if SHOWCB or TESTCB is issued by a program
that is not in supervisor state with protection key 0 - 7. For more information
about using SHOWCB and TESTCB see “Manipulating the Contents of Control
Blocks” on page 140.
You can specify the DFR option in an ACB without using WRTBFR to write
buffers. A buffer is written when VSAM needs one to satisfy a GET request, or all
modified buffers are written when the last of the data sets that uses them is closed.
Besides using WRTBFR to write buffers whose writing is deferred, you can use it
to write buffers that are marked for output with the MRKBFR macro, which is
described in “Marking a Buffer for Output: MRKBFR” on page 215.
VSAM notifies the processing program when there are no more unmodified buffers
into which to read the contents of a control interval. (VSAM would be forced to
write buffers when another GET request required an I/O operation.) VSAM sets
register 15 to 0 and puts 12 (X'0C') in the feedback field of the RPL that defines the
PUT request that detects the condition.
VSAM also notifies the processing program when there are no buffers available to
which to assign a placeholder for a request. This is a logical error (register 15
contains 8 unless an exit is taken to a LERAD routine); the feedback field in the
RPL contains 152 (X'98'). You may retry the request; it gets a buffer if one is freed.
When sharing the data set with a user in another region, your program might
want to write the contents of a specified buffer without writing all other modified
buffers. Your program issues the WRTBFR macro to search your buffer pool for a
buffer containing the specified RBA. If found, the buffer is examined to verify that
it is modified and has a use count of zero. If so, VSAM writes the contents of the
buffer into the data set.
The ddname field of the physical error message identifies the data set that was
using the buffer, but, because the buffer might have been released, its contents
might be unavailable. You can provide a JRNAD exit routine to record the contents
of buffers for I/O errors. It can be coordinated with a physical error analysis
routine to handle I/O errors for buffers whose writing has been deferred. If a
JRNAD exit routine is used to cancel I/O errors during a transaction, the physical
error analysis routine will get only the last error return code. See “SYNAD Exit
Routine to Analyze Physical Errors” on page 256 and “JRNAD Exit Routine to
Journalize Transactions” on page 247 for information about the SYNAD and
JRNAD routines.
See “JRNAD Exit Routine to Journalize Transactions” on page 247 for information
describing the contents of the registers when VSAM exits to the JRNAD routine,
and the fields in the parameter list pointed to by register 1.
Note: For compressed format data sets, the RBA of the compressed record is
unpredictable. The RBA of another record or the address of the next record in the
buffer cannot be determined using the length of the current record or the length of
the record provided to VSAM.
The buffer pool to be searched is the one used by the data component defined by
the ACB to which your RPL is related. If the ACB names a path, VSAM searches
the buffer pool used by the data component of the alternate index. (If the path is
defined over a base cluster alone, VSAM searches the buffer pool used by the data
component of the base cluster.) VSAM begins its search at the buffer you specify
and continues until it finds a buffer that contains an RBA in the range or until the
highest numbered buffer is searched.
For the first buffer that satisfies the search, VSAM returns its address
(OPTCD=LOC) or its contents (OPTCD=MVE) in the work area whose address is
specified in the AREA parameter of the RPL and returns its number in register 0. If
the search fails, register 0 contains the user-specified buffer number and a
one-byte SCHBFR code of X'0D'. To find the next buffer that contains an RBA in
the range, issue SCHBFR again and specify the number of the next buffer after the
first one that satisfied the search. You continue until VSAM indicates it found no
buffer that contains an RBA in the range or until you reach the end of the pool.
Finding a buffer that contains a desired RBA does not get you exclusive control of
the buffer. You may get exclusive control only by issuing GET for update. SCHBFR
does not return the location or the contents of a buffer that is already under the
exclusive control of another request.
MRKBFR MARK=OUT indicates that the buffer’s contents are modified. You must
modify the contents of the buffer itself, not a copy. Therefore, when you issue
SCHBFR or GET to locate the buffer, you must specify RPL OPTCD=LOC. (If you
use OPTCD=MVE, you get a copy of the buffer but do not learn its location.) The
buffer is written when a WRTBFR is issued or when VSAM is forced to write a
buffer to satisfy a GET request.
If you are sharing a buffer or have exclusive control of it, you can release it from
shared status or exclusive control with MRKBFR MARK=RLS. If the buffer was
marked for output, MRKBFR with MARK=RLS does not nullify it; the buffer is
eventually written. Sequential positioning is lost. MRKBFR with MARK=RLS is
similar to the ENDREQ macro.
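A hedged sketch of the locate-modify-write sequence described above, with illustrative names (the RPL is assumed to specify OPTCD=LOC so that the buffer itself, not a copy, is modified):

```
         SCHBFR RPL=SRCHRPL,BFRNO=1    Search starting at the first buffer
*        Test the RPL feedback for the not-found code (X'0D')
*        before using the buffer address returned for the RPL
*        ... modify the record in the buffer itself ...
         MRKBFR RPL=SRCHRPL,MARK=OUT   Mark the located buffer for output
         WRTBFR RPL=SRCHRPL,TYPE=DS    Write the data set's modified buffers
```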
v If a physical I/O error is found while writing a control interval to the direct
access device, the buffer remains in the resource pool. The write-required flag
(BUFCMW) and associated mod bits (BUFCMDBT) are turned off, and the BUFC
is flagged in error (BUFCER2=ON). The buffer is not replaced in the pool, and
buffer writing is not attempted. To release this buffer for reuse, a WRTBFR
macro with TYPE=DS can be issued or the data set can be closed (CLOSE issues
the WRTBFR macro).
v When you use the BLDVRP macro to build a shared resource pool, some of the
VSAM control blocks are placed in a system subpool and others in subpool 0.
When a task ends, the system frees subpool 0 unless it is shared with another
task. The system does not free the system subpool until the job step ends. Then,
if another task attempts to use the resource pool, an abend might occur when
VSAM attempts to access the freed control blocks. This problem does not occur
if the two tasks share subpool 0. Code in the ATTACH macro the SZERO=YES
parameter, or the SHSPL or SHSPV parameters. SZERO=YES is the default.
v GSR is not permitted for compressed data sets.
Topic Location
Controlling Access to VSAM Data Sets 219
Accessing Data Sets Using DFSMStvs and VSAM Record-Level Sharing 219
Specifying Read Integrity 233
Specifying a Timeout Value for Lock Requests 233
| VSAM record-level sharing (RLS) is an access option for VSAM data sets that
| allows transactional applications (such as Customer Information Control System
| (CICS) and DFSMStvs) and non-transactional applications to access data
| concurrently. This option provides multisystem sharing of VSAM data sets
| across a z/OS Parallel Sysplex®. VSAM RLS exploits the data sharing technology
| of the coupling facility (CF), using a CF-based lock manager and a CF cache
| manager in its implementation of record-level sharing.
| Note: VSAM RLS requires that the data sets be System Managed Storage (SMS)
| data sets. To be eligible for RLS, a data set that is not already SMS-managed
| must be converted to SMS.
RLS is a mode of access to VSAM data sets. RLS is an access option interpreted at
open time. Select the option by specifying a new JCL parameter (RLS) or by
specifying MACRF=RLS in the ACB. The RLS MACRF option is mutually exclusive
with the MACRF NSR (nonshared resources), LSR (local shared resources), and
GSR (global shared resources) options. This topic uses the term non-RLS access to
distinguish between RLS access and NSR, LSR, and GSR access.
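A minimal sketch of requesting RLS at open time through the ACB, with illustrative DD and control block names:

```
RLSACB   ACB   DDNAME=MYDD,MACRF=(KEY,DIR,RLS)
         OPEN  (RLSACB)                Data set is opened for RLS access
```

Alternatively, as noted above, the RLS option can be selected with the RLS parameter on the JCL DD statement rather than in the ACB.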
Access method services do not use RLS when performing an IDCAMS EXPORT,
IMPORT, PRINT, or REPRO command. If the RLS keyword is specified in the DD
statement of a data set to be opened by access method services, the keyword is
ignored and the data set is opened and accessed in non-RLS mode. See “Using
Non-RLS Access to VSAM Data Sets” on page 227 for more information about
non-RLS access.
RLS access is supported for KSDS, ESDS, RRDS, and VRRDS data sets, and for
VSAM alternate indexes.
The VSAM RLS functions are provided by the SMSVSAM server. This server
resides in a system address space. The address space is created and the server is
started at MVS IPL time. VSAM internally performs cross-address space accesses
and linkages between requestor address spaces and the SMSVSAM server address
space.
The SMSVSAM server owns two data spaces. One data space is called the
SMSVSAM data space. It contains some VSAM RLS control blocks and a
system-wide buffer pool. VSAM RLS uses the other data space, called MMFSTUFF,
to collect activity monitoring information that is used to produce SMF records.
VSAM provides the cross-address space access and linkage between the requestor
address spaces and the SMSVSAM address and data spaces. See Figure 30.
Figure 30. VSAM RLS address and data spaces and requestor address spaces
| VSAM RLS data buffers occupy the largest share of the SMSVSAM data-space
| storage. In some cases, storage limits on the data buffers may create performance
| slowdowns in high-volume transaction environments. To avoid any storage limits
| and potentially enhance performance, VSAM RLS offers the option to move RLS
| data buffers into 64-bit addressable virtual storage. This option can be activated by
| assigning VSAM data sets to a data class with ISMF that specifies
| RlsAboveTheBar(YES). IBM recommends that you use this option, especially for
| applications with a high rate of critical CICS transactions. For details on setting up
| and using this option, see “Using 64-Bit Addressable Data Buffers” on page 225.
VSAM RLS has multiple levels of CF caching. The value of the SMS DATACLAS
RLS CF Cache Value keyword determines the level of CF caching. The default
value, ALL, indicates that RLS caches both the data and index parts of the VSAM
data set in the coupling facility. If you specify NONE, then RLS caches only the
index part of the VSAM data set. If you specify UPDATESONLY, then RLS caches
data in the coupling facility only during write operations.
All active systems in a sysplex must have the greater than 4K CF caching feature
before the function is enabled.
Users of multiple CICS regions have a file-owning region (FOR) where the local
file definitions reside; the FOR accesses the data set through the local file
definition. Local data sets are accessed by the CICS application-owning region
(AOR) submitting requests directly to VSAM. Without VSAM RLS, sharing of a
data set among regions or systems is achieved by having a remote file definition,
containing the region and local file name, in any other region that wants to
access the data set.
Figure 31 shows the AOR, FOR, and VSAM request flow prior to VSAM RLS.
Figure 31. CICS AOR, FOR, and VSAM request flow without VSAM RLS (MVS 1 through MVS n)
The CICS AOR’s function ships VSAM requests to access a specific data set to the
CICS FOR that owns the file that is associated with that data set. This distributed
access form of data sharing has existed in CICS for some time.
With VSAM RLS, multiple CICS AORs can directly share access to a VSAM data
set without CICS function shipping. With VSAM RLS, CICS continues to provide
the transactional functions. The transactional functions are not provided by VSAM
RLS itself. VSAM RLS provides CF-based record-level locking and CF data caching.
Figure 32 shows a CICS configuration with VSAM RLS.
Figure 32. CICS configuration with VSAM RLS (SMSVSAM address spaces on MVS 1 through MVS n)
VSAM RLS is a multisystem server. The CICS AORs access the shared data sets by
submitting requests directly to the VSAM RLS server. The server uses the CF to
serialize access at the record level.
Related reading: For more information on using CICS to recover data sets, see
CICS Recovery and Restart Guide. For an overview of CICS, see CICS System
Definition Guide.
You can specify VSAM recoverable data set control attributes in IDCAMS (access
method services) DEFINE and ALTER commands. In the data class, you can
specify LOG along with the BWO and LOGSTREAMID parameters. If you want to
be able to back up a data set while it is open, define it using the IDCAMS
BWO(TYPECICS) parameter. Only a CICS application or DFSMStvs can
open a recoverable data set for output because VSAM RLS does not provide the
logging and other transactional functions required for writing to a recoverable data
set.
When a data set is opened in a non-RLS access mode (NSR, LSR, or GSR), the
recoverable attributes of the data set do not apply and are ignored. The recoverable
data set rules have no impact on existing programs that do not use RLS access.
The CICS rollback (backout) function removes changes made to the recoverable
data sets by a transaction. When a transaction terminates abnormally, CICS
implicitly performs a rollback.
The commit and rollback functions protect an individual transaction from changes
that other transactions make to a recoverable data set or other recoverable
resource. This lets the transaction logic focus on the function it is providing and
not have to be concerned with data recovery or cleanup in the event of problems
or failures.
| To provide 64-bit data buffering for a data set with VSAM RLS, all the following
| must be true:
| v The system must be at level z/OS 1.7 or higher.
| v The data set must belong to a data class with the attribute “RLS Above the 2-GB
| Bar” set to Yes.
| v The active IGDSMSxx member of SYS1.PARMLIB must have the keyword
| RlsAboveTheBarMaxPoolSize set to a number between 500 megabytes and 2
| terabytes for the system.
| In addition, to enhance performance for each named system, whether or not the
| buffers are above the 2-gigabyte bar, you can set the keyword RlsFixedPoolSize in
| IGDSMSxx to specify the amount of total real storage to be permanently fixed to
| be used as data buffers.
| RlsAboveTheBarMaxPoolSize(sysname1,value1;sysname2,value2;...) or
| (ALL,value)
| Specifies the total size of the buffer pool, in megabytes, to reside above the
| 2-gigabyte bar on each named system or all systems. The system programmer
| can specify different values for individual systems in the sysplex, or one value
| that applies to all the systems in the sysplex. Valid values are 0 (the default), or
| values between 500 megabytes and 2000000 megabytes (2 terabytes).
| RlsFixedPoolSize(sysname1,value1;sysname2,value2;...) or (ALL,value)
| Specifies the total real storage, in megabytes (above or below the 2-gigabyte
| bar) to be permanently fixed or “pinned” for the use of VSAM RLS data
| buffers. The default is 0. The system programmer can specify different values
| for individual systems in the sysplex, or one value that applies to all the
| systems in the sysplex. If the specified amount is 80% or more of the available
| real storage, the amount is capped at 80% and a write-to-operator (WTO)
| message is issued to warn of the limit being reached.
| To help determine the amount of real storage to use for VSAM RLS buffering,
| you can check SMF record Type 42, subtype 19, for the hit ratio for VSAM RLS
| buffers. A large number of misses indicates a need for more real storage to be
| pinned for the use of VSAM RLS.
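For example, the keywords described above might appear in an IGDSMSxx member as follows; the system names and values are illustrative only (values are in megabytes):

```
RlsAboveTheBarMaxPoolSize(SYS1,2000;SYS2,1000)
RlsFixedPoolSize(ALL,500)
```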
| These values can be changed later by using the SET SMS=xx command with these
| keywords specified in the IGDSMSxx member, or by using the SETSMS command
| with the specific keywords to be changed.
| As usual, the changed parameters in IGDSMSxx cannot take effect until SET
| SMS=xx has been issued.
| Note: The changes with SETSMS and SET SMS=xx do not take effect immediately.
| If the data set has been opened on a system, the SETSMS and SET SMS
| changes to RlsAboveTheBarMaxPoolSize and RlsFixedPoolSize do not take
| effect on that system until the SMSVSAM address space is recycled. If the
| data set has not been opened on that system, the SETSMS and SET SMS
| changes to the two keywords will take effect when the data set is opened
| the first time on the system, without the need to recycle SMSVSAM.
| For more information about specifying IGDSMSxx parameters, see z/OS MVS
| Initialization and Tuning Reference. For more information about checking SMF
| records, see z/OS MVS System Management Facilities (SMF). See z/OS MVS System
| Commands for detailed information on the SET SMS and SETSMS commands.
VSAM RLS can ensure read integrity across splits. It uses the cross-invalidate
function of the CF to invalidate copies of data and index CIs in buffer pools other
than the writer’s buffer pool. This ensures that all RLS readers (DFSMStvs, CICS,
and non-CICS outside DFSMStvs) can see any records moved by a concurrent CI
or CA split. On each GET request, VSAM RLS tests the validity of the buffers;
invalid buffers are refreshed from the CF or DASD.
VSAM RLS provides record locking and buffer coherency across the CICS and
non-CICS read/write sharers of nonrecoverable data sets. However, the record lock
on a new or changed record is released as soon as the buffer that contains the
change has been written to the CF cache and DASD. This differs from the case in
which a DFSMStvs or CICS transaction modifies VSAM RLS recoverable data sets
and the corresponding locks on the added and changed records remain held until
the end of the transaction.
For sequential and skip-sequential processing, VSAM RLS does not write a
modified control interval (CI) until the processing moves to another CI or an
ENDREQ is issued by the application. If an application or the VSAM RLS server
ends abnormally, these buffered changes are lost. To help provide data integrity,
the locks for those sequential records are not released until the records are written.
While VSAM RLS permits read and write sharing of nonrecoverable data sets
across DFSMStvs and CICS and non-CICS applications, most applications are not
designed to tolerate this sharing. The absence of transactional recovery requires
very careful design of the data and the application.
If a data set is already open for non-RLS output, an open for RLS fails. Therefore, at
any time, a data set can be open for non-RLS write access or open for RLS access.
CICS and VSAM RLS provide a quiesce function to assist in the process of
switching a data set from CICS RLS usage to non-RLS usage.
| Note: For non-recoverable data sets, either transactional (CICS or DFSMStvs) RLS
| or non-transactional (non-CICS and non-DFSMStvs) RLS is acceptable. For
| recoverable data sets:
| v Transactional RLS can share with: any transactional RLS accesses, and
| input-only non-transactional RLS accesses (non-transactional RLS cannot
| update recoverable data sets)
| v Non-transactional RLS can share with any non-transactional RLS as long
| as they do not update the recoverable data sets.
| For example, if OPEN1 already successfully opened the data set to be accessed
| with RLS, the subsequent OPEN2 attempting to open it for non-RLS output would
| fail, regardless of whether or not the data set is recoverable. With the same
| OPEN1, if the data set is recoverable, OPEN2 can open it for non-transactional
| (that is, non-commit protocol) RLS input-only access.
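The sharing rules in the note above reduce to a single condition: for a recoverable data set, any non-transactional RLS access must be input-only. The following Python sketch is purely illustrative; the function name and data structures are assumptions, not part of VSAM:

```python
def rls_share_allowed(recoverable, accesses):
    """Whether a set of concurrent RLS accesses can share a data set.

    accesses -- iterable of (protocol, mode) pairs, where protocol is
                "transactional" or "non-transactional" and mode is
                "input" or "update".
    """
    if not recoverable:
        # Non-recoverable: any mix of transactional and
        # non-transactional RLS access is acceptable.
        return True
    # Recoverable: non-transactional RLS access must be input-only.
    return all(mode == "input"
               for protocol, mode in accesses
               if protocol == "non-transactional")
```

For example, a transactional updater can share a recoverable data set with a non-transactional reader, but not with a non-transactional updater.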
Share Options
For non-RLS access, VSAM uses the share options settings to determine the type of
sharing permitted. If you set the cross-region share option to 2, a non-RLS open for
input is permitted while the data set is already open for RLS access. VSAM
provides full read and write integrity for the RLS users, but does not provide read
integrity for the non-RLS user. A non-RLS open for output is not permitted while
the data set is already open for RLS access.
VSAM RLS provides full read and write sharing for multiple users; it does not use
share options settings to determine levels of sharing. When an RLS open is
requested and the data set is already open for non-RLS input, VSAM does check
the cross-region setting. If it is 2, then the RLS open is permitted. The open fails for
any other share option or if the data set has been opened for non-RLS output.
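The open-compatibility rules in the two paragraphs above can be sketched as a small decision function. This is an illustrative model only; the request and mode names are assumptions, not VSAM interfaces:

```python
def open_allowed(request, existing, cross_region):
    """Model of the RLS/non-RLS open compatibility rules.

    request      -- one of "RLS", "NONRLS-IN", "NONRLS-OUT"
    existing     -- set of access modes already open against the data set
    cross_region -- the data set's cross-region share option (1-4)
    """
    if request == "RLS":
        if "NONRLS-OUT" in existing:
            return False                 # non-RLS output excludes RLS
        if "NONRLS-IN" in existing:
            return cross_region == 2     # permitted only with option 2
        return True
    if request == "NONRLS-OUT":
        return "RLS" not in existing     # never while open for RLS
    if request == "NONRLS-IN":
        if "RLS" in existing:
            return cross_region == 2     # no read integrity for this reader
        return True
    raise ValueError(request)
```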
Locking
Non-RLS provides local locking (within the scope of a single buffer pool) of the
VSAM control interval. Locking contention can result in an “exclusive control
conflict” error response to a VSAM record management request.
When you request a user-managed rebuild for a lock structure, the validity check
function determines if there is enough space for the rebuild process to complete. If
there is not enough space, the system rejects the request and displays an
informational message.
When you request an alter operation for a lock structure, the validity check
function determines if there is enough space for the alter process to complete. If
there is not enough space, the system displays a warning message that includes the
size recommendation.
VSAM RLS supports a timeout value that you can specify through the RPL, in the
PARMLIB, or in the JCL. CICS uses this parameter to ensure that a transaction
does not wait indefinitely for a lock to become available. VSAM RLS uses a
timeout function of the DFSMS lock manager.
When an ESDS is used with VSAM RLS, to serialize the processing of ESDS
records, an exclusive, sysplex-wide data-set level “add to end” lock is held each
time a record is added to the end of the data set. Reading and updating of existing
records do not acquire the lock. Non-RLS VSAM does not need such serialization
overhead because it does not serialize ESDS record additions across the sysplex.
Recommendation: Carefully design your use of ESDS with RLS; otherwise, you
might see performance differences between accessing ESDSs with and without
RLS.
Retaining locks: VSAM RLS uses share and exclusive record locks to control
access to the shared data. An exclusive lock is used to ensure that a single user is
updating a specific record. The exclusive lock causes any read-with-integrity
request for the record by another user (CICS transaction or non-CICS application)
to wait until the update is finished and the lock released.
If a transaction or the system fails, exclusive locks on records of recoverable data
sets held by the transaction must remain held. However, other users waiting for
these locks should not continue to wait, because the outage is likely to be longer
than the user would want to wait. When these conditions occur, VSAM RLS
converts these exclusive record locks into retained locks.
Neither exclusive nor retained locks are available to other users. When another
user encounters lock contention with an exclusive lock, the user’s lock request
waits. When another user encounters lock contention with a retained lock, the lock
request is immediately rejected with a “retained lock” error response. As a result,
the VSAM record management request that produced the lock request fails with a
“retained lock” error response.
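The difference between contending with an active exclusive lock (wait) and a retained lock (immediate rejection) can be modeled as follows. This is a conceptual sketch only; the class and method names are assumptions:

```python
class RecordLock:
    """Illustrative model of an RLS exclusive record lock."""

    def __init__(self, owner):
        self.owner = owner
        self.retained = False

    def owner_failed(self):
        # On a transaction or server failure, exclusive locks on
        # recoverable data become retained rather than being released.
        self.retained = True

    def request(self):
        """Outcome for another user requesting this record's lock."""
        if self.retained:
            return "retained-lock error"   # rejected immediately
        return "wait"                      # waits for normal release
```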
If you close a data set in the middle of a transaction or unit of recovery and it is
the last close for this data set on this system, then RLS converts the locks from
active to retained.
Supporting non-RLS access while retained locks exist: Retained locks are created
when a failure occurs. The locks need to remain until completion of the
corresponding recovery. The retained locks only have meaning for RLS access.
Lock requests issued by RLS access requests can encounter the retained locks.
Non-RLS access does not perform record locking and therefore would not
encounter the retained locks.
To ensure integrity of a recoverable data set, VSAM does not permit non-RLS
update access to the data set while retained locks exist for that data set. There can
be situations where an installation must execute some non-CICS applications that
require non-RLS update access to the data set. VSAM RLS provides an IDCAMS
command (SHCDS PERMITNONRLSUPDATE) that can be used to set the status of
a data set to enable non-RLS update access to a recoverable data set while retained
locks exist. This command does not release the retained locks. If this function is
used, VSAM remembers its usage and informs the CICSs that hold the retained
locks when they later open the data set with RLS.
v Hiperbatch
v Catalogs, the VVDS, the JRNAD exit, and any AMP= parameters in JCL
v Data that is stored in z/OS UNIX System Services
v Striped VSAM data sets
The VSAM RLS record management request task must be the same task that
opened the ACB, or the task that opened the ACB must be in the task hierarchy.
That is, the record management task was attached by the task that opened the
ACB, or by a task that was attached by the task that opened the ACB.
cluster. This would normally result in an error response return code 8 and
reason code 144. Before giving this response to the NRI request, VSAM RLS
obtains a shared lock on the base cluster record that was pointed to by the
alternate index. This ensures that if the record was being modified, the change
and corresponding alternate index upgrade completes. The record lock is then
released and VSAM retries the access, which should now find the record correctly.
This internal record locking may encounter locking errors such as deadlock or
timeout. Your applications must be prepared to accept locking error return
codes that may be returned on GET or POINT NRI requests. Normally such
errors will not occur.
2. CR—consistent read
This tells VSAM RLS to obtain a SHARE lock on the record accessed by a GET
or POINT request. It ensures the reader does not see an uncommitted change
made by another transaction. Instead, the GET/POINT waits for the change to
be committed or backed out and the EXCLUSIVE lock on the record to be
released.
3. CRE—consistent read explicit
This is the same as CR, except VSAM RLS keeps the SHARE lock on the record
until end-of-transaction. This option is only available to CICS or DFSMStvs
transactions. VSAM does not understand end-of-transaction for non-CICS or
non-DFSMStvs usage.
This capability is often referred to as REPEATABLE READ.
The record locks obtained by the VSAM RLS GET requests with CRE option
inhibit update or erase of the records by other concurrently executing
transactions. However, the CRE requests do not inhibit the insert of other
records by other transactions. The following cases need to be considered when
using this function.
a. If a GET DIR (direct) or SKP (skip sequential) request with CRE option
receives a “record not found” response, VSAM RLS does not retain a lock
on the nonexistent record. The record could be inserted by another
transaction.
b. A sequence of GET SEQ (sequential) requests with CRE option results in a
lock being held on each record that was returned. However, no additional
locks are held that would inhibit the insert of new records in between the
records locked by the GET CRE sequential processing. If the application
were to re-execute the previously executed sequence of GET SEQ,CRE
requests, it would see any newly inserted records. Within the transactional
recovery community, these records are referred to as “phantom” records.
The VSAM RLS CRE function does not inhibit phantom records.
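The three read-integrity options described above differ only in which lock a GET or POINT obtains and how long it is held. The following summary table is a sketch; the tuple representation is an assumption for illustration:

```python
def read_integrity(option):
    """Lock behavior of GET/POINT under each RLS read-integrity option.

    Returns (lock_type, held_until).
    """
    table = {
        "NRI": (None, None),                     # no lock; may see uncommitted changes
        "CR":  ("SHARE", "end of request"),      # waits out uncommitted changes first
        "CRE": ("SHARE", "end of transaction"),  # repeatable read; CICS/DFSMStvs only
    }
    return table[option]
```

Note that even CRE does not lock the key ranges between returned records, which is why phantom records remain possible.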
To serialize the adding of ESDS records across the sysplex, VSAM RLS obtains an
“add-to-end” lock exclusively for every record added to the end of the data set. If
applications frequently add records to the same ESDS, the requests are processed
serially and performance might degrade.
In comparison, non-RLS VSAM has a different set of functions and does not
require serializing ESDS record additions across the sysplex. If an ESDS is shared
among threads, carefully design your use of ESDS with RLS to lessen any possible
impact to performance, as compared to the use of ESDSs with non-RLS VSAM.
Note: For VSAM RLS, the system obtains a global data-set-level lock only for
adding an ESDS record to the data set, not for reading or updating existing
ESDS records. Therefore, GET requests and PUT updates on other records
for the data sets do not obtain the “add-to-end” lock. Those updates can be
processed while another thread holds the “add-to-end” lock.
How long the RLS “add-to-end” lock is held depends on whether the data set is
recoverable and on the type of PUT request that adds the record. If the data set is
recoverable, RLS does not implicitly release the lock. The lock is explicitly released
by ENDREQ, IDAEADD, or IDALKREL. For nonrecoverable data sets, the PUT
SEQ command releases the lock after writing a few buffers, whereas the PUT DIR
command releases the lock at the end of the request.
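The release rules in the preceding paragraph can be summarized in a short decision sketch (illustrative only; the return strings are not VSAM terminology):

```python
def add_to_end_lock_release(recoverable, put_type):
    """When the RLS "add-to-end" lock is released after a record is added.

    recoverable -- True for a recoverable data set
    put_type    -- "SEQ" or "DIR"
    """
    if recoverable:
        # Not released implicitly; ENDREQ, IDAEADD, or IDALKREL must do it.
        return "explicit release required"
    if put_type == "SEQ":
        return "released after writing a few buffers"
    return "released at end of request"    # PUT DIR
```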
CRE gives DFSMStvs access to VSAM data sets open for input or output. CR or
NRI gives DFSMStvs access to VSAM recoverable data sets only for output.
Related reading:
v For information about how to use these read integrity options for DFSMStvs
access, see z/OS DFSMStvs Planning and Operating Guide.
v For complete descriptions of these subparameters, see the description of the RLS
parameter in z/OS MVS JCL Reference.
For information about the RLSTMOUT parameter, see the description of the EXEC
statement in z/OS MVS JCL Reference.
Related reading:
v For information about avoiding deadlocks and additional information about
specifying a timeout value, see z/OS DFSMStvs Planning and Operating Guide.
v z/OS MVS Initialization and Tuning Guide.
| Index Trap
| For VSAM RLS, there is an index trap that checks each index record before writing
| it. The trap detects the following index corruptions:
| v High-used greater than high-allocated
| v Duplicate or invalid index pointer
| v Out-of-sequence index record
| v Invalid section entry
| v Invalid key length.
| For more information about the VSAM RLS index trap for system programmers,
| see z/OS DFSMSdfp Diagnosis.
Topic Location
EXAMINE Command 235
How to Run EXAMINE 236
Samples of Output from EXAMINE Runs 238
This chapter describes how the service aid, EXAMINE, is used to analyze a
key-sequenced data set (KSDS) cluster for structural errors.
EXAMINE Command
EXAMINE is an access method services command that lets users analyze and
collect information on the structural consistency of key-sequenced data set clusters.
This service aid consists of two tests: INDEXTEST and DATATEST.
INDEXTEST examines the index component of the key-sequenced data set cluster
by cross-checking vertical and horizontal pointers contained within the index
control intervals, and by performing analysis of the index information. It is the
default test of EXAMINE.
DATATEST evaluates the data component of the key-sequenced data set cluster by
sequentially reading all data control intervals, including free space control
intervals. Tests are then carried out to ensure record and control interval integrity,
free space conditions, spanned record update capacity, and the integrity of internal
VSAM pointers contained within the control interval.
For a description of the EXAMINE command syntax, see z/OS DFSMS Access
Method Services for Catalogs.
EXAMINE Users
EXAMINE end users fall into two categories:
1. Application Programmer/Data Set Owner. These users want to know of any
structural inconsistencies in their data sets, and they are directed to
corresponding recovery methods that IBM supports by the appropriate
summary messages. The users’ primary focus is the condition of their data sets;
therefore, they should use the ERRORLIMIT(0) parameter of EXAMINE to
suppress printing of detailed error messages.
2. System Programmer/Support Personnel. System programmers or support
personnel need the information from detailed error messages to document or
fix a problem with a certain data set.
Users must have master level access to a catalog or control level access to a data
set to examine it. Master level access to the master catalog is also sufficient to
examine a user catalog.
For further considerations for data set sharing, see Chapter 12, “Sharing VSAM
Data Sets,” on page 191.
DATATEST reads the sequence set from the index component and the entire data
component of the KSDS cluster. Therefore, it takes considerably more time and
uses more system resources than INDEXTEST.
If you are using EXAMINE to document an error in the data component, run both
tests. If you are using EXAMINE to document an error in the index component, it
is usually not necessary to run DATATEST.
If you are using EXAMINE to confirm a data set’s integrity, your decision to run
one or both tests depends on the time and resources available.
authorization failure when the check indicated above is made. This is normal, and,
if you have master level access to the catalog being examined, the examination can
continue.
Recommendation: When you analyze a catalog, use the VERIFY command before
you use the EXAMINE command.
Chapter 15. Checking VSAM Key-Sequenced Data Set Clusters for Structural Errors 237
Checking VSAM Key-Sequenced Data Set Clusters for Structural Errors
condition), all supportive and individual data set structural error messages are
printed. Note that the status and statistical messages, summary messages, and
function-not-performed messages are not under the control of ERRORLIMIT; they
are printed regardless of the ERRORLIMIT setting. The ERRORLIMIT parameter is used
separately by INDEXTEST and DATATEST. For more information about using this
parameter see z/OS DFSMS Access Method Services for Catalogs.
EXAMINE NAME(EXAMINE.KD05) -
INDEXTEST -
DATATEST
Because of this severe INDEXTEST error, DATATEST did not run in this particular
case.
EXAMINE then displayed the prior key (11), the data control interval at relative
byte address decimal 512, and the offset address hexadecimal 9F into the control
interval where the duplicate key was found.
Topic Location
Guidelines for Coding Exit Routines 241
EODAD Exit Routine to Process End of Data 245
EXCEPTIONEXIT Exit Routine 246
JRNAD Exit Routine to Journalize Transactions 247
LERAD Exit Routine to Analyze Logical Errors 253
RLSWAIT Exit Routine 254
SYNAD Exit Routine to Analyze Physical Errors 256
UPAD Exit Routine for User Processing 258
User-Security-Verification Routine 261
You can use the EXLST VSAM macro to create an exit list. EXLST parameters
EODAD, JRNAD, LERAD, SYNAD and UPAD are used to specify the addresses of
your user-written routines. Only the exits marked active are executed.
You can use access methods services commands to specify the addresses of
user-written routines to perform exception processing and user-security verification
processing.
Related reading:
v For information about the EXLST macro, see z/OS DFSMS Macro Instructions for
Data Sets.
v For information about exits from access methods services commands, see z/OS
DFSMS Access Method Services for Catalogs.
Table 15 on page 242 shows the exit locations available from VSAM.
Programming Guidelines
Usually, you should observe these guidelines in coding a routine:
v Code your routine reentrant.
v Save and restore registers (see individual routines for other requirements).
v Be aware of registers used by the VSAM request macros.
v Be aware of the addressing mode (24-bit or 31-bit) in which your exit routine
will receive control.
v Determine if VSAM or your program should load the exit routine.
A user exit that is loaded by VSAM is invoked in the addressing mode specified
when the module was link edited. A user exit that is not loaded by VSAM receives
control in the same addressing mode as the issuer of the VSAM
record-management, OPEN, or CLOSE request that causes the exit to be taken. It is
the user’s responsibility to ensure that the exit is written for the correct addressing
mode.
Your exit routine can be loaded within your program or by using JOBLIB or
STEPLIB with the DD statement to point to the library location of your exit
routine.
Related reading: When you code VSAM user exit routines, you should have
available z/OS DFSMS Macro Instructions for Data Sets and z/OS DFSMS Access
Method Services for Catalogs and be familiar with their contents.
If the LERAD, EODAD, or SYNAD exit routine reuses the RPL passed to it, you
should be aware of these factors:
v The exit routine is called again if the request issuing the reused RPL results in
the same exception condition that caused the exit routine to be entered
originally.
v The original feedback code is replaced with the feedback code that indicates the
status of the latest request issued against the RPL. If the exit routine returns to
VSAM, VSAM (when it returns to the user’s program) sets register 15 to also
indicate the status of the latest request.
v JRNAD, UPAD, and exception exits are extensions of VSAM and, therefore, must
return to VSAM in the same processing mode in which they were entered (that
is, cross-memory, SRB, or task mode).
VSAM has lost track of its reentry point to your main program. If the exit
routine returns to VSAM, VSAM issues an error return code.
Register Contents
Table 16 gives the contents of the registers when VSAM exits to the IGW8PNRU
routine.
Table 16. Contents of registers at entry to IGW8PNRU exit routine
Register Contents
0 Not applicable.
1 Address of IGWUNLR (in key 8 storage).
2 Address of an area to be used as an autodata area (in key 8 storage).
3 Length of the autodata area.
4-13 Unpredictable. Register 13, by convention, contains the address of your
processing program’s 72-byte save area, which must not be used as a save
area by the IGW8PNRU routine if it returns control to VSAM.
14 Return address to VSAM.
15 Entry address to the IGW8PNRU routine.
Programming Considerations
The following programming considerations apply to the batch override exit:
v The name of this exit must be IGW8PNRU.
v The exit must be loadable from any system that might do peer recovery for
another system.
Recommendation: It is possible for this exit to perform other processing, but IBM
strongly recommends that the exit not attempt to update any recoverable resources.
Register Contents
Table 17 gives the contents of the registers when VSAM exits to the EODAD
routine.
Table 17. Contents of registers at entry to EODAD exit routine
Register Contents
0 Unpredictable.
1 Address of the RPL that defines the request that occasioned VSAM’s
reaching the end of the data set. The register must contain this address if
you return to VSAM.
2-13 Unpredictable. Register 13, by convention, contains the address of your
processing program’s 72-byte save area, which must not be used as a save
area by the EODAD routine if it returns control to VSAM.
14 Return address to VSAM.
15 Entry address to the EODAD routine.
Programming Considerations
The typical actions of an EODAD routine are to:
v Examine RPL for information you need, for example, type of data set
v Issue completion messages
v Close the data set
v Terminate processing without returning to VSAM.
If the routine returns to VSAM and another GET request is issued for access to the
data set, VSAM exits to the LERAD routine.
The type of data set whose end was reached can be determined by examining the
RPL for the address of the access method control block that connects the program
to the data set and testing its attribute characteristics.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and returns to
VSAM, it must provide a save area and restore registers 13 and 14, which are used
by these macros.
When your EODAD routine completes processing, return to your main program as
described in “Return to a Main Program” on page 243.
Register Contents
Table 18 gives the contents of the registers when VSAM exits to the
EXCEPTIONEXIT routine.
Table 18. Contents of registers at entry to EXCEPTIONEXIT routine
Register Contents
0 Unpredictable.
1 Address of the RPL that contains a feedback return code and the address of
a message area, if any.
2-13 Unpredictable. Register 13, by convention, contains the address of your
processing program’s 72-byte save area, which must not be used by the
routine if it returns control to VSAM.
14 Return address to VSAM.
15 Entry address to the exception exit routine.
Programming Considerations
The exception exit is taken for the same errors as a SYNAD exit. If you have both
an active SYNAD routine and an EXCEPTIONEXIT routine, the exception exit
routine is processed first.
The exception exit is associated with the attributes of the data set (specified by the
DEFINE) and is loaded on every call. Your exit must reside in the LINKLIB and
the exit cannot be called when VSAM is in cross-memory mode.
When your exception exit routine completes processing, return to your main
program as described in “Return to a Main Program” on page 243.
Related reading: For information about how exception exits are established,
changed, or nullified, see z/OS DFSMS Access Method Services for Catalogs.
Register Contents
Table 19 gives the contents of the registers when VSAM exits to the JRNAD
routine.
Table 19. Contents of registers at entry to JRNAD exit routine
Register Contents
0 Byte 0—the subpool ID token created by a BLDVRP request. Bytes 2 -
3—the relative buffer number, that is, the buffer array index within a buffer
pool.
1 Address of a parameter list built by VSAM.
2-3 Unpredictable.
4 Address of buffer control block (BUFC).
5-13 Unpredictable.
14 Return address to VSAM.
15 Entry address to the JRNAD routine.
Programming Considerations
If the JRNAD exit is taken for I/O errors, the exit routine can zero out, or otherwise alter,
the physical-error return code, so that a series of operations can continue to
completion, even though one or more of the operations failed.
The contents of the parameter list built by VSAM, pointed to by register 1, can be
examined by the JRNAD exit routine, which is described in Table 20 on page 251.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB, it must restore
register 14, which is used by these macros, before it returns to VSAM.
If the exit routine uses register 1, it must restore it with the parameter list address
before returning to VSAM. (The routine must return for completion of the request
that caused VSAM to exit.)
The JRNAD exit must be indicated as active before the data set for which the exit
is to be used is opened, and the exit must not be made inactive during processing.
If you define more than one access method control block for a data set and want to
have a JRNAD routine, the first ACB you open for the data set must specify the
exit list that identifies the routine.
When the data set being processed is extended addressable, the JRNAD exits
dealing with RBAs are not taken or are restricted, because RBAs in such a data set
can be greater than 4 GB and no longer fit in the 4-byte fields used to address
them. The restrictions apply to the entire data set, regardless of the specific RBA
value.
Journalizing Transactions
For journalizing transactions (when VSAM exits because of a GET, PUT, or
ERASE), you can use the SHOWCB macro to display information in the request
parameter list about the record that was retrieved, stored, or deleted
(FIELDS=(AREA,KEYLEN,RBA,RECLEN), for example). You can also use the
TESTCB macro to find out whether a GET or a PUT was for update
(OPTCD=UPD).
If your JRNAD routine only journals transactions, it should ignore reason X'0C'
and return to VSAM; conversely, it should ignore reasons X'00', X'04', and X'08' if it
records only RBA changes.
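A JRNAD routine's filtering of reason codes might look like the following sketch. Only the reason-code values come from this chapter; the dispatch structure and names are assumptions:

```python
def jrnad_action(reason, journals_transactions_only=True):
    """Decide whether a JRNAD exit acts on a given reason code.

    Reason codes from the exit parameter list (offset 20):
    X'00' GET, X'04' PUT, X'08' ERASE, X'0C' RBA change.
    """
    journal_reasons = {0x00: "GET", 0x04: "PUT", 0x08: "ERASE"}
    if journals_transactions_only:
        # A transaction journaler ignores X'0C' and returns to VSAM.
        return journal_reasons.get(reason)
    # An RBA-change recorder instead ignores X'00', X'04', and X'08'.
    return "RBA change" if reason == 0x0C else None
```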
RBA Changes
For recording RBA changes, you must calculate how many records there are in the
data being shifted or moved, so you can keep track of the new RBA for each. If all
the records are the same length, you calculate the number by dividing the record
length into the number of bytes of data being shifted. If record length varies, you
can calculate the number by using a table that not only identifies the records (by
associating a record’s key with its RBA), but also gives their length.
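For the fixed-length case, the calculation described above is a simple division; a sketch (the function name is illustrative):

```python
def records_shifted(bytes_shifted, record_length):
    """Number of fixed-length records in a shifted or moved region of data.

    For fixed-length records the count is the byte count divided by the
    record length. For variable-length records, a table associating each
    record's key with its RBA and length is needed instead.
    """
    if bytes_shifted % record_length != 0:
        raise ValueError("region is not a whole number of records")
    return bytes_shifted // record_length
```

For example, 4096 bytes of shifted data containing 128-byte records is 32 records, each of which now has a new RBA to journal.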
You should provide a routine to keep track of RBA changes caused by control
interval and control area splits. RBA changes that occur through keyed access to a
key-sequenced data set must also be recorded if you intend to process the data set
later by direct-addressed access.
You might also want to use the JRNAD exit to maintain shared or exclusive control
over certain data or index control intervals; and in some cases, in your exit routine
you can reject the request for certain processing of the control intervals. For
example, if you used this exit to maintain information about a data set in a shared
environment, you might reject a request for a control interval or control area split
because the split might adversely affect other users of the data set.
USERPROG CSECT
SAVE(R14,R12) Standard entry code
.
.
.
BLDVRP BUFFERS=(512(3)), Build resource pool X
KEYLEN=4, X
STRNO=4, X
TYPE=LSR, X
SHRPOOL=1, X
RMODE31=ALL
OPEN (DIRACB) Logically connect KSDS1
.
.
.
PUT RPL=DIRRPL This PUT causes the exit routine USEREXIT
to be taken with an exit code X’50’ if
there is a CI or CA split
LTR R15,R15 Check return code from PUT
BZ NOCANCEL Retcode = 0 if USEREXIT did not cancel
CI/CA split
= 8 if cancel was issued, if
we know a CI or CA split
occurred
.
. Process the cancel situation
.
NOCANCEL . Process the noncancel situation
.
.
CLOSE (DIRACB) Disconnect KSDS1
DLVRP TYPE=LSR,SHRPOOL=1 Delete the resource pool
.
.
.
RETURN Return to caller.
.
.
.
DIRACB ACB AM=VSAM, X
DDNAME=KSDS1, X
BUFND=3, X
BUFNI=2, X
MACRF=(KEY,DDN,SEQ,DIR,OUT,LSR), X
SHRPOOL=1, X
EXLST=EXITLST
*
DIRRPL RPL AM=VSAM, X
ACB=DIRACB, X
AREA=DATAREC, X
AREALEN=128, X
ARG=KEYNO, X
KEYLEN=4, X
OPTCD=(KEY,DIR,FWD,SYN,NUP,WAITX), X
RECLEN=128
*
DATAREC DC CL128’DATA RECORD TO BE PUT TO KSDS1’
KEYNO DC F’0’ Search key argument for RPL
EXITLST EXLST AM=VSAM,JRNAD=(JRNADDR,A,L)
JRNADDR DC CL8’USEREXIT’ Name of user exit routine
END End of USERPROG
Parameter List
The parameter list built by VSAM contains reason codes to indicate why the exit
was taken, and also locations where you can specify return codes for VSAM to
take or not take an action on returning from your routine. The information
provided in the parameter list varies depending on the reason the exit was taken.
Table 20 shows the contents of the parameter list.
The parameter list will reside in the same area as the VSAM control blocks, either
above or below the 16 MB line. For example, if the VSAM data set was opened
and the ACB stated RMODE31=CB, the exit parameter list will reside above the 16
MB line. To access a parameter list that resides above the 16 MB line, you will
need to use 31-bit addressing.
Table 20. Contents of parameter list built by VSAM for the JRNAD exit
Offset Bytes Description
0(X'0') 4 Address of the RPL that defines the request that caused VSAM to exit to the routine.
4(X'4') 4 Address of a 5-byte field that identifies the data set being processed. This field has the
format:
4 bytes Address of the access method control block specified by the RPL that defines
the request occasioned by the JRNAD exit.
1 byte Indication of whether the data set is the data (X'01') or the index (X'02')
component.
8(X'8') 4 Variable, depends on the reason indicator at offset 20:
Offset 20
Contents at offset 8
X'0C' The RBA of the first byte of data that is being shifted or moved.
X'20' The RBA of the beginning of the control area about to be split.
X'24' The address of the I/O buffer into which data was going to be read.
X'28' The address of the I/O buffer from which data was going to be written.
X'2C' The address of the I/O buffer that contains the control interval contents that
are about to be written.
X'30' Address of the buffer control block (BUFC) that points to the buffer into which
data is about to be read under exclusive control.
X'34' Address of BUFC that points to the buffer into which data is about to be read
under shared control.
X'38' Address of BUFC that points to the buffer which is to be acquired in exclusive
control. The buffer is already in the buffer pool.
X'3C' Address of the BUFC that points to the buffer which is to be built in the buffer
pool in exclusive control.
X'40' Address of BUFC which points to the buffer whose exclusive control has just
been released.
X'44' Address of BUFC which points to the buffer whose contents have been made
invalid.
X'48' Address of the BUFC which points to the buffer into which the READ
operation has just been completed.
X'4C' Address of the BUFC which points to the buffer from which the WRITE
operation has just been completed.
Table 20. Contents of parameter list built by VSAM for the JRNAD exit (continued)
Offset Bytes Description
12(X'C') 4 Variable, depends on the reason indicator at offset 20:
Offset 20
Contents at offset 12
X'0C' The number of bytes of data that is being shifted or moved (this number does
not include free space, if any, or control information, except for a control area
split, when the entire contents of a control interval are moved to a new control
interval.)
X'20' Unpredictable.
X'24' Unpredictable.
X'28' Bits 0-31 correspond with transaction IDs 0-31. Bits set to 1 indicate that the
buffer that was being written when the error occurred was modified by the
corresponding transactions. You can set additional bits to 1 to tell VSAM to
keep the contents of the buffer until the corresponding transactions have
modified the buffer.
X'2C' The size of the control interval whose contents are about to be written.
X'30' Zero.
X'34' Zero.
X'38' Zero.
X'3C' Size of the buffer which is to be built in the buffer pool in exclusive control.
X'48' Size of the buffer into which the READ operation has just been completed.
X'4C' Size of the buffer from which the WRITE operation has just been completed.
16(X'10') 4 Variable, depends on the reason indicator at offset 20:
Offset 20
Contents at offset 16
X'0C' The RBA of the first byte to which data is being shifted or moved.
X'20' The RBA of the last byte in the control area about to be split.
X'24' The fourth byte contains the physical error code from the RPL FDBK field. You
use this fullword to communicate with VSAM. Setting it to 0 indicates that
VSAM is to ignore the error, bypass error processing, and let the processing
program continue. Leaving it nonzero indicates that VSAM is to continue as
usual: terminate the request that occasioned the error and proceed with error
processing, including exiting to a physical error analysis routine.
X'28' Same as for X'24'.
X'2C' The RBA of the control interval whose contents are about to be written.
X'48' Unpredictable.
X'4C' Unpredictable.
Table 20. Contents of parameter list built by VSAM for the JRNAD exit (continued)
Offset Bytes Description
20(X'14') 1 Indication of the reason VSAM exited to the JRNAD routine:
X'00' GET request.
X'04' PUT request.
X'08' ERASE request.
X'0C' RBA change.
X'10' Read spanned record segment.
X'14' Write spanned record segment.
X'18' Reserved.
X'1C' Reserved.
VSAM does not call the LERAD exit if the RPL feedback code is 64.
Register Contents
Table 21 gives the contents of the registers when VSAM exits to the LERAD exit
routine.
Table 21. Contents of registers at entry to LERAD exit routine
Register Contents
0 Unpredictable.
1 Address of the RPL that contains the feedback field the routine should
examine. The register must contain this address if you return to VSAM.
2-13 Unpredictable. Register 13, by convention, contains the address of your
processing program’s 72-byte save area, which must not be used as a save
area by the LERAD routine if the routine returns control to VSAM.
14 Return address to VSAM.
15 Entry address to the LERAD routine. The register does not contain the
logical-error indicator.
Programming Considerations
The typical actions of a LERAD routine are:
1. Examine the feedback field in the RPL to determine what error occurred.
2. Determine what action to take based on the error.
3. Close the data set.
4. Issue completion messages.
5. Terminate processing and exit VSAM, or return to VSAM.
If the LERAD exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and
returns to VSAM, it must restore registers 1, 13, and 14, which are used by these
macros. It must also provide two save areas: one whose address is loaded into
register 13 before the GENCB, MODCB, SHOWCB, or TESTCB is issued, and a
second in which to separately save registers 1, 13, and 14.
If the error cannot be corrected, close the data set and either terminate processing
or return to VSAM.
If a logical error occurs and no LERAD exit routine is provided (or the LERAD exit
is inactive), VSAM returns codes in register 15 and in the feedback field of the RPL
to identify the error.
When your LERAD exit routine completes processing, return to your main
program as described in “Return to a Main Program” on page 243.
The exit can do its own wait processing associated with the record management
request that is being asynchronously executed. When the record management
request is complete, VSAM will post the ECB that the user specified in the RPL.
For RLS, the RLSWAIT exit is entered only for a request wait, never for a resource
or I/O wait or post as with the non-RLS VSAM UPAD exit.
The RLSWAIT exit is optional. It is used by applications that cannot tolerate VSAM
suspending the execution unit that issued the original record management request.
The RLSWAIT exit is required for record management requests issued in
cross-memory mode.
RLSWAIT should be specified on each ACB that requires the exit. If the exit is
not specified on the ACB via the EXLST, there is no RLSWAIT exit processing for
record management requests associated with that ACB. This differs from non-RLS
VSAM where the UPAD exit is associated with the control block structure so that
all ACBs connected to that structure inherit the exit of the first connector.
Register Contents
Table 22 gives the contents of the registers when RLSWAIT is entered in 31-bit
mode.
Table 22. Contents of registers for RLSWAIT exit routine
Register Contents
1 Address of the user RPL. If a chain of RPLs was passed on the original
record management request, this is the first RPL in the chain.
12 Reserved and must be the same on exit as on entry.
13 Reserved and must be the same on exit as on entry.
14 Return address. The exit must return to VSAM using this register.
15 Address of the RLSWAIT exit.
Request Environment
VSAM RLS record management requests must be issued in primary ASC mode
and cannot be issued in home, secondary, or AR ASC mode. The user RPL, EXLST,
and ACB must be addressable from the primary address space, and OPEN must
have been issued from that same primary address space. The task issuing a VSAM
RLS record management request must be the task that opened the ACB, or must be
within the task hierarchy of the task that opened the ACB (that is, the record
management task was attached by the task that opened the ACB, or by a task that
was itself attached by the task that opened the ACB). VSAM RLS record
management requests must not be issued in SRB mode, and must not have a
functional recovery routine (FRR) in effect.
If the record management request is issued in cross memory mode, then the caller
must be in supervisor state and must specify that an RLSWAIT exit is associated
with the request (RPLWAITX = ON). The request must be synchronous.
The RLSWAIT exit, if specified, is entered at the beginning of the request and
VSAM processes the request asynchronously under a separate execution unit.
VSAM RLS does not enter the RLSWAIT exit for post processing.
VSAM assumes that the ECB supplied with the request is addressable from both
home and primary, and that the key of the ECB is the same as the key of the
record management caller.
Register Contents
Table 23 gives the contents of the registers when VSAM exits to the SYNAD
routine.
Table 23. Contents of registers at entry to SYNAD exit routine
Register Contents
0 Unpredictable.
1 Address of the RPL that contains a feedback return code and the address of
a message area, if any. If you issued a request macro, the RPL is the one
pointed to by the macro. If you issued an OPEN or CLOSE macro, or caused an
end-of-volume operation to be done, the RPL was built by VSAM to process an
internal request. Register 1 must contain this address if the SYNAD routine
returns to VSAM.
2-13 Unpredictable. Register 13, by convention, contains the address of your
processing program’s 72-byte save area, which must not be used by the
SYNAD routine if it returns control to VSAM.
14 Return address to VSAM.
15 Entry address to the SYNAD routine.
Programming Considerations
A SYNAD routine should typically:
v Examine the feedback field in the request parameter list to identify the type of
physical error that occurred.
v Get the address of the message area, if any, from the request parameter list, and
examine the message for detailed information about the error.
v Recover data if possible.
v Print error messages if the error is uncorrectable.
v Close the data set.
v Terminate processing.
The main problem with a physical error is the possible loss of data. You should try
to recover your data before continuing to process. Input operation (ACB
MACRF=IN) errors are generally less serious than output or update operation
(MACRF=OUT) errors, because your request was not attempting to alter the
contents of the data set.
If the routine cannot correct an error, it might print the physical-error message,
close the data set, and terminate the program. If the error occurred while VSAM
was closing the data set, and if another error occurs after the exit routine issues a
CLOSE macro, VSAM does not exit to the routine a second time.
If the SYNAD routine returns to VSAM, whether the error was corrected or not,
VSAM drops the request and returns to your processing program at the instruction
following the last executed instruction. Register 15 is reset to indicate that there
was an error, and the feedback field in the RPL identifies it.
Physical errors affect positioning. If a GET was issued that would have positioned
VSAM for a subsequent sequential GET and an error occurs, VSAM is positioned
at the control interval next in key (RPL OPTCD=KEY) or in entry (OPTCD=ADR)
sequence after the control interval involved in the error. The processing program
can therefore ignore the error and proceed with sequential processing. With direct
processing, the likelihood of re-encountering the control interval involved in the
error depends on your application.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and returns to
VSAM, it must provide a save area and restore registers 13 and 14, which these
macros use.
When your SYNAD exit routine completes processing, return to your main
program as described in “Return to a Main Program” on page 243.
If a physical error occurs and no SYNAD routine is provided (or the SYNAD exit
is inactive), VSAM returns codes in register 15 and in the feedback field of the RPL
to identify the error.
Related reading:
v For a description of the SYNAD return codes, see z/OS DFSMS Macro
Instructions for Data Sets.
BR 14 Return to VSAM.
.
.
.
ERRCODE DC F'0' RPL reason code from SHOWCB.
PERRMSG DS 0XL128 Physical error message.
If you are executing in cross-memory mode, you must have a UPAD routine and
the RPL must specify WAITX. z/OS MVS Programming: Extended Addressability Guide
describes cross-memory mode. The UPAD routine is optional for
non-cross-memory mode.
Table 24 describes the conditions in which VSAM calls the UPAD routine for
synchronous requests with shared resources. UPAD routine exits are taken only for
synchronous requests with shared resources or improved control interval
processing (ICI).
Table 24. Conditions when exits to UPAD routines are taken
XMM  Sup. state  UPAD needed  I/O wait                 I/O post                          Resource wait            Resource post
Yes  Yes         Yes          UPAD taken               UPAD taken                        UPAD taken               UPAD taken
No   Yes         No           UPAD taken if requested  UPAD not taken even if requested  UPAD taken if requested  UPAD taken if either the resource owner or the deferred request runs in XM mode
No   No          No           UPAD taken if requested  UPAD not taken even if requested  UPAD taken if requested  UPAD taken if either the resource owner or the deferred request runs in XM mode
Note:
v You must be in supervisor state when you are in cross-memory mode or SRB mode.
v RPL WAITX is required if UPAD is required. A UPAD routine can be taken only if RPL specifies WAITX.
v VSAM gives control to the UPAD exit in the same primary address space of the VSAM record management
request. However, VSAM can give control to UPAD with home and secondary ASIDs different from those of the
VSAM record management request because the exit was set up during OPEN.
v When a UPAD exit is taken to do post processing, make sure the ECB is marked posted before returning to VSAM.
VSAM does not check the UPAD return code and does not do post after UPAD has been taken. For
non-cross-memory task mode only, if the UPAD exit taken for wait returns with ECB not posted, VSAM issues a
WAIT SVC.
v The UPAD exit must return to VSAM in the same address space, mode, state, and addressing mode, and under the
same TCB or SRB from which the UPAD exit was called. Registers 1, 13, and 14 must be restored before the UPAD
exit returns to VSAM.
v ICI does not require UPAD for any mode. Resource wait and post processings do not apply to ICI.
Register Contents
Table 25 shows the register contents passed by VSAM when the UPAD exit routine
is entered.
Table 25. Contents of registers at entry to UPAD exit routine
Register Contents
0 Unpredictable.
1 Address of a parameter list built by VSAM.
2-12 Unpredictable.
13 Reserved.
14 Return address to VSAM.
15 Entry address of the UPAD routine.
Programming Considerations
The UPAD exit routine must be active before the data set is opened. The exit must
not be made inactive during processing. If the UPAD exit is desired and multiple
ACBs are used for processing the data set, the first ACB that is opened must
specify the exit list that identifies the UPAD exit routine.
You can use the UPAD exit to examine the contents of the parameter list built by
VSAM, pointed to by register 1. Table 26 describes this parameter list.
Table 26. Parameter list passed to UPAD routine
Offset Bytes Description
0(X'0') 4 Address of user’s RPL; address of system-generated RPL if
UPAD is taken for CLOSE processing or for an alternate
index through a path.
4(X'4') 4 Address of a 5-byte data set identifier. The first four bytes
of the identifier are the ACB address. The last byte
identifies the component; data (X'01'), or index (X'02').
8(X'8') 4 Address of the request’s ECB.
12(X'0C') 4 Reserved.
16(X'10') 1 UPAD flags:
If the UPAD exit routine modifies register 14 (for example, by issuing a TESTCB),
the routine must restore register 14 before returning to VSAM. If register 1 is used,
the UPAD exit routine must restore it with the parameter list address before
returning to VSAM.
The UPAD routine must return to VSAM under the same TCB from which it was
called for completion of the request that caused VSAM to exit. The UPAD exit
routine cannot use register 13 as a save area pointer without first obtaining its own
save area.
The UPAD exit routine, when taken before a WAIT during LSR or GSR processing,
might issue other VSAM requests to obtain better processing overlap (similar to
asynchronous processing). However, the UPAD routine must not issue any
synchronous VSAM requests that do not specify WAITX, because a started request
might issue a WAIT for a resource owned by a starting request.
If the UPAD routine starts requests that specify WAITX, the UPAD routine must be
reentrant. After multiple requests have been started, they should be synchronized
by waiting for one ECB out of a group of ECBs to be posted complete rather than
waiting for a specific ECB or for many ECBs to be posted complete. (Posting of
some ECBs in the list might be dependent on the resumption of some of the other
requests that entered the UPAD routine.)
If you are executing in cross-memory mode, you must have a UPAD routine and
RPL must specify WAITX. When waiting or posting of an event is required, the
UPAD routine is given control to do wait or post processing (reason code 0 or 4 in
the UPAD parameter list).
User-Security-Verification Routine
If you use VSAM password protection, you can also have your own routine to
check a requester’s authority. Your routine is invoked from OPEN, rather than via
an exit list. VSAM transfers control to your routine, which must reside in
SYS1.LINKLIB, when a requester gives a correct password other than the master
password.
If the USVR is being used by more than one task at a time, you must code the
USVR reentrant or develop another method for handling simultaneous entries.
When your USVR completes processing, it must return to VSAM with a return
code in register 15: 0 indicates authority granted; nonzero indicates authority
withheld. Table 27 gives the contents of the registers when VSAM gives control to
the USVR.
8 bytes The password that the requester gave (it has been verified by
VSAM).
– The user-security-authorization.
2-13 Unpredictable.
14 Return address to VSAM.
15 Entry address to the USVR. When the routine returns to VSAM, it indicates
by the following codes in register 15 if the requester has been authorized to
gain access to the data set:
0 Authority granted.
Topic Location
VSAM Options 263
VSAM Options
Using VSAM, you can obtain control blocks, buffers, and multiple local shared
resource (LSR) pools above or below 16 MB. However, if your program uses a
24-bit address, it can generate a program check if you attempt to reference control
blocks, buffers, or LSR pools located above 16 MB. With a 24-bit address, you do
not have addressability to the data buffers.
If you specify that control blocks, buffers, or pools can be above the line and
attempt to use locate mode to access records while in 24-bit mode, your program
will program check (ABEND 0C4).
Rule: You cannot specify the location of buffers or control blocks for RLS
processing. RLS ignores the ACB RMODE31= keyword.
Table 28 summarizes the 31-bit address keyword parameters and their use in the
applicable VSAM macros.
Table 28. 31-Bit Address Keyword Parameters

ACB
  RMODE31=  Virtual storage location of VSAM control blocks and I/O buffers
  MODE=     INVALID
  LOC=      INVALID

BLDVRP
  RMODE31=  Virtual storage location of the VSAM LSR pool, VSAM control blocks, and I/O buffers
  MODE=     Format of the BLDVRP parameter list (24-bit or 31-bit format)
  LOC=      INVALID

CLOSE
  RMODE31=  INVALID
  MODE=     Format of the CLOSE parameter list (24-bit or 31-bit format)
  LOC=      INVALID

DLVRP
  RMODE31=  INVALID
  MODE=     Format of the DLVRP parameter list (24-bit or 31-bit format)
  LOC=      INVALID

GENCB
  RMODE31=  RMODE31 values to be placed in the ACB that is being created. When the generated ACB is opened, the RMODE31 values then determine the virtual storage location of VSAM control blocks and I/O buffers.
  MODE=     INVALID
  LOC=      Location for the virtual storage obtained by VSAM for the ACB, RPL, or EXIT LIST.

MODCB
  RMODE31=  RMODE31 values to be placed in a specified ACB
  MODE=     INVALID
  LOC=      INVALID

OPEN
  RMODE31=  INVALID
  MODE=     Format of the OPEN parameter list (24-bit or 31-bit format)
  LOC=      INVALID
Related reading:
v See “Obtaining Buffers Above 16 MB” on page 166 for information about
creating and accessing buffers that reside above 16 MB.
v See Chapter 13, “Sharing Resources Among VSAM Data Sets,” on page 207 for
information about building multiple LSR pools in an address space.
v See z/OS MVS JCL Reference for information about specifying 31-bit parameters
using the AMP=(RMODE31=) parameter.
Topic Location
Using JCL Statements and Keywords 265
Creating VSAM Data Sets with JCL 266
Retrieving an Existing VSAM Data Set 273
Optionally, the DSNAME parameter can be used to specify one of the components
of a VSAM data set. Each VSAM data set is defined as a cluster of one or more
components. Key-sequenced data sets contain a data component and an index
component. Entry-sequenced and linear data sets and fixed-length RRDSs contain
only a data component. Process a variable-length RRDS as a cluster. Each alternate
index contains a data component and an index component. For further information
on specifying a cluster name see “Naming a Cluster” on page 104.
Disposition
The disposition (DISP) parameter describes the status of a data set to the system
and tells the system what to do with the data set after the step or job terminates.
With SMS, you can optionally specify a data class that contains RECORG. If your
storage administrator, through the ACS routines, creates a default data class that
contains RECORG, you have the option of taking this default as well.
The following list contains the keywords, including RECORG, used to allocate a
VSAM data set. See z/OS MVS JCL Reference for a detailed description of these
keywords.
U—Use a scale of 1
K—Use a scale of 1024
M—Use a scale of 1 048 576
DATACLAS—Is a list of the data set allocation parameters and their default
values. The storage administrator can specify KEYLEN, KEYOFF, LRECL,
LGSTREAM, and RECORG in the DATACLAS definition, but you can override
them.
LGSTREAM—Specifies the log stream used. LOG and BWO parameters can be
derived from the data class.
RECORG—Specifies the type of data set desired: KS, ES, RR, LS.
REFDD—Specifies that the properties on the JCL statement and from the data class
of a previous DD statement should be used to allocate a new data set.
Your storage administrator defines the names of the storage classes you can specify
on the STORCLAS parameter. A storage class is assigned when you specify
STORCLAS or an ACS routine selects a storage class for the new data set.
Use the storage class to specify the storage service level to be used by SMS for
storage of the data set. The storage class replaces the storage attributes specified on
the UNIT and VOLUME parameter for non-system-managed data sets.
If a guaranteed space storage class is assigned to the data set (cluster) and volume
serial numbers are specified, space is allocated on all specified volumes if the
following conditions are met:
v All volumes specified belong to the same storage group.
v The storage group to which these volumes belong is in the list of storage groups
selected by the ACS routines for this allocation.
Allocation
You can allocate VSAM temporary data sets by specifying
RECORG=KS|ES|LS|RR as follows:
v By the RECORG keyword on the DD statement or the dynamic allocation
parameter
v By the data class (if the selected data class has the RECORG attribute)
v By the default data class established by your storage administrator (if the default
data class exists and has the RECORG attribute)
For additional information on temporary data sets see z/OS MVS JCL Reference and
z/OS MVS Programming: Assembler Services Guide. See “Example 4: Allocate a
Temporary VSAM Data Set” on page 272 for an example of creating a temporary
VSAM data set.
Explanation of Keywords:
v DSNAME specifies the data set name.
v DISP specifies that a new data set is to be allocated in this step and that the data
set is to be kept on the volume if this step terminates normally. If the data set is
not system managed, KEEP is the only normal termination disposition
subparameter permitted for a VSAM data set. Non-system-managed VSAM data
sets should not be passed, cataloged, uncataloged, or deleted.
v SPACE specifies an average record length of 80, a primary space quantity of 20
and a secondary space quantity of 2.
v AVGREC specifies that the primary and secondary space quantity specified on
the SPACE keyword represents the number of records in units (multiplier of 1).
If DATACLAS were specified in this example, AVGREC would override the data
class space allocation.
v RECORG specifies a VSAM key-sequenced data set.
v KEYLEN specifies that the length of the keys used in the data set is 15 bytes. If
DATACLAS were specified in this example, KEYLEN would override the data
class key length allocation.
v KEYOFF specifies an offset of zero of the first byte of the key in each record. If
DATACLAS were specified in this example, KEYOFF would override the data
class key offset allocation.
v LRECL specifies a record length of 250 bytes. If DATACLAS were specified in
this example, LRECL would override the data class record length allocation.
v The system determines an appropriate size for the control interval.
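The SPACE and AVGREC arithmetic described above can be sketched as follows. This is an illustrative sketch only, not part of the JCL interface; the function name and dictionary are ours, and the scale factors are the U/K/M multipliers the manual lists (1, 1024, and 1 048 576).

```python
# Sketch: how SPACE=(80,(20,2)) with AVGREC=U translates into byte
# quantities. The first SPACE value is the average record length; the
# primary and secondary quantities are record counts in the AVGREC scale.
AVGREC_SCALE = {"U": 1, "K": 1024, "M": 1_048_576}

def space_in_bytes(avg_rec_len, primary, secondary, avgrec="U"):
    """Return (primary_bytes, secondary_bytes) requested by SPACE/AVGREC."""
    scale = AVGREC_SCALE[avgrec]
    return (avg_rec_len * primary * scale, avg_rec_len * secondary * scale)

# SPACE=(80,(20,2)),AVGREC=U -> 1600 bytes primary, 160 bytes secondary
primary_bytes, secondary_bytes = space_in_bytes(80, 20, 2, "U")
```

The same arithmetic covers the later temporary data set example, where an average record length of 1 with a primary quantity of 10 and AVGREC=M requests roughly 10 MB.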
Explanation of Keywords:
v DSNAME specifies the data set name.
v DISP specifies that a new data set is to be allocated in this step and that the data
set is to be kept on the volume if this step terminates normally. Because a
system-managed data set is being allocated, all dispositions are valid for VSAM
data sets; however, UNCATLG is ignored.
v DATACLAS specifies a data class for the new data set. If SMS is not active, the
system checks the syntax of DATACLAS and then ignores it. SMS also ignores
the DATACLAS keyword if you specify it for an existing data set, or a data set
that SMS does not support.
This keyword is optional. If you do not specify DATACLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a data class for the data set.
v STORCLAS specifies a storage class for the new data set. If SMS is not active,
the system checks the syntax of STORCLAS and then ignores it. SMS also
ignores the STORCLAS keyword if you specify it for an existing data set.
This keyword is optional. If you do not specify STORCLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a storage class for the data set.
v MGMTCLAS specifies a management class for the new data set. If SMS is not
active, the system checks the syntax of MGMTCLAS and then ignores it. SMS
also ignores the MGMTCLAS keyword if you specify it for an existing data set.
This keyword is optional. If you do not specify MGMTCLAS for the new data
set and your storage administrator has provided an ACS routine, the ACS
routine can select a management class for the data set.
Explanation of Keywords:
v DSNAME specifies the data set name.
v DISP specifies that a new data set is to be allocated in this step and that the
system is to place an entry pointing to the data set in the system or user catalog.
v DATACLAS, STORCLAS, and MGMTCLAS are not required if your storage
administrator has provided ACS routines that will select the SMS classes for
you, and DATACLAS defines RECORG.
Explanation of Keywords:
v DSN specifies the data set name. If you specify a data set name for a temporary
data set, it must begin with & or &&. This keyword is optional, however. If you
do not specify a DSN, the system will generate a qualified data set name for the
temporary data set.
v DISP specifies that a new data set is to be allocated in this step and that the data
set is to be passed for use by a subsequent step in the same job. If KEEP or
CATLG is specified for a temporary data set, the system changes the disposition
to PASS and deletes the data set at job termination.
v RECORG specifies a VSAM entry-sequenced data set.
v SPACE specifies an average record length of 1 and a primary quantity of 10.
v AVGREC specifies that the primary quantity (10) specified on the SPACE
keyword represents the number of records in megabytes (multiplier of 1048576).
v LRECL specifies a record length of 256 bytes.
v STORCLAS specifies a storage class for the temporary data set.
This keyword is optional. If you do not specify STORCLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a storage class.
If SMS is active, you can pass VSAM data sets within a job. The system replaces
PASS with KEEP for permanent VSAM data sets. When you refer to the data set
later in the job, the system obtains data set information from the catalog. Without
SMS you cannot pass VSAM data sets within a job.
Migration Consideration
If you have existing JCL that allocates a VSAM data set with DISP=(OLD,DELETE),
the system ignores DELETE and keeps the data set if SMS is inactive. If SMS is
active, DELETE is valid and the system deletes the data set.
AMP is only used with VSAM data sets. The AMP parameter takes effect when the
data set defined by the DD statement is opened.
DDNAME lets you postpone defining a data set until later in the job step.
DISP=(SHR|OLD[,PASS]) describes the status of a data set, and tells the system
what to do with the data set after the step or job ends.
DYNAM increases by one the control value for dynamically allocated resources
held for reuse.
FREE specifies when the system is to deallocate resources for the data set.
VOLUME=(PRIVATE|SER) identifies the volume on which a data set will reside.
With SMS, you do not need the AMP, UNIT, and VOLUMES parameters to retrieve
an existing VSAM data set. With SMS, you can use the DISP subparameters MOD,
NEW, CATLG, KEEP, PASS, and DELETE for VSAM data sets.
Certain JCL keywords should either not be used, or used only with caution when
processing VSAM data sets. See the VSAM data set section in z/OS MVS JCL User’s
Guide for a list of these keywords. Additional descriptions of these keywords also
appear in z/OS MVS JCL Reference.
Topic Location
Access to a Key-Sequenced Data Set Index 275
Format of an Index Record 279
Key Compression 282
VSAM lets you access indexes of key-sequenced data sets to help you diagnose
index problems. This can be useful if your index is damaged or if pointers are lost
and you want to know exactly what the index contains. You should not attempt to
duplicate or substitute the index processing done by VSAM during normal access
to data records.
Access using GETIX and PUTIX is direct, by control interval: VSAM requires RPL
OPTCD=(CNV,DIR). The search argument for GETIX is the RBA of a control
interval. The increment from the RBA of one control interval to the next is control
interval size for the index.
GETIX can be issued either for update or not for update. VSAM recognizes
OPTCD=NUP or UPD but interprets OPTCD=NSP as NUP.
RPL OPTCD=MVE or LOC can be specified for GETIX, but only OPTCD=MVE is
valid for PUTIX. If you retrieve with OPTCD=LOC, you must change OPTCD to
MVE to store. With OPTCD=MVE, AREALEN must be at least index control
interval size.
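Because the GETIX search argument is a control interval RBA and successive index CIs are one index CI size apart, stepping through an index can be sketched as below. This is an illustrative Python sketch of the RBA arithmetic only, not the VSAM macro interface; the function name is ours.

```python
def index_ci_rbas(index_ci_size, ci_count):
    """Yield the RBA search argument for each successive index control
    interval, as would be passed to GETIX with RPL OPTCD=(CNV,DIR).
    The increment from one CI to the next is the index CI size."""
    for n in range(ci_count):
        yield n * index_ci_size

# For a 2048-byte index CI size, the first three CIs are at RBAs 0, 2048, 4096.
rbas = list(index_ci_rbas(2048, 3))
```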
Beyond these restrictions, access to an index through GETIX and PUTIX follows
the rules found in Chapter 11, “Processing Control Intervals,” on page 179.
request macros for normal data processing. To open the index component alone,
specify: DSNAME=indexcomponentname in the DD statement identified in the ACB
(or GENCB) macro.
You can gain access to index records with addressed access and to index control
intervals with control interval access. The use of these two types of access for
processing an index is identical in every respect with their use for processing a
data component.
Prime Index
A key-sequenced data set always has an index that relates key values to the
relative locations of the logical records in a data set. This index is called the prime
index. The prime index, or simply index, has two uses:
v Locate the collating position when inserting records
v Locate records for retrieval
When a data set is initially loaded, records must be presented to VSAM in key
sequence. The index for a key-sequenced data set is built automatically by VSAM
as the data set is loaded with records. The index is stored in control intervals. An
index control interval contains pointers to index control intervals in the next lower
level, or one entry for each data control interval in a control area.
When a data control interval is completely loaded with logical records, free space,
and control information, VSAM makes an entry in the index. The entry consists of
the highest possible key in the data control interval and a pointer to the beginning
of that control interval. The highest possible key in a data control interval is one
less than the value of the first key in the next sequential data control interval.
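The rule above (the index entry key is one less than the first key of the next sequential data control interval) can be sketched as follows. This is an illustrative sketch under the assumption of purely numeric keys; the function name is ours.

```python
def index_entry_keys(first_keys):
    """Given the first key of each sequential data control interval,
    return the 'highest possible key' recorded in the index entry for
    each CI except the last (whose upper bound comes from the next CI
    outside this list): one less than the next CI's first key."""
    return [nxt - 1 for nxt in first_keys[1:]]

# Data CIs starting at keys 1001, 1022, 1334 get index entry keys 1021 and 1333.
keys = index_entry_keys([1001, 1022, 1334])
```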
Figure 35 shows that a single index entry, such as 19, contains all the information
necessary to locate a logical record in a data control interval.
[Figure 35: an index entry, showing the key (19), the pointer to the data control
interval, and the free control interval list]
Figure 36 on page 277 shows that a single index control interval contains all the
information necessary to locate a record in a single data control area.
[Figure 36: an index control interval, showing index entries (19, 25), control
interval pointers, and the free control interval list for a data control area]
Index Levels
A VSAM index can consist of more than one index level. Each level contains a set
of records with entries giving the location of the records in the next lower level.
Figure 37 on page 278 shows the levels of a prime index and shows the
relationship between sequence set index records and control areas. The sequence
set shows both the horizontal pointers used for sequential processing and the
vertical pointers to the data set. Although the values of the keys are actually
compressed in the index, the figure shows the full key values.
[Figure 37: levels of a prime index. The index set contains a top-level record
(keys 2799, 4200, 6705) pointing to second-level records (for example, keys 1333,
2383, 2799). The sequence-set records (for example, keys 1021, 1051, 1333) contain
horizontal pointers to the next sequence-set record and vertical pointers to the
data control intervals, with free space and control information, in their control
areas]
Sequence Set. The index records at the lowest level are the sequence set. There is
one index sequence set level record for each control area in the data set. This
sequence set record gives the location of data control intervals. An entry in a
sequence set record consists of the highest possible key in a control interval of the
data component, paired with a pointer to that control interval.
Index Set. If there is more than one sequence set level record, VSAM automatically
builds another index level. Each entry in the second level index record points to
one sequence set record. The records in all levels of the index above the sequence
set are called the index set. An entry in an index set record consists of the highest
possible key in an index record in the next lower level, and a pointer to the
beginning of that index record. The highest level of the index always contains only
a single record.
When you access records sequentially, VSAM refers only to the sequence set. It
uses a horizontal pointer to get from one sequence set record to the next record in
collating sequence. When you access records directly (not sequentially), VSAM
follows vertical pointers from the highest level of the index down to the sequence
set to find vertical pointers to data.
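The two access patterns described above can be sketched over a toy in-memory index. This is an illustrative sketch only; the tuple layout, names, and data CI labels are ours, and key compression and the on-disk record format are ignored.

```python
# Each index record is (entries, horizontal_next); an entry pairs a
# highest-possible key with a child: either a lower-level index record
# (a tuple) or a data CI label (a string) in the sequence set.

def direct_search(record, key):
    """Direct access: follow vertical pointers from the highest index
    level down through the sequence set to a data CI pointer."""
    while True:
        entries, _ = record
        child = next(c for hk, c in entries if key <= hk)
        if not isinstance(child, tuple):
            return child          # sequence-set entry -> data CI
        record = child            # index-set entry -> lower-level record

def sequential_scan(first_seq_record):
    """Sequential access: follow only horizontal pointers along the
    sequence set, yielding data CIs in collating sequence."""
    record = first_seq_record
    while record is not None:
        entries, nxt = record
        for _, data_ci in entries:
            yield data_ci
        record = nxt

# Toy index: two sequence-set records under one index-set record.
seq2 = ([(4200, "dataCI-C")], None)
seq1 = ([(1333, "dataCI-A"), (2383, "dataCI-B")], seq2)
top = ([(2383, seq1), (4200, seq2)], None)
```

A direct search for key 1500 descends from `top` into `seq1` and stops at the first entry whose key bound covers it, while a sequential scan starting at `seq1` visits every data CI in key order without touching the index set.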
Index control intervals are not grouped into control areas as are data control
intervals. When a new index record is required, it is stored in a new control
interval at the end of the index data set. As a result, the records of one index level
are not segregated from the records of another level, except when the sequence set
is separate from the index set. The level of each index record is identified by a
field in the index header (see “Header Portion”).
When an index record is replicated on a track, each copy of the record is identical
to the other copies. Replication has no effect on the contents of records.
Header Portion
The first 24 bytes of an index record is the header, which gives control information
about the index record. Table 29 shows its format. All lengths and displacements
are in bytes. The discussions in the following two sections amplify the meaning
and use of some of the fields in the header.
Table 29. Format of the Header of an Index Record
Field Offset Length Description
IXHLL 0(0) 2 Index record length. The length of the index record is equal to the length of
the control interval minus 7.
In a sequence-set record, this is the RBA of the control area governed by the
record. The RBA of a control interval in the control area is calculated by
multiplying data control interval length times the vertical pointer and adding
the result to the base RBA. Thus, the first control interval in a control area has
the same RBA as the control area (length times 0, plus base RBA, equals base
RBA).
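The RBA arithmetic described above can be sketched as follows. This is an illustrative Python sketch, not VSAM code; the function name and sample values are invented:

```python
def ci_rba(base_rba, data_ci_length, vertical_pointer):
    """Compute the RBA of a control interval within a control area.

    base_rba is the RBA of the control area, taken from the
    sequence-set record; the vertical pointer is the control
    interval's ordinal position within the control area, so a
    pointer of 0 yields the control area's own RBA.
    """
    return base_rba + data_ci_length * vertical_pointer

# A control area at RBA 40960 with 4096-byte data control intervals:
assert ci_rba(40960, 4096, 0) == 40960   # first CI = control area RBA
assert ci_rba(40960, 4096, 3) == 53248   # fourth CI in the control area
```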
The entries come immediately after the header. They are used from right to left.
The rightmost entry is immediately before the unused space (whose displacement
is given in IXHFSO in the header). When a free control interval gets used, its free
entry is converted to zero, the space becomes part of the unused space, and a new
index entry is created in the position determined by ascending key sequence.
Thus, the free control interval entry portion contracts to the left, and the index
entry portion expands to the left. When all the free control intervals in a control
area have been used, the sequence-set record governing the control area no longer
has free control interval entries, and the number of index entries equals the
number of control intervals in the control area. Note that if the index control
interval size was specified with too small a value, it is possible for the unused
space to be used up for index entries before all the free control intervals have been
used, resulting in control intervals within a data control area that cannot be used.
Figure 39 shows the format of the index entry portion of an index record. To
improve search speed, index entries are grouped into sections, of which there are
approximately as many as the square root of the number of entries. For example, if
there are 100 index entries in an index record, they are grouped into 10 sections of
10 entries each. (The number of sections does not change, even though the number
of index entries increases as free control intervals get used.)
The sections, and the entries within a section, are arranged from right to left.
IXHLEO in the header gives the displacement from the beginning of the index
record to the control information in the leftmost index entry. IXHSEO gives the
displacement to the control information in the leftmost index entry in the
rightmost section. You calculate the displacement of the control information of the
rightmost index entry in the index record (the entry with the lowest key) by
subtracting IXHFLPLN from IXHLL in the header (the length of the control
information in an index entry from the length of the record).
Each section is preceded by a 2-byte field that gives the displacement from the
control information in the leftmost index entry in the section to the control
information in the leftmost index entry in the next section (to the left). The last
(leftmost) section’s 2-byte field contains 0s.
Figure 39. Format of the Index Entry Portion of an Index Record (each entry consists of a compressed key followed by the control information fields F, L, and P)
Key Compression
Index entries are variable in length within an index record because VSAM
compresses keys. That is, it eliminates redundant or unnecessary characters from
the front and back of a key to save space. The number of characters that can be
eliminated from a key depends on the relationship between that key and the
preceding and following keys.
For front compression, VSAM compares a key in the index with the preceding key
in the index and eliminates from the key those leading characters that are the same
as the leading characters in the preceding key. For example, if key 12356 follows
key 12345, the characters 123 are eliminated from 12356 because they are equal to
the first three characters in the preceding key. The lowest key in an index record
has no front compression; there is no preceding key in the index record.
There is an exception for the highest key in a section. For front compression, it is
compared with the highest key in the preceding section, rather than with the
preceding key. The highest key in the rightmost section of an index record has no
front compression; there is no preceding section in the index record.
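The basic front-compression rule can be sketched in Python as follows. This is an illustrative sketch only; it ignores the section-boundary exception described above, and the function name is invented:

```python
def front_compress(key, preceding_key):
    """Count the leading characters shared with the preceding key.

    Returns (eliminated_count, remaining_front_part). The lowest key
    in an index record has no preceding key, so nothing is eliminated.
    """
    if preceding_key is None:
        return 0, key
    n = 0
    for a, b in zip(key, preceding_key):
        if a != b:
            break
        n += 1
    return n, key[n:]

# Key 12356 following key 12345: the leading "123" is eliminated.
assert front_compress("12356", "12345") == (3, "56")
# The lowest key in an index record has no front compression.
assert front_compress("12345", None) == (0, "12345")
```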
What is called “rear compression” of keys is actually the process of eliminating the
insignificant values from the end of a key in the index. The values eliminated can
be represented by X'FF'. VSAM compares a key in the index with the following key
in the data and eliminates from the key those characters to the right of the first
character that are unequal to the corresponding character in the following key. For
example, if the key 12345 (in the index) precedes key 12356 (in the data), the
character 5 is eliminated from 12345 because the fourth character in the two keys is
the first unequal pair.
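Rear compression can be sketched the same way: keep characters through the first position where the index key differs from the following data key, and drop the rest. An illustrative sketch with an invented function name:

```python
def rear_compress(key, following_key):
    """Keep the key characters up to and including the first position
    where the key differs from the following key; drop the rest."""
    for i, (a, b) in enumerate(zip(key, following_key)):
        if a != b:
            return key[:i + 1]
    return key

# Index key 12345 preceding data key 12356: the trailing "5" is
# eliminated because position four ("4" versus "5") is the first
# unequal pair.
assert rear_compress("12345", "12356") == "1234"
# Key 12356 before 12357 keeps all characters: the unequal pair is
# in the last position, so nothing follows it to eliminate.
assert rear_compress("12356", "12357") == "12356"
```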
The first of the control information fields gives the number of characters
eliminated from the front of the key, and the second field gives the number of
characters that remain. When the sum of these two numbers is subtracted from the
full key length (available from the catalog when the index is opened), the result is
the number of characters eliminated from the rear. The third field indicates the
control interval that contains a record with the key.
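The arithmetic relating the two control fields to the rear-compression count follows directly from the paragraph above; a minimal sketch with invented names:

```python
def eliminated_from_rear(full_key_length, front_count, remaining_count):
    """Characters removed by rear compression: the full key length
    (available from the catalog when the index is opened) minus the
    sum of the front-elimination count and the characters remaining."""
    return full_key_length - (front_count + remaining_count)

# Key 12345 compressed to "1234" (front count 0, 4 remaining):
# one character, the trailing 5, was removed from the rear.
assert eliminated_from_rear(5, 0, 4) == 1
```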
The example in Figure 41 on page 285 gives a list of full keys and shows the
contents of the index entries corresponding to the keys that get into the index (the
highest key in each data control interval). A sequence-set record is assumed, with
vertical pointers 1 byte long. The index entries shown in the figure from top to
bottom are arranged from right to left in the assumed index record.
Key 12345 has no front compression because it is the first key in the index record.
Key 12356 has no rear compression because, in the comparison between 12356 and
12357, there are no characters following 6, which is the first character that is
unequal to the corresponding character in the following key.
You can always figure out what characters have been eliminated from the front of
a key. You cannot figure out the ones eliminated from the rear. Rear compression,
in effect, establishes the key in the entry as a boundary value instead of an exact
high key. That is, an entry does not give the exact value of the highest key in a
control interval, but gives only enough of the key to distinguish it from the lowest
key in the next control interval. For example, in Figure 41 on page 285 the last
three index keys are 12401, 124, and 134 after rear compression. Data records with
key field between:
v 12402 and 124FF are associated with index key 124.
v 12500 and 134FF are associated with index key 134.
If the last data record in a control interval is deleted, and if the control interval
does not contain the high key for the control area, then the space is reclaimed as
free space. Space reclamation can be suppressed by setting the RPLNOCIR bit,
which has an equated value of X'20', at offset 43 into the RPL.
The last index entry in an index level indicates the highest possible key value. The
convention for expressing this value is to give none of its characters and to
indicate that no characters have been eliminated from the front. The last index
entry in the last record in the sequence set therefore has 0 in both its F and L fields.
In a search, the two 0s signify the highest possible key value in this way:
v The fact that 0 characters have been eliminated from the front implies that the
first character in the key is greater than the first character in the preceding key.
v A length of 0 indicates that no character comparison is required to determine if
the search is successful. That is, when a search finds the last index entry, a hit
has been made.
Figure 41. Example of Key Compression (the full figure is not reproduced here). The
figure lists the full keys of the data records, from 12345 through 13456, and shows the
index entry built from the highest key in each data control interval.
Note: 'Full keys' are the full keys of the data records that reside in data CIs; the highest
possible keys are compressed in the corresponding index entries.
Legend:
K - Characters left in key after compression
F - Number of characters eliminated from the front
L - Number of characters left in key after compression
P - Vertical pointer
Figure 42 on page 286 shows how the control interval is split and the index is
updated when a record with a key of 12 is inserted in the control area shown in
Figure 36 on page 277.
Figure 42. Control Interval Split and Index Update (figure not reproduced here; the index in the figure carries entries for keys 14, 19, and 25)
1. A control interval split occurs in data control interval 1, where a record with
the key of 12 must be inserted.
2. Half the records in data control interval 1 are moved by VSAM to the free
space control interval (data control interval 3).
3. An index entry is inserted in key sequence to point to data control interval 3,
which now contains the data records moved from data control interval 1.
4. A new index entry is created for data control interval 1, because after the
control interval split, the highest possible key is 14. Because data control
interval 3 now contains data, the pointer to this control interval is removed
from the free list and associated with the new key entry in the index. Note that
key values in the index are in proper ascending sequence, but the data control
intervals are no longer in physical sequence.
Only the last (leftmost) index entry for a spanned record contains the key of the
record. The key is compressed according to the rules described above. All the other
index entries for the record look like this:
F L P: Y, 0, X (the L field is 0, so the entry contains no key characters)
Chapter 20. Selecting Record Formats for Non-VSAM Data Sets
Topic Location
Format Selection 293
Fixed-Length Record Formats 294
Variable-Length Record Formats 296
Undefined-Length Record Format 302
ISO/ANSI Tapes 303
Record Format—Device Type Considerations 311
This chapter discusses record formats of non-VSAM data sets and device type
considerations. Records are stored in one of four formats:
v Fixed length (RECFM=F)
v Variable length (RECFM=V)
v ASCII variable length (RECFM=D)
v Undefined length (RECFM=U)
For information about disk format, see “Direct Access Storage Device (DASD)
Volumes” on page 8.
Format Selection
Before selecting a record format, you should consider:
v The data type (for example, EBCDIC) your program can receive and the type of
output it can produce
v The I/O devices that contain the data set
v The access method you use to read and write the records
v Whether the records can be blocked
Blocking is the process of grouping records into blocks before they are written on a
volume. A block consists of one or more logical records. Each block is written
between consecutive interblock gaps. Blocking conserves storage space on a
volume by reducing the number of interblock gaps in the data set, and increases
processing efficiency by reducing the number of I/O operations required to process
the data set.
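The I/O savings from blocking follow from simple arithmetic: each block requires one read or write, so grouping records cuts the operation count by the blocking factor. An illustrative sketch with invented names and sample numbers:

```python
import math

def io_operations(record_count, records_per_block):
    """Number of blocks, and hence read/write operations, needed to
    process the data set sequentially."""
    return math.ceil(record_count / records_per_block)

# 10 000 unblocked records take 10 000 I/O operations; blocked at
# 50 records per block, they take only 200.
assert io_operations(10_000, 1) == 10_000
assert io_operations(10_000, 50) == 200
```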
If you do not specify a block size, the system generally determines a block size
that is optimum for the device to which your data set is allocated. See
“System-Determined Block Size” on page 329.
You select your record format in the data control block (DCB) using the options in
the DCB macro, the DD statement, dynamic allocation, automatic class selection
routines, or the data set label. Before executing your program, you must supply the
operating system with the record format (RECFM) and device-dependent
information in a data class, a DCB macro, a DD statement, or a data set label.
Fixed-Length Record Formats
(Figure not reproduced: blocked format-F records group Records A through F into
blocks, while unblocked format-F records place one record per block. The optional
control character (a) and table reference character (b) can precede the data in each
record.)
The records can be blocked or unblocked. If the data set contains unblocked
format-F records, one record constitutes one block. If the data set contains blocked
format-F records, the number of records within a block typically is constant for
every block in the data set. The data set can contain truncated (short) blocks. The
system automatically checks the length (except for card readers) on blocked or
unblocked format-F records. Allowances are made for truncated blocks.
The optional control character (a), used for stacker selection or carriage control, can
be included in each record to be printed or punched. The optional table reference
character (b) is a code to select the font to print the record on a page printer. See
“Using Optional Control Characters” on page 312 and “Table Reference Character”
on page 314.
Standard Format
During creation of a sequential data set (to be processed by BSAM or QSAM) with
fixed-length records, the RECFM subparameter of the DCB macro can specify a
standard format (RECFM=FS or FBS). A sequential data set with standard format
records (format-FS or -FBS) sometimes can be read more efficiently than a data set
with format-F or -FB records. This efficiency is possible because the system is able
to determine the address of each record to be read, because each track contains the
same number of blocks.
Restrictions
If the last block is truncated, you should never extend a standard-format data set
by coding:
v EXTEND or OUTINX on the OPEN macro
v OUTPUT, OUTIN, or INOUT on the OPEN macro with DISP=MOD on the
allocation
v CLOSE LEAVE, TYPE=T, followed by a WRITE
v POINT to after the last block, followed by a WRITE
v CNTRL on tape to after the last block, followed by a WRITE
If the data set becomes extended, it contains a truncated block that is not the last
block. Reading an extended data set with this condition results in a premature
end-of-data condition when the truncated block is read, giving the appearance that
the blocks following this truncated block do not exist.
Standard-format data sets that end in a short block on magnetic tape should not be
read backward because the data set would begin with a truncated block.
A format-F data set will not meet the requirements of a standard-format data set if
you do the following:
v Extend a fixed-length, blocked standard data set when the last block was
truncated.
v Use the POINT macro to prevent BSAM from filling a track other than the last
one. Do not skip a track when writing to a data set.
Standard format should not be used to read records from a data set that was
created using a record format other than standard, because other record formats
might not create the precise format required by standard.
If the characteristics of your data set are altered from the specifications described
above at any time, the data set should no longer be processed with the standard
format specification.
Chapter 20. Selecting Record Formats for Non-VSAM Data Sets 295
Selecting Record Formats for Non-VSAM Data Sets
Format-V Records
Figure 44 shows blocked and unblocked variable-length (format-V) records without
spanning. A block in a data set containing unblocked records is in the same format
as a block in a data set containing blocked records. The only difference is that with
blocked records each block can contain multiple records.
Figure 44. Nonspanned Format-V Records (figure not reproduced: each block begins with
a 4-byte BDW giving the block length LL; each record begins with a 4-byte RDW giving
the record length ll, followed by the data and the optional control character (a) and table
reference character (b))
The system uses the record or segment length information in blocking and
unblocking. The first four bytes of each record, record segment, or block make up a
descriptor word containing control information. You must allow for these
additional 4 bytes in both your input and output buffers.
Block Descriptor Word (BDW)
There are two types of BDW. If bit 0 is zero, it is a nonextended BDW. Bits 1-15
contain the block length, and bits 16-31 are zeros. The block length can be from 8 to
32 760 bytes. All access methods and device types support nonextended BDWs.
If bit 0 of the BDW is one, the BDW is an extended BDW and BDW bits 1-31
contain the block length. Extended BDWs are currently supported only on tape.
When writing, BSAM applications provide the BDW; for QSAM, the access method
creates the BDW. BSAM accepts an extended BDW if large block interface (LBI)
processing has been selected (DCBESLBI in the DCBE control block is set on) and
the output device is a magnetic tape. If an extended BDW is encountered and you
are not using LBI, or the output device is not magnetic tape, an ABEND 002 is
issued. IBM recommends that the BSAM user not provide an extended BDW
unless the block length is greater than 32 760, because an extended BDW would
prevent SAM from reading the data on lower-level DFSMS systems. Other programs
that read the data set might also not support an extended BDW. QSAM creates
extended BDWs only for blocks whose length is greater than 32 760; otherwise, the
nonextended format is used. When you read with either BSAM or QSAM, the
access method interrogates the BDW to determine its format.
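The two BDW layouts described above can be distinguished by inspecting bit 0, as in this illustrative Python sketch (the function name and sample values are invented; this is not an access-method interface):

```python
def parse_bdw(bdw):
    """Interpret a 4-byte block descriptor word.

    Nonextended (bit 0 = 0): bits 1-15 hold the block length and
    bits 16-31 are zeros. Extended (bit 0 = 1): bits 1-31 hold the
    block length (supported only on tape, with LBI).
    """
    word = int.from_bytes(bdw, "big")
    if word & 0x80000000:
        return "extended", word & 0x7FFFFFFF
    return "nonextended", (word >> 16) & 0x7FFF

# A nonextended BDW for a 32 760-byte block:
assert parse_bdw(bytes([0x7F, 0xF8, 0x00, 0x00])) == ("nonextended", 32760)
# An extended BDW for a 100 000-byte block:
assert parse_bdw((0x80000000 | 100_000).to_bytes(4, "big")) == ("extended", 100000)
```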
For output, you must provide the RDW, except in data mode for spanned records
(described under “Controlling Buffers” on page 352). For output in data mode, you
must provide the total data length in the physical record length field (DCBPRECL)
of the DCB.
For input, the operating system provides the RDW, except in data mode. In data
mode, the system passes the record length to your program in the logical record
length field (DCBLRECL) of the DCB.
The optional control character (a) can be specified as the fifth byte of each record.
The first byte of data is a table reference character (b) if OPTCD=J has been
specified. The RDW, the optional control character, and the optional table reference
character are not punched or printed.
(Figure not reproduced: a spanned format-V record. The block begins with a BDW giving
the block length LL; each segment within it begins with a 4-byte segment descriptor word
(SDW) carrying the segment length (ll, 2 bytes), a segment control code, and a reserved
byte. The logical record assembled in the user's work area begins with an RDW (record
length: 2 bytes; reserved: 2 bytes) followed by the data portions of the first, intermediate,
and last segments; the optional control character (a) and table reference character (b)
follow the RDW.)
When spanning is specified for blocked records, QSAM attempts to fill all blocks.
For unblocked records, a record larger than the block size is split and written in
two or more blocks. If your program is not using the large block interface, each
block contains only one record or record segment. Thus, the block size can be set
to the best block size for a given device or processing situation. It is not restricted
by the maximum record length of a data set. A record can, therefore, span several
blocks, and can even span volumes.
Spanned record blocks can have extended BDWs. See “Block Descriptor Word
(BDW)” on page 297.
When you use unit record devices with spanned records, the system assumes that
it is processing unblocked records and that the block size must be equivalent to the
length of one print line or one card. The system writes records that span blocks
one segment at a time.
When QSAM opens a spanned record data set in UPDAT mode, it uses the logical
record interface (LRI) to assemble all segments of the spanned record into a single,
logical input record, and to disassemble a single logical record into multiple
segments for output data blocks. A record area must be provided by using the
BUILDRCD macro or by specifying BFTEK=A in the DCB.
When you specify BFTEK=A, the open routine provides a record area equal to the
LRECL specification, which should be the maximum length in bytes. (An LRECL=0
is not valid.)
The remaining bits of the third byte and all of the fourth byte are reserved for
possible future system use and must be 0.
The SDW for the first segment replaces the RDW for the record after the record is
segmented. You or the operating system can build the SDW, depending on which
access method is used.
v In the basic sequential access method, you must create and interpret the spanned
records yourself.
v In the queued sequential access method move mode, complete logical records,
including the RDW, are processed in your work area. GET consolidates segments
into logical records and creates the RDW. PUT forms segments as required and
creates the SDW for each segment.
Data mode is similar to move mode, but allows reference only to the data
portion of the logical record (that is, to one segment) in your work area. The
logical record length is passed to you through the DCBLRECL field of the data
control block.
In locate mode, both GET and PUT process one segment at a time. However, in
locate mode, if you provide your own record area using the BUILDRCD macro,
or if you ask the system to provide a record area by specifying BFTEK=A, then
GET, PUT, and PUTX process one logical record at a time.
You cannot use BFTEK=A or the BUILDRCD macro when the logical records
exceed 32 760 bytes. (BFTEK=A is ignored when LRECL=X is specified.)
Null Segments
A 1 in bit position 0 of the SDW indicates a null segment. A null segment means
that there are no more segments in the block. Bits 1-7 of the SDW and the
remainder of the block must be binary zeros. A null segment is not an
end-of-logical-record delimiter. (You do not have to be concerned about null
segments unless you have created a data set using null segments.)
Null segments are not recreated in PDSEs. For more information, see “Processing
PDSE Records” on page 444.
(Figure not reproduced: a spanned record block. The 4-byte BDW carries the block length
(2 bytes) and 2 reserved bytes. Each segment begins with a 4-byte SDW: segment length
(2 bytes), segment control code (1 byte), and a reserved byte. The logical record in the
user's work area consists of the data portions of the first, intermediate, and last segments.)
When you specify spanned, unblocked record format for the basic direct access
method, and when a complete logical record cannot fit on the track, the system
tries to fill the track with a record segment. Thus, the maximum record length of a
data set is not restricted by track capacity. Segmenting records permits a record to
span several tracks, with each segment of the record on a different track. However,
because the system does not permit a record to span volumes, all segments of a
logical record in a direct data set are on the same volume.
Undefined-Length Record Format
For format-U records, you must specify the record length when issuing the WRITE,
PUT, or PUTX macro. No length checking is performed by the system, so no error
indication will be given if the specified length does not match the buffer size or
physical record size.
In update mode, you must issue a GET or READ macro before you issue a PUTX
or WRITE macro to a data set on a direct access storage device. If you change the
record length when issuing the PUTX or WRITE macro, the record will be padded
with zeros or truncated to match the length of the record received when the GET
or READ macro was issued. No error indication will be given.
ISO/ANSI Tapes
ISO/ANSI tape records are written in format-F, format-D, format-S, or format-U.
Format-F Records
For ISO/ANSI tapes, format-F records are the same as described in “Fixed-Length
Record Formats” on page 294, except for control characters, block prefixes, and
circumflex characters.
Block Prefixes. Record blocks can contain block prefixes. The block prefix can vary
from 0 to 99 bytes, but the length must be constant for the data set being
processed. For blocked records, the block prefix precedes the first logical record.
For unblocked records, the block prefix precedes each logical record.
Using QSAM and BSAM to read records with block prefixes requires that you
specify the BUFOFF parameter in the DCB. When using QSAM, you do not have
access to the block prefix on input. When using BSAM, you must account for the
block prefix on both input and output. When using either QSAM or BSAM, you
must account for the length of the block prefix in the BLKSIZE and BUFL
parameters of the DCB.
When you use BSAM on output records, the operating system does not recognize a
block prefix. Therefore, if you want a block prefix, it must be part of your record.
Note that you cannot include block prefixes in QSAM output records.
The block prefix can only contain EBCDIC characters that correspond to the 128,
seven-bit ASCII characters. Thus, you must avoid using data types such as binary,
packed decimal, and floating point that cannot always be converted into ASCII.
This also applies when CCSIDs are used for writing to ISO/ANSI Version 4 tapes.
Figure 48 shows the format of fixed-length records for ISO/ANSI tapes and where
control characters and block prefixes are positioned if they exist.
Figure 48. Fixed-Length Records for ISO/ANSI Tapes (figure not reproduced: an optional
block prefix precedes each block of blocked records and each record of unblocked
records; the optional control character (a) occupies 1 byte at the front of each record's
data)
Circumflex Characters. The GET routine tests each record (except the first) for all
circumflex characters (X'5E'). If a record completely filled with circumflex
characters is detected, QSAM ignores that record and the rest of the block. A
fixed-length record must not consist of only circumflex characters. This restriction
is necessary because circumflex characters are used to pad out a block of records
when fewer than the maximum number of records are included in a block, and the
block is not truncated.
Format-D Records
Format-D, format-DS, and format-DBS records are used for ISO/ANSI tape data
sets. ISO/ANSI records are the same as format-V records, with three exceptions:
v Block prefix
v Block size
v Control characters.
Block Prefix. A record block can contain a block prefix. To specify a block prefix,
code BUFOFF in the DCB macro. The block prefix can vary in length from 0 to 99
bytes, but its length must remain constant for all records in the data set being
processed. For blocked records, the block prefix precedes the RDW for the first or
only logical record in each block. For unblocked records, the block prefix precedes
the RDW for each logical record.
To specify that the block prefix is to be treated as a BDW by data management for
format-D or format-DS records on output, code BUFOFF=L as a DCB parameter.
Your block prefix must be 4 bytes long, and it must contain the length of the block,
including the block prefix. The maximum length of a format-D or format-DS,
BUFOFF=L block is 9999 because the length (stated in binary numbers by the user)
is converted to a 4 byte ASCII character decimal field on the ISO/ANSI tape when
the block is written. It is converted back to a 2 byte length field in binary followed
by two bytes of zeros when the block is read.
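The BUFOFF=L length conversions described above can be sketched in Python; the function names are invented, and this models only the length arithmetic, not the EBCDIC/ASCII translation of the rest of the block:

```python
def bdw_to_prefix(length):
    """On output, the binary block length (including the prefix) is
    written to the tape as a 4-byte ASCII decimal field, so the
    maximum representable length is 9999."""
    if length > 9999:
        raise ValueError("BUFOFF=L block length cannot exceed 9999")
    return f"{length:04d}".encode("ascii")

def prefix_to_bdw(prefix):
    """On input, the 4 ASCII decimal digits are converted back to a
    2-byte binary length followed by two bytes of zeros."""
    length = int(prefix.decode("ascii"))
    return length.to_bytes(2, "big") + b"\x00\x00"

assert bdw_to_prefix(2048) == b"2048"
assert prefix_to_bdw(b"2048") == b"\x08\x00\x00\x00"
```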
If you use QSAM to write records, data management fills in the block prefix for
you. If you use BSAM to write records, you must fill in the block prefix yourself. If
you are using chained scheduling to read blocked DB or DBS records, you cannot
code BUFOFF=absolute expression in the DCB. Instead, BUFOFF=L is required,
because the access method needs binary RDWs and valid block lengths to unblock
the records.
When you use QSAM, you cannot read the block prefix into your record area on
input. When using BSAM, you must account for the block prefix on both input and
output. When using either QSAM or BSAM, you must account for the length of the
block prefix in the BLKSIZE and BUFL parameters.
When using QSAM to access DB or DBS records, and BUFOFF=0 is specified, the
value of BUFL, if specified, must be increased by 4. If BUFL is not specified, then
BLKSIZE must be increased by 4. This permits a 4 byte QSAM internal processing
area to be included when the system acquires the buffers. These 4 bytes do not
become part of the user’s block.
When you use BSAM on output records, the operating system does not recognize
the block prefix. Therefore, if you want a block prefix, it must be part of your
record.
The block prefix can contain only EBCDIC characters that correspond to the 128,
seven-bit ASCII characters. Thus, you must avoid using data types (such as binary,
packed decimal, and floating point), that cannot always be converted into ASCII.
For DB and DBS records, the only time the block prefix can contain binary data is
when you have coded BUFOFF=L, which tells data management that the prefix is
a BDW. Unlike the block prefix, the RDW must always be binary. This is true
whether conversion or no conversion is specified with CCSID for Version 4 tapes.
Block Size. Version 3 tapes have a maximum block size of 2048. This limit can be
overridden by a label validation installation exit. For Version 4 tapes, the
maximum size is 32 760.
If you specify a maximum data set block size of 18 or greater when creating
variable-length blocks, then individual blocks can be shorter than 18 bytes. In those
cases data management pads each one to 18 bytes when the blocks are written
onto an ISO/ANSI tape. The padding character used is the ASCII circumflex
character, which is X'5E'.
Figure 49. Nonspanned Format-D Records for ISO/ANSI Tapes As Seen by the Program
(figure not reproduced: an optional block prefix precedes each block; each record begins
with an RDW giving the record length ll, followed by the data and the optional control
character (a))
Figure 50 on page 309 shows what spanned variable-length records for ISO/ANSI
tapes look like.
Figure 50. Spanned Variable-Length (Format-DS) Records for ISO/ANSI Tapes As Seen
by the Program (figure not reproduced: on the tape, each segment begins with an SDW
whose length characters hold the binary ll value plus 1. In LRI format, the logical record
is assembled in the record area behind a binary RDW: a 2-byte record length followed by
2 reserved bytes that must be zero. In XLRI format, the first byte of the RDW is 0 and
the record length occupies the three low-order bytes, allowing lengths up to 16 776 192.)
Figure 50 shows the segment descriptor word (SDW), where the record descriptor
word (RDW) is located, and where block prefixes must be placed when they are
used. If you are not using IBM access methods see z/OS DFSMS Macro Instructions
for Data Sets for a description of ISO/ANSI record control words and segment
control words.
QSAM or BSAM convert between ISO/ANSI segment control word (SCW) format
and IBM segment descriptor word (SDW) format. On output, the binary SDW LL
value (provided by you when using BSAM and by the access method when using
QSAM), is increased by 1 for the extra byte and converted to four ASCII numeric
characters. Because the binary SDW LL value will result in four numeric
characters, the binary value must not be greater than 9998. The fifth character is
used to designate which segment type (complete logical record, first segment, last
segment, or intermediate segment) is being processed.
On input, the four numeric characters designating the segment length are
converted to two binary SDW LL bytes and decreased by one for the unused byte.
The ISO/ANSI segment control character maps to the DS/DBS SDW control flags.
This conversion leaves an unused byte at the beginning of each SDW. It is set to
X'00'. See z/OS DFSMS Macro Instructions for Data Sets for more details on this
process.
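The SDW length conversion described above (plus 1 on output, minus 1 on input) can be sketched as follows; the function names are invented, and the segment-type character and control-flag mapping are omitted:

```python
def sdw_ll_to_ascii(ll):
    """On output, the binary SDW LL value is increased by 1 for the
    extra byte and written as four ASCII numeric characters, so the
    binary value must not be greater than 9998."""
    if ll > 9998:
        raise ValueError("binary SDW LL must not exceed 9998")
    return f"{ll + 1:04d}".encode("ascii")

def ascii_to_sdw_ll(chars):
    """On input, the four numeric characters are converted back to a
    binary LL and decreased by 1 for the unused byte."""
    return int(chars.decode("ascii")) - 1

assert sdw_ll_to_ascii(2047) == b"2048"
assert ascii_to_sdw_ll(b"2048") == 2047
```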
On the tape, the SDW bytes are ASCII numeric characters even if the other bytes in
the record are not ASCII.
DS/DBS records with a record length greater than 32 760 bytes can be processed using
XLRI. (XLRI is supported only in QSAM locate mode for ISO/ANSI tapes.) Specifying
LRECL=X for ISO/ANSI tapes causes an 013-DC ABEND.
LRECL=0K in the DCB macro specifies that the LRECL value will come from the
file label or JCL. When LRECL is from the label, the file must be opened as an
input file. The label (HDR2) value for LRECL will be converted to kilobytes and
rounded up when XLRI is in effect. When the ISO/ANSI label value for LRECL is
00 000 to show that the maximum record length can be greater than 99 999, you
must use LRECL=nK in the JCL or in the DCB to specify the maximum record
length.
You can express the LRECL value in JCL in absolute form or with the K notation.
When the DCB specifies XLRI, the system converts absolute values to kilobytes by
rounding up to an integral multiple of 1024. Absolute values are permissible only
from 5 to 32 760.
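For example, a DD statement along these lines (the data set and DD names are illustrative) supplies the XLRI record area size with the K notation for an ISO/ANSI tape file:

```jcl
//TAPEIN   DD DSN=ISO.SPAN.DATA,DISP=OLD,UNIT=TAPE,LABEL=(1,AL),
//            DCB=(RECFM=DS,LRECL=16K,BUFOFF=L)
```

Here LRECL=16K reserves a 16 384-byte record area; an absolute value such as LRECL=16000 would be rounded up to the same 16K multiple when XLRI is in effect.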
To show the record area size in the DD statement, code LRECL=nK, or specify a
data class that has the LRECL attribute you need. The value nK can range from 1K
to 16 383K (expressed in 1024 byte multiples). However, depending on the buffer
space available, the value you can specify in most systems will be much smaller
than 16 383K bytes. This value is used to determine the size of the record area
required to contain the largest logical record of the spanned format file.
When you use XLRI, the exact LRECL size is communicated in the three low-order
bytes of the RDW in the record area. This special RDW format exists only in the
record area to communicate the length of the logical record (including the 4 byte
RDW) to be written or read. (See the XLRI format of the RDW in Figure 50 on
page 309.) DCB LRECL shows the 1024 multiple size of the record area (rounded
up to the next nearest kilobyte). The normal DS/DBS SDW format is used at all
other times before conversion.
Format-U Records
Data can only be in format-U for ISO/ANSI Version 1 tapes (ISO 1001-1969 and
ANSI X3.27-1969). These records can be used for input only. They are the same as
the format-U records described in “Undefined-Length Record Format” on page 302
except the control characters must be ISO/ANSI control characters, and block
prefixes can be used.
Format-U records are not supported for Version 3 or Version 4 ISO/ANSI tapes. An
attempt to process a format-U record from a Version 3 or Version 4 tape results in
entering the label validation installation exit.
The device-dependent (DEVD) parameter of the DCB macro specifies the type of
device where the data set’s volume resides:
Note: Because the DEVD option affects only the DCB macro expansion, you are
guaranteed the maximum device flexibility by letting it default to DEVD=DA and
not coding any device-dependent parameters.
If the immediate destination of the data set is a device, such as a disk or tape,
which does not recognize the control character, the system assumes that the control
character is the first byte of the data portion of the record. If the destination of the
data set is a printer or punch and you have not indicated the presence of a control
character, the system regards the control character as the first byte of data. If the
destination of the data set is SYSOUT, the effect of the control characters is
determined at the ultimate destination of the data set. See z/OS DFSMS Macro
Instructions for Data Sets for a list of the control characters.
The optional control character must be in the first byte of format-F and format-U
records, and in the fifth byte of format-V records and format-D records where
BUFOFF=L. If the immediate destination of the data set is a sequential DASD data
set or an IBM standard or ISO/ANSI standard labelled tape, OPEN records the
presence and type of control characters in the data set label. This is so that a
program that copies the data set to a print, punch, or SYSOUT data set can
propagate RECFM and therefore control the type of control character.
Except for a PDSE or compressed format data set, the size of a block cannot exceed
what the system can write on a track. For PDSEs and compressed format data sets,
the access method simulates blocks, and you can select a value for BLKSIZE
without regard to the track length. A compressed format data set is a type of
extended format data set that is stored in a data format that can contain records
that the access method compressed.
When you create a tape data set with variable-length record format-V or format-D,
the control program pads any data block shorter than 18 bytes. For format-V
records, it pads to the right with binary zeros so that the data block length equals
18 bytes. For format-D (ASCII) records, the padding consists of ASCII circumflex
characters, which are equivalent to X'5E's.
Table 31 shows how the tape density (DEN) specifies the recording density in bits
per inch per track.
Table 31. Tape Density (DEN) Values
DEN 7-Track Tape 9-Track Tape
1 556 (NRZI) N/A
2 800 (NRZI) 800 (NRZI)1
3 N/A 1600 (PE)2
4 N/A 6250 (GCR)3
Note:
1. NRZI is for nonreturn-to-zero-inverted mode.
2. PE is for phase encoded mode.
3. GCR is for group coded recording mode.
When DEN is not specified, the highest density that the unit supports is used.
The DEN parameter has no effect on an 18-track or 36-track tape cartridge.
The track recording technique (TRTCH) for 7-track tape can be specified as follows.
The track recording technique (TRTCH) for magnetic tape drives with Improved
Data Recording Capability can be specified as:
The system programmer sets the 3480 default for COMP or NOCOMP in the
DEVSUPxx member of SYS1.PARMLIB.
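For example, a DD statement such as the following (the data set name and unit are illustrative) can request compaction explicitly rather than relying on the DEVSUPxx default:

```jcl
//TAPEOUT  DD DSN=BACKUP.WEEKLY,DISP=(NEW,KEEP),UNIT=3480,
//            DCB=(RECFM=FB,LRECL=80,TRTCH=COMP)
```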
Using a Printer
Records of a data set that you write directly or indirectly to a printer with BSAM
or QSAM can contain control characters. See “Using Optional Control Characters”
on page 312. Independently of whether the records contain control characters, they
can contain table reference characters.
A numeric table reference character (such as 0) selects the font to which the
character corresponds. The characters’ number values represent the order in which
you specified the font names with the CHARS parameter. In addition to using
table reference characters that correspond to font names specified in the CHARS
parameter, you can code table reference characters that correspond to font names
specified in the PAGEDEF control structure. With CHARS, valid table reference
characters vary and range between 0 and 3. With PAGEDEF, they range between 0
and 126. The system treats table reference characters with values greater than the
limit as 0 (zero).
Indicate the presence of table reference characters by coding OPTCD=J in the DCB
macro, in the DD statement, or in the dynamic allocation call.
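For example, the following DD statement (the SYSOUT class and font names are illustrative) requests two fonts and indicates with OPTCD=J that each record begins with a table reference character:

```jcl
//PRINTOUT DD SYSOUT=A,CHARS=(GT10,GB12),DCB=OPTCD=J
```

A table reference character of 0 in a record would then select the first font named on CHARS, and 1 would select the second.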
The system processes table reference characters on printers such as the IBM 3800
and IBM 3900 that support the CHARS and PAGEDEF parameters on the DD
statement. If the device is a printer that does not support CHARS or PAGEDEF, the
system discards the table reference character. This is true both for printers that are
allocated directly to the job step and for SYSOUT data sets. This makes it
unnecessary for your program to know whether the printer supports table
reference characters.
If the immediate destination of the data set for which OPTCD=J was specified is
DASD, the system treats the table reference characters as part of the data. The
system also records the OPTCD value in the data set label. If the immediate
destination is tape, the system does not record the OPTCD value in the data set
label.
Record Formats
The printer can accept format-F, format-V, and format-U records. The system does
not print the first 4 bytes (record descriptor word or segment descriptor word) of
format-V records or record segments. For format-V records, at least 1 byte of data
must follow the record or segment descriptor word or the carriage control
character. The system does not print the carriage control character, if you specify it
in the RECFM parameter. The system does not position the printer to channel 1 for
the first record unless you use a carriage control character to specify this position.
Because each line of print corresponds to one record, the record length should not
exceed the length of one line on the printer. For variable-length spanned records,
each line corresponds to one record segment; block size should not exceed the
length of one line on the printer.
If you do not specify carriage control characters, you can specify printer spacing
(PRTSP) as 0, 1, 2, or 3. If you do not specify PRTSP, the system assumes 1.
For all QSAM RECFM=FB printer data sets, the system adjusts the block size in the
DCB to equal the logical record length. The system treats this data set as
RECFM=F. If the system builds the buffers for this data set, the BUFL parameter
determines the buffer length. If you do not specify the BUFL parameter, the system
uses the adjusted block size for the buffer length.
To reuse the DCB with a block size larger than the logical record length, you must
reset DCBBLKSI in the DCB and ensure that the buffers are large enough to
contain the largest block size. To ensure the buffer size, specify the BUFL
parameter before the first open of the data set. Or you can issue the FREEPOOL
macro after each CLOSE macro, so that the system builds a new buffer pool of the
correct size each time it opens the data set.
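The reuse sequence might be sketched as follows (the DCB name and block size are illustrative; this assumes the DCBD mapping macro has been expanded elsewhere in the program):

```hlasm
         CLOSE (OUTDCB)              End first use of the data set
         FREEPOOL OUTDCB             Discard the old buffer pool
         LA    R5,OUTDCB
         USING IHADCB,R5             Map the DCB
         MVC   DCBBLKSI,=H'3200'     Restore the larger block size
         DROP  R5
         OPEN  (OUTDCB,(OUTPUT))     OPEN builds a pool of the new size
```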
A record size of 80 bytes is called EBCDIC mode (E) and a record size of 160 bytes
is called column binary mode (C). Each punched card corresponds to one physical
record. Therefore, you should restrict the maximum record size to EBCDIC mode
(80 bytes) or column binary mode (160 bytes). When column binary mode is used
for the card punch, BLKSIZE must be 160 unless you are using PUT. Then you can
specify BLKSIZE as 160 or a multiple of 160, and the system handles this as
described under “PUT—Write a Record” on page 365. Specify the read/punch
mode of operation (MODE) parameter as either card image column binary mode
(C) or EBCDIC mode (E). If this information is omitted, E is assumed.
For all QSAM RECFM=FB card punch data sets, the block size in the DCB is
adjusted by the system to equal the logical record length. This data set is treated as
RECFM=F. If the system builds the buffers for this data set, the buffer length is
determined by the BUFL parameter. If the BUFL parameter was not specified, the
adjusted block size is used for the buffer length.
If the DCB is to be reused with a block size larger than the logical record length,
you must reset DCBBLKSI in the DCB and ensure that the buffers are large enough
to contain the largest block size expected. You can ensure the buffer size by
specifying the BUFL parameter before the first time the data set is opened, or by
issuing the FREEPOOL macro after each CLOSE macro so the system will build a
new buffer pool of the correct size each time the data set is opened.
Punch error correction on the IBM 2540 Card Read Punch is not performed.
The IBM 3525 Card Punch accepts only format-F records for print and associated
data sets. Other record formats are permitted for the read data set, punch data set,
and interpret punch data set.
Topic Location
Processing Sequential and Partitioned Data Sets 318
Using OPEN to Prepare a Data Set for Processing 323
Selecting Data Set Options 327
Changing and Testing the DCB and DCBE 336
Using CLOSE to End the Processing of a Data Set 338
Opening and Closing Data Sets: Considerations 341
Positioning Volumes 344
Managing SAM Buffer Space 347
Constructing a Buffer Pool 348
Controlling Buffers 352
Choosing Buffering Techniques and GET/PUT Processing Modes 355
Using Buffering Macros with Queued Access Method 356
Using Buffering Macros with Basic Access Method 356
For each data set that you want to process, there must be a corresponding data
control block (DCB) and data definition (DD) statement or its dynamic allocation
equivalent. The characteristics of the data set and device-dependent information
can be supplied by either source. As specified in z/OS MVS JCL User’s Guide and
z/OS MVS JCL Reference, the DD statement must also supply data set identification.
Your program, SMS, and exit routines can supply device characteristics, space
allocation requests, and related information. You establish the logical connection
between a DCB and a DD statement by specifying the name of the DD statement
in the DDNAME field of the DCB macro, or by completing the field yourself
before opening the data set.
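For example, this DCB macro and the DD statement that it names (all names are illustrative) show the connection made through the DDNAME field:

```hlasm
INDCB    DCB   DDNAME=INPUT,DSORG=PS,MACRF=GL

//INPUT   DD   DSN=MY.SEQ.DATA,DISP=SHR
```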
You can process a non-VSAM data set to read, update, or add data by following
this procedure:
1. Create a data control block (DCB) to identify the data set to be opened. A DCB
is required for each data set and is created in a processing program by a DCB
macro.
When the program is run, the data set name and other important information
(such as data set disposition) are specified in a JCL statement called the data
definition (DD) statement, or in a call to dynamic allocation.
2. Optionally supply a data control block extension (DCBE). You can supply
options and test data set characteristics that the system stores in the DCBE.
3. Connect your program to the data set you want to process, using the OPEN
macro. The OPEN macro also positions volumes, writes data set labels and
allocates virtual storage. You can consider various buffering macros and
options.
4. Request access to the data set. For example, if you are using BSAM to process a
sequential data set, you can use the READ, WRITE, NOTE, or POINT macro.
5. Disconnect your program from the data set, using the CLOSE macro. The
CLOSE macro also positions volumes, creates data set labels, completes writing
queued output buffers, and frees virtual and auxiliary storage.
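These steps can be sketched in a minimal, nonreentrant QSAM program; the names are illustrative, and error handling is omitted:

```hlasm
         OPEN  (INDCB,(INPUT))       Step 3: connect to the data set
NEXTREC  GET   INDCB                 Step 4: R1 points to a record
         ...                         Process the record
         B     NEXTREC
ATEOD    CLOSE (INDCB)               Step 5: disconnect
         ...
INDCB    DCB   DDNAME=INPUT,DSORG=PS,MACRF=GL,DCBE=INDCBE  Steps 1-2
INDCBE   DCBE  EODAD=ATEOD,BLKSIZE=0
```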
Primary sources of information to be placed in the data control block are a DCB
macro, data definition (DD) statement, a dynamic allocation SVC 99 parameter list,
a data class, and a data set label. A data class can be used to specify all of your
data set’s attributes except data set name and disposition. Also, you can provide or
change some of the information during execution by storing the applicable data in
the appropriate field of the DCB or DCBE.
It is the intent of IBM that your programs that use documented programming
interfaces and work on the current level of the system will run at least equally well
on future levels of the system. However, IBM cannot guarantee that. Characteristics
such as certain reason codes that are documented only in z/OS DFSMSdfp Diagnosis
are not part of the intended programming interface. Examples of potential
problems are:
v Your program has a timing dependency, such as assuming that a READ or
WRITE macro completes before another event. In some cases READ or WRITE is
synchronous with your program.
v Your program tests a field or control block that is not part of the intended
programming interface. An example is status indicators not documented in
Figure 112 on page 526.
v Your program relies on the system to enforce a restriction such as the maximum
value of something. For example, the maximum block size on DASD used to be
less than 32 760 bytes, the maximum NCP value for BSAM used to be 99 and the
maximum block size on tape used to be 32 760.
v New releases might introduce new return and reason codes for system functions.
For these reasons, the operating system has many options. It is not the intent of
IBM to require extensive education to use assembly language programming. The
purpose of this section is to show how to read and write sequential data sets
simply in High Level Assembler while maximizing ease of use, migration potential,
the likelihood of coexistence, and device independence, while getting reasonable
performance.
You can use the examples in this section to read or write sequential data sets and
partitioned members. These include ordinary disk data sets, extended format data
sets, compressed format data sets, PDS members, PDSE members, UNIX files,
UNIX FIFOs, spooled data sets (SYSIN and SYSOUT), real or VM simulated unit
record devices, TSO/E terminals, magnetic tapes, dummy data sets, and most
combinations of them in a concatenation.
Recommendations:
v Use QSAM because it is simpler. Use BSAM if you need to read or write
nonsequentially or you need more control of I/O completion. With BSAM you
can issue the NOTE, POINT, CNTRL, and BSP macros. These macros work
differently on various device classes. See “Record Format—Device Type
Considerations” on page 311 and “Achieving Device Independence” on page
399. Use BPAM if you need to access more than one member of a PDS or PDSE.
v Specify LRECL and RECFM in the DCB macro if your program’s logic depends
on the record length and record format. If you omit either of them, your
program is able to handle more types of data but you have to write more code.
See Chapter 20, “Selecting Record Formats for Non-VSAM Data Sets,” on page
293.
v Use format-F or format-V records, and specify blocking (RECFM=FB or VB). This
allows longer blocks. Format-U generally is less efficient. Format-D works only
on certain types of tape.
v Omit the block size in the DCB macro. Code BLKSIZE=0 in the DCBE macro to
use the large block interface. When your program is reading, this allows it to
adapt to the appropriate block size for the data set. If the data set has no label
(such as for an unlabeled tape), the user can specify the block size in the DD
statement or dynamic allocation. For some data set types (such as PDSEs and
UNIX files) there is no real block size; the system simulates any valid block size
and there is a default.
When your program is writing and you omit DCB BLKSIZE and code DCBE
BLKSIZE=0, this enables the user to select the block size in the DD statement or
dynamic allocation. The user should only do this if there is a reason to do so,
such as a reading program cannot accept large blocks. If the user does not
specify a block size, OPEN selects one that is valid for the LRECL and RECFM
and is optimal for the device. Coding BLKSIZE=0 in the DCBE macro lets OPEN
select a block size that exceeds 32 760 bytes if large block interface (LBI)
processing is being used, thereby possibly shortening run time significantly. If
OPEN might select a block size that is larger than the reading programs can
handle, the user can code the BLKSZLIM keyword in the DD statement or the
dynamic allocation equivalent or rely on the block size limit in the data class or
in the DEVSUPxx PARMLIB member.
If you want to provide your own default for BLKSIZE and not let OPEN do it,
you can provide a DCB OPEN exit routine. See “DCB OPEN Exit” on page 543.
The installation OPEN exit might override your program’s selection of DCB
parameters.
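For example, a user who must keep tape blocks readable by an older program could code a DD statement like this one (the data set name and limit are illustrative; BLKSZLIM is a DD keyword, not a DCB subparameter):

```jcl
//OUT      DD DSN=HLQ.TAPE.DATA,DISP=(NEW,KEEP),UNIT=TAPE,
//            BLKSZLIM=32760
```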
v Omit BUFL (buffer length) because it relies on the value of the sum of BLKSIZE
and KEYLEN and because it cannot exceed 32 760.
v Omit BUFNO (number of buffers) for QSAM, BSAM, and BPAM, and omit NCP
if you use BSAM or BPAM. Let OPEN select QSAM BUFNO. This is particularly
important with striped data sets. The user can experiment with different values
for QSAM BUFNO to see if it can improve run time.
With BSAM and BPAM, code MULTSDN and MULTACC in the DCBE macro.
See “Improving Performance for Sequential Data Sets” on page 401.
With QSAM, BSAM, and BPAM this generally has no effect on the EXCP count
that is reported in SMF type 14, 15, 21, and 30 records. On DASD, this counts
blocks that are transferred and not the number of channel programs. This causes
the counts to be repeatable and not to depend on random factors in the system.
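A DCBE coded along these lines (the name and multipliers are illustrative) lets OPEN compute a device-appropriate NCP value and lets the access method chain channel programs:

```hlasm
RDDCBE   DCBE  BLKSIZE=0,MULTSDN=8,MULTACC=4
```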
v Omit BUFOFF because it works only with tapes with ISO/ANSI standard labels
or no labels.
v If you choose BSAM or BPAM in 31-bit addressing mode, do not use the BUILD
or GETPOOL macro and do not request OPEN to build a buffer pool. If you
code a nonzero BUFNO value, you are requesting OPEN to build a buffer pool.
Such a buffer pool resides below the line. Use your own code to allocate data
areas above the line.
v Code A or M for RECFM or code OPTCD=J only if your program logic requires
reading or writing control characters. These are not the EBCDIC or ASCII control
characters such as carriage return, line feed, or new page.
v Omit KEYLEN, DEVD, DEN, TRTCH, MODE, STACK, and FUNC because they
are device dependent. KEYLEN also makes the program run slower unless you
code KEYLEN=0. The user can code most of them in the DD statement if
needed.
v Omit BFALN, BFTEK, BUFCB, EROPT, and OPTCD because they probably are
not useful, except OPTCD=J. OPTCD=J specifies that the records contain table
reference characters. See “Table Reference Character” on page 314.
v LOCATE mode (MACRF=(GL,PL)) might be more efficient than move mode.
This depends on your program’s logic. The move mode requires QSAM to move
the data an extra time.
v If your program runs with 31-bit addressing mode (AMODE), code
RMODE31=BUFF in the DCBE so that the QSAM buffers are above the 16 MB
line. A nonreentrant, RMODE 24 program (residing below the 16 MB line) is
simpler than a reentrant or RMODE 31 program because the DCB must reside
below the line in storage that is separate for each open data set.
v Code a SYNAD (I/O error) routine to prevent the 001 ABEND that the system
issues when a data set has an I/O error. In the SYNAD routine, issue the
SYNADAF macro, write the message, and terminate the program. This writes a
message and avoids a dump because the dump is not likely to be useful.
v Use extended-format data sets even if you are not using striping. They tend to
be more efficient, and OPEN provides a more efficient default for BUFNO.
Avoid writing many blocks that are shorter than the maximum for the data set
because short blocks waste disk space.
Figure 52 is the same as Figure 51 on page 321 but converted to be reentrant and
reside above the 16 MB line:
COPYPROG CSECT
COPYPROG RMODE ANY
COPYPROG AMODE 31
GETMAIN R,LV=AreaLen,LOC=(BELOW,64)
LR R11,R1
USING MYAREA,R11
USING IHADCB,InDCB
USING DCBE,INDCBE
MVC IHADCB(AreaLen),MYDCB Copy DCB and DCBE
LA R0,DCBE Point DCB copy to
ST R0,DCBDCBE DCBE copy
OPEN (IHADCB,),MF=(E,INOPEN) Open to read
LTR R15,R15 Branch if DDname seems not
BNZ ... to be defined
* Loop to read all the records
LOOP GET INDCB Get address of a record in R1
... Process a record
B LOOP Branch to read next record
* I/O error routine for INDCB
IOERROR SYNADAF ACSMETH=QSAM Get message area
MVI 6(R1),X'80' Set WTO MCS flags
MVC 8(16,R1),=CL16'I/O Error' Put phrase on binary fields
MVC 128(4,R1),=X'00000020' Set ROUTCDE=11 (WTP)
WTO MF=(E,4(R1)) Write message to user
SYNADRLS Release SYNADAF area, fall through
* The GET macro branches here after all records have been read
EOD CLOSE MF=(E,INOPEN) Close the data set
* FREEPOOL not needed due to RMODE31=BUFF
... Rest of program
MYDCB DCB DDNAME=INPUT,MACRF=GL,RECFM=VB, *
DCBE=MYDCBE
MYDCBE DCBE EODAD=EOD,SYNAD=IOERROR,BLKSIZE=0,RMODE31=BUFF
OPEN (,INPUT),MF=L,MODE=24
AreaLen EQU *-MYDCB
DCBD DSORG=QS,DEVD=DA
IHADCBE Could be above 16 MB line
MYAREA DSECT
INDCB DS XL(DCBLNGQS)
INDCBE DS XL(DCBEEND-DCBE)
INOPEN OPEN (,),MF=L
The DCB is filled in with information from the DCB macro, the JFCB, or an
existing data set label. If more than one source specifies information for a
particular field, only one source is used. A DD statement takes priority over a data
set label, and a DCB macro over both.
You can change most DCB fields either before the data set is opened or when the
operating system returns control to your program (at the DCB OPEN user exit).
Some fields can be changed during processing. Do not try to change a DCB field,
such as data set organization, from one that permitted the data set to be allocated
to a system-managed volume, to one that makes the data set ineligible to be
system-managed. For example, do not specify a data set organization in the DD
statement as physical sequential and, after the data set has been allocated to a
system-managed volume, try to open the data set with a DCB that specifies the
data set as physical sequential unmovable. The types of data sets that cannot be
system-managed are listed in Chapter 2, “Using the Storage Management
Subsystem,” on page 27.
Figure 53. Sources and Sequence of Operations for Completing the DCB (diagram
not reproduced; it shows the numbered sequence in which sources such as the
DCB OPEN exit routine, the installation exit, and the old data set label supply
DCB fields)
When the data set is closed, the DCB is restored to the condition it had before the
data set was opened (except that the buffer pool is not freed) unless you coded
RMODE31=BUFF and OPEN accepted it.
Because macros are expanded during the assembly of your program, you must
supply the macro forms to be used in processing each data set in the associated
DCB macro. You can supply buffering requirements and related information in the
DCB and DCBE macro, the DD statement, or by storing the applicable data in the
appropriate field of the DCB or DCBE before the end of your DCB exit routine. If
the addresses of special processing routines (EODAD, SYNAD, or user exits) are
omitted from the DCB and DCBE macro, you must complete them in the DCB or
DCBE before they are required.
If the data set resides on a direct access volume, you can code UPDAT in the
processing method parameter to show that records can be updated.
RDBACK is supported only for magnetic tape. By coding RDBACK, you can
specify that a magnetic tape volume containing format-F or format-U records is to
be read backward. (Variable-length records cannot be read backward.)
You can override the INOUT, OUTIN, UPDAT, or OUTINX at execution time by
using the IN or OUT options of the LABEL parameter of the DD statement, as
discussed in z/OS MVS JCL Reference. The IN option indicates that a BSAM data set
opened for INOUT or a direct data set opened for UPDAT is to be read only. The
OUT option indicates that a BSAM data set opened for OUTIN or OUTINX is to be
written only.
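For example, a DD statement such as the following (the data set name is illustrative) forces a data set opened for INOUT to be read only:

```jcl
//MASTER   DD DSN=PAYROLL.MASTER,DISP=OLD,LABEL=(,SL,,IN)
```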
Restriction: Unless allowed by the label validation installation exit, OPEN for
OUTPUT or OUTIN with DISP=MOD, INOUT, EXTEND, or OUTINX requests
cannot be processed for ISO/ANSI Version 3 tapes or for non-IBM-formatted
Version 4 tapes, because this kind of processing updates only the closing label of
the file, causing a label symmetry conflict. A mismatched label should not frame
the other end of the file. This restriction does not apply to IBM-formatted
ISO/ANSI Version 4 tapes.
Related reading: For information about the label validation installation exit, see
z/OS DFSMS Installation Exits.
Processing PDSEs. For PDSEs, INOUT is treated as INPUT. OUTIN, EXTEND, and
OUTINX are treated as OUTPUT.
In Figure 54 the data sets associated with three DCBs are to be opened
simultaneously.
OPEN (TEXTDCB,,CONVDCB,(OUTPUT),PRINTDCB, X
(OUTPUT))
Installation Exits). If you use LBI, the maximum block size is 32 760 except on
magnetic tape, where the maximum is larger.
System-determined block size: The system can derive the best block size for
DASD, tape, and spooled data sets. The system does not derive a block size for
BDAM, old, or unmovable data sets, or when the RECFM is U. See
“System-Determined Block Size” on page 329 for more information on
system-determined block sizes for DASD and tape data sets.
Minimum block size: If you specify a block size other than zero, there is no
minimum requirement for block size except that format-V blocks have a minimum
block size of 8. However, if a data check occurs on a magnetic tape device, any
block shorter than 12 bytes in a read operation, or 18 bytes in a write operation, is
treated as a noise record and lost. No check for noise is made unless a data check
occurs.
You request LBI by coding a BLKSIZE value, even 0, in the DCBE macro or by
turning on the DCBEULBI bit before completion of the DCB OPEN exit. Coding
BLKSIZE causes the bit to be on. It is best if this bit is on before you issue the
OPEN macro. That lets OPEN merge a large block size into the DCBE.
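For example, the following DCB and DCBE macros (the names and block size are illustrative) request LBI by coding a BLKSIZE value in the DCBE:

```hlasm
TAPDCB   DCB   DDNAME=TAPEOUT,DSORG=PS,MACRF=PM,RECFM=FB,LRECL=80,     *
               DCBE=TAPDCBE
TAPDCBE  DCBE  BLKSIZE=262144        Request LBI; block can exceed 32 760
```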
Your DCB OPEN exit can test bit DCBESLBI to learn if the access method supports
LBI. If your program did not request unlike attributes processing (by turning on bit
DCBOFPPC) before issuing OPEN, then DCBESLBI being on means that all the
data sets in the concatenation support LBI. If your program requested unlike
attributes processing before OPEN, then DCBESLBI being on each time that the
system calls your DCB OPEN exit or JFCBE exit means only that the next data set
supports LBI. After the exit, OPEN leaves DCBESLBI on only if DCBEULBI also is
on. Your exit routine can change DCBEULBI. Never change DCBESLBI.
Another way to learn if the data set type supports LBI is to issue a DEVTYPE
macro with INFO=AMCAP. See z/OS DFSMSdfp Advanced Services. After the DCB
OPEN exit, the following items apply when DCBESLBI is on:
v OPEN is honoring your request for LBI.
v Do not use the BLKSIZE field in the DCB. The system uses it. Use the BLKSIZE
field in the DCBE. For more information about DCBE field descriptions see z/OS
DFSMS Macro Instructions for Data Sets.
v You can use extended BDWs with format-V records. Format-V blocks longer
than 32 760 bytes require an extended BDW. See “Block Descriptor Word (BDW)”
on page 297.
v When reading with BSAM or BPAM, your program determines the length of the
block differently. See “Determining the Length of a Block when Reading with
BSAM, BPAM, or BDAM” on page 403.
v When writing with BSAM or BPAM, your program sets the length of each block
differently. See “Writing a Short Format-FB Block with BSAM or BPAM” on page
405.
v When reading undefined-length records with QSAM, your program learns the
length of the block differently. See the GET macro description in z/OS DFSMS
Macro Instructions for Data Sets.
To write format-U or format-D blocks without BUFOFF=L, you must code the 'S'
parameter for the length field on the WRITE macro. For more information, see
z/OS DFSMS Macro Instructions for Data Sets.
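Such a WRITE might look like this sketch (the DECB, DCB, and area names are illustrative); with LBI, this sketch assumes you have stored the block length in the DCBE BLKSIZE field before issuing the macro:

```hlasm
         WRITE OUTDECB,SF,OUTDCB,AREA,'S'   Length taken from the DCBE
```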
v When writing undefined-length records with QSAM, you store the record length
in the DCBE before issuing each PUT. See z/OS DFSMS Macro Instructions for
Data Sets.
v After an I/O error, register 0 and the status area in the SYNAD routine are
slightly different, and the beginning of the area returned by the SYNADAF
macro is different. See Figure 112 on page 526 and z/OS DFSMS Macro
Instructions for Data Sets.
v If the block size exceeds 32 760, you cannot use the BUILD, GETPOOL, or
BUILDRCD macro or the BUFL parameter.
v Your program cannot request exchange buffering (BFTEK=E), OPTCD=H (VSE
embedded checkpoints) or open with the UPDAT option.
v With LBI, fixed-length unblocked records greater than 32 760 bytes are not
supported by QSAM.
The system determines the block size for a data set as follows:
1. OPEN calculates a block size.
Note: A block size may be determined during initial allocation of a DASD data
set. OPEN will either use that block size or calculate a new block size if
any of the data set characteristics (LRECL, RECFM) were changed from
the values specified during initial allocation.
2. OPEN compares the calculated block size to a block size limit, which affects
only data sets on tape because the minimum value of the limit is 32 760.
3. OPEN attempts to decrease the calculated block size to be less than or equal to
the limit.
The block size limit is the first nonzero value from the following items:
Your program can obtain the BLKSZLIM value that is in effect by issuing the
RDJFCB macro with the X'13' code (see z/OS DFSMSdfp Advanced Services).
Because larger blocks generally cause data transfer to be faster, why would you
want to limit it? Some possible reasons follow:
v A user will take the tape to an operating system or older z/OS system or
application program that does not support the large size that you want. The
other operating system might be a backup system that is used only for disaster
recovery. An OS/390® system before Version 2 Release 10 does not support the
large block interface that is needed for blocks longer than 32 760.
v You want to copy the tape to a different type of tape or to DASD without
reblocking it, and the maximum block size for the destination is less than you
want. An example is the IBM 3480 Magnetic Tape Subsystem, whose maximum
block size is 65 535. The optimal block size for an IBM 3590 is 224 KB or 256 KB,
depending on the level of the hardware. To copy from an optimized 3590 to a
3480 or 3490, you must reblock the data.
v A program that reads or writes the data set and runs in 24-bit addressing mode
might not have enough buffer space for very large blocks.
DASD Data Sets: When you create (allocate space for) a new DASD data set, the
system derives the optimum block size and saves it in the data set label if all of
the following are true:
v Block size is not available or specified from any source. BLKSIZE=0 can be
specified.
v You specify LRECL or it is in the data class. The data set does not have to be
SMS managed.
v You specify RECFM or it is in the data class. It must be fixed or variable.
v You specify DSORG as PS or PO or you omit DSORG and it is PS or PO in the
data class.
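As an illustration, the following DD statement (the data set name is hypothetical) satisfies these conditions, so the system derives and saves the optimum block size at allocation:

```
//NEWDS    DD  DSNAME=HLQ.SDB.EXAMPLE,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(5,5)),
//             DSORG=PS,RECFM=FB,LRECL=80
```

Because BLKSIZE is omitted (equivalent to BLKSIZE=0) and RECFM, LRECL, and DSORG are supplied, the system calculates a block size that is optimal for the device.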
Your DCB OPEN exit can examine the calculated block size in the DCB or DCBE if
no source other than the system supplied the block size.
When a program opens a DASD data set for writing the first time since it was
created, OPEN derives the optimum block size again after calling the optional DCB
OPEN exit if all the following are true:
v Either of the following conditions is true:
– The block size in the DCB (or DCBE with LBI) is zero.
– The system determined the block size when the data set was created, and
RECFM or LRECL in the DCB is different from the data set label.
v LRECL is in the DCB.
v RECFM is in the DCB and it is fixed or variable.
v The access method is BSAM, BPAM, or QSAM.
For a compressed format data set, the system does not consider track length. The
access method simulates blocks whose length is independent of the real physical
block size. The system-determined block size is optimal in terms of I/O buffer size.
The system chooses a value for the BLKSIZE parameter as it would for an IBM
standard labeled tape as in Table 33 on page 333 and always limits it to 32 760.
This value is stored in the DCB or DCBE and DS1BLKL in the DSCB. However,
regardless of the block size found in the DCB and DSCB, the actual size of the
physical blocks written to DASD is calculated by the system to be optimal for the
device.
The system does not determine the block size for the following types of data sets:
v Unmovable data sets
v Data sets with a record format of U
v Existing data sets with DISP=OLD (data sets being opened with the INPUT,
OUTPUT, or UPDAT options on the OPEN macro)
v Direct data sets
v When extending data sets
Unmovable data sets cannot be system managed. There are exceptions, however, in
cases where the checkpoint/restart function has set the unmovable attribute for
data sets that are already system managed. This setting prevents data sets opened
previously by a checkpointed application from being moved until you no longer
want to perform a restart on that application.
Tape Data Sets: The system can determine the optimum block size for tape data
sets. The system sets the block size at OPEN on return from the DCB OPEN exit
and installation DCB OPEN exit if:
v The block size in DCBBLKSI is zero (or DCBEBLKSI if using LBI).
v The record length is not zero.
v The record format is fixed or variable.
v The tape data set is open for OUTPUT or OUTIN.
v The access method is BSAM or QSAM.
Rule: For programming languages, the program must specify that the file is
blocked to get the tape system-determined block size. For example, with COBOL,
the program should specify BLOCK CONTAINS 0 RECORDS.
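For example, this JCL sketch (the data set name is hypothetical) requests tape system-determined block size by coding BLKSIZE=0 with a fixed blocked record format:

```
//TAPEOUT  DD  DSNAME=HLQ.TAPE.DATA,UNIT=TAPE,DISP=(NEW,KEEP),
//             RECFM=FB,LRECL=80,BLKSIZE=0
```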
The system-determined block size depends on the record format of the tape data
set. Table 33 shows the block sizes that are set for tape data sets.
Table 33. Rules for Setting Block Sizes for Tape Data Sets or Compressed Format Data Sets
RECFM Block Size Set
F or FS LRECL
FB or FBS (Label type=AL   Highest possible multiple of LRECL that is
Version 3)                 ≤ 2048 if LRECL ≤ 2048; highest possible
                           multiple of LRECL that is ≤ 32 760 if
                           LRECL > 2048
FB or FBS (Label type=AL   Not tape or not LBI: highest possible
Version 4 or not AL)       multiple of LRECL that is ≤ 32 760
RECFM Allowances:
v RECFM=D is not allowed for SL tapes
v RECFM=V is not allowed for AL tapes
v When creating a direct data set, the DSORG in the DCB macro must specify PS
or PSU and the DD statement must specify DA or DAU.
v PS is for sequential and extended format DSNTYPE.
v PO is the data set organization for both PDSEs and PDSs. DSNTYPE is used to
distinguish between PDSEs and PDSs.
Rule: Do not specify nonzero key length when opening a PDSE or extended format
data set for output.
For buffered tape devices, the write validity check option delays the device end
interrupt until the data is physically on tape. When you use the write validity
check option, you get none of the performance benefits of buffering and the
average data transfer rate is much less.
Rule: OPTCD=W is ignored for PDSEs and for extended format data sets.
DD Statement Parameters
Each of the data set description fields of the DCB, except for direct data sets, can
be specified when your job is to be run. Also, data set identification and
disposition, and device characteristics, can be specified at that time. To allocate a
data set, you must specify the data set name and disposition in the DD statement.
In the DD statement, you can specify a data class, storage class, and management
class, and other JCL keywords. You can specify the classes using the JCL
keywords DATACLAS, STORCLAS, and MGMTCLAS. If you do not specify a data class,
storage class, or management class, the ACS routines assign classes based on the
defaults defined by your storage administrator. Storage class and management
class can be assigned only to data sets that are to be system managed.
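The following DD statement sketches how the classes might be specified explicitly; the class names shown are hypothetical and must exist in your installation's SMS configuration:

```
//SMSDS    DD  DSNAME=HLQ.SMS.DATA,DISP=(NEW,CATLG),
//             DATACLAS=DCLAS01,STORCLAS=SCLAS01,MGMTCLAS=MCLAS01
```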
ACS Routines. Your storage administrator uses the ACS routines to determine
which data sets are to be system managed. The valid classes that can either be
specified in your DD statement or assigned by the ACS routines are defined in the
SMS configuration by your storage administrator. The ACS routines analyze your
JCL, and if you specify a class that you are not authorized to use or a class that
does not exist, your allocation fails. For more information about specifying data
class, storage class, and management class in your DD statement see z/OS MVS
JCL User’s Guide.
Data Class. Data class can be specified for both system-managed and
non-system-managed data sets. It can be specified for both DASD and tape data
sets. You can use data class together with the JCL keyword LIKE for tape data sets.
This simplifies migration to and from system-managed storage. When you allocate
a data set, the ACS routines assign a data class to the data set, either the data class
you specify in your DD statement, or the data class defined as the default by your
storage administrator. The data set is allocated using the information contained in
the assigned data class. See your storage administrator for information on the data
classes available to your installation and z/OS DFSMSdfp Storage Administration
Reference for more information about allocating system-managed data sets and
using SMS classes.
You can override any of the information contained in a data class by specifying the
values you want in your DD statement or dynamic allocation. A data class can
contain any of the following information.
Related reading: For more information on the JCL keywords that override data
class information, see z/OS MVS JCL User’s Guide and z/OS MVS JCL Reference.
The easiest data set allocation is one that uses the data class, storage class, and
management class defaults defined by your storage administrator. The following
example shows how to allocate a system-managed data set:
//ddname DD DSNAME=NEW.PLI,DISP=(NEW,KEEP)
You cannot specify the keyword DSNTYPE with the keyword RECORG in the JCL
DD statement. They are mutually exclusive.
You should not attempt to change the data set characteristics of a system-managed
data set to characteristics that make it ineligible to be system managed. For
example, do not specify a data set organization in the DD statement as PS and,
after the data set has been allocated to a system-managed volume, change the DCB
to specify DSORG=PSU. That causes abnormal end of your program.
The DCBD macro generates a dummy control section (DSECT) named IHADCB.
Each field name symbol consists of DCB followed by the first 5 letters of the
keyword subparameter for the DCB macro. For example, the symbolic name of the
block size parameter field is DCBBLKSI. (For other DCB field names see z/OS
DFSMS Macro Instructions for Data Sets.)
The attributes of each DCB field are defined in the dummy control section. Use the
DCB macro’s assembler listing to determine the length attribute and the alignment
of each DCB field.
...
OPEN (TEXTDCB,INOUT),MODE=31
...
EOFEXIT CLOSE (TEXTDCB,REREAD),MODE=31,TYPE=T
LA 10,TEXTDCB
USING IHADCB,10
MVC DCBSYNAD+1(3),=AL3(OUTERROR)
B OUTPUT
INERROR STM 14,12,SYNADSA+12
...
OUTERROR STM 14,12,SYNADSA+12
...
TEXTDCB DCB DSORG=PS,MACRF=(R,W),DDNAME=TEXTTAPE, C
EODAD=EOFEXIT,SYNAD=INERROR
DCBD DSORG=PS
...
The data set defined by the data control block TEXTDCB is opened for both input
and output. When the application program no longer needs it for input, the
EODAD routine closes the data set temporarily to reposition the volume for
output. The EODAD routine then uses the dummy control section IHADCB to
change the error exit address (SYNAD) from INERROR to OUTERROR.
The EODAD routine loads the address TEXTDCB into register 10, the base register
for IHADCB. Then it moves the address OUTERROR into the DCBSYNAD field of
the DCB. Even though DCBSYNAD is a fullword field and contains important
information in the high-order byte, change only the 3 low-order bytes in the field.
All unused address fields in the DCB, except DCBEXLST, are set to 1 when the
DCB macro is expanded. Many system routines interpret a value of 1 in an address
field as meaning no address was specified, so you can store 1 to dynamically
reset any field you no longer need.
The IHADCBE macro generates a dummy control section (DSECT) named DCBE.
For the symbols generated see z/OS DFSMS Macro Instructions for Data Sets.
All address fields in the DCBE are 4 bytes. All undefined addresses are set to 0.
In Figure 56 the data sets associated with three DCBs are to be closed
simultaneously. Because no volume positioning parameters (LEAVE, REWIND) are
specified, the positioning indicated by the DD statement DISP parameter is used.
CLOSE (TEXTDCB,,CONVDCB,,PRINTDCB)
The TYPE=T parameter causes the system control program to process labels,
modify some of the fields in the system control blocks for that data set, and
reposition the volume (or current volume for multivolume data sets) in much the
same way that the normal CLOSE macro does. When you code TYPE=T, you can
specify that the volume is either to be positioned at the end of data (the LEAVE
option) or to be repositioned at the beginning of data (the REREAD option).
Magnetic tape volumes are repositioned either immediately before the first data
record or immediately after the last data record. The presence of tape labels has no
effect on repositioning.
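A minimal assembler sketch of a temporary close (the DCB name is a placeholder):

```
         CLOSE (INDCB,REREAD),TYPE=T   reposition to the start of
*                                      the data; INDCB remains open
```

After this CLOSE, the program can process the data set again from the beginning without reopening it.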
When a DCB is shared among multiple tasks, only the task that opened the data
set can close it unless TYPE=T is specified.
Figure 57, which assumes a sample data set containing 1000 blocks, shows the
relationship between each positioning option and the point where you resume
processing the data set after issuing the temporary close.
REREAD (with tape data set open for read backward): immediately after block 1000
Figure 57. Record Processed when LEAVE or REREAD is Specified for CLOSE TYPE=T
Releasing Space
The close function attempts to release unused tracks or cylinders for a data set if
all of the following are true:
v The SMS management class specifies YI or CI for the partial release attribute, or
you specified RLSE for the SPACE parameter in the DD statement or RELEASE
in the TSO ALLOCATE command.
v You did not specify TYPE=T on the CLOSE macro.
v The DCB was opened with the OUTPUT, OUTIN, OUTINX, INOUT or EXTEND
option and the last operation before CLOSE was WRITE (and CHECK), STOW
or PUT.
v No other DCB for this data set in the address space was open.
v No other address space in any system is allocated to the data set.
v The data set is sequential or partitioned.
v Certain functions of dynamic allocation are not currently executing in the
address space.
For a multivolume data set that is not in extended format, or is in extended format
with a stripe count of 1, CLOSE releases space only on the current volume.
Space is released on a track boundary if the extent containing the last record was
allocated in units of tracks or in units of average record or block lengths with
ROUND not specified. Space is released on a cylinder boundary if the extent
containing the last record was allocated in units of cylinders or in units of average
block lengths with ROUND specified. However, a cylinder boundary extent could
be released on a track boundary if:
v The DD statement used to access the data set contains a space parameter
specifying units of tracks or units of average block lengths with ROUND not
specified, or
v No space parameter is supplied in the DD statement and no secondary space
value has been saved in the data set label for the data set.
After the data set has been closed, the DCB can be used for another data set. If you
do not close the data set before a task completes, the operating system tries to
close it automatically. If the DCB is not available to the system at that time, the
operating system abnormally ends the task, and data results can be unpredictable.
The operating system, however, cannot automatically close any DCBs in dynamic
storage (outside your program) or after the normal end of a program that was
brought into virtual storage by the loader. Therefore, reentrant or loaded programs
must include CLOSE macros for all open data sets.
The short form parameter list must reside below 16 MB, but the calling program
can be above 16 MB. The long form parameter list can reside above or below 16
MB. VSAM and VTAM® access control blocks (ACBs) can reside above 16 MB.
Although you can code MODE=31 on the OPEN or CLOSE call for a DCB, the
DCB must reside below 16 MB. Therefore, the leading byte of the 4-byte DCB
address must contain zeros. If the byte contains something other than zeros, an
error message is issued. If an OPEN was attempted, the data set is not opened. If a
CLOSE was attempted, the data set is not closed. For both types of parameter lists,
the real address can be above the 2 GB bar. Therefore, you can code LOC=(xx,64)
on the GETMAIN or STORAGE macro.
You need to keep the mode that is specified in the MF=L and MF=E versions of
the OPEN macro consistent. The same is true for the CLOSE macro. If MODE=31 is
specified in the MF=L version of the OPEN or CLOSE macro, MODE=31 must also
be coded in the corresponding MF=E version of the macro. Unpredictable results
occur if the mode that is specified is not consistent.
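For example, a consistent list form and execute form pair might look like this sketch (the DCB and list names are placeholders):

```
         OPEN  (INDCB,(INPUT)),MODE=31,MF=(E,OPNLIST)
         ...
OPNLIST  OPEN  (,),MODE=31,MF=L        MODE=31 matches the MF=E form
```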
For data sets other than system managed with DSORG=PS or null, the program
will receive unpredictable results such as reading residual data from a prior user,
getting an I/O error, or getting an ABEND. Reading residual data can cause your
program to appear to run correctly, but you can get unexpected output from the
residual data. You can use one of the following methods to make the data set
appear null:
1. At allocation time, specify a primary allocation value of zero; such as
SPACE=(TRK,(0,10)) or SPACE=(CYL,(0,50)). This technique does not work with
a VIO data set because creation includes the secondary space amount.
2. After allocation time, put an end-of-file mark at the beginning of the data set,
by running a program that opens the data set for output and closes it without
writing anything.
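The first method can be sketched in JCL as follows (the data set name is hypothetical); the zero primary allocation makes the new data set appear null:

```
//NULLDS   DD  DSNAME=HLQ.EMPTY,DISP=(NEW,CATLG),UNIT=SYSDA,
//             SPACE=(TRK,(0,10)),RECFM=FB,LRECL=80
```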
After you delete your data set containing confidential data, you can be certain
another user cannot read your residual data if you use the erase feature described
in “Erasing DASD Data” on page 60.
Attention: If you specify multiple DD statements in the same job step for an
SMS-managed data set on DASD, and also specify DISP=MOD or
issue the OPEN macro with options EXTEND or OUTINX, a data
integrity exposure occurs when the data set is extended on additional
volumes. This new volume information is not available to the other
DD statements in the job step for the same data set. Therefore, the
data on the new volumes is overlaid if the data set is opened for
output processing using one of the other DD statements in the same
job step and the data set is extended.
Open/Close/EOV Errors
There are two classes of errors that can occur during open, close, and
end-of-volume processing: determinate and indeterminate errors. Determinate
errors are errors associated with an ABEND issued by OPEN, CLOSE, or EOV. For
example, a condition associated with the 213 completion code with a return code
of 04 might be detected during open processing, indicating that the data set label
could not be found for a data set being opened. In general, the OPEN, CLOSE and
other system functions attempt to react to errors with return codes and
determinate abends; however, in some cases, the result is indeterminate errors,
such as program checks. In such cases, you should examine the last action taken
by your program. Pay particular attention to bad addresses supplied by your
program or overlaid storage.
To determine the status of any DCB after an error, check the OPEN (or CLOSE)
return code in register 15 or test DCBOFOPN. See z/OS DFSMS Macro Instructions
for Data Sets.
During task termination, the system issues a CLOSE macro for each data set that is
still open. If the task terminates abnormally due to a determinate system ABEND
for an output QSAM data set on tape, the close routines that would normally
finish processing buffers are bypassed. Any outstanding I/O requests are purged.
Thus, your last data records might be lost for a QSAM output data set on tape.
However, if the data set resides on DASD, the close routines perform the buffer
flushing, which writes the last records to the data set. If you cancel the task, the
buffer is lost.
Installation Exits
Four installation exit routines are provided for abnormal end with ISO/ANSI
Version 3 or Version 4 tapes.
v The label validation exit is entered during OPEN/EOV if a nonvalid label
condition is detected and label validation has not been suppressed. Nonvalid
conditions include incorrect alphanumeric fields, nonstandard values (for
example, RECFM=U, block size greater than 2048, or a zero generation number),
nonvalid label sequence, nonsymmetrical labels, nonvalid expiration date
sequence, and duplicate data set names. However, Version 4 tapes allow block
size greater than 2048, nonvalid expiration date sequence, and duplicate data set
names.
v The validation suppression exit is entered during OPEN/EOV if volume security
checking has been suppressed, if the volume label accessibility field contains an
ASCII space character, or if RACF accepts a volume and the accessibility field
does not contain an uppercase A through Z.
v The volume access exit is entered during OPEN/EOV if a volume is not RACF
protected and the accessibility field in the volume label contains an ASCII
uppercase A through Z.
v The file access exit is entered after locating a requested data set if the
accessibility field in the HDR1 label contains an ASCII uppercase A through Z.
Positioning Volumes
Volume positioning is releasing the DASD or tape volume, or rotating the tape
volume so that the read/write head is at a particular point on the tape. The
following sections discuss the steps in volume positioning: releasing the volume,
processing end-of-volume, and positioning the volume.
There are two ways to code the CLOSE macro that can result in releasing a data
set and the volume on which it resides at the time the data set is closed:
1. For non-VSAM data sets, you can code the following with the FREE=CLOSE
parameter:
CLOSE (DCB1,DISP) or
CLOSE (DCB1,REWIND)
See z/OS MVS JCL Reference for information about using and coding the
FREE=CLOSE parameter of the DD statement.
2. If you do not code FREE=CLOSE on the DD statement, you can code:
CLOSE (DCB1,FREE)
In either case, tape data sets and volumes are freed for use by another job step.
Data sets on direct access storage devices are freed and the volumes on which they
reside are freed if no other data sets on the volume are open. For additional
information on volume disposition and coding restrictions on the CLOSE macro,
see z/OS MVS JCL User’s Guide.
If you issue a CLOSE macro with the TYPE=T parameter, the system does not
release the data set or volume. They can be released using a subsequent CLOSE
without TYPE=T or by the unallocation of the data set.
Processing End-of-Volume
The access methods pass control to the data management end-of-volume (EOV)
routine when another volume or concatenated data set is present and any of the
following conditions is detected:
v Tape mark (input tape volume).
v File mark or end of last extent (input direct access volume).
v End-of-data indicator (input device other than magnetic tape or direct access
volume). An example of this would be the last card read on a card reader.
v End of reel or cartridge (output tape volume).
v End of last allocated extent (output direct access volume).
If the LABEL parameter of the associated DD statement shows standard labels, the
EOV routine checks or creates standard trailer labels. If you specify SUL or AUL,
the system passes control to the appropriate user label routine if you specify it in
your exit list.
If your DD statement specifies a multivolume data set, the EOV routine
automatically switches the volumes. When an EOV condition exists on an output
data set, the system allocates additional space, as indicated in your DD statement.
If no more volumes are specified or if more than specified are required, the storage
is obtained from any available volume on a device of the same type. If no such
volume is available, the system issues an ABEND.
If you perform multiple opens and closes without writing any user data in the area
of the end-of-tape reflective marker, then header and trailer labels can be written
past the marker. Access methods detect the marker. Because the creation of empty
data sets does not involve access methods, the end-of-tape marker is not detected,
which can cause the tape to run off the end of the reel.
Exception: The system calls your optional DCB OPEN exit routine instead of your
optional EOV exit routine if all of the following are true:
v You are reading a concatenation.
v You read the end of a data set other than the last or issued an FEOV macro on
its last volume.
v You turned on the DCB “unlike” attributes bit. See “Concatenating Unlike Data
Sets” on page 396.
Recommendation: If EOV processing extends a data set on the same volume or a
new volume for DASD output, EXTEND issues an enqueue on SYSVTOC.
(SYSVTOC is the enqueue major name for the GRS resource.) If the system issues
the EOV request for a data set on a volume where the application already holds
the SYSVTOC enqueue, this request abnormally terminates. To prevent this
problem from occurring, perform either step:
v Allocate an output data set that is large enough not to require a secondary
extent on the volume.
v Place the output data set on a different volume than the one that holds the
SYSVTOC enqueue.
LEAVE—Positions a labeled tape to the point following the tape mark that follows
the data set trailer label group. Positions an unlabeled volume to the point
following the tape mark that follows the last block of the data set.
REREAD—Positions a labeled tape to the point preceding the data set header label
group. Positions an unlabeled tape to the point preceding the first block of the data
set.
If the tape was last read backward, LEAVE and REREAD have the following
effects.
LEAVE—Positions a labeled tape to the point preceding the data set header label
group, and positions an unlabeled tape to the point preceding the first block of the
data set.
REREAD—Positions a labeled tape to the point following the tape mark that
follows the data set trailer label group. Positions an unlabeled tape to the point
following the tape mark that follows the last block of the data set.
The resultant action when an end-of-volume condition arises depends on (1) how
many tape units are allocated to the data set, and (2) how many volumes are
specified for the data set in the DD statement. The UNIT and VOLUME
parameters of the DD statement associated with the data set determine the number
of tape units allocated and the number of volumes specified. If the number of
volumes is greater than the number of units allocated, the current volume will be
rewound and unloaded. If the number of volumes is less than or equal to the
number of units, the current volume is merely rewound.
For magnetic tape volumes that are not being unloaded, positioning varies
according to the direction of the last input operation and the existence of tape
labels. When a JCL disposition of PASS or RETAIN is specified, the result is the
same as the OPEN or CLOSE LEAVE option. The CLOSE disposition option takes
precedence over the OPEN option and the OPEN and CLOSE disposition options
take precedence over the JCL.
Forcing End-of-Volume
The FEOV macro directs the operating system to start the end-of-volume
processing before the physical end of the current volume is reached. If another
volume has been specified for the data set or a data set is concatenated after the
current data set, volume switching takes place automatically. The REWIND and
LEAVE volume positioning options are available.
If an FEOV macro is issued for a spanned multivolume data set that is being read
using QSAM, errors can occur when the next GET macro is issued. Make sure that
each volume begins with the first (or only) segment of a logical record. Input
routines cannot begin reading in the middle of a logical record.
The FEOV macro can only be used when you are using BSAM or QSAM. FEOV is
ignored if issued for a SYSOUT data set. If you issue FEOV for a spooled input
data set, control passes to your end-of-data (EODAD) routine or your program is
positioned to read the next data set in the concatenation.
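A minimal sketch of forcing end-of-volume in assembler (the DCB name is a placeholder); the second operand selects the positioning option:

```
         FEOV  OUTDCB,LEAVE            switch volumes now; position
*                                      past the tape mark
```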
You can assign more than one buffer to a data set by associating the buffer with a
buffer pool. A buffer pool must be constructed in a virtual storage area allocated
for a given number of buffers of a given length.
The number of buffers you assign to a data set should be a trade-off against the
frequency with which you refer to each buffer. A buffer that is not referred to for a
fairly long period could be paged out. If much of this were allowed, throughput
could decrease.
Using QSAM, buffer segments and buffers within the buffer pool are controlled
automatically by the system. However, you can notify the system that you are
finished processing the data in a buffer by issuing a release (RELSE) macro for
input, or a truncate (TRUNC) macro for output. This simple buffering technique
can be used to process a sequential data set. IBM recommends not using the
RELSE or QSAM TRUNC macros because they can cause your program to become
dependent on the size of each block.
When using QSAM to process tape blocks larger than 32 760 bytes, you must let
the system build the buffer pool automatically during OPEN. The macros
GETPOOL, BUILD, and BUILDRCD do not support the large block size or buffer
size. If, during a QSAM OPEN or a BSAM OPEN with a nonzero BUFNO, the
system finds that the DCB has a buffer pool and that the buffer length is smaller
than the data set block size, an ABEND 013 is issued.
For QSAM, IBM recommends that you let the system build the buffer pool
automatically during OPEN and omit the BUFL parameter. This simplifies your
program. It permits concatenation of data sets in any order of block size. If you
code RMODE31=BUFF on the DCBE macro, the system attempts to get buffers
above the line.
When you use BSAM or BPAM, OPEN builds a buffer pool only if you code a
nonzero value for BUFNO. OPEN issues ABEND 013-4C if BUFL is nonzero and is
less than BLKSIZE in the DCB or DCBE, depending on whether you are using LBI.
If the system builds the buffer pool for a BSAM user, the buffer pool resides below
the 16 MB line.
If you use the basic access methods, you can use buffers as work areas rather than
as intermediate storage areas. You can control the buffers in a buffer pool directly
by using the GETBUF and FREEBUF macros.
For BSAM, IBM recommends that you allocate data areas or buffers through
GETMAIN, STORAGE, or CPOOL macros and not through BUILD, GETPOOL, or
by the system during OPEN. Allocated areas can be above the line. Areas that you
allocate can be better integrated with your other areas.
Recommendation: For QSAM, use the automatic technique so that the system can
rebuild the pool automatically when using concatenated data sets.
For the basic access methods, these techniques cannot build buffers above the 16
MB line or build buffers longer than 32 760 bytes.
If QSAM is used, the buffers are automatically returned to the pool when the data
set is closed. If you did not use the BUILD macro and the buffer pool is not above
the 16 MB line due to RMODE31=BUFF on the DCBE macro, you should use the
FREEPOOL macro to return the virtual storage area to the system. If you code
RMODE31=BUFF on a DCBE macro, then FREEPOOL has no effect and is optional.
The system automatically frees the buffer pool.
The following applies to DASD, most tape devices, spooled, subsystem, and
dummy data sets, TSO/E terminals, and UNIX files. For both data areas and
buffers that have virtual addresses greater than 16 MB or less than 16 MB, the real
address can exceed 2 GB. In other words, the real addresses of buffers can have 64
bits. IBM recommends that when you obtain storage for buffers or data areas with
GETMAIN or STORAGE that you specify that the real addresses can be above the
2 GB bar. Therefore, you can code LOC=(xx,64). To get storage with real addresses
below the 2 GB bar, you can code LOC=(xx,ANY) or LOC=(xx,31). This coding has
no effect on your application program unless it deals with real storage addresses,
which is uncommon. For reel tape devices, the real addresses must be 24-bit.
Buffer alignment provides alignment for only the buffer. If records from ASCII
magnetic tape are read and the records use the block prefix, the boundary
alignment of logical records within the buffer depends on the length of the block
prefix. If the length is 4, logical records are on fullword boundaries. If the length is
8, logical records are on doubleword boundaries.
If you use the BUILD macro to construct the buffer pool, alignment depends on
the alignment of the first byte of the reserved storage area.
When you code RMODE31=BUFF for QSAM, the theoretical upper limit for the
size of the buffer pool is 2 GB. This imposes a limit on the buffer size, and thus on
block size, of 2 GB divided by the number of buffers. If the system is to build the
buffer pool, and the computed buffer pool size exceeds 2 GB, an ABEND 013 is
issued. In practice, you can expect maximum buffer pool size to be less than 2 GB
because of maximum device block sizes.
A BUILD macro, issued during execution of your program, uses the reserved
storage area to build a buffer pool. The address of the buffer pool must be the
same as that specified for the buffer pool control block (BUFCB) in your DCB. The
BUFCB parameter cannot refer to an area that resides above the 16 MB line. The
buffer pool control block is an 8-byte field that precedes the buffers in the buffer
pool. You must also specify the number (BUFNO) and length (BUFL) of the buffers.
The BUFL value must be at least the block size.
When the data set using the buffer pool is closed, you can reuse the area as
required. You can also reissue the BUILD macro to reconstruct the area into a new
buffer pool to be used by another data set.
You can assign the buffer pool to two or more data sets that require buffers of the
same length. To do this, you must construct an area large enough to accommodate
the total number of buffers required at any one time during execution. That is, if
each of two data sets requires 5 buffers (BUFNO=5), the BUILD macro should
specify 10 buffers. The area must also be large enough to contain the 8 byte buffer
pool control block.
You can issue the BUILD macro in 31-bit mode, but the buffer area cannot reside
above the line and be associated with a DCB. In any case, real addresses can point
above the 2 GB bar.
To access variable-length, spanned records as logical records, you must use QSAM
in locate mode, and the records must be format-VS/VBS or DS/DBS. If you issue
the BUILDRCD macro before the data set is opened, or during your DCB exit
routine, you automatically get logical records rather than segments of spanned
records.
Only one logical record storage area is built, no matter how many buffers are
specified; therefore, you cannot share the buffer pool with other data sets that
might be open at the same time.
You can issue the BUILDRCD macro in 31-bit mode, but the buffer area cannot
reside above the line and be associated with a DCB.
The GETPOOL macro causes the system to allocate a virtual storage area to a
buffer pool. The system builds a buffer pool control block and stores its address in
the data set’s DCB. If you choose to issue the GETPOOL macro, issue it either
before opening the data set or during your DCB’s OPEN exit routine.
When using GETPOOL with QSAM, specify a buffer length (BUFL) at least as large
as the block size or omit the BUFL parameter.
n—Extended format but not compressed format and not LBI (n = 2 × blocks per
track × number of stripes)
1—Compressed format data set, PDSE, SYSIN, SYSOUT, SUBSYS, UNIX files
5—All others
If you are using the basic access method to process a direct data set, you must
specify dynamic buffer control. Otherwise, the system does not construct the buffer
pool automatically.
If all of your GET, PUT, PUTX, RELSE, and TRUNC macros for a particular DCB
are issued in 31-bit mode, then you should consider supplying a DCBE macro with
RMODE31=BUFF.
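The coding pattern, as a hedged sketch (the data set and label names are illustrative), is to point the DCB at a DCBE that requests the buffer pool above the line:

```hlasm
* QSAM DCB whose buffer pool OPEN obtains above the 16 MB line.
INDCB    DCB   DSORG=PS,MACRF=GL,DDNAME=INPUT,DCBE=INDCBE
INDCBE   DCBE  RMODE31=BUFF,EODAD=ENDJOB
```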
A buffer pool that is obtained automatically is not freed automatically when you
issue a CLOSE macro, unless the system recognized your specification of
RMODE31=BUFF on the DCBE macro. Therefore, you should also issue a
FREEPOOL or FREEMAIN macro (see “Freeing a Buffer Pool”).
If the OPEN macro was issued while running in problem state, protect key of zero,
a buffer pool that was obtained by OPEN should be released by issuing the
FREEMAIN macro instead of the FREEPOOL macro. This is necessary because the
buffer pool acquired under these conditions will be in storage assigned to subpool
252 (in user key storage).
In Figure 58, a static storage area named INPOOL is allocated during program
assembly.
... Processing
BUILD INPOOL,10,52 Structure a buffer pool
OPEN (INDCB,,OUTDCB,(OUTPUT))
... Processing
ENDJOB CLOSE (INDCB,,OUTDCB)
... Processing
RETURN Return to system control
INDCB DCB BUFNO=5,BUFCB=INPOOL,EODAD=ENDJOB,---
OUTDCB DCB BUFNO=5,BUFCB=INPOOL,---
CNOP 0,8 Force boundary alignment
INPOOL DS CL528 Buffer pool
...
The BUILD macro, issued during execution, arranges the buffer pool into 10
buffers, each 52 bytes long. Five buffers are assigned to INDCB and five to
OUTDCB, as specified in the DCB macro for each. The two data sets share the
buffer pool because both specify INPOOL as the buffer pool control block. Notice
that an additional 8 bytes have been allocated for the buffer pool to contain the
buffer pool control block.
In Figure 59, two buffer pools are constructed explicitly by the GETPOOL macros.
...
GETPOOL INDCB,10,52 Construct a 10-buffer pool
GETPOOL OUTDCB,5,112 Construct a 5-buffer pool
OPEN (INDCB,,OUTDCB,(OUTPUT))
...
ENDJOB CLOSE (INDCB,,OUTDCB)
FREEPOOL INDCB Release buffer pools after all
* I/O is complete
FREEPOOL OUTDCB
...
RETURN Return to system control
INDCB DCB DSORG=PS,BFALN=F,LRECL=52,RECFM=F,EODAD=ENDJOB,---
OUTDCB DCB DSORG=IS,BFALN=D,LRECL=52,KEYLEN=10,BLKSIZE=104, C
... RKP=0,RECFM=FB,---
Ten input buffers are provided, each 52 bytes long, to contain one fixed-length
record. Five output buffers are provided, each 112 bytes long, to contain two
blocked records plus an 8-byte count field. Notice that both data sets are closed
before the buffer pools are released by the FREEPOOL macros. The same procedure
should be used if the buffer pools were constructed automatically by the OPEN
macro.
Controlling Buffers
You can use several techniques to control which buffers are used by your program.
The advantages of each depend to a great extent on the type of job you are doing.
The queued access methods permit simple buffering. The basic access methods
permit either direct or dynamic buffer control.
Move Mode. The system moves the record from a system input buffer to your
work area, or from your work area to an output buffer.
Data Mode (QSAM Format-V Spanned Records Only). Data mode works the
same as the move mode, except only the data portion of the record is moved.
Locate Mode. The system does not move the record. Instead, the access method
macro places the address of the next input or output buffer in register 1. For
QSAM format-V spanned records, if you have specified logical records by
specifying BFTEK=A or by issuing the BUILDRCD macro, the address returned in
register 1 points to a record area where the spanned record is assembled or
segmented.
PUT-Locate Mode. The PUT-locate routine uses the value in the DCBLRECL field
to determine whether another record will fit into your buffer. Therefore, when you write a
short record, you can get the largest number of records per block by modifying the
DCBLRECL field before you issue a PUT-locate to get a buffer segment for the
short record. Perform the following steps:
1. Store the length of the next (short) record in DCBLRECL.
2. Issue PUT-locate.
3. Move the short record into the buffer segment.
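The steps above can be sketched as follows (register usage and the SHORTLEN and SHORTREC names are illustrative; DCBD DSORG=PS is assumed to have been issued to map IHADCB):

```hlasm
* R5 -> output DCB, opened for PUT in locate mode.
         USING IHADCB,R5
         LH    R1,SHORTLEN        Length of the short record
         STH   R1,DCBLRECL        Step 1: store the length in DCBLRECL
         PUT   (R5)               Step 2: locate a buffer segment (addr in R1)
         LH    R3,SHORTLEN
         BCTR  R3,0               Reduce to machine length for EX
         EX    R3,MOVEREC         Step 3: move the record into the segment
         ...
MOVEREC  MVC   0(0,R1),SHORTREC   Executed move into the buffer segment
```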
GET-Locate Mode. Two processing modes of the PUTX macro can be used with a
GET-locate macro. The update mode returns an updated record to the data set
from which it was read. The output mode transfers an updated record to an output
data set. There is no actual movement of data in virtual storage. See z/OS DFSMS
Macro Instructions for Data Sets for information about the processing mode specified
by the parameter of the PUTX macro.
QSAM in an Application
The term simple buffering refers to the relationship of segments within the buffer.
All segments in a simple buffer are together in storage and are always associated
with the same data set. Each record must be physically moved from an input
buffer segment to an output buffer segment. The record can be processed within
either segment or in a work area.
If you use simple buffering, records of any format can be processed. New records
can be inserted and old records deleted as required to create a new data set. The
following examples of using QSAM use buffers that could have been constructed
in any way previously described.
GET-locate, PUT-move. The record can be processed in the input buffer and then
moved to an output buffer.
GET returns a record address in register 1. This address remains valid until the
next GET or CLOSE for the DCB. Your program passes the address to the PUT
macro in register 0. PUT copies the record synchronously.
GET-move, PUT-locate. The PUT macro locates the address of the next available
output buffer. PUT returns its address in register 1 and your program passes it to
the GET macro in register 0.
On the GET macro you specify the address of the output buffer into which the
system moves the next input record.
A filled output buffer is not written until the next PUT macro is issued. PUT
returns a buffer address before GET moves a record. This means that when GET
branches to the end-of-data routine because all data has been read, the output
buffer still needs a record. Your program should replace the unpredictable output
buffer content with another record, which you might set to blanks or zeros. The
next PUT or CLOSE macro writes the record.
GET-move, PUT-move. The GET macro (step A, Figure 61) specifies the address of
the work area into which the system moves the next record from the input buffer.
Figure 61. Moving a Record from the Input Buffer to the Output Buffer through a Work Area
The PUT macro (step B, Figure 61) specifies the address of the work area from
which the system moves the record into the next output buffer.
GET-locate, PUT-locate. The GET macro (step A, Figure 62) locates the address of
the next available input buffer. GET returns the address in register 1.
Figure 62. Locating a Record in the Input Buffer and Moving It to the Output Buffer
The PUT macro (step B, Figure 62) locates the address of the next available output
buffer. PUT returns its address in register 1. You must then move the record from
the input buffer to the output buffer (step C, Figure 62). Your program can process
each record either before or after the move operation.
A filled output buffer is not written until the next PUT, TRUNC or CLOSE macro
is issued.
Be careful not to issue an extra PUT before issuing CLOSE or FEOV. Otherwise,
when the CLOSE or FEOV macro tries to write your last record, the extra PUT will
write a meaningless record or produce a sequence error.
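The GET-locate, PUT-locate copy loop can be sketched as follows (labels are illustrative, fixed-length records of length RECLEN are assumed, and error handling is omitted); your program can process the record through R2 before the move:

```hlasm
LOOP     GET   INDCB              Locate next input record; addr in R1
         LR    R2,R1              Save input record address
         PUT   OUTDCB             Locate next output buffer; addr in R1
         MVC   0(RECLEN,R1),0(R2) Move record from input to output buffer
         B     LOOP               Next record; EODAD exit ends the loop
```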
UPDAT mode. When a data set is opened with UPDAT specified (Figure 63), only
GET-locate and PUTX-update are supported.
Figure 63. Updating a Record in Place with GET-locate and PUTX (no movement of data takes place)
The GET macro locates the next input record to be processed and returns its
address in register 1. You can update the record and issue a PUTX macro that will
cause the block to be written back in its original location in the data set after all
the logical records in that block have been processed.
If you modify the contents of a buffer but do not issue a PUTX macro for that
record, the system can still write the modified record block to the data set. This
happens with blocked records when you issue a PUTX macro for one or more
other records in the buffer.
Exchange Buffering
Exchange buffering is no longer supported. A request for it is ignored by the
system, and move mode is used instead.
When you are using locate mode, the record address returned from the most recent
GET macro remains valid until you issue the next GET. Issuing a RELSE macro
does not change the effect of a previous PUTX macro.
If the locate mode is being used, the system assumes a record has been placed in
the buffer segment pointed to by the last PUT macro.
The last block of a data set is truncated by the CLOSE routine. A data set that
contains format-F records with truncated blocks generally cannot be read as
efficiently as a standard format-F data set.
A TRUNC macro issued against a PDSE does not create a short block because the
block boundaries are not saved on output. On input, the system uses the block size
specified in the DCB or DCBE for reading the PDSE. Logical records are packed
into the user buffer without respect to the block size specified when the PDSE
member was created.
To help the storage administrator find programs that issue a QSAM TRUNC macro
for PDSEs, the SMF type 15 record (see z/OS MVS System Management Facilities
(SMF)) contains an indicator that the program issued the macro.
Recommendation: Avoid using the QSAM TRUNC macro. Many data set copying
and backup programs reblock the records, which means they do not preserve the
block boundaries that your program might have set.
Topic Location
Accessing Data with READ and WRITE 359
Accessing Data with GET and PUT 364
Analyzing I/O Errors 368
The READ and WRITE macros process blocks, not records. Thus, you must block
and unblock records. Buffers, allocated by either you or the operating system, are
filled or emptied individually each time a READ or WRITE macro is issued. The
READ and WRITE macros only start I/O operations. To ensure the operation is
completed successfully, you must issue a CHECK, WAIT, or EVENTS macro to test
the data event control block (DECB). The only exception is that, when the SYNAD
or EODAD routine is entered, do not issue a CHECK, WAIT, or EVENTS macro for
outstanding READ or WRITE requests.
The DECB is examined by the CHECK routine when the I/O operation is
completed to determine if an uncorrectable error or exceptional condition exists. If
it does, CHECK passes control to your SYNAD routine. If you have no SYNAD
routine, the task is abnormally ended.
Rule: DCBs and DECBs must reside below the 16 MB line, but their central storage
(real) addresses can be above the 2 GB bar.
request. For example, if you specify NCP=3 in your DCB for the data set and you
are reading records from the data set, you can code the following macros in your
program:
...
READ DECB1,...
READ DECB2,...
READ DECB3,...
CHECK DECB1
CHECK DECB2
CHECK DECB3
...
Figure 83 on page 433 shows this technique, except for the FIND macro and
DSORG=PO in the DCB macro. To process a sequential data set, code DSORG=PS.
You can easily adapt this technique to use WRITE or READ.
Reading a Block
The READ macro retrieves a data block from an input data set and places it in a
designated area of virtual storage. To permit overlap of the input operation with
processing, the system returns control to your program before the read operation is
completed. You must test the DECB created for the read operation for successful
completion before the block is processed or the DECB is reused.
When you use the READ macro for BSAM to read a direct data set with spanned
records and keys, and you specify BFTEK=R in your DCB, the data management
routines displace record segments after the first in a record by key length. This is
called offset reading. With offset reading, you can expect the block descriptor word
and the segment descriptor word at the same locations in your buffer or buffers,
whether you read the first segment of a record (which is preceded in the buffer by
its key) or a subsequent segment (which does not have a key).
You can specify variations of the READ macro according to the organization of the
data set being processed and the type of processing to be done by the system as
follows.
Direct
D Use the direct access method.
I Locate the block using a block identification.
K Locate the block using a key.
F Provide device position feedback.
X Maintain exclusive control of the block.
R Provide next address feedback.
U Next address can be a capacity record or logical record, whichever
occurred first.
Writing a Block
The WRITE macro places a data block in an output data set from a designated area
of virtual storage. The WRITE macro can also be used to return an updated data
block to a data set. To permit overlap of output operations with processing, the
system returns control to your program before the write operation is completed.
You must test the DECB that is created for the write operation for successful
completion before you reuse the DECB. For ASCII tape data sets, do not issue
more than one WRITE on the same block, because the WRITE macro causes the
data in the record area to be converted from EBCDIC to ASCII or, if CCSIDs are
specified for ISO/ANSI V4 tapes, from the CCSID specified for the application
program to the CCSID of the data records on tape.
As with the READ macro, you can specify variations of the WRITE macro
according to the organization of the data set and type of processing to be done by
the system as follows.
Sequential
SF Write the data set sequentially.
Direct
SD Write a dummy fixed-length record. (BDAM load mode)
SZ Write a capacity record (R0). The system supplies the data, writes the
capacity record, and advances to the next track. (BDAM load mode)
SFR Write the data set sequentially with next-address feedback. (BDAM load
mode, variable spanned)
D Use the direct access method.
I Search argument identifies a block.
K Search argument is a key.
A Add a new block.
F Provide record location data (feedback).
X Release exclusive control.
The check routine passes control to the appropriate exit routine specified in the
DCB or DCBE for error analysis (SYNAD) or, for sequential data sets or PDSs,
end-of-data (EODAD). It also automatically starts the end-of-volume procedures
(volume switching or extending output data sets).
If you specify OPTCD=Q in the DCB, CHECK causes input data to be converted
from ASCII to EBCDIC or, if CCSIDs are specified for ISO/ANSI V4 tapes, from
the CCSID of the data records on tape to the CCSID specified for the application
program.
If the system calls your SYNAD or EODAD routine, then all other I/O requests for
that DCB have been terminated, although they have not necessarily been posted.
There is no need to test them for completion or issue CHECK for them.
If you use overlapped BSAM or BPAM READ or WRITE macros, your program
can run faster if you use the MULTACC parameter on the DCBE macro. If you do
that and use WAIT or EVENTS for the DCB, then you must also use the TRUNC
macro. See TRUNC information in “Ensuring I/O Initiation with the TRUNC
Macro” on page 362 and “DASD and Tape Performance” on page 403.
For BDAM, a WAIT macro must be issued for each READ or WRITE macro if
MACRF=C is not coded in the associated DCB. When MACRF=C is coded, a
CHECK macro must be issued for each READ or WRITE macro. Because the
CHECK macro incorporates the function of the WAIT macro, a WAIT is normally
unnecessary. The EVENTS macro or the ECBLIST form of the WAIT macro can be
useful, though, in selecting which of several outstanding events should be checked
first. Each operation must then be checked or tested separately.
Most programs do not care how much of the data set resides on each volume, and
if there is a failure, the program does not need to know what the failure was. A
person diagnosing the problem is more likely to want to know the cause of the
failure.
In some cases you might want to take special actions before some of the system’s
normal processing of an exceptional condition. One such exceptional condition is
reading a tape mark and another such exceptional condition is writing at the end
of the tape.
With BSAM, your program can detect when it has reached the end of a magnetic
tape and do some processing before BSAM’s normal processing to go to another
volume. To do that, do the following:
1. Instead of issuing the CHECK macro, issue the WAIT or EVENTS macro. Use
the ECB, which is the first word in the DECB. The first byte is called the post
code. As a minor performance enhancement, you can skip all three macros if
the second bit of the post code already is 1.
2. Inspect the post code. Do one of the following:
a. Post code is X'7F': The READ or WRITE is successful. If you are reading
and either the tape label type is AL or OPTCD=Q is in effect, then you must
issue the CHECK macro to convert between ASCII and EBCDIC. Otherwise,
the CHECK is optional and you can continue normal processing as if your
program had issued the CHECK macro.
b. Post code is not X'7F': You cannot issue another READ or WRITE
successfully unless you take one of the following actions. All later READs
or WRITEs that you issued for the DCB have post codes that you cannot
predict, but they are guaranteed not to have started. If your only reason to
issue WAIT or EVENTS is to wait for multiple events, then issue CHECK to
Because the operating system controls buffer processing, you can use as many I/O
buffers as needed without reissuing GET or PUT macros to fill or empty buffers.
Usually, more than one input block is in storage at a time, so I/O operations do
not delay record processing.
Because the operating system overlaps I/O with processing, you need not test for
completion, errors, or exceptional conditions. After a GET or PUT macro is issued,
control is not returned to your program until an input area is filled or an output
area is available. Exits to error analysis (SYNAD) and end-of-volume or
end-of-data (EODAD) routines are automatically taken when necessary.
GET—Retrieve a Record
The GET macro obtains a record from an input data set. It operates in a
logical-sequential and device-independent manner. The GET macro schedules the
filling of input buffers, unblocks records, and directs input error recovery
procedures. For spanned-record data sets, it also merges record segments into
logical records.
After all records have been processed and the GET macro detects an end-of-data
indication, the system automatically checks labels on sequential data sets and
passes control to your end-of-data exit (EODAD) routine. If an end-of-volume
condition is detected for a sequential data set, the system automatically switches
volumes if the data set extends across several volumes, or if concatenated data sets
are being processed.
specified for the application program. This parameter is supported only for a
magnetic tape that does not have IBM standard labels.
PUT—Write a Record
The PUT macro writes a record into an output data set. Like the GET macro, it
operates in a logical-sequential and device-independent manner. As required, the
PUT macro blocks records, schedules the emptying of output buffers, and handles
output error correction procedures. For sequential data sets, it also starts automatic
volume switching and label creation, and also segments records for spanning.
If the PUT macro is directed to a card punch or printer, the system automatically
adjusts the number of records or record segments per block of format-F or
format-V blocks to 1. Thus, you can specify a record length (LRECL) and block size
(BLKSIZE) to provide an optimum block size if the records are temporarily placed
on magnetic tape or a direct access volume.
For spanned variable-length records, the block size must be equivalent to the
length of one card or one print line. Record size might be greater than block size in
this case.
When you use the PUTX macro to update, each record is returned to the data set
referred to by a previous locate mode GET macro. The buffer containing the
updated record is flagged and written back to the same location on the direct
access storage device where it was read. The block is not written until a GET
macro is issued for the next buffer, except when a spanned record is to be updated.
In that case, the block is written with the next GET macro.
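The update flow can be sketched as follows (the UPDCB name is illustrative; the data set is assumed to be open for UPDAT and processed with GET-locate):

```hlasm
NEXTREC  GET   UPDCB              Locate next record; address returned in R1
*        ...update the record in place through R1...
         PUTX  UPDCB              Flag the record so the block is rewritten
         B     NEXTREC            Block is written as processing moves on
```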
When you use the PUTX macro to write a new output data set, you can add new
records by using the PUT macro. As required, the PUTX macro blocks records,
schedules the writing of output buffers, and handles output error correction
procedures.
Parallel input processing provides a logical input record from a queue of data sets
with equal priority. The function supports QSAM with input processing, simple
buffering, locate or move mode, and fixed-, variable-, or undefined-length records.
Spanned records, track-overflow records, dummy data sets, and SYSIN data sets
are not supported.
Parallel input processing can be interrupted at any time to retrieve records from a
specific data set, or to issue control instructions to a specific data set. When the
retrieval process has been completed, parallel input processing can be resumed.
Data sets can be added to or deleted from the data set queue at any time. You
should note, however, that, as each data set reaches an end-of-data condition, the
data set must be removed from the queue with the CLOSE macro before a
subsequent GET macro is issued for the queue. Otherwise, the task could be ended
abnormally.
Use the PDAB macro to create and format a work area that identifies the
maximum number of DCBs that can be processed at any one time. If you exceed
the maximum number of entries specified in the PDAB macro when adding a DCB
to the queue with the OPEN macro, the data set will not be available for parallel
input processing. However, it will be available for sequential processing.
When issuing a parallel GET macro, register 1 must always point to a PDAB. You
can load the register or let the GET macro do it for you. When control is returned
to you, register 1 contains the address of a logical record from one of the data sets
in the queue. Registers 2 - 13 contain their original contents at the time the GET
macro was issued. Registers 14, 15, and 0 are changed.
Through the PDAB, you can find the data set from which the record was retrieved.
A fullword address in the PDAB (PDADCBEP) points to the address of the DCB.
Note that this pointer can be invalid from the time a CLOSE macro is issued until
the next parallel GET macro is issued.
In Figure 64 on page 367, not more than three data sets (MAXDCB=3 in the PDAB
macro) are open for parallel processing at a time.
...
OPEN (DATASET1,(INPUT),DATASET2,(INPUT),DATASET3, X
(INPUT),DATASET4,(OUTPUT))
TM DATASET1+DCBQSWS-IHADCB,DCBPOPEN Opened for
* parallel processing
BZ SEQRTN Branch on no to
* sequential routine
TM DATASET2+DCBQSWS-IHADCB,DCBPOPEN
BZ SEQRTN
TM DATASET3+DCBQSWS-IHADCB,DCBPOPEN
BZ SEQRTN
GETRTN GET DCBQUEUE,TYPE=P
LR 10,1 Save record pointer
...
... Record updated in place
...
PUT DATASET4,(10)
B GETRTN
EODRTN L 2,DCBQUEUE+PDADCBEP-IHAPDAB
L 2,0(0,2)
CLOSE ((2))
CLC ZEROS(2),DCBQUEUE+PDANODCB-IHAPDAB Any DCBs left?
BL GETRTN Branch if yes
...
DATASET1 DCB DDNAME=DDNAME1,DSORG=PS,MACRF=GL,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET2 DCB DDNAME=DDNAME2,DSORG=PS,MACRF=GL,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET3 DCB DDNAME=DDNAME3,DSORG=PS,MACRF=GL,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET4 DCB DDNAME=DDNAME4,DSORG=PS,MACRF=PM,RECFM=FB, X
LRECL=80
DCBQUEUE PDAB MAXDCB=3
SET3XLST DC 0F'0',AL1(EXLLASTE+EXLPDAB),AL3(DCBQUEUE)
ZEROS DC X'0000'
DCBD DSORG=QS
PDABD
IHAEXLST , DCB exit list mapping
...
The number of bytes required for the PDAB is 24 + 8n, where n is the value of the
MAXDCB keyword. For example, MAXDCB=3 requires 24 + (8 × 3) = 48 bytes.
If data definition statements and data sets are supplied, DATASET1, DATASET2,
and DATASET3 are opened for parallel input processing as specified in the input
processing OPEN macro. Other attributes of each data set are QSAM (MACRF=G),
simple buffering by default, locate or move mode (MACRF=L or M), fixed-length
records (RECFM=F), and exit list entry for a PDAB (X'92'). Note that both locate
and move modes can be used in the same data set queue. The mapping macros,
DCBD and PDABD, are used to refer to the DCBs and the PDAB respectively.
In Figure 64 when one or more data sets are opened for parallel processing, the
GET routine retrieves a record, saves the pointer in register 10, processes the
record, and writes it to DATASET4. This process continues until an end-of-data
condition is detected on one of the input data sets. The end-of-data routine locates
the completed input data set and removes it from the queue with the CLOSE
macro. A test is then made to determine whether any data sets remain on the
queue. Processing continues in this manner until the queue is empty.
The SYNADAF message can come in two parts, with each message being an
unblocked variable-length record. If the data set being analyzed is not a PDSE,
extended format data set, or UNIX file, only the first message is filled in. If the
data set is a PDSE, extended format data set, or UNIX file, both messages are filled
in. An 'S' in the last byte of the first message means a second message exists. The
second message is located 8 bytes past the end of the first message.
The text of the first message is 120 characters long, and begins with a field of 36,
42, or 20 blanks. You can use the blank field to add your own remarks to the
message. The text of the second message is 128 characters long and ends with a
field of 76 blanks that are reserved for later use. This second message begins in the
fifth byte in the message buffer.
Example: A typical message for a tape data set with the blank field omitted
follows:
,TESTJOBb,STEP2bbb,283,TA,MASTERbb,READb,DATA CHECKbbbbb,0000015,BSAMb
That message shows that a data check occurred during reading of the 15th block of
a data set being processed with BSAM. The data set was identified by a DD
statement named MASTER, and was on a magnetic tape volume on unit 283. The
name of the job was TESTJOB; the name of the job step was STEP2.
Example: Two typical messages for a PDSE with the blank fields omitted follow:
,PDSEJOBb,STEP2bbb,0283,D,PDSEDDbb,READb,DATA CHECKbbbbb,
00000000100002,BSAMS
That message shows that a data check occurred during reading of a block referred
to by a BBCCHHR of X'00000000100002' of a PDSE being processed by BSAM. The
data set was identified by a DD statement named PDSEDD, and was on a DASD
on unit 283. The name of the job was PDSEJOB. The name of the job step was
STEP2. The 'S' following the access method 'BSAM' means that a second message
has been filled in. The second message identifies the record in which the error
occurred. The concatenation number of the data set is 3 (the third data set in a
concatenation), the TTR of the member is X'000005', and the relative record number
is 2. The SMS return and reason codes are zero, meaning that no error occurred in
SMS.
If the error analysis routine is entered because of an input error, the first 6 or 16
bytes of the first message (at offset 8) contain binary information. If no data was
transmitted, these first bytes are blanks. If the error did not prevent data
transmission, these first bytes contain the address of the input buffer and the
number of bytes read. You can use this information to process records from the
block. For example, you can print each record after printing the error message.
Before printing the message, however, you should replace the binary information
with EBCDIC characters.
The SYNADAF macro provides its own save area and makes this area available to
your error analysis routine. When used at the entry point of a SYNAD routine, it
fulfills the routine’s responsibility for providing a save area. See z/OS DFSMS
Macro Instructions for Data Sets for more information on the SYNADAF macro.
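As a minimal sketch (assuming BSAM and illustrative labels), a SYNAD routine might use SYNADAF and SYNADRLS as follows:

```hlasm
SYNAD    SYNADAF ACSMETH=BSAM     Build the message; R1 -> message buffer
*        The message text starts at 8(R1); replace any binary fields
*        with EBCDIC characters before printing it.
*        ...write the 120-byte message at 8(R1) to a log data set...
         SYNADRLS                 Release the SYNADAF save area and buffer
         BR    R14                Return to the access method
```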
Topic Location
PDSEs 379
Direct Data Sets (BDAM) 380
Factors to Consider When Opening and Closing Data Sets 381
Control of Checkpoint Data Sets on Shared DASD Volumes 381
System Use of Search Direct for Input Operations 383
Enhanced Data Integrity for Shared Sequential Data Sets 374
There are two conditions under which a data set on a direct access device can be
shared by two or more tasks:
v Two or more DCBs are opened and used concurrently by the tasks to refer to the
same, shared data set (multiple DCBs).
v Only one DCB is opened and used concurrently by multiple tasks in a single job
step (a single, shared DCB).
Except for PDSEs, the system does not protect data integrity when multiple DCBs
are open for output and the DCBs access a data set within the same job step. The
system ensures that only one program in the sysplex can open a PDS with the
OUTPUT option, even if you specify DISP=SHR. If a second program issues OPEN
with the OUTPUT option, for the PDS with DISP=SHR, while a DCB is still open
with the OUTPUT option, the second program gets a 213-30 ABEND. This does not
apply to two programs in one address space with DISP=OLD or MOD, which
would cause overlaid data. This 213-30 enforcement mechanism does not apply
when you issue OPEN with the UPDAT option. Therefore programs that issue
OPEN with UPDAT and DISP=SHR can corrupt the PDS directory. Use DISP=OLD
to avoid the possibility of an abend during the processing of a PDS for output or
of corrupting the directory when it is open for update. If a program writes in a
PDS while protected with DISP=NEW, DISP=OLD, or DISP=MOD, a program
reading from outside of the GRS complex might see unpredictable results such as
members that are temporarily missing or overlaid.
The DCBE must not be shared by multiple DCBs that are open. After the DCB is
successfully closed, you may open a different DCB pointing to the same DCBE.
The operating system provides job control language (JCL) statements and macros
that help you ensure the integrity of the data sets you want to share among the
tasks that process them. Figure 65 and Figure 66 on page 373 show which JCL and
macros you should use, depending on the access method your task is using and
the mode of access (input, output, or update). Figure 65 describes the processing
procedures you should use if more than one DCB has been opened to the shared
data set. The DCBs can be used by tasks in the same or different job steps.
The purpose of the RLSE value for the SPACE keyword in the DD statement is to
cause CLOSE to free unused space when the data set is closed. The system
does not perform this function if the DD has DISP=SHR or if more than one DCB
is open to the data set.
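As an illustration (the data set name and space quantities are hypothetical), a DD statement that lets CLOSE release unused tracks might look like this sketch:

```
//NEWDS    DD  DSN=D42.REPORT.DATA,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(100,10),RLSE)
```

Because this DD does not specify DISP=SHR, CLOSE can free the unused space, provided only one DCB is open to the data set.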
Figure 65. JCL, Macros, and Procedures Required to Share a Data Set Using Multiple DCBs
DISP=SHR. Each job step sharing an existing data set must code SHR as the
subparameter of the DISP parameter on the DD statement for the shared data set
to let the steps run concurrently. For more information about ensuring data set
integrity see z/OS MVS JCL User’s Guide.
Related reading: For more information about sharing PDSEs see “Sharing PDSEs”
on page 470. If the tasks are in the same job step, DISP=SHR is not required. For
more information about detecting sharing violations with sequential data sets, see
“Enhanced Data Integrity for Shared Sequential Data Sets” on page 374.
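As a sketch (the data set name is hypothetical), each job that shares the data set codes SHR on its DD statement:

```
//SHARED   DD  DSN=D42.SHARED.DATA,DISP=SHR
```

With DISP=OLD, NEW, or MOD the system instead requests exclusive use of the data set name, so jobs coding those dispositions do not run concurrently with the sharing jobs.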
No facility. There are no facilities in the operating system for sharing a data set
under these conditions.
ENQ on data set. Besides coding DISP=SHR on the DD statement for the data set
that is to be shared, each task must issue ENQ and DEQ macros naming the data
set or block as the resource for which exclusive control is required. The ENQ must
be issued before the GET (READ); the DEQ macro should be issued after the PUTX
or CHECK macro that ends the operation.
Related reading: For more information about using the ENQ and DEQ macros see
z/OS MVS Programming: Assembler Services Reference ABE-HSP.
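The convention above can be sketched in assembler as follows. The resource names, DCB, and DDNAME are hypothetical; every sharing task must agree on the same qname/rname pair, and the DCB is assumed to be opened with the UPDAT option for the GET/PUTX update sequence:

```
*        Gain exclusive control before the GET; release it after
*        the PUTX that completes the update of the record.
         ENQ   (MAJOR,MINOR,E,8,SYSTEM)    Exclusive, all systems
         GET   UPDCB                       Locate-mode GET
*        ...update the record in the input buffer...
         PUTX  UPDCB                       Return the updated record
         DEQ   (MAJOR,MINOR,8,SYSTEM)      Release exclusive control
         ...
MAJOR    DC    CL8'APPLQ   '               Qname agreed on by all tasks
MINOR    DC    CL8'DSUPDATE'               Rname agreed on by all tasks
UPDCB    DCB   DDNAME=UPDD,DSORG=PS,MACRF=(GL,PL)
```

If the resource being serialized is a block rather than the whole data set, the rname would identify the block instead.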
Guarantee discrete blocks. When you are using the access method that provides
blocking and unblocking of records (QSAM), it is necessary that every task
updating the data set ensure that it is not updating a block that contains a record
being updated by any other task. There are no facilities in the operating system for
ensuring that discrete blocks are being processed by different tasks.
ENQ on block. If you are updating a shared data set (specified by coding
DISP=SHR on the DD statement) using BSAM or BPAM, your task and all other
tasks must serialize processing of each block of records by issuing an ENQ macro
before the READ macro and a DEQ macro after the CHECK macro that follows the
WRITE macro you issued to update the record. If you are using BDAM, it provides
for enqueuing on a block using the READ exclusive option that is requested by
coding MACRF=X in the DCB and an X in the type operand of the READ and
WRITE macros. For an example of the use of the BDAM macros see “Exclusive
Control for Updating” on page 574.
Figure 66 describes the macros you can use to serialize processing of a shared data
set when a single DCB is being shared by several tasks in a job step.
Figure 66. Macros and Procedures Required to Share a Data Set Using a Single DCB
ENQ. When a data set is being shared by two or more tasks in the same job step
(all that use the same DCB), each task processing the data set must issue an ENQ
macro on a predefined resource name before issuing the macro or macros that
begin the I/O operation. Each task must also release exclusive control by issuing
the DEQ macro at the next sequential instruction following the I/O operation.
Note also that if two tasks are writing different members of a PDS, each task
should issue the ENQ macro before the FIND macro and issue the DEQ macro
after the STOW macro that completes processing of the member. See z/OS MVS
Programming: Assembler Services Reference ABE-HSP for more information about the
ENQ and DEQ macros.
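For the PDS case just described, the sequence might be sketched as follows (the resource names, DCB, and member name are hypothetical; STEP scope is used because the tasks share one DCB in a single job step):

```
         ENQ   (PDSQNAM,PDSRNAM,E,8,STEP)  Serialize the member update
         FIND  PDSDCB,MEMNAME,D            Position to the member
*        ...WRITE and CHECK the member's records...
         STOW  PDSDCB,STOWL,R              Replace the directory entry
         DEQ   (PDSQNAM,PDSRNAM,8,STEP)    Release control
         ...
PDSQNAM  DC    CL8'APPLQ   '               Qname agreed on by all tasks
PDSRNAM  DC    CL8'PDSMEMBR'               Rname agreed on by all tasks
MEMNAME  DC    CL8'MEMBER1 '               Member being rewritten
STOWL    DC    CL8'MEMBER1 ',XL3'000000',XL1'00'  Name, TTR, C byte
```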
ENQ on block. When updating a shared direct data set, every task must use the
BDAM exclusive control option that is requested by coding MACRF=X in the DCB
macro and an X in the type operand of the READ and WRITE macros. See
“Exclusive Control for Updating” on page 574 for an example of the use of BDAM
macros. Note that all tasks sharing a data set must share subpool 0. See the
ATTACH macro description in z/OS MVS Programming: Assembler Services Reference
ABE-HSP.
Data sets can also be shared both ways at the same time. More than one DCB can
be opened for a shared data set, while more than one task can be sharing one of
the DCBs. Under this condition, the serialization techniques specified for direct
data sets in Figure 65 on page 372 satisfy the requirement. For sequential
data sets and PDSs, the techniques specified in Figure 65 and Figure 66 must be used.
Open and Close of Data Sets Shared by More than One Task. When more than
one task is sharing a data set, the following restrictions must be recognized. Failure
to comply with these restrictions endangers the integrity of the shared data set.
v All tasks sharing a DCB must be in the job step that opened the DCB. See
Chapter 23, “Sharing Non-VSAM Data Sets,” on page 371.
v Any task that shares a DCB and starts any input or output operations using that
DCB must ensure that all those operations are complete before terminating the
task. A CLOSE macro issued for the DCB ends all input and output operations.
v A DCB can be closed only by the task that opened it.
Shared Direct Access Storage Devices. At some installations, DASDs are shared
by two or more independent computing systems. Tasks run on these systems can
share data sets stored on the device. Accessing a shared data set or the same
storage area on shared DASD by multiple independent systems requires careful
planning. Without proper intersystem communication, data integrity could be
endangered.
To ensure data integrity in a shared DASD environment, your system must have
global resource serialization (GRS) active or a functionally equivalent global
serialization method.
Related reading: For information on data integrity for shared DASD, see z/OS
MVS Programming: Authorized Assembler Services Guide. For details on GRS, see z/OS
MVS Planning: Global Resource Serialization.
The enhanced data integrity function prevents this type of data loss. This data
integrity function either ends the program that is opening a sequential data set that
is already opened for writing, or it writes only a warning message but allows the
data set to open. Only sequential data sets can use the enhanced data integrity
function.
Related reading: For an overview of the enhanced data integrity function, see z/OS
DFSMS Using the New Functions.
Determine whether your system requires the data integrity function. Can the
applications allow concurrent access to sequential data sets for output or update,
and still maintain data integrity?
Perform the following steps to set up data integrity processing for your system.
1. Create a new SYS1.PARMLIB member, IFGPSEDI. The IFGPSEDI member
contains the MODE variable and an optional list of data set names to be
excluded from data integrity processing. IFGPSEDI can be in any data set in
the SYS1.PARMLIB concatenation.
2. Set IFGPSEDI to one of the following MODE values. MODE must start in the
first column of the first record.
MODE(WARN)
The program issues a warning message when an application attempts
to open for output a shared data set that is already open, but it allows
the current open to continue. This situation is called a data integrity
violation.
MODE(ENFORCE)
The program abends when a data integrity violation occurs.
MODE(DISABLE)
Data integrity processing is disabled.
3. Use DSN(data_set_name) to specify which data sets, if any, to include in the
exclude list in the IFGPSEDI member.
The data set name can be a partially qualified or fully-qualified name. The
data set name also can contain an asterisk or percent sign.
When you specify MODE(WARN) or MODE(ENFORCE), data integrity processing
bypasses data sets that are in the exclude list in IFGPSEDI. The exclude list
excludes all data sets with that same name in the system. (If the data set is not
system managed, multiple data sets with the same name could exist on
different volumes, so they would be excluded.)
4. Once you have created the IFGPSEDI member, activate data integrity
processing by IPLing the system or starting the IFGEDI task. The IFGEDI task
builds a data integrity table from the data in IFGPSEDI.
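For example, an IFGPSEDI member that runs in warning mode and excludes two name patterns might look like this sketch (the data set name patterns are hypothetical; see z/OS MVS Initialization and Tuning Reference for the exact member syntax). Remember that MODE must start in the first column of the first record:

```
MODE(WARN)
DSN(PAYROLL.TEMP.**)
DSN(D42.WORK%.DATA)
```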
Result: After you activate data integrity processing, message IEC983I displays. The
system issues this message during IPL or after you start the IFGEDI task. This
message indicates whether data integrity processing is active and the mode
(WARN, ENFORCE, or DISABLE).
Recommendation: The best way to identify applications that require data integrity
processing is to activate it in warning mode. Then review the warning messages
for the names of data sets that are identified. After you update the exclude list in
the IFGPSEDI member with the data sets to be protected, consider activating data
integrity processing in enforce mode.
Related reading: For more information on setting IFGPSEDI, see z/OS MVS
Initialization and Tuning Reference.
Result: You know you have set up data integrity processing on multiple systems
when message IEC983I displays on each system.
Enhanced data integrity is not effective for data sets that are shared across multiple
sysplexes.
Attention: If you exclude data sets from data integrity processing, you must
ensure that all applications bypass data integrity processing to avoid accidental
destruction of data when multiple applications attempt to open the data sets for
output. If data integrity problems occur, examine the SMF 14 and 15 records to
see which data sets bypassed data integrity processing.
v Set the DCBEEXPS flag in the DCBE to allow concurrent users to open the
data sets for output or update processing. DCBEEXPS is bit 7 (X’01’) of the
DCBEFLG2 field in the DCBE; set it with the instruction OI DCBEFLG2,DCBEEXPS.
To set and honor the DCBEEXPS flag, application programs must meet any one
of the following criteria:
– The application is authorized program facility (APF) authorized.
– The application is running in PSW supervisor state.
– The application is running in system key (0–7) when it opens the data set.
If none of these conditions is true, the DCBEEXPS flag is ignored.
v If the application is authorized, specify the NODSI flag in the program
properties table (PPT). The NODSI flag bypasses data integrity processing.
v If the application is authorized, dynamically allocate the data set with no data
integrity (NODSI) specified to bypass data integrity processing. In the
DYNALLOC macro, specify NODSI to set the S99NORES flag.
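The DCBE flag setting described in the first bullet might be coded as in this sketch. The DCB, DCBE, and DD names are hypothetical; the IHADCBE mapping macro supplies the DCBEFLG2 and DCBEEXPS symbols, and the program must be APF-authorized, in supervisor state, or in a system key for the flag to be honored:

```
         LA    5,MYDCBE                    Point at the DCBE
         USING DCBE,5                      Map it with IHADCBE
         OI    DCBEFLG2,DCBEEXPS           Request EDI bypass (X'01')
         DROP  5
         OPEN  (OUTDCB,(OUTPUT))           Open for shared output
         ...
OUTDCB   DCB   DDNAME=OUTDD,DSORG=PS,MACRF=(PM),DCBE=MYDCBE
MYDCBE   DCBE  RMODE31=BUFF
         IHADCBE                           DCBE mapping (DSECT)
```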
Recommendation: Changes to IFGPSEDI take effect when you restart the IFGEDI
task. If any of the data sets in the exclude list are open when you restart IFGEDI,
this change takes effect after the data sets are closed and reopened.
Related reading: For more information on using dynamic allocation, see the z/OS
MVS Programming: Authorized Assembler Services Guide.
If the exclude list is empty (no data set names specified) and IFGPSEDI specifies
MODE(WARN) or MODE(ENFORCE), data integrity processing occurs for all sequential
data sets.
You can set applications to bypass data integrity processing for the data set that is
being opened in the following ways:
v Specify the DCBEEXPS exclude flag in the DCBE macro.
v Specify the SCTNDSI exclude flag in the step control block.
v Dynamically allocate the data set with S99NORES specified. This action sets the
DSABNODI exclude flag for the data set.
v Request the NODSI flag in the program properties table for the application
program.
Related reading: For more information on the warning messages and abends for
data integrity processing, and the flags for SMF record types 14 and 15, see the
z/OS DFSMSdfp Diagnosis and z/OS MVS System Management Facilities (SMF).
Table 35 describes the different conditions for when data integrity is disabled and
also for data integrity warnings.
Table 35. Messages for Data Integrity Processing
MODE(DISABLE)
   Condition: Enhanced data integrity is not active (even if no data set
   names are in the enhanced data integrity table).
   Message and SMF record: None.
   Result: Sequential data sets can be opened for output concurrently.
IFGPSEDI not in SYS1.PARMLIB
   Condition: Enhanced data integrity is not active.
   Message and SMF record: None.
   Result: Sequential data sets can be opened for output concurrently.
MODE(WARN)
   Condition: The data set is being opened for input when it is already
   opened for output, the data set name is not in the enhanced data
   integrity table, and the application does not bypass enhanced data
   integrity.
   Message and SMF record: IEC984I; SMF type 14, SMF14INO flag.
   Result: The data set is opened.
MODE(WARN)
   Condition: The data set is being opened for output when it is already
   opened for output, the data set name is not in the enhanced data
   integrity table, and the application does not bypass enhanced data
   integrity.
   Message and SMF record: IEC984I; SMF type 15, SMF14OPO flag.
   Result: The data set is opened.
MODE(WARN)
   Condition: The data set is being opened for input when it is already
   opened for output, and the data set name is in the table or the
   application bypasses enhanced data integrity.
   Message and SMF record: IEC985I; SMF type 14, SMF14EXT flag (if in
   EDI table) or SMF14EPS flag (if bypass requested).
   Result: The data set is opened.
Note: If the data set is excluded from enhanced data integrity processing for any
reason, the SMF 14 and SMF 15 records reflect that fact even for the first open
of the data set. Also, in ENFORCE mode the SMF14OPO and SMF14INO flags are
set only if there is an inconsistency in the concurrent opens (the data set was not
excluded during the first open but was excluded during later ones).
Table 36. Different Conditions for Data Integrity Violations
MODE(ENFORCE)
   Condition: The data set is being opened for output when it is already
   opened for output, the data set name is not in the enhanced data
   integrity table, and the application does not bypass enhanced data
   integrity.
   Message or SMF record: ABEND 213-FD.
   Result: The second open of the data set for output fails.
MODE(ENFORCE)
   Condition: The data set is being opened for input when it is already
   opened for output, the data set name is not in the table, and the
   application does not bypass enhanced data integrity.
   Message or SMF record: SMF type 14, SMF14INO flag.
   Result: The second open of the data set for input is allowed.
MODE(ENFORCE)
   Condition: The data set is being opened for input when it is already
   opened for output, and the data set name is in the table or the
   application bypasses enhanced data integrity.
   Message or SMF record: SMF type 14; SMF14EXT flag (if in EDI table),
   SMF14EPS flag (if bypass requested), and SMF14INO flag.
   Result: The second open of the data set for input is allowed.
MODE(ENFORCE)
   Condition: The data set is being opened for output when it is already
   opened for output, and the data set name is in the enhanced data
   integrity table or the application bypasses enhanced data integrity.
   Message or SMF record: SMF type 15; SMF14EXT flag (if in EDI table),
   SMF14EPS flag (if bypass requested), and SMF14OPO flag.
   Result: The second open of the data set for output is allowed.
PDSEs
See “Sharing PDSEs” on page 470 for information about sharing PDSEs.
On systems that assure data set integrity across multiple systems, you may be
authorized to create checkpoints on shared DASD through the RACF facility class
“IHJ.CHKPT.volser”, where “volser” is the volume serial of the volume to contain
the checkpoint data set. Data set integrity across multiple systems is provided
when enqueues on the major name “SYSDSN”, minor name “data set name” are
treated as global resources (propagated across all systems in the complex) using
multisystem global resource serialization (GRS) or an equivalent function.
If a checkpoint data set is on shared DASD, DFSMS issues the SAF RACROUTE
macro requesting authorization against a facility class profile of IHJ.CHKPT.volser
during checkpoint (“volser” is the volume serial number where the checkpoint
data set resides).
If the system programmer cannot ensure data set integrity on any shared DASD
volumes, the system programmer need not take any further action (for instance, do
not define any profile to RACF that would cover IHJ.CHKPT.volser). In that case,
you cannot take checkpoints on shared DASD volumes.
If data set integrity is assured on all shared DASD volumes and the system
programmer wants to perform a checkpoint on any of these volumes, build a
facility class generic profile with a name of IHJ.CHKPT.* with UACC of READ.
If data set integrity cannot be assured on some of the volumes, build discrete
profiles for each of these volumes with profile names of IHJ.CHKPT.volser with
UACC of NONE. These “volume-specific” profiles are in addition to the generic
profiles described above to permit checkpoints on shared DASD volumes for which
data set integrity is assured.
If the system programmer wants to let some, but not all, users create
checkpoints on the volumes, build the generic profiles with UACC of NONE and
permit READ access only to those specific users or groups of users.
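The profiles described above might be defined with RACF commands like the following sketch. The volume serial SHR001 is hypothetical, generic profile support must be active for the FACILITY class, and the final REFRESH applies only if the class is RACLISTed. The first command permits checkpoints on volumes where integrity is assured; the second blocks them on a volume where it is not:

```
RDEFINE  FACILITY IHJ.CHKPT.* UACC(READ)
RDEFINE  FACILITY IHJ.CHKPT.SHR001 UACC(NONE)
SETROPTS RACLIST(FACILITY) REFRESH
```

To restrict checkpointing to particular users instead, define the generic profile with UACC(NONE) and issue PERMIT with ACCESS(READ) for those users or groups, as the text describes.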
Information in a checkpoint data set includes the location on the disk or tape
where the application is currently reading or writing each open data set. If a data
set that is open at the time of the checkpoint is moved to another location before
the restart, you cannot restart the application from the checkpoint because the
location-dependent information recorded by checkpoint/restart is no longer valid.
There are several system functions (for example, DFSMShsm or DFSMSdss) that
might automatically move a data set without the owner specifically requesting it.
To ensure that all checkpointed data sets remain available for restart, the
checkpoint function sets the unmovable attribute for each SMS-managed sequential
data set that is open during the checkpoint. An exception is the data set containing
the actual recorded checkpoint information (the checkpoint data set), which does
not require the unmovable attribute.
You can move checkpointed data sets when you no longer need them to perform a
restart. DFSMShsm and DFSMSdss FORCECP(days) enable you to use operations
such as migrate, copy, or defrag to move an SMS-managed sequential data set
based on a number of days since the last access. DFSMShsm recall, and DFSMSdss
restore and copy, are operations that turn off the unmovable attribute for the target
data set.
See z/OS Security Server RACF Command Language Reference for information about
RACF commands and z/OS Security Server RACF Security Administrator’s Guide for
information about using and planning for RACF options.
If you do not have RACF or an equivalent product, the system programmer can
write an MVS router exit that is invoked by SAF and can be used to achieve the
above functions. See z/OS MVS Programming: Authorized Assembler Services Guide for
information about writing this exit.
When sharing data sets, you must consider the restrictions of search direct. Search
direct can cause unpredictable results when multiple DCBs are open and the data
sets are being shared, and one of the applications is adding records. You might get
the wrong record. Also, you might receive unpredictable results if your application
has a dependency that is incompatible with the use of search direct.
Topic Location
Job Entry Subsystem 385
SYSIN Data Set 386
SYSOUT Data Set 386
With spooling, unit record devices are used at full speed if enough buffers are
available. They are used only for the time needed to read, print, or punch the data.
Without spooling, the device is occupied for the entire time it takes the job to
process. Also, because data is stored instead of being transmitted directly, output
can be queued in any order and scheduled by class and by priority within each
class.
Scheduling provides the highest degree of system availability through the orderly
use of system resources that are the objects of contention.
SYSIN and SYSOUT data sets cannot be system managed. SYSIN and SYSOUT
must be either BSAM or QSAM data sets, and you open and close them in the
same manner as any other data set processed on a unit record device. Because
SYSIN and SYSOUT data sets are spooled on intermediate devices, you should
avoid using device-dependent macros (such as FEOV, CNTRL, PRTOV, or BSP) in
processing these data sets. See “Achieving Device Independence” on page 399. You
can use PRTOV, but it will have no effect. For more information about SYSIN and
SYSOUT parameters see z/OS MVS JCL User’s Guide and z/OS MVS JCL Reference.
Your SYNAD routine is entered if an error occurs during data transmission to or
from an intermediate storage device. Again, because the specific device is
indeterminate, your SYNAD routine code should be device independent. If you
specify the DCB open exit routine in an exit list, it will be entered in the usual
manner. See “DCB Exit List” on page 535 for the DCB exit list format and “DCB
OPEN Exit” on page 543.
A SYSIN data set cannot be opened by more than one DCB at the same time; that
would result in an S013 ABEND.
If no record format is specified for the SYSIN data set, a record format of fixed is
supplied. Spanned records (RECFM=VS or VBS) cannot be specified for SYSIN.
The minimum record length for SYSIN is 80 bytes. For undefined records, the
entire 80-byte image is treated as a record. Therefore, a read of less than 80 bytes
results in the transfer of the entire 80-byte image to the record area specified in the
READ macro. For fixed and variable-length records, an ABEND results if the
LRECL is less than 80 bytes.
The logical record length value of SYSIN (JFCLRECL field in the JFCB) is filled in
with the logical record length value of the input data set. This logical record length
value is increased by 4 if the record format is variable (RECFM=V or VB).
The logical record length can be a size other than the size of the input device, if
the SYSIN input stream is supplied by an internal reader. JES supplies a value in
the JFCLRECL field of the JFCB if that field is found to be zero.
The block size value (the JFCBLKSI field in the JFCB) is filled in with the block
size value of the input data set. This block size value is increased by 4 if the record
format is variable (RECFM=V or VB). JES supplies a value in the JFCBLKSI field of
the JFCB if that field is found to be 0.
JES permits multiple opens to the same SYSOUT data set, and the records are
interspersed. However, you need to ensure that your application serializes the data
set. For more information about serialization see Chapter 23, “Sharing Non-VSAM
Data Sets,” on page 371.
From open to close of a particular data control block, you should not change the
DCB indicators of the presence or type of control characters. When directed to disk
or tape, all the DCBs for a particular data set should have the same type of control
characters. For a SYSOUT data set, the DCBs can have either type of control
character or none. The result depends on the ultimate destination of the data set.
For local printers and punches, each record is processed according to its control
character.
When you use QSAM with fixed-length blocked records or BSAM, the DCB block
size parameter does not have to be a multiple of logical record length (LRECL) if
the block size is specified in the SYSOUT DD statement. Under these conditions, if
block size is greater than, but not a multiple of, LRECL, the block size is reduced
to the nearest lower multiple of LRECL when the data set is opened.
You can specify blocking for SYSOUT data sets, even though your LRECL is not
known to the system until execution. Therefore, the SYSOUT DD statement of the
go step of a compile-load-go procedure can specify a block size without the block
size being a multiple of LRECL.
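For example (the ddname and values are hypothetical), the go step could code:

```
//GO.SYSPRINT DD SYSOUT=A,BLKSIZE=1000
```

If the program's DCB establishes LRECL=133 at OPEN, the system reduces the block size to 931, the nearest lower multiple of 133.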
You should omit the DEVD parameter in the DCB macro, or you should code
DEVD=DA.
You can use the SETPRT macro to affect the attributes and scheduling of a
SYSOUT data set.
Your program is responsible for printing format, pagination, header control, and
stacker select. You can supply control characters for SYSOUT data sets in the
normal manner by specifying ANSI or machine characters in the DCB. Standard
controls are provided by default if they are not explicitly specified. The length of
output records must not exceed the allowable maximum length for the ultimate
device. Cards can be punched in EBCDIC mode only.
You can supply table reference characters (TRCs) for SYSOUT data sets by
specifying OPTCD=J in the DCB. When the data set is printed, if the printer does
not support TRCs, the system discards them.
See “Processing SYSIN, SYSOUT, and System Data Sets” under “Coding Processing
Methods” on page 326.
Topic Location
Creating a Sequential Data Set 389
Retrieving a Sequential Data Set 390
Concatenating Data Sets Sequentially 391
Modifying Sequential Data Sets 398
Achieving Device Independence 399
Improving Performance for Sequential Data Sets 401
Determining the Length of a Block when Reading with BSAM, BPAM, or 403
BDAM
Writing a Short Format-FB Block with BSAM or BPAM 405
Processing Extended-Format Sequential Data Sets 406
Processing Large Format Data Sets 411
You must use sequential data sets for all magnetic tape devices, punched cards,
and printed output. A data set residing on DASD, regardless of organization, can
be processed sequentially.
The example in Figure 67 shows that the GET-move and PUT-move require two
movements of the data records.
OPEN (INDATA,,OUTDATA,(OUTPUT))
NEXTREC GET INDATA,WORKAREA Move mode
AP NUMBER,=P’1’
UNPK COUNT,NUMBER Record count adds 6
OI COUNT+5,X’F0’ Set zone bits
PUT OUTDATA,COUNT bytes to each record
B NEXTREC
ENDJOB CLOSE (INDATA,,OUTDATA)
...
COUNT DS CL6
WORKAREA DS CL50
NUMBER DC PL4’0’
SAVE14 DS F
INDATA DCB DDNAME=INPUTDD,DSORG=PS,MACRF=(GM),EODAD=ENDJOB, X
LRECL=50,RECFM=FB
OUTDATA DCB DDNAME=OUTPUTDD,DSORG=PS,MACRF=(PM), X
LRECL=56,RECFM=FB
...
If the record length (LRECL) does not change during processing, only one move
is necessary: you can process the record in the input buffer segment. A
GET-locate provides a pointer to the current segment.
Related reading: See “QSAM in an Application” on page 352 for more information.
The example in Figure 68 on page 391 is similar to that in Figure 67. However,
because there is no change in the record length, the records can be processed in the
input buffer. Only one move of each data record is required.
...
OPEN (INDATA,,OUTDATA,(OUTPUT),ERRORDCB,(OUTPUT))
NEXTREC GET INDATA Locate mode
LR 2,1 Save pointer
AP NUMBER,=P’1’
UNPK 0(6,2),NUMBER Process in input area
PUT OUTDATA Locate mode
MVC 0(50,1),0(2) Move record to output buffer
B NEXTREC
ENDJOB CLOSE (INDATA,,OUTDATA,,ERRORDCB)
...
NUMBER DC PL4’0’
INDATA DCB DDNAME=INPUTDD,DSORG=PS,MACRF=(GL),EODAD=ENDJOB
OUTDATA DCB DDNAME=OUTPUTDD,DSORG=PS,MACRF=(PL)
ERRORDCB DCB DDNAME=SYSOUTDD,DSORG=PS,MACRF=(PM),RECFM=V, C
BLKSIZE=128,LRECL=124
SAVE2 DS F
...
A sequential concatenation can include sequential data sets, PDS members, PDSE
members, and UNIX files. With sequential concatenation, the system treats a PDS
member, a PDSE member, or a UNIX file as if it were a sequential data set. The
system treats a striped extended-format data set as if it were a single-volume data set.
End-of-Data-Set (EODAD) Processing. When the change from one data set to
another is made, label exits are taken as required; automatic volume switching is
also performed for multiple volume data sets. When your program reads past the
end of a data set, control passes to your end-of-data-set (EODAD) routine only if
the last data set in the concatenation has been processed.
Consecutive Data Sets on a Tape Volume. To save time when processing two
consecutive sequential data sets on a single tape volume, specify LEAVE in your
OPEN macro, or DISP=(OLD,PASS) in the DD statement, even if you otherwise
would code DISP=(OLD,KEEP).
Reading Directories. You can use BSAM to read PDS and PDSE directories. You
can use BPAM to read UNIX directories and files. For more information, see
Chapter 28, “Processing z/OS UNIX Files,” on page 481.
If either of the data sets in a transition is system managed, you can treat the
transition as like. However, you must ensure that both data sets meet all like
concatenation rules, or unpredictable results can occur (for example, OPEN
ABENDs).
Your program indicates whether the system is to treat the data sets as like or unlike
by setting the bit DCBOFPPC. The DCB macro assembles this bit as 0, which
indicates like data sets. See “Concatenating Unlike Data Sets” on page 396.
Related reading: For more information, see “Concatenating UNIX Files and
Directories” on page 499 and “Concatenating Extended-Format Data Sets with
Other Data Sets” on page 410.
With like concatenation, if the program has an end-of-volume exit, it is called at the
beginning of each volume of each data set except the first volume of the first data
set. If the type of data set does not have volumes, the system treats it as having
one volume.
v KEYLEN
v NCP or BUFNO
With like concatenation the system can change the following when switching to
another data set:
v BLKSIZE and BUFL for QSAM
v Field DCBDEVT in the DCB (device type)
v TRTCH (tape recording technique)
v DEN (tape density)
With or without concatenation, the system sets LRECL in the DCB for each QSAM
GET macro when reading format-V, format-D, or format-U records, except with
XLRI. GET issues an ABEND if it encounters a record that is longer than the
LRECL value that was in effect at the completion of OPEN.
If your program indicates like concatenation (by taking no special action about
DCBOFPPC) and one of the like concatenation rules is broken, the results are
unpredictable. A typical result is an I/O error, resulting in an ABEND, or entry to
the SYNAD routine. The program might even appear to run correctly.
If the open routine for QSAM obtains the buffer pool automatically, the data set
transition process might free the buffer pool and obtain a new one for the next
concatenated data set. The buffer address that GET returns is valid only until the
next GET or FEOV macro runs. The transition process frees the buffer pool and
obtains a new, system-created buffer pool during end-of-volume concatenation
processing. The procedure does not free the buffer pool for the last concatenated
data set unless you coded RMODE31=BUFF. You should also free the
system-created buffer pool before you attempt to reopen the DCB, unless you
coded RMODE31=BUFF.
If you have enabled a larger block size, OPEN searches later concatenated data sets
for the largest acceptable block size and stores it in the DCB or DCBE. A block size
is acceptable if it comes from a source that does not also have a RECFM or LRECL
inconsistent with the RECFM or LRECL already in the DCB.
For format-V records, if a data set has an LRECL value that is larger than the value
in the DCB, the block size for that data set is not considered during OPEN.
A RECFM value of U in the DCB is consistent with any other RECFM value.
BSAM considers the following RECFM values compatible with the specified record
format for the first data set:
v F or FB—Compatible record formats are F, FB, FS, and FBS.
v V or VB—Compatible record formats are V and VB.
v U—All other record formats are compatible.
BSAM OPEN Processing Before First Data Set: OPEN tests the JFCB for each
data set after the one being opened. The JFCB contains information coded when
the data set was allocated and information that OPEN can have stored there before
it was dynamically reconcatenated.
All of the processing previously described occurs for any data set that is
acceptable to BSAM. The OPEN that you issue does not read tape labels for data
sets after the first. Therefore, if there is a tape data set after the first that has a
block size larger than all of the prior specifications, the BLKSIZE value must be
specified on the DD statement. The system later reads those tape labels but it is too
late for the system to discover a larger block size at that time.
For each data set whose JFCB contains a block size of 0 and is on permanently
resident DASD, OPEN obtains the data set characteristics from the data set label
(DSCB). If they are acceptable and the block size is larger, OPEN copies the block
size to the DCB or DCBE.
For each JFCB or DSCB that this function of OPEN examines, if the DCB has a
fixed-standard record format and the block size differs from the DCB or DCBE
block size, OPEN turns off the DCB’s standard bit.
If DCBBUFL, either from the DCB macro or the first DD statement, is nonzero,
then that value is an upper limit for the BLKSIZE from any other data set. OPEN
does not use a block size from a later DD statement or DSCB if it is larger than
that DCBBUFL value. OPEN ignores that larger block size on the assumption that
you will turn on the unlike-attributes bit later, that your program will not read
that data set, or that the data set does not actually have blocks that large.
When OPEN finds an inconsistent record format, it issues the following message:
IEC034I INCONSISTENT RECORD FORMATS rrr AND iii,ddname+cccc,dsname
cccc Specifies the number of the DD statement after the first one, where +1
means the second data set in the concatenation.
//INPUT DD *
... (instream data set)
// DD DSN=D42.MAIN.DATA,DISP=SHR
// DD DSN=D42.SUPPL.DATA,UNIT=(3590,2),DISP=OLD,BLKSIZE=150000
This example requires the application to use the large block interface because the
BLKSIZE value is so large.
OPEN finds that the block size value for the second DD is larger than for the first
DD, which normally is 80. If the second DD is for a disk data set, its maximum
block size is 32 760. BSAM OPEN for the first DD uses the BLKSIZE from the third
DD because it is the largest.
Unless you have some way of determining the characteristics of the next data set
before it is opened, you should not reset the DCBOFLGS field to indicate like
attributes during processing. When you concatenate data sets with unlike attributes
(that is, turn on the DCBOFPPC bit of the DCBOFLGS field), the EOV exit is not
taken for the first volume of any data set. If the program has a DCB OPEN exit, it
is called at the beginning of every data set in the concatenation.
If your program turns DCBOFPPC on before issuing OPEN, each time the system
calls your DCB OPEN exit routine or JFCBE exit, DCBESLBI in your DCBE is on
only if the current data set being started supports large block interface (LBI). If you
want to know in advance if all the data sets support LBI, your program can take
one of the following actions:
v Leave DCBOFPPC off until after OPEN. You do not need it on until your
program attempts to read a record.
v Issue the DEVTYPE macro with INFO=AMCAP. See z/OS DFSMSdfp Advanced
Services.
When a new data set is reached and DCBOFPPC is on, you must reissue the GET
or READ macro that detected the end of the data set because with QSAM, the new
data set can have a longer record length, or with BSAM the new data set can have
a larger block size. You might need to allocate larger buffers. Figure 70 shows a
possible routine for determining when a GET or READ must be reissued.
(Figure 70 is a flowchart. The problem program PROBPROG sets a reread switch
off, opens the DCB, and loops: it issues READ and CHECK, or GET, and processes
each record. The DCB OPEN exit routine, entered at the start of each data set in
the concatenation, sets the reread switch on, sets bit 4 of OFLGS to 1, and returns
to the control program address in register 14. When the main program finds the
reread switch on, it reissues the READ or GET.)
Figure 70. Reissuing a READ or GET for Unlike Concatenated Data Sets
You might need to take special precautions if the program issues multiple READ
macros without intervening CHECK or WAIT macros for those READs. Do not
issue WAIT or CHECK macros for READ requests that were issued after the READ
that detected end-of-data. These restrictions do not apply to data set to data set
transition of like data sets, because no OPEN or CLOSE operation is necessary
between data sets.
You can code OPTCD=B in the DD statement, or you can code it for dynamic
allocation. You cannot code OPTCD=B in the DCB macro. This parameter has an
effect only during the reading of IBM, ISO, or ANSI standard labeled tapes. In
those cases, it causes the system to treat the portion of the data set on each tape
volume as a complete data set.
In this way, you can read tapes whose trailer labels incorrectly indicate
end-of-data instead of end-of-volume.
If you specify OPTCD=B in the DD statement for a multivolume tape data set, the
system generates the equivalent of individual concatenated DD statements for each
volume serial number and allocates one tape drive for each volume.
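For example, OPTCD=B might be coded on the DD statement for a two-volume tape data set as follows. This is only a sketch; the data set name, unit, and volume serial numbers are hypothetical:

```
//TAPEIN   DD  DSN=D42.HIST.DATA,DISP=OLD,UNIT=3590,
//             VOL=SER=(VOL001,VOL002),LABEL=(,SL),
//             DCB=OPTCD=B
```

The system then treats the portion of the data set on each of the two volumes as a complete data set and allocates one tape drive for each volume.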
Restriction: If you have a variable-blocked spanned (VBS) data set that spans
volumes in such a way that one segment (for example, the first segment) is at the
end of the first volume and the next segment (for example, the middle segment) is
at the beginning of the next volume, and you attempt to treat these volumes as
separate data sets, the integrity of the data cannot be guaranteed. QSAM abends
because it must be able to put all of the segments of a record back together.
Whether this restriction applies depends on the data and on whether the segments
are split between volumes.
Updating in Place
When you update a data set in place, you read, process, and write records back to
their original positions without destroying the remaining records on the track. The
following rules apply:
v You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, CHECK, NOTE,
and POINT macros, or only the GET and PUTX macros. To use PUTX, code
MACRF=(GL,PL) on the DCB macro.
v You cannot delete any record or change its length.
v You cannot add new records.
v The data set must be on a DASD.
v You must rewrite blocks in the same order in which you read them.
The READ and WRITE macros must be execute forms that refer to the same data
event control block (DECB). The DECB must be provided by the list forms of the
READ or WRITE macros.
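Under these rules, a minimal QSAM update-in-place loop might look as follows. This is only a sketch; the ddname MASTER, the labels, and the record-processing step are illustrative:

```
         OPEN  (UPDCB,(UPDAT))     Open for update in place
LOOP     GET   UPDCB               Locate mode: record address in register 1
*        ... examine or change the record at the returned address ...
         PUTX  UPDCB               Rewrite the record in its original position
         B     LOOP
ENDDATA  CLOSE (UPDCB)             EODAD routine gets control at end of data
         ...
UPDCB    DCB   DDNAME=MASTER,DSORG=PS,MACRF=(GL,PL),EODAD=ENDDATA
```

A record that needs no change can be skipped by issuing the next GET without an intervening PUTX.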
Restriction: You cannot use the UPDAT option to open a compressed-format data
set, so an update-in-place is not allowed on it.
Related reading: See z/OS DFSMS Macro Instructions for Data Sets for information
about the execute and list forms of the READ and WRITE macros.
You cannot overlap read with write operations, however, because operations of one type must be
checked for completion before operations of the other type are started or resumed.
Note that each pending read or write operation requires a separate DECB. If a
single DECB were used for successive read operations, only the last record read
could be updated.
Related reading: See Figure 84 on page 436 for an example of an overlap achieved
by having a read or write request outstanding while each record is being
processed.
The system ensures that the data set labels on prior volumes do not have the
last-volume indicator on. The volume with the last-volume bit on is not necessarily
the last volume that contains space for the data set or the last volume that is
indicated in the catalog. A later volume might also have the last-volume bit on.
When you later extend the data set with DISP=MOD or OPEN with EXTEND or
OUTINX, OPEN must determine the volume containing the last user data.
With a system-managed data set, OPEN tests each volume from the first to the last
until it finds the last-used volume.
terminal, or a dummy data set. Other data set organizations (partitioned, direct,
and VSAM) are device-dependent because they require the use of DASD.
Device-Dependent Macros
The following is a list of device-dependent macros and macro parameters.
Consider only the logical layout of your data record without regard for the type of
device used. Even if your data is on a direct access volume, treat it as if it were on
a magnetic tape. For example, when updating, you must create a new data set
rather than attempt to update the existing data set.
WRITE—Specify forward writing (SF) only; use only to create new records or
modify existing records.
NOTE/POINT—These macros are valid for both magnetic tape and direct access
volumes. To maintain independence of the device type and of the type of data set
(sequential, extended-format, PDSE, and so forth), do not test or modify the word
returned by NOTE or calculate a word to pass to POINT.
BSP—This macro is valid for magnetic tape or direct access volumes. However, its
use would be an attempt to perform device-dependent action.
SETPRT—Valid only for directly allocated printers and for SYSOUT data sets.
However, if the data set resides on DASD, the close routines perform the buffer
flushing which writes the last records to the data set. If you cancel the task, the
buffer is lost.
DEVD —Specify DA if any DASD might be used. Magnetic tape and unit-record
equipment DCBs will fit in the area provided during assembly. Specify unit-record
devices only if you expect never to change to tape or DASD.
The I/O performance is improved by reducing both the processor time and the
channel start/stop time required to transfer data to or from virtual storage. Some
factors that affect performance follow:
v Address space type (real or virtual)
v Block size. Larger blocks are more efficient. You can get significant performance
improvement by using LBI, large block interface. It allows tape blocks longer
than 32 760 bytes.
v BUFNO for QSAM
The system defaults to chained scheduling for non-DASD devices, except for
printers and format-U records, and for those cases in which it is not permitted.
Chained scheduling is most valuable for programs that require extensive input and
output operations. Because a data set using chained scheduling can monopolize
available time on a channel in a V=R region, separate channels should be assigned,
if possible, when more than one data set is to be processed.
data set, not the actual length of the block read in. Each record descriptor word
(RDW), if present, is not converted from ASCII to binary.
Related reading: See “Using Optional Control Characters” on page 312 and z/OS
DFSMS Macro Instructions for Data Sets for more information about control
characters.
In QSAM, the value of BUFNO determines how many buffers will be chained
together before I/O is initiated. The default value of BUFNO is described in
“Constructing a Buffer Pool Automatically” on page 350. When enough buffers are
available for reading ahead or writing behind, QSAM attempts to read or write
those buffers in successive revolutions of the disk.
In BSAM and BPAM, the first READ or WRITE instruction initiates I/O unless the
system is honoring your MULTACC specification in the DCBE macro. The system
puts subsequent I/O requests (without an associated CHECK or WAIT instruction)
in a queue. When the first I/O request completes normally, the system checks the
queue for pending I/O requests and builds a channel program for as many of
these requests as possible. The number of I/O requests that the system can chain
together is the maximum number of requests that the system can process in one
I/O event. This limit is less than or equal to the NCP value.
For better performance with BSAM and BPAM, use the technique described in
“Using Overlapped I/O with BSAM” on page 359 and Figure 83 on page 433.
For sequential data sets and PDSs, specifying a nonzero MULTACC value on a
DCBE macro can result in more efficient channel programs. You can also code a
nonzero MULTSDN value. If MULTSDN is nonzero and DCBNCP is zero, OPEN
determines a value for NCP and stores that value in DCBNCP before giving
control to the DCB open exit. If MULTACC is nonzero and your program uses the
WAIT or EVENTS macro on a DECB or depends on a POST exit for a DECB, then
you must precede that macro or dependence by a CHECK or TRUNC macro.
Note:
1. For compressed format data sets, MULTACC is ignored since all buffering is
handled internally by the system.
2. For tape data sets using large block interface (LBI) that have a block size
greater than 32 768, the system-determined NCP value is between 2 and 16. If
the calculated value is <2, it is set to 2, and if it is >16, it is set to 16.
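For example, a BSAM program might let the system choose an NCP value and enable performance chaining as follows. The ddname and the MULTSDN and MULTACC values shown are illustrative, not recommendations:

```
INDCB    DCB   DDNAME=INPUT,DSORG=PS,MACRF=R,DCBE=INDCBE
INDCBE   DCBE  MULTSDN=5,MULTACC=3
```

Because DCBNCP is zero here, OPEN calculates an NCP value from MULTSDN and stores it in DCBNCP before the DCB open exit routine gets control.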
For unblocked and undefined record formats, each block contains one logical
record.
1. Fixed-length, unblocked records: The length of all records is the value in the
DCBBLKSI field of the DCB without LBI or the DCBEBLKSI field of the DCBE
with LBI. You can use this method with BSAM or BPAM.
2. Variable-length records and Format-D records with BUFOFF=L: The block
descriptor word in the block contains the length of the block. You can use this
method with BSAM or BPAM. “Block Descriptor Word (BDW)” on page 297
describes the BDW format.
3. Format-D records without BUFOFF=L: The block length is in DCBLRECL after
you issue the CHECK macro. It remains valid until you again issue a CHECK
macro.
4. Undefined-length records when using LBI or for fixed-length blocked: The
method described in the following paragraphs can be used to calculate the
block length. You can use this method with BSAM, BPAM, or BDAM. (It should
not be used when using chained scheduling with format-U records. In that
case, the length of a record cannot be determined.)
a. After issuing the CHECK macro for the DECB for the READ request, but
before issuing any subsequent data management macros that specify the
DCB for the READ request, obtain the status area address in the word that
is 16 bytes from the start of the DECB.
b. If you are not using LBI, take the following steps:
1) Obtain the residual count that has been stored in the status area. The
residual count is in the halfword, 14 bytes from the start of the status
area.
2) Subtract this residual count from the number of data bytes requested to
be read by the READ macro. If 'S' was coded as the length parameter of
the READ macro, the number of bytes requested is the value of
DCBBLKSI at the time the READ was issued. If the length was coded in
the READ macro, this value is the number of data bytes and it is
contained in the halfword 6 bytes from the beginning of the DECB. The
result of the subtraction is the length of the block read.
c. If you are using LBI for BSAM or BPAM, subtract 12 from the address of
the status area. This gives the address of the 4 bytes that contain the length
of the block read.
5. Undefined-length records when not using LBI: The actual length of the record
that was read is returned in the DCBLRECL field of the DCB. Because of this
use of DCBLRECL, you should omit LRECL. Use this method only with BSAM
or BPAM, or after issuing a QSAM GET macro.
Figure 71 on page 405 shows an example of determining the length of a record
when using BSAM to read undefined-length records.
...
OPEN (DCB,(INPUT))
LA DCBR,DCB
USING IHADCB,DCBR
...
READ DECB1,SF,DCB,AREA1,’S’
READ DECB2,SF,DCB,AREA2,50
...
CHECK DECB1
LH WORK1,DCBBLKSI Block size at time of READ
L WORK2,DECB1+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
CHECK DECB2
LH WORK1,DECB2+6 Length requested
L WORK2,DECB2+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
MVC DCBBLKSI,LENGTH3 Length to be read
READ DECB3,SF,DCB,AREA3
...
CHECK DECB3
LH WORK1,LENGTH3 Block size at time of READ
L WORK2,DECB3+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
DCB DCB ...RECFM=U,NCP=2,...
DCBD
...
Figure 71. One Method of Determining the Length of a Record when Using BSAM to Read
Undefined-Length or Blocked Records
When you write a short block to an extended-format data set, the system pads it to
full length but retains the value of what your program said is the length. When
you read such a block, be aware that the system reads as many bytes as the block
can have and is not limited by the length specified for the write. If you know that
a particular block is short and you plan to read it to a short data area, then you
must decrease DCBBLKSI or DCBEBLKSI with LBI to the length of the short area
before the READ.
You can change the block size in the DCB or DCBE before issuing the WRITE
macro. It must be a multiple of the LRECL value in the DCB. After this is done,
any subsequent WRITE macros write blocks with the new length until you change
the block size again.
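Following the addressability conventions of Figure 71 (IHADCB mapped by a USING), this sketch writes a short block without LBI. The field names SHORTLEN and FULLLEN, the buffer name, and the lengths are illustrative:

```
         MVC   DCBBLKSI,SHORTLEN   Shorter block size (a multiple of LRECL)
         WRITE DECBS,SF,OUTDCB,SHORTBUF
         CHECK DECBS
         MVC   DCBBLKSI,FULLLEN    Restore the full block size
         ...
SHORTLEN DC    H'400'              Illustrative short length
FULLLEN  DC    H'8000'             Illustrative full length
```

Because no length is coded on the WRITE macro, the system uses the DCBBLKSI value in effect when the WRITE is issued.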
This technique works for all data sets supported by BSAM or BPAM. With
extended-format sequential data sets, the system actually writes all blocks in the
data set as the same size, but on a READ returns the length specified on the
WRITE for the block.
Recommendation: You can create short blocks for PDSEs but their block
boundaries are not saved when the data set is written to DASD. Therefore, if your
program is dependent on short blocks, do not use a PDSE.
Related reading: See “Processing PDSE Records” on page 444 for information
about using short blocks with PDSEs.
Using Hiperbatch
Hiperbatch is an extension of QSAM designed to improve performance in specific
situations. Hiperbatch uses the data lookaside facility (DLF) services to provide an
alternate fast path method of making data available to many batch jobs. Through
Hiperbatch, applications can take advantage of the performance benefits of the
operating system without changing existing application programs or the JCL used
to run them.
Either Hiperbatch or extended-format data sets can improve performance, but they
cannot be used for the same data set.
Related reading: See MVS Hiperbatch Guide for information about using
Hiperbatch. See z/OS MVS System Commands for information about the DLF
commands.
Hiperbatch                                  Striping
Uses Hiperspace                             Requires certain hardware
Improved performance requires multiple      Performance is best with only one
reading programs at the same time           program at a time
Relatively few data sets in the system      A larger number of data sets can be
can use it at once                          used at once
QSAM only                                   QSAM and BSAM
Large data sets with high I/O activity are the best candidates for striped data sets.
Data sets defined as extended-format sequential must be accessed using BSAM or
QSAM, and not EXCP or BDAM.
Related reading: See “Determining the Length of a Block when Reading with
BSAM, BPAM, or BDAM” on page 403 for more information.
Types of Compression
Two compression techniques are available for compressed format data sets. They
are DBB-based compression and tailored compression. These techniques determine
the method used to derive a compression dictionary for the data sets:
v DBB-based compression (also referred to as GENERIC). With DBB-based
compression (the original form of compression used with both sequential and
VSAM KSDS compressed format data sets), the system selects a set of dictionary
building blocks (DBBs), found in SYS1.DBBLIB, which best reflects the initial
data written to the data set. The system can later reconstruct the dictionary by
using the information in the dictionary token stored in the catalog.
v Tailored compression. With tailored compression, the system attempts to derive
a compression dictionary tailored specifically to the initial data written to the
data set. Once derived, the compression dictionary is stored in system blocks
which are imbedded within the data set itself. An OPEN for input reconstructs
the dictionary by reading in the system blocks containing the dictionary.
This form of compression is not supported for VSAM KSDSs.
The form of compression the system is to use for newly created compressed format
data sets can be specified at either or both the data set level and at the installation
level. At the data set level, the storage administrator can specify TAILORED or
GENERIC on the COMPACTION option in the data class. When the data class
does not specify the form of compression, the system uses the
COMPRESS(TAILORED|GENERIC) parameter found in the IGDSMSxx member of
SYS1.PARMLIB. A form specified in the data class takes precedence over the form
specified in SYS1.PARMLIB. COMPRESS(GENERIC) refers to generic DBB-based
compression and is the default.
COMPRESS(TAILORED) refers to tailored compression. When this member is
activated using SET SMS=xx or IPL, new compressed format data sets are created
in the form specified. The COMPRESS parameter in PARMLIB is ignored for
VSAM KSDSs. For a complete description of this parameter see z/OS DFSMSdfp
Storage Administration Reference.
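For example, an installation might set tailored compression as its default with an IGDSMSxx member along these lines. This is only a sketch; the ACDS and COMMDS data set names are placeholders, and the full member syntax is described in z/OS DFSMSdfp Storage Administration Reference:

```
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    COMPRESS(TAILORED)
```

After SET SMS=xx activates the member, newly created compressed format data sets (other than VSAM KSDSs) use tailored compression.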
v The data format for a compressed format data set consists of physical blocks
whose length has no correlation to the logical block size of the data set in the
DCB, DCBE, and the data set label. The actual physical block size is calculated
by the system and is never returned to the user. However, the system maintains
the user’s block boundaries when the data set is created so that the system can
return the original user blocks to the user when the data set is read.
v A compressed format data set cannot be opened for update.
v When issued for a compressed format data set, the BSAM CHECK macro does
not ensure that data is written to DASD. However, it does ensure that the data
in the buffer has been moved to an internal system buffer, and that the user
buffer is available to be reused.
v The block locator token returned by NOTE and used as input to POINT
continues to be the relative block number (RBN) within each logical volume of
the data set. A multistriped data set is seen by the user as a single logical
volume. Therefore, for a multistriped data set the RBN is relative to the
beginning of the data set and incorporates all stripes. To provide compatibility,
this RBN refers to the logical user blocks within the data set as opposed to the
physical blocks of the data set.
v However, due to the NOTE/POINT limitation of the 3 byte token, issuing a
READ or WRITE macro for a logical block whose RBN value exceeds 3 bytes
results in an ABEND if the DCB specifies NOTE/POINT (MACRF=P).
v When the data set is created, the system attempts to derive a compression token
when enough data is written to the data set (between 8K and 64K for DBB
compression and much more for tailored compression). If the system is
successful in deriving a compression token, the access method attempts to
compress any additional records written to the data set. However, if an efficient
compression token could not be derived, the data set is marked as
noncompressible and there is no attempt to compress any records written to the
data set. However, if created with tailored compression, it is still possible to
have system blocks imbedded within the data set although a tailored dictionary
could not be derived.
If the compressed format data set is closed before the system is able to derive a
compression token, the data set is marked as noncompressible. Additional
OPENs for output do not attempt to generate a compression token once the data
set has been marked as noncompressible.
v A compressed format data set can be created using the LIKE keyword and not
just through a DATACLAS.
An extended-format sequential data set that is allocated with more than one stripe
cannot be extended to more volumes. An extended-format sequential data set with
multiple stripes has one stripe per volume. A stripe cannot extend to another
volume. When the space is filled on one of the volumes for the current set of
stripes, the system cannot extend the data set any further.
Related reading: For information on specifying the sustained data rate in the
storage class, which determines the number of stripes in an extended-format
sequential data set, see the z/OS DFSMSdfp Storage Administration Reference. For
more information on the SPACE parameter, see the z/OS MVS JCL Reference.
If you use BSAM, you can set a larger NCP value or have the system calculate an
NCP value by means of the DCBE macro MULTSDN parameter.
If you use QSAM, you can request more buffers using the BUFNO parameter. Your
program can calculate BUFNO according to the number of stripes. Your program
can test DCBENSTR in the DCBE during the DCB open exit routine.
Existing programs need to be changed and reassembled if you want any of the
following:
v To switch from 24-bit addressing mode to 31-bit addressing mode with SAM.
v To ask the system to determine an appropriate NCP value. Use the MULTSDN
parameter of the DCBE macro.
v To get maximum benefit from BSAM performance chaining. You must change
the program by adding the DCBE parameter to the DCB macro and including
the DCBE macro with the MULTACC parameter. If the program uses WAIT or
EVENTS or a POST exit (instead of, or in addition to, the CHECK macro), your
program must issue the TRUNC macro whenever the WAIT or EVENTS macro
is about to be issued or the POST exit is depended upon to get control.
Related reading: For more information, see “DASD and Tape Performance” on
page 403 and the DCBE and IHADCBE macros in z/OS DFSMS Macro Instructions
for Data Sets.
Space for a new data set: If you specify the BLKSIZE parameter or the average
block size when allocating space for a new extended-format data set, consider the
32-byte suffix that the system adds to each block. Programs do not see this suffix.
The length of the suffix is not included in the BLKSIZE value in the DCB, DCBE,
JFCB, or DSCB.
Space for an existing data set: Some programs read the data set control block
(DSCB) to calculate the number of tracks used or the amount of unused space. For
extended-format data sets, the fields DS1LSTAR and DS1TRBAL have different
meanings than for sequential data sets. You can change your program to test
DS1STRIP, or you can change it to test DCBESIZE in the DCBE. DSCB fields are
described in z/OS DFSMSdfp Advanced Services. For the DCBE fields, see z/OS
DFSMS Macro Instructions for Data Sets.
Extended-format data sets can use more than 65 535 tracks on each volume. They
use DS1TRBAL with DS1LSTAR to represent one less than the number of tracks
containing data. Thus, for extended-format data sets, DS1TRBAL does not reflect
the amount of space remaining on the last track written. Programs that rely on
DS1TRBAL to determine the amount of free space must first check if the data set is
an extended-format data set.
| especially very large ones like spool data sets, dumps, logs, and traces. Unlike
| extended-format data sets, which also support greater than 65 535 tracks per
| volume, large format data sets are compatible with EXCP and do not need to be
| SMS-managed.
| Data sets defined as large format must be accessed using QSAM, BSAM, or EXCP.
| Restrictions: The following types of data sets cannot be allocated as large format
| data sets:
| v PDS, PDSE, and direct data sets
| v Virtual I/O data sets, password data sets, and system dump data sets.
| v High level languages do not support large format data sets when the
| BLOCKTOKENSIZE(REQUIRE) option in IGDSMSxx member of SYS1.PARMLIB
| (the default value) is in effect.
| Related reading: See “Allocating System-Managed Data Sets” on page 31 for more
| information.
| OPEN will issue an ABEND 213-10 for large format sequential data sets if the
| access method is not QSAM, BSAM, or EXCP. OPEN will issue an ABEND 213-14,
| 213-15, 213-16, or 213-17 and EOV will issue ABEND 737-44 or 737-45 if the
| application program cannot access the whole data set on the volume (primary,
| secondary, or a subsequent volume).
Topic Location
Structure of a PDS 415
PDS Directory 416
Allocating Space for a PDS 419
Creating a PDS 420
Processing a Member of a PDS 424
Retrieving a Member of a PDS 430
Modifying a PDS 434
Concatenating PDSs 437
Reading a PDS Directory Sequentially 438
Structure of a PDS
A PDS is stored only on a direct access storage device. It is divided into
sequentially organized members, each described by one or more directory entries.
Each member has a unique name, 1 to 8 characters long, stored in a directory that
is part of the data set. The records of a given member are written or retrieved
sequentially.
The main advantage of using a PDS is that, without searching the entire data set,
you can retrieve any individual member after the data set is opened. For example,
in a program library that is always a PDS, each member is a separate program or
subroutine. The individual members can be added or deleted as required. When a
member is deleted, the member name is removed from the directory, but the space
used by the member cannot be reused until the data set is reorganized; that is,
compressed using the IEBCOPY utility.
The directory, a series of 256-byte records at the beginning of the data set, contains
an entry for each member. Each directory entry contains the member name and the
starting location of the member within the data set (see Figure 72 on page 416).
You can also specify as many as 62 bytes of information in the entry. The directory
entries are arranged by name in alphanumeric collating sequence.
Related reading: See z/OS DFSMS Macro Instructions for Data Sets for the macros
used with PDSs.
(Figure 72 shows the structure of a PDS: directory records at the beginning of the
data set contain entries for members A, B, C, and K; the members themselves
follow, along with space from a deleted member and available area at the end.)
The starting location of each member is recorded by the system as a relative track
address (from the beginning of the data set) rather than as an absolute track
address. Thus, an entire data set that has been compressed can be moved without
changing the relative track addresses in the directory. The data set can be
considered as one continuous set of tracks regardless of where the space was
actually allocated.
If there is not sufficient space available in the directory for an additional entry, or
not enough space available within the data set for an additional member, or no
room on the volume for additional extents, no new members can be stored. A
directory cannot be extended and a PDS cannot cross a volume boundary.
PDS Directory
The directory of a PDS occupies the beginning of the area allocated to the data set
on a direct access volume. It is searched and maintained by the BLDL, FIND, and
STOW macros. The directory consists of member entries arranged in ascending
order according to the binary value of the member name or alias.
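For example, a program with the PDS allocated to an illustrative ddname PDSLIB might position to a member by name with the FIND macro; the member name shown is hypothetical:

```
         FIND  PDSDCB,MEMNAME,D    Search the directory for the member name
         ...                       then READ and CHECK the member records
MEMNAME  DC    CL8'PAYROLL'        Member name, padded with blanks to 8 bytes
PDSDCB   DCB   DDNAME=PDSLIB,DSORG=PO,MACRF=R
```

The D operand indicates that the second operand is the address of the member name itself rather than of a BLDL list.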
PDS member entries vary in length and are blocked into 256-byte blocks. Each
block contains as many complete entries as will fit in a maximum of 254 bytes.
Any remaining bytes are left unused and are ignored. Each directory block
contains a 2-byte count field that specifies the number of active bytes in a block
(including the count field). In Figure 73, each block is preceded by a
hardware-defined key field containing the name of the last member entry in the
block, that is, the member name with the highest binary value. Figure 73 shows the
format of the block returned when using BSAM to read the directory.
(Figure 73: each directory block is preceded by an 8-byte key field and consists of
a 2-byte count field followed by up to 254 bytes of member entries.)
count field. It can also contain a user data field. The last entry in the last used
directory block has a name field of maximum binary value (all 1s), a TTR field of
zeros, and a zero-length user data field.
TTR—Is a pointer to the first block of the member. TT is the number of the track,
starting from 0 for the beginning of the data set, and R is the number of the block,
starting from 1 for the beginning of that track.
C—Specifies the number of halfwords contained in the user data field. It can also
contain additional information about the user data field, as shown below:
0—Set to 1 if the name in this entry is an alias of the member.
1-2—Specifies the number of pointers to locations within the member that are
contained in the user data field.
The operating system supports a maximum of three pointers in the user data field.
Additional pointers can be contained in a record called a note list, described
below. The pointers can be updated automatically if the data set is moved
or copied by a utility program such as IEHMOVE. The data set must be marked
unmovable under any of the following conditions:
v More than three pointers are used in the user data field.
v The pointers in the user data field or note list do not conform to the standard
format.
A note list for a PDS containing variable-length records does not conform to
the standard format. Variable-length records contain BDWs and RDWs that are
treated as TTRXs by IEHMOVE.
v The pointers are not placed first in the user data field.
v Any direct access address (absolute or relative) is embedded in any data blocks
or in another data set that refers to the data set being processed.
Bits 3 through 7 of the C field contain a binary value indicating the number of
halfwords of user data. This number must include the space used by pointers in
the user data field.
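The entry layout described above can be sketched as follows (an illustrative Python model only; real directory processing on z/OS is done with the BLDL, FIND, and STOW macros, and a real directory holds EBCDIC names — ASCII stands in here for readability):

```python
def parse_directory_entry(raw: bytes, offset: int = 0):
    """Parse one PDS directory entry at `offset` in a directory block.

    Layout, per the description above:
      bytes 0-7   member name or alias (blank padded)
      bytes 8-10  TTR: TT = track relative to the data set start (from 0),
                  R = block number on that track (from 1)
      byte  11    C field: bit 0 = alias flag, bits 1-2 = number of
                  user-data pointers, bits 3-7 = user data length in halfwords
      bytes 12+   user data, C halfwords long
    """
    c = raw[offset + 11]
    halfwords = c & 0x1F
    entry = {
        "name": raw[offset:offset + 8],
        "tt": int.from_bytes(raw[offset + 8:offset + 10], "big"),
        "r": raw[offset + 10],
        "alias": bool(c & 0x80),
        "pointers": (c >> 5) & 0x03,
        "user_data": raw[offset + 12:offset + 12 + 2 * halfwords],
    }
    return entry, offset + 12 + 2 * halfwords   # offset of the next entry
```

Because the C field carries the user data length, entries of different lengths can be walked one after another until the all-1s name of the last entry is reached.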
You can use the user data field to provide variable data as input to the STOW
macro. If pointers to locations within the member are provided, they must be 4
bytes long and placed first in the user data field. The user data field format is as
follows:
TT—Is the relative track address of the note list or the area to which you are
pointing.
N—Is a binary value that shows the number of additional pointers contained in a
note list pointed to by the TTR. If the pointer is not to a note list, N=0.
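A minimal sketch of splitting a 4-byte user-data pointer into the TT, R, and N fields just described (illustrative Python, not a system interface):

```python
def unpack_ttrn(pointer: bytes):
    """Split a 4-byte user-data pointer into its TT, R, and N fields."""
    if len(pointer) != 4:
        raise ValueError("a user data pointer is exactly 4 bytes")
    tt = int.from_bytes(pointer[0:2], "big")  # relative track of the target (or note list)
    r = pointer[2]                            # block number on that track
    n = pointer[3]                            # additional pointers in the note list; 0 if not a note list
    return tt, r, n
```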
A note list consists of additional pointers to blocks within the same member of a
PDS. You can divide a member into subgroups and store a pointer to the beginning
of each subgroup in the note list. The member can be a load module containing
many control sections (CSECTs), each CSECT being a subgroup pointed to by an
entry in the note list. Use the NOTE macro to point to the beginning of the
subgroup after writing the first record of the subgroup. Remember that the pointer
to the first record of the member is stored in the directory entry by the system.
If a note list exists, as shown above, the list can be updated automatically when
the data set is moved or copied by a utility program such as IEHMOVE. Each
4-byte entry in the note list has the following format:
TT—Is the relative track address of the area to which you are pointing.
To place the note list in the PDS, you must use the WRITE macro. After checking
the write operation, use the NOTE macro to determine the address of the list and
place that address in the user data field of the directory entry.
The linkage editor builds a note list for the load modules in overlay format. The
addresses in the note list point to the overlay segments that are read into the
system separately.
Restriction: Note lists are not supported for PDSEs. If a PDS is to be converted to
a PDSE, the PDS should not use note lists.
If you do not specify a block size and the record format is fixed or variable, OPEN
determines an optimum block size for you. Therefore, you do not need to perform
calculations based on track length. When you allocate space for your data set,
specify the average record length in kilobytes or megabytes by using the SPACE
and AVGREC parameters, and have the system use the block size it calculated for
your data set.
If your data set is large, or if you expect to update it extensively, it might be best
to allocate a large data set. A PDS cannot occupy more than 65 535 tracks and
cannot extend beyond one volume. If your data set is small or is seldom changed,
let the system calculate the space requirements to avoid wasted space or wasted
time used for recreating the data set.
VSAM, extended format, HFS, and PDSE data sets can occupy more than 65 535
tracks.
Calculating Space
If you want to estimate the space requirements yourself, you need to answer the
following questions to estimate your space requirements accurately and use the
space efficiently.
v What is the average size of the members to be stored on your direct access
volume?
v How many members will fit on the volume?
v Will you need directory entries for the member names only, or will aliases be
used? If so, how many?
v Will members be added or replaced frequently?
You can calculate the block size yourself and specify it in the BLKSIZE parameter
of the DCB or DCBE. For example, if the average record length is close to or less
than the track length, or if the track length exceeds 32 760 bytes, you can make
the most efficient use of the direct access storage space with a block size of
one-third or one-half the track length.
For a 3380 DASD, you might then ask for either 75 tracks or 5 cylinders, thus
allowing for 3 480 000 bytes of data. Assuming an allocation of 3 480 000
bytes and an average length of 70 000 bytes for each member, you need space for
at least 50 directory entries. If each member also has an average of three aliases,
space for an additional 150 directory entries is required.
Each member in a data set and each alias need one directory entry apiece. If you
expect to have 10 members (10 directory entries) and an average of 3 aliases for
each member (30 directory entries), allocate space for at least 40 directory entries.
Space for the directory is expressed in 256-byte blocks. Each block contains from 3
to 21 entries, depending on the length of the user data field. If you expect 200
directory entries, request at least 10 blocks. Any unused space on the last track of
the directory is wasted unless there is enough space left to contain a block of the
first member.
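As a rough illustration of the directory arithmetic above (a sketch only — the 21-entries-per-block default assumes minimum-length 12-byte entries with no user data, the best case the text allows):

```python
import math

def directory_blocks(members, aliases_per_member, entries_per_block=21):
    """Estimate the number of 256-byte directory blocks to request.

    Every member and every alias needs one directory entry. A block
    holds 3 to 21 entries depending on the user data length; pass a
    smaller entries_per_block for entries with long user data fields.
    """
    entries = members * (1 + aliases_per_member)
    return math.ceil(entries / entries_per_block)
```

With the text's figures (50 members with three aliases each, giving 200 entries), this yields the 10 blocks recommended above.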
Any of the following space specifications would allocate approximately the same
amount of space for a 3380 DASD. Ten blocks have been allocated for the directory.
The first two examples would not allocate a separate track for the directory. The
third example would result in allocation of 75 tracks for data, plus 1 track for
directory space.
SPACE=(CYL,(5,,10))
SPACE=(TRK,(75,,10))
SPACE=(23200,(150,,10))
SPACE=(80,(43500,,10)),AVGREC=U
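The equivalence of these four requests can be checked with simple arithmetic, assuming the round figures the examples imply (15 tracks per 3380 cylinder, two 23 200-byte blocks per 3380 track):

```python
TRACKS_PER_CYL = 15     # 3380 geometry
BLOCKS_PER_TRACK = 2    # two 23 200-byte blocks fit on one 3380 track
BLOCK_SIZE = 23200

cyl_request = 5 * TRACKS_PER_CYL * BLOCKS_PER_TRACK * BLOCK_SIZE  # SPACE=(CYL,(5,,10))
trk_request = 75 * BLOCKS_PER_TRACK * BLOCK_SIZE                  # SPACE=(TRK,(75,,10))
blk_request = 150 * BLOCK_SIZE                                    # SPACE=(23200,(150,,10))
rec_request = 43500 * 80                                          # SPACE=(80,(43500,,10)),AVGREC=U

# All four requests come to the same 3 480 000 bytes of data space.
assert cyl_request == trk_request == blk_request == rec_request == 3_480_000
```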
Recommendation: The SPACE parameter can be derived from the data class, the
LIKE keyword, or the DD statement. Specify the SPACE parameter in the DD
statement if you do not want to use the space allocation amount defined in the
data class.
Related reading: For more information on using the SPACE and AVGREC
parameters, see Chapter 3, “Allocating Space on Direct Access Volumes,” on page
35 in this manual, and also see z/OS MVS JCL Reference and z/OS MVS JCL User’s
Guide.
Creating a PDS
You can create a PDS or members of a PDS with BSAM, QSAM, or BPAM.
The following steps create the data set and its directory, write the records of the
member, and make a 12-byte entry in the directory:
1. Code DSORG=PS or DSORG=PSU in the DCB macro.
2. In the DD statement specify that the data is to be stored as a member of a new
PDS, that is, DSNAME=name(membername) and DISP=NEW.
3. Optionally specify a data class in the DD statement or let the ACS routines
assign a data class.
4. Use the SPACE parameter to request space for the member and the directory in
the DD statement, or obtain the space from the data class.
5. Process the member with an OPEN macro, a series of PUT or WRITE macros,
and the CLOSE macro. A STOW macro is issued automatically when the data
set is closed.
//PDSDD DD ---,DSNAME=MASTFILE(MEMBERK),SPACE=(TRK,(100,5,7)),
// DISP=(NEW,CATLG),DCB=(RECFM=FB,LRECL=80,BLKSIZE=80)---
...
OPEN (OUTDCB,(OUTPUT))
...
PUT OUTDCB,OUTAREA Write record to member
...
CLOSE (OUTDCB) Automatic STOW
...
OUTAREA DS CL80 Area to write from
OUTDCB DCB ---,DSORG=PS,DDNAME=PDSDD,MACRF=PM
If the preceding conditions are true but you code DSORG=PO (to use BPAM) and
your last operation on the DCB before CLOSE is a STOW macro, CLOSE does not
issue the STOW macro.
Converting PDSs
You can use IEBCOPY or DFSMSdss COPY to convert the following data sets:
v a PDS to a PDSE
v a PDSE to a PDS
Related reading: See “Converting PDSs to PDSEs and Back” on page 478 for
examples of using IEBCOPY and DFSMSdss to convert PDSs to PDSEs.
Related reading: For more information, see z/OS UNIX System Services Command
Reference.
Adding Members
To add additional members to the PDS, follow the procedure described in
Figure 75 on page 421. However, a separate DD statement (with the space request
omitted) is required for each member. The disposition should be specified as
modify (DISP=MOD). The data set must be closed and reopened each time a new
member is specified on the DD statement.
You can use the basic partitioned access method (BPAM) to process more than one
member without closing and reopening the data set. Use the STOW, BLDL, and
FIND macros to provide more information with each directory entry, as follows:
v Request space in the DD statement for the entire data set and the directory.
v Define DSORG=PO or DSORG=POU in the DCB macro.
v Use WRITE and CHECK to write and check the member records.
v Use NOTE to note the location of any note list written within the member, if
there is a note list, or to note the location of any subgroups. A note list is used
to point to the beginning of each subgroup in a member.
v When all the member records have been written, issue a STOW macro to enter
the member name, its location pointer, and any additional data in the directory.
The STOW macro writes an end-of-file mark after the member.
v Continue to use the WRITE, CHECK, NOTE, and STOW macros until all the
members of the data set and the directory entries have been written.
//PDSDD DD ---,DSN=MASTFILE,DISP=MOD,SPACE=(TRK,(100,5,7))
...
OPEN (OUTDCB,(OUTPUT))
LA STOWREG,STOWLIST Load address of STOW list
...
BLDL also searches a concatenated series of directories when (1) a DCB is supplied
that is opened for a concatenated PDS or (2) a DCB is not supplied, in which case
the search order begins with the TASKLIB, then proceeds to the JOBLIB or
STEPLIB (themselves perhaps concatenated) followed by LINKLIB.
| You can alter the sequence of directories searched if you supply a DCB and specify
| START= or STOP= parameters. These parameters allow you to specify the first and
| last concatenation numbers of the data sets to be searched.
You can improve retrieval time by directing a subsequent FIND macro to the BLDL
list rather than to the directory to locate the member to be processed.
By specifying the BYPASSLLA option, you can direct BLDL to search PDS and
PDSE directories on DASD only. If BYPASSLLA is coded, the BLDL code will not
call LLA to search for member names.
The BLDL list must begin with a 4-byte list descriptor that specifies the number of
entries in the list and the length of each entry (12 to 76 bytes). (See Figure 77 on
page 425.) If you specify the BYPASSLLA option, an 8-byte BLDL prefix must
precede the 4-byte list descriptor.
The first 8 bytes of each entry contain the member name or alias. The next 6 bytes
contain the TTR, K, Z, and C fields. If there is no user data entry, only the TTR and
C fields are required. If additional information is to be supplied from the directory,
as many as 62 bytes can be reserved.
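The BLDL list storage image just described can be sketched as follows (illustrative Python only; a real list is built in storage by the assembler program with EBCDIC names, and BLDL itself fills in the TTR, K, Z, and C fields after each name — they are zeroed here):

```python
import struct

def build_bldl_list(names, entry_len=18):
    """Build the storage image of a BLDL list: a 4-byte descriptor
    (entry count, then bytes per entry) followed by fixed-length
    entries whose first 8 bytes hold the member name."""
    if not 12 <= entry_len <= 76:
        raise ValueError("each entry must be 12 to 76 bytes")
    image = struct.pack(">HH", len(names), entry_len)
    for name in sorted(names):          # entries in ascending name order
        image += name.ljust(8).encode("ascii") + b"\x00" * (entry_len - 8)
    return image
```

An 18-byte entry, as in the figures in this chapter, leaves room after the name for the 3-byte TTR, the K, Z, and C bytes, and a 4-byte TTRN of user data.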
DESERV
The DESERV macro returns system managed directory entries (SMDE) for specific
members or all members of opened PDS or PDSEs. You can specify either DESERV
GET or DESERV GET_ALL.
FUNC=GET
DESERV GET returns SMDEs for specific members of opened PDS or PDSEs, or a
concatenation of PDSs and PDSEs. The data set can be opened for either input,
output, or update. The SMDE contains the PDS or PDSE directory. The SMDE is
mapped by the macro IGWSMDE and contains a superset of the information that
is mapped by IHAPDS. The SMDE returned can be selected by name or by BLDL
directory entry.
Input by Name List: If you want to select SMDEs by name, you supply a list of
names sorted in ascending order, without duplicates. Each name consists of a
two-byte length field followed by the characters of the name. When searching
for names with fewer than eight characters, the names are padded on the right with
blanks to make up eight characters. Names longer than eight characters have
trailing blanks and nulls stripped (to a minimum length of eight) before the search.
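The name-preparation rules above can be sketched as (illustrative Python; it assumes the caller performs the padding, stripping, deduplication, and sorting before the DESERV GET call):

```python
def normalize_deserv_names(names):
    """Prepare a DESERV GET name list: names shorter than 8 characters
    are blank padded to 8; names longer than 8 have trailing blanks and
    nulls stripped, never below length 8; the result is sorted ascending
    with duplicates removed."""
    prepared = set()
    for n in names:
        if len(n) < 8:
            n = n.ljust(8)
        else:
            while len(n) > 8 and n[-1] in (" ", "\x00"):
                n = n[:-1]
        prepared.add(n)
    return sorted(prepared)
```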
A member-level connection prevents the member from being removed from the
system until the connection is released. To specify the connection, use the
CONN_INTENT=HOLD parameter.
All connections made through a single call to GET are associated with a single
unique connect identifier. The connect identifier may be used to release all the
connections in a single invocation of the RELEASE function. Figure 78 shows an
example of DESERV GET:
[Figure 78 (figure not reproduced): the input NAME_LIST (DESL) contains Name 1, Name 2, and Name 3. On output, each DESL entry points to the corresponding SMDE (SMDE_1, SMDE_3, SMDE_2) in a returned buffer area that begins with a buffer header (DESB), with any remaining area unused.]
Figure 78. DESERV GET by NAME_LIST Control Block Structure
FUNC=GET_ALL
The GET_ALL function returns SMDEs for all the member names in a PDS, a
PDSE, or a concatenation of PDSs and PDSEs. Member-level connections can be
established for each member found in a PDSE. A caller uses the CONCAT
parameter to indicate which data set in the concatenation is to be processed, or
whether all of the data sets in the concatenation are to be processed.
If the caller requests that DESERV GET_ALL return all the SMDE directory entries
for an entire concatenation, the SMDEs are returned in sequence as sorted by the
SMDE_NAME field without returning duplicate names. As with the GET function,
all connections can be associated with a single connect identifier established at the
time of the call. This connect identifier can then be used to release all the
connections in a single invocation of the RELEASE function. Figure 80 on page 428
shows an overview of control blocks related to the GET_ALL function.
There are two ways you can direct the system to the right member when you use
the FIND macro. Specify the address of an area containing the name of the
member, or specify the address of the TTR field of the entry in a BLDL list you
have created, by using the BLDL macro. In the first case, the system searches the
directory of the data set for the relative track address. In the second case, no search
is required, because the relative track address is in the BLDL list entry.
If you want to process only one member, you can process it as a sequential data
set (DSORG=PS) using either BSAM or QSAM. You specify the name of the
member you want to process and the name of the PDS in the DSNAME parameter
of the DD statement. When you open the data set, the system places the starting
address in the DCB so that a subsequent GET or READ macro begins processing at
that point. You cannot use the FIND, BLDL, or STOW macro when you are
processing one member as a sequential data set.
Because the DCBRELAD address in the DCB is updated when the FIND macro is
used, you should not issue the FIND macro after WRITE and STOW processing
without first closing the data set and reopening it for INPUT processing.
You can also use the STOW macro to delete, replace, or change a member name in
the directory and store additional information with the directory entry. Because an
alias can also be stored in the directory the same way, you should be consistent in
altering all names associated with a given member. For example, if you replace a
member, you must delete related alias entries or change them so that they point to
the new member. An alias cannot be stored in the directory unless the member is
present.
Although you can use any type of DCB with STOW, it is intended to be used with
a BPAM DCB. If you use a BPAM DCB, you can issue several writes to create a
member followed by a STOW to write the directory entry for the member.
Following this STOW, your application can write and stow another member.
If you add only one member to a PDS, and specify the member name in the
DSNAME parameter of the DD statement, it is not necessary for you to use BPAM
and a STOW macro in your program. If you want to do so, you can use BPAM and
STOW, or BSAM or QSAM. If you use a sequential access method, or if you use
BPAM and issue a CLOSE macro without issuing a STOW macro, the system will
issue a STOW macro using the member name you have specified on the DD
statement.
Note that no checks are made in STOW to ensure that a stow with a BSAM or
QSAM DCB came from CLOSE. When the system issues the STOW, the directory
entry that is added is the minimum length (12 bytes). This automatic STOW macro
will not be issued if the CLOSE macro is a TYPE=T or if the TCB indicates the task
is being abnormally ended when the DCB is being closed. The DISP parameter on
the DD statement determines what directory action parameter will be chosen by
the system for the STOW macro.
If DISP=NEW or MOD was specified, a STOW macro with the add option will be
issued. If the member name on the DD statement is not present in the data set
directory, it will be added. If the member name is already present in the directory,
the task will be abnormally ended.
If DISP=OLD was specified, a STOW macro with the replace option will be issued.
The member name will be inserted into the directory, either as an addition, if the
name is not already present, or as a replacement, if the name is present.
Thus, with an existing data set, use DISP=OLD to force a member into the
data set, and DISP=MOD to add members with protection against the
accidental destruction of an existing member.
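The choice of directory action for the automatic STOW can be modeled as follows (an illustrative sketch of the rules above, not a system interface):

```python
def automatic_stow_action(disp, name_in_directory):
    """Directory action CLOSE chooses for the automatic STOW, given the
    DD statement's disposition and whether the member name already
    exists in the directory."""
    if disp in ("NEW", "MOD"):
        # STOW with the add option: fails if the name is already present.
        return "ABEND" if name_in_directory else "ADD"
    if disp == "OLD":
        # STOW with the replace option: add or replace as needed.
        return "REPLACE" if name_in_directory else "ADD"
    raise ValueError("unexpected disposition")
```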
//PDSDD DD ---,DSN=MASTFILE(MEMBERK),DISP=SHR
...
OPEN (INDCB) Open for input, automatic FIND
...
GET INDCB,INAREA Read member record
...
CLOSE (INDCB)
...
When your program is run, OPEN searches the directory automatically and
positions the DCB to the member.
The system supplies a value for NCP during OPEN. For performance reasons, the
example shown in Figure 83 automatically takes advantage of the NCP value
calculated in OPEN or set by the user on the DD statement. If the FIND macro is
omitted and DSORG on the DCB changed to PS, the example shown in Figure 83
works to read a sequential data set with BSAM. The logic to do that is summarized
in “Using Overlapped I/O with BSAM” on page 359.
To retrieve a member of a PDS using the NOTE and POINT macros, take the
following steps. Figure 82 on page 432 is an example that uses note lists, which
should not be used with PDSEs.
1. Code DSORG=PO or POU in the DCB macro.
2. In the DD statement specify the data set name of the PDS by coding
DSNAME=name.
3. Issue the BLDL macro to get the list of member entries you need from the
directory.
4. Repeat the following steps for each member to be retrieved:
a. Use the FIND macro to prepare for reading the member records. (The
POINT macro does not work in a partitioned concatenation.)
b. The records can be read from the beginning of the member, or a note list
can be read first, to obtain additional locations that point to subcategories
within the member. If you want to read out of sequential order, use the
POINT macro to point to blocks within the member.
c. Read (and check) the records until all those required have been processed.
d. Your end-of-data-set (EODAD) routine receives control at the end of each
member. At that time, you can process the next member or close the data
set.
Figure 82 on page 432 shows the technique for processing several members
without closing and reopening. This demonstrates synchronous reading.
//PDSDD DD ---,DSN=D42.MASTFILE,DISP=SHR
...
OPEN (INDCB) Open for input, no automatic FIND
...
BLDL INDCB,BLDLLIST Retrieve the relative disk locations
* of several names in virtual storage
LA BLDLREG,BLDLLIST+4 Point to the first entry
INAREA DS CL80
INDCB DCB ---,DSORG=PO,DDNAME=PDSDD,MACRF=R
TTRN DS F TTRN of the NOTE list to point at
NOTEREG EQU 4 Register to address NOTE list entries
NOTELIST DS 0F NOTE list
DS F NOTE list entry (4 byte TTRN)
DS 19F one entry per subgroup
BLDLREG EQU 5 Register to address BLDL list entries
BLDLLIST DS 0F List of member names for BLDL
DC H’10’ Number of entries (10 for example)
DC H’18’ Number of bytes per entry
DC CL8’MEMBERA’ Name of member
DS CL3 TTR of first record (created by BLDL)
DS X K byte, concatenation number
DS X Z byte, location code
DS X C byte, flag and user data length
DS CL4 TTRN of NOTE list
... one list entry per member (18 bytes each)
Figure 82. Retrieving Several Members and Subgroups of a PDS without Overlapping I/O Time and CPU Time
The example in Figure 83 on page 433 does not use large block interface (LBI).
With BPAM there is no advantage in the current release to using LBI because the
block size cannot exceed 32 760 bytes. You can convert the example to BSAM by
omitting the FIND macro and changing DSORG in the DCB to PS. With BSAM LBI
you can read tape blocks that are longer than 32 760 bytes.
The technique shown in Figure 83 is more efficient than the technique shown in
Figure 82 on page 432 because the access method is transferring data while the
program is processing data that was previously read.
Figure 83. Reading a Member of a PDS or PDSE using Asynchronous BPAM (Part 1 of 2)
Figure 83. Reading a Member of a PDS or PDSE using Asynchronous BPAM (Part 2 of 2)
Tip: You can convert Figure 83 on page 433 to use LBI by making the following
changes:
v Add BLKSIZE=0 in the DCBE macro. Coding a nonzero value also requests LBI,
but it overrides the block size.
v After line (1), test whether the access method supports LBI. This is in case the
type of data set or the level of operating system does not support LBI. Insert
these lines to get the maximum block size:
TM DCBEFLG1,DCBESLBI Branch if access method does
BZ ROUND not support LBI
L R1,DCBEBLKSI Get maximum size of a block
v After line (2) get the size of the block:
TM DCBEFLG1,DCBESLBI Branch if
BZ RECORD1 not using LBI
SH R1,=X'12' Point to size
L R0,0(,R1) Get size of block
Modifying a PDS
A member of a PDS can be updated in place, or it can be deleted and rewritten as
a new member.
Updating in Place
A member of a PDS can be updated in place. Only one user can update at a time.
When you update-in-place, you read records, process them, and write them back to
their original positions without destroying the remaining records. The following
rules apply:
v You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, GET, PUTX,
CHECK, NOTE, POINT, FIND, BLDL, and STOW macros.
v You cannot update concatenated data sets.
v You cannot delete any record or change its length; you cannot add new records.
v You do not need to issue a STOW macro unless you want to change the user
data in the directory entry.
v You cannot use LBI.
//PDSDD DD DSNAME=MASTFILE(MEMBERK),DISP=OLD,---
...
UPDATDCB DCB DSORG=PS,DDNAME=PDSDD,MACRF=(R,W),NCP=2,EODAD=FINISH
READ DECBA,SF,UPDATDCB,AREAA,MF=L Define DECBA
READ DECBB,SF,UPDATDCB,AREAB,MF=L Define DECBB
AREAA DS --- Define buffers
AREAB DS ---
...
OPEN (UPDATDCB,UPDAT) Open for update
LA 2,DECBA Load DECB addresses
LA 3,DECBB
READRECD READ (2),SF,MF=E Read a record
NEXTRECD READ (3),SF,MF=E Read the next record
CHECK (2) Check previous read operation
In the following statements, 'R2' and 'R3' refer to the records that were read using the
DECBs whose addresses are in registers 2 and 3, respectively. Either register can point to
either DECBA or DECBB.
R2UPDATE CALL UPDATE,((2)) Call routine to update R2
* Must issue CHECK for the other outstanding READ before switching to WRITE.
* Unfortunately this CHECK can send us to EODAD.
Note the use of the execute and list forms of the READ and WRITE macros,
identified by the parameters MF=E and MF=L.
With QSAM
Update a member of a PDS using the locate mode of QSAM (DCB specifies
MACRF=(GL,PL)) and using the GET and PUTX macros. The DD statement must
specify the data set and member name in the DSNAME parameter. This method
permits only the updating of the member specified in the DD statement.
Rewriting a Member
There is no actual update option that can be used to add or extend records in a
PDS. If you want to extend or add a record within a member, you must rewrite the
complete member in another area of the data set. Because space is allocated when
the data set is created, there is no need to request additional space. Note, however,
that a PDS must be contained on one volume. If sufficient space has not been
allocated, the data set must be reorganized by the IEBCOPY utility program or
ISPF.
When you rewrite the member, you must provide two DCBs, one for input and
one for output. Both DCB macros can refer to the same data set, that is, only one
DD statement is required.
Concatenating PDSs
Two or more PDSs can be automatically retrieved by the system and processed
successively as a single data set. This technique is known as concatenation. There
are two types of concatenation: sequential and partitioned.
Sequential Concatenation
To process sequentially concatenated data sets, use a DCB that has DSORG=PS.
Each DD statement can include the following types of data sets:
v Sequential data sets, which can be on disk, tape, instream (SYSIN), TSO
terminal, card reader, and subsystem
v UNIX files
v PDS members
v PDSE members
Restriction: You cannot use this technique to read a z/OS UNIX directory.
Partitioned Concatenation
Concatenated PDSs are processed with a DSORG=PO in the DCB. When PDSs are
concatenated, the system treats the group as a single data set. A partitioned
concatenation can contain a mixture of PDSs, PDSEs, and UNIX directories.
Partitioned concatenation is supported only when the DCB is open for input.
Concatenated PDSs are always treated as having like attributes, except for block
size. BPAM OPEN takes all attributes from the first data set only, even if they
conflict with the block size parameter specified, and uses the largest block size
among the concatenated data sets. For concatenated format-F data sets (blocked or
unblocked), the LRECL for each data set must be equal.
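These attribute rules can be sketched as follows (illustrative Python; the dictionary shape is an assumption for illustration, not a real control block):

```python
def concat_open_attrs(datasets):
    """Model of how BPAM OPEN treats a partitioned concatenation:
    all attributes come from the first data set, except that the
    largest block size in the group is used; for format-F data sets
    every LRECL must match."""
    first = datasets[0]
    if first["recfm"].startswith("F"):
        if any(ds["lrecl"] != first["lrecl"] for ds in datasets):
            raise ValueError("format-F concatenation requires equal LRECLs")
    attrs = dict(first)                                   # first data set's attributes
    attrs["blksize"] = max(ds["blksize"] for ds in datasets)  # largest block size wins
    return attrs
```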
You process a concatenation of PDSs the same way you process a single PDS,
except that you must use the FIND macro to begin processing a member. You
cannot use the POINT (or NOTE) macro until after issuing the FIND macro for the
appropriate member. If two members of different data sets in the concatenation
have the same name, the FIND macro determines the address of the first one in the
concatenation. You would not be able to process the second one in the
concatenation. The BLDL macro provides the concatenation number of the data set
to which the member belongs in the K field of the BLDL list. (See
“BLDL—Construct a Directory Entry List” on page 424.)
This technique works when PDSs and PDSEs are concatenated. However, you
cannot use this technique to sequentially read a UNIX directory. The system
considers this to be a like sequential concatenation. See “Reading a PDSE
Directory” on page 476.
Topic Location
Advantages of PDSEs 439
Structure of a PDSE 441
Processing PDSE Records 444
Allocating Space for a PDSE 447
Defining a PDSE 450
Creating a PDSE Member 451
Processing a Member of a PDSE 454
Retrieving a Member of a PDSE 468
Sharing PDSEs 470
Modifying a Member of a PDSE 474
Reading a PDSE Directory 476
Concatenating PDSEs 477
Converting PDSs to PDSEs and Back 478
PDSE Address Spaces 479
Advantages of PDSEs
This section compares PDSEs to PDSs.
A PDSE is a data set divided into sequentially organized members, each described
by one or more directory entries. PDSEs are stored only on direct access storage
devices. In appearance, a PDSE is similar to a PDS. For accessing a PDS directory
or member, most PDSE interfaces are indistinguishable from PDS interfaces.
However, PDSEs have a different internal format, which gives them increased
usability. Each member name can be up to eight bytes long. The primary name for
a program object can also be up to eight bytes long, and alias names for program
objects can be up to 1024 bytes long. The records of a given member of a PDSE are
written or retrieved sequentially.
You can use a PDSE in place of a PDS to store data, or to store programs in the
form of program objects. A program object is similar to a load module in a PDS. A
load module cannot reside in a PDSE and be used as a load module. One PDSE
cannot contain a mixture of program objects and data members.
PDSEs and PDSs are processed using the same access methods (BSAM, QSAM,
BPAM) and macros, but you cannot use EXCP because of the data set’s internal
structures.
PDSEs have several features that improve both your productivity and system
performance. The main advantage of using a PDSE over a PDS is that PDSEs
automatically reuse space within the data set without anyone having to
periodically run a utility to reorganize it. See “Rewriting a Member” on page 437.
The size of a PDS directory is fixed regardless of the number of members in it,
while the size of a PDSE directory is flexible and expands to fit the members
stored in it. Also, the system reclaims space automatically whenever a member is
deleted or replaced, and returns it to the pool of space available for allocation to
other members of the same PDSE. The space can be reused without having to do
an IEBCOPY compress. Figure 85 shows these advantages.
Related reading: For information about macros used with PDSEs, see “Processing
a Member of a PDSE” on page 454 and z/OS DFSMS Macro Instructions for Data
Sets. For information about using RACF to protect PDSEs, see Chapter 5,
“Protecting Data Sets,” on page 53. For information about load modules and
program objects see z/OS MVS Program Management: User’s Guide and Reference.
In Figure 85, when member B is deleted, the space it occupied becomes available
for reuse by new members D and E.
Structure of a PDSE
When accessed sequentially, through BSAM or QSAM, the PDSE directory appears
to be constructed of 256-byte blocks containing sequentially ordered entries. The
PDSE directory looks like a PDS directory even though its internal structure and
block size are different. PDSE directory entries vary in length. Each directory entry
contains the member name or an alias, the starting location of the member within
the data set and optionally user data. The directory entries are arranged by name
in alphanumeric collating sequence.
You can use BSAM or QSAM to read the directory sequentially. The directory is
searched and maintained by the BLDL, DESERV, FIND, and STOW macros. If you
use BSAM or QSAM to read the directory of a PDSE which contains program
objects with names longer than 8 bytes, directory entries for these names will not
be returned. If you need to be able to view these names, you must use the
DESERV FUNC=GET_ALL interface instead of BSAM or QSAM. Similarly, the
BLDL, FIND, and STOW macro interfaces allow specification of only 8-byte
member names. There are analogous DESERV functions for each of these interfaces
to allow for processing names greater than 8 bytes. See “PDS Directory” on page
416 for a description of the fields in a PDSE directory entry.
The PDSE directory is indexed, permitting more direct searches for members.
Hardware-defined keys are not used to search for members. Instead, the name and
the relative track address of a member are used as keys to search for members. The
TTRs in the directory can change if you move the PDSE, since for PDSE members
the TTRs are not relative track and record numbers but rather pseudo randomly
generated aliases for the PDSE member. These TTRs may sometimes be referred to
as Member Locator Tokens (MLTs).
The limit for the number of members in a PDSE directory is 522,236. The PDSE
directory is expandable; you can keep adding entries up to the directory’s size
limit or until the data set runs out of space. The system uses the space it needs for
the directory entries from storage available to the data set.
For a PDS, the size of the directory is determined when the data set is initially
allocated. There can be fewer members in the data set than the directory can
contain, but when the preallocated directory space is full, the PDS must be copied
to a new data set before new members can be added.
Reuse of Space
When a PDSE member is updated or replaced, it is written in the first available
space. This is either at the end of the data set or in a space in the middle of the
data set marked for reuse. This space need not be contiguous. The objective of the
space reuse algorithm is to avoid extending the data set unnecessarily.
With the exception of UPDATE, a member is never immediately written back to its
original space. The old data in this space is available to programs that had a
connection to the member before it was rewritten. The space is marked for reuse
only when all connections to the old data are dropped. However, once they are
dropped, there are no pointers to the old data, so no program can access it. A
connection may be established at the time a PDSE is opened, or by BLDL, FIND,
POINT, or DESERV. These connections remain in effect until the program closes the
PDSE or the connections are explicitly released by issuing DESERV
FUNC=RELEASE, STOW disconnect, or (in certain cases) another POINT or
FIND. Pointing to the directory can also release connections. Connections are
dropped when the data set is closed.
Related reading: For more information about connections see z/OS DFSMS Macro
Instructions for Data Sets.
Directory Structure
Logically, a PDSE directory looks the same as a PDS directory. It consists of a series
of directory records in a block. Physically, it is a set of pages at the front of the
data set, plus additional pages interleaved with member pages. Five directory
pages are initially created at the same time as the data set. New directory pages
are added, interleaved with the member pages, as new directory entries are
required. A PDSE always occupies at least five pages of storage.
While the preceding notes can be used to define an algorithm for calculating PDSE
TTRs, it is strongly recommended that you not do TTR calculations because this
algorithm might change with new releases of the system.
Figure 86. TTRs for an unblocked member: Block 1 = X'100001', Block 2 = X'100002'
Figure 87. TTRs for a blocked member: Block 1 = X'100001', Block 2 = X'10000B'
In both examples, PDSE member A has a TTR of X'000002'. In Figure 86, the records
are unblocked; the record number1 for logical record 1 is X'100001' and for
logical record 2 is X'100002'.
In Figure 87, the records are fixed length, blocked with LRECL=80 and
BLKSIZE=800. The first block is identified by the member TTR, the second block
by a TTR of X'10000B', and the third block by a TTR of X'100015'. Note that the
TTRs of the blocks differ by an amount of 10, which is the blocking factor.
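The blocking-factor arithmetic above can be sketched in Python. This is an illustrative model only, not a DFSMS interface; the function name is invented here. It reproduces the record numbers of the Figure 87 example (LRECL=80, BLKSIZE=800).

```python
# Illustrative model, not a DFSMS API: the record-number TTR of the Nth
# block of a fixed blocked PDSE member advances by the blocking factor.

def block_ttr(member_base: int, block_index: int, lrecl: int, blksize: int) -> int:
    """Record-number TTR of the Nth block (0-based) of a member."""
    blocking_factor = blksize // lrecl      # 800 // 80 = 10 in Figure 87
    return member_base + 1 + block_index * blocking_factor

# Reproduces X'100001', X'10000B', X'100015' from the Figure 87 example:
print([hex(block_ttr(0x100000, i, 80, 800)) for i in range(3)])
# → ['0x100001', '0x10000b', '0x100015']
```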
To position to a member, use the TTR obtained from the BLDL or NOTE macro, or
a BSAM read of the directory, or DESERV FUNC=GET or FUNC=GET_ALL. To
locate the TTR of a record within a member, use the NOTE macro (see “Using the
NOTE Macro to Provide Relative Position” on page 465).
1. The first record in a member can be pointed to using the TTR for the member (in the examples above, X'000002').
Related reading: See “Block Size (BLKSIZE)” on page 327 for information about
using BLKSIZE. Also see “Reading a PDSE Directory” on page 476.
user-defined or system-defined block size is saved in the data set label when the
records are written, and becomes the default block size for input. These
constructed blocks are called simulated blocks.
Figure 88 shows an example of how the records are reblocked when the PDSE
member is read:
Suppose you create a PDSE member that has a logical record length of 80 bytes,
such that you write five blocks with a block size of 160 (blocking factor of 2) and
five short blocks with a block size of 80. When you read back the PDSE member,
the logical records are reblocked into seven 160-byte simulated blocks and one
short block. Note that short block boundaries are not saved on output.
You also can change the block size of records when reading the data set. Figure 89
shows how the records are reblocked when read:
Figure 89. Example of Reblocking When the Block Size Has Been Changed
Suppose you write three blocks with block size of 320, and the logical record
length is 80 bytes. Then if you read this member with a block size of 400 (blocking
factor of 5), the logical records are reblocked into two 400-byte simulated blocks
and one 160-byte simulated block.
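The reblocking in both examples can be modeled as a simple reflow of the logical records into simulated blocks of the reader's block size. This is an illustrative sketch, not a DFSMS interface; the function name is invented here.

```python
# Illustrative sketch of PDSE simulated reblocking: logical records are
# reflowed into simulated blocks of the reader's block size, regardless
# of the block boundaries used on output (short blocks are not saved).

def simulated_blocks(total_records: int, lrecl: int, read_blksize: int):
    """Sizes of the simulated blocks seen when reading the member back."""
    per_block = read_blksize // lrecl
    blocks = []
    while total_records > 0:
        n = min(per_block, total_records)
        blocks.append(n * lrecl)
        total_records -= n
    return blocks

# Figure 88 example: five 160-byte blocks plus five 80-byte short blocks
# written (15 records), read back with BLKSIZE=160.
print(simulated_blocks(15, 80, 160))   # → seven 160-byte blocks + one short

# Figure 89 example: three 320-byte blocks written (12 records),
# read back with BLKSIZE=400.
print(simulated_blocks(12, 80, 400))   # → [400, 400, 160]
```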
If the data set was a PDS, you would not be able to change the block size when
reading the records.
Related reading: See z/OS MVS System Management Facilities (SMF) for more
information about SMF.
This section shows how to use the SPACE JCL keyword to allocate primary and
secondary storage space amounts for a PDSE. The PDSE directory can extend into
secondary space. A PDSE can have a maximum of 123 extents. A PDSE cannot
extend beyond one volume. Note that a fragmented volume might use up extents
more quickly because you get less space with each extent. With a
SPACE=(CYL,(1,1,1)) specification, the data set can extend to 123 cylinders (if space
is available).
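The maximum-size arithmetic above can be sketched as follows, assuming the first extent holds the primary amount and each remaining extent holds one secondary amount. This is not a DFSMS interface; the names are illustrative.

```python
# Sketch of a PDSE's maximum allocation: one primary extent plus up to
# 122 secondary extents (123 extents total, on a single volume).

def max_pdse_allocation(primary: int, secondary: int, max_extents: int = 123) -> int:
    return primary + (max_extents - 1) * secondary

print(max_pdse_allocation(1, 1))   # SPACE=(CYL,(1,1,1)) → 123 cylinders
```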
Guideline: If you use JCL to allocate the PDSE, you must specify the number of
directory blocks in the SPACE parameter, or the allocation fails.
However, if you allocate a PDSE using data class, you can omit the
number of directory blocks in the SPACE parameter. For a PDSE, the
number of directory blocks is unlimited.
Related reading:
v See “Allocating Space for a PDS” on page 419 for examples of the SPACE
keyword.
v See Chapter 3, “Allocating Space on Direct Access Volumes,” on page 35, z/OS
MVS JCL User’s Guide, and z/OS MVS JCL Reference for information about
allocating space.
The following are some areas to consider when determining the space
requirements for a PDSE.
Integrated Directory
All PDSE space is available for either directory or member use. Within a data set,
there is no difference between pages used for the directory and pages used for
members. As the data set grows, the members and directory have the same space
available for use. The directory, or parts of it, can be in secondary extents.
The format of a PDSE lets the directory contain more information. This information
can take more space than a PDS directory block.
The PDSE directory contains keys (member names) in a compressed format. The
insertion or deletion of new keys may cause the compression of other directory
keys to change. Therefore, the change in directory size may differ from the size
of the inserted or deleted record.
Studies show that a typical installation has 18% to 30% of its PDS space in the
form of gas. This space is unusable to the data set until it has been compressed. A
PDSE dynamically reuses all the allocated space according to a first-fit algorithm.
You do not need to make any allowance for gas when you allocate space for a
PDSE.
Space is only reclaimed for an OPEN for output when it is the only open for
output on that system. PDSE space cannot be reclaimed immediately after a
member is deleted or updated. If a deleted or updated member still has an existing
connection from another task (or the input DCB from an ISPF edit session), the
member space is not reclaimed until the connection is released and the data set is
opened for output and that OPEN for OUTPUT is the only one on that system.
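The reclaim conditions described above can be summarized as a simple predicate. This is a conceptual sketch of the stated rules, not a DFSMS interface; the function and parameter names are invented here.

```python
# Conceptual sketch: space for a deleted or updated PDSE member is
# reclaimed only when no connections to the old data remain and exactly
# one OPEN for output exists on the system.

def space_reclaimable(open_output_count: int, member_connections: int) -> bool:
    return member_connections == 0 and open_output_count == 1

print(space_reclaimable(1, 0))   # True: sole output open, no connections
print(space_reclaimable(2, 0))   # False: another open for output exists
print(space_reclaimable(1, 3))   # False: connections are still held
```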
ABEND D37 can occur on a PDSE indicating it is FULL, but another member can
still be saved in the data set. Recovery processing from an ABEND D37 in ISPF
closes and reopens the data set. This new open of the data set allows PDSE code to
reclaim space so a member can now be saved.
Data set compression is not necessary with a PDSE. Since there is no gas
accumulation in a PDSE, there is no need for compression.
Extent Growth
A PDSE can have up to 123 extents. Because a PDSE can have more secondary
extents, you can get the same total space allocation with a smaller secondary
allocation. A PDS requires a secondary extent about eight times larger than a PDSE
to have the same maximum allocation. Conversely, for a given secondary extent
value, PDSEs can grow about eight times larger before needing to be condensed.
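The "about eight times" figure follows from the per-volume extent limits. The PDS limit of 16 extents is a known z/OS limit assumed here (it is not stated in this passage); the sketch below is back-of-envelope arithmetic, not a DFSMS interface.

```python
# Back-of-envelope check of the ~8x claim: a data set's maximum size is
# one primary extent plus (extent limit - 1) secondary extents.

PDS_MAX_EXTENTS = 16     # assumed known z/OS limit for a PDS
PDSE_MAX_EXTENTS = 123   # PDSE limit stated in this section

def max_allocation(primary: int, secondary: int, max_extents: int) -> int:
    return primary + (max_extents - 1) * secondary

# For equal secondary sizes, the PDSE can grow this much farther:
ratio = (PDSE_MAX_EXTENTS - 1) / (PDS_MAX_EXTENTS - 1)
print(round(ratio, 1))   # → 8.1, i.e. "about eight times"
```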
Defragmenting is the process by which multiple small extents are consolidated into
fewer large extents. This operation can be performed directly from the interactive
storage management facility (ISMF) data set list.
Although the use of smaller extents can be more efficient from a space
management standpoint, to achieve the best performance you should avoid
fragmenting your data sets whenever possible.
Applications can use different logical block sizes to access the same PDSE. The
block size in the DCB is logical and has no effect on the physical block (page) size
being used.
The DCB block size does not affect the physical block size and all members of a
PDSE are assumed to be reblockable. If you code your DCB with BLKSIZE=6160,
the data set is physically reblocked into 4 KB pages, but your program still sees
6160-byte logical blocks.
Free Space
The space for any library can be overallocated. This excess space can be released
manually with the FREE command in ISPF. Or you could code the release (RLSE)
parameter on your JCL or select a management class that includes the release
option partial.
Fragmentation
Most allocation units are approximately the same size. This is because of the way
members are buffered and written in groups of multiple pages. There is very little,
if any, fragmentation in a PDSE.
If there is fragmentation, copy the data set with IEBCOPY or DFSMSdss. The
fragmented members are recombined in the new copy.
Defining a PDSE
This section shows how to define a PDSE. The DSNTYPE keyword defines either a
PDSE or PDS. The DSNTYPE values follow:
v LIBRARY (defines a PDSE)
v PDS (defines a partitioned data set)
To define PDSE data set types, specify DSNTYPE=LIBRARY in a data class
definition, a JCL DD statement, the LIKE keyword, the TSO ALLOCATE command,
or the DYNALLOC macro.
Recommendation: If you do not want to allocate the data set as a PDSE, but the
data class definition set up in the ACS routine specifies DSNTYPE, override it in
one of two ways:
v By specifying a data class without the DSNTYPE keyword (in the JCL DD
statement or ISMF panel).
v By specifying DSNTYPE=PDS in the JCL DD statement, data class, LIKE
keyword, or ALLOCATE command.
When you create a data set and specify the number of directory entries or
DSORG=PO or the data class has DSORG=PO without being overridden, SMS
chooses whether it will be a PDS or PDSE. SMS uses the first source of information
in the following list:
v DSNTYPE=PDS or DSNTYPE=LIBRARY (for a PDSE) in JCL or dynamic
allocation.
v DSNTYPE of PDS or LIBRARY in data class.
v Installation default (in the IGDSMSxx member of SYS1.PARMLIB).
v PDS.
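The precedence list above amounts to a first-match search over the possible sources. The sketch below is a conceptual model of that order, not SMS code; the function and parameter names are invented here.

```python
# Conceptual model of the DSNTYPE source precedence: JCL or dynamic
# allocation first, then data class, then the IGDSMSxx installation
# default, and finally PDS if no source specifies a type.

def resolve_dsntype(jcl=None, data_class=None, parmlib_default=None) -> str:
    for source in (jcl, data_class, parmlib_default):
        if source is not None:
            return source
    return "PDS"

print(resolve_dsntype(data_class="LIBRARY"))            # 'LIBRARY'
print(resolve_dsntype(jcl="PDS", data_class="LIBRARY")) # JCL wins: 'PDS'
print(resolve_dsntype())                                # default: 'PDS'
```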
An error condition exists and the job is ended with appropriate messages if the
DSNTYPE keyword was specified in the JCL, but the job runs on a processor with
a release of MVS that does not support the JCL keyword DSNTYPE. Message
IEF630I is issued.
//PDSEDD DD DSNAME=MASTFILE(MEMBERK),SPACE=(TRK,(100,5,7)),
// DISP=(NEW,CATLG),DCB=(RECFM=FB,LRECL=80,BLKSIZE=80),
// DSNTYPE=LIBRARY,STORCLAS=S1P01S01,---
...
OPEN (OUTDCB,(OUTPUT))
...
PUT OUTDCB,OUTAREA Write record to member
...
CLOSE (OUTDCB) Automatic STOW
...
OUTAREA DS CL80 Area to write from
OUTDCB DCB ---,DSORG=PS,DDNAME=PDSEDD,MACRF=PM
You can use the same program to allocate either a sequential data set or a member
of a PDS or PDSE with only a change to the JCL, as follows:
1. The PDSE might be system managed. Specify a STORCLAS in the DD
statement for the PDSE, or let the ACS routines direct the data set to
system-managed storage.
2. Code DSORG=PS in the DCB macro.
3. Specify in the DD statement that the system is to store the data as a member of
a PDSE; that is, DSNAME=name(membername).
4. Either specify a data class in the DD statement or allow the ACS routines to
assign a data class.
5. Use an OPEN macro, a series of PUT or WRITE macros, and the CLOSE macro
to process the member. When the data set is closed, the system issues a STOW
macro.
As a result of these steps, the data set and its directory are created, the records of
the member are written, and an entry is automatically made in the directory with
no user data.
A PDSE becomes a PDSE program library when the binder stores the PDSE’s first
member.
The example in Figure 91 on page 453 shows how to process more than one PDSE
or PDS member without closing and reopening the data set.
//PDSEDD DD ---,DSN=MASTFILE,DISP=MOD,SPACE=(TRK,(100,5,7))
...
OPEN (OUTDCB,(OUTPUT))
...
** WRITE MEMBER RECORDS
The A option on STOW in Figure 91 means the members did not exist before. You
can code R to replace or add members.
...
OPEN (DCB1,(OUTPUT),DCB2,(OUTPUT))
WRITE DECB1,SF,DCB1,BUFFER Write record to 1st member
CHECK DECB1
...
WRITE DECB2,SF,DCB2,BUFFER Write record to 2nd member
CHECK DECB2
...
STOW DECB1,PARML1,R Enter 1st member in the directory
STOW DECB2,PARML2,R Enter 2nd member in the directory
...
DCB1 DCB DSORG=PO,DDNAME=X, ... Both DCBs open to the
DCB2 DCB DSORG=PO,DDNAME=X, ... same PDSE
The R option of STOW in Figure 92 on page 453 means you are adding new
members or replacing members. You could code A to mean you are only adding
new members.
Open two DCBs to the same PDSE, write the member records, and issue STOW for
them. Code different names for the parameter list in the STOW macro for each
member written in the PDSE directory.
Related reading: For more information, see z/OS UNIX System Services Command
Reference.
| You can use IEBCOPY to copy between PDSEs and PDS data sets. When using
| IEBCOPY to copy data members between PDSEs and PDS data sets, the most
| efficient way to do so (where a conversion is required) is to use a 2-step
| process:
| 1. Use IEBCOPY UNLOAD to copy selected members or the entire PDS or PDSE
| to a sequential file.
| 2. Use IEBCOPY LOAD to copy these members or the data set into a PDSE or
| PDS.
| The performance is significantly better than a direct 1-step copy operation between
| unlike data set formats. Note that this recommendation applies to PDSEs with
| data members, not to PDSE libraries that contain program objects, which cannot
| be converted by an IEBCOPY load process.
PDSEs are designed to automatically reuse data set storage when a member is
replaced. PDSs do not reuse space automatically. If a member is deleted or
replaced, the old copy of the PDS or PDSE member remains available to
applications that were accessing that member’s data before it was deleted or
replaced.
A connection to a PDSE member can be established by any of the following:
v FIND by name
v FIND by TTR
v POINT
All connections established to members while a data set was opened are released
when the data set is closed. If the connection was established by FIND by name,
the connection is released when another member is connected through FIND or
POINT. The system reclaims the space used when all connections for a specific
member have been released.
If deleting or replacing a member, the old version of the member is still accessible
by those applications connected to it. Any application connecting to the member
by name (through BLDL, FIND, or OPEN) following the replace operation accesses
the new version. (The replaced version cannot be accessed using a FIND by TTR or
POINT unless a connection already exists for it.)
Connections established by OPEN, BLDL, FIND, and POINT are used by BSAM,
QSAM, and BPAM for reading and writing member data. Connections established
by DESERV are primarily used by program management. When your program or
the system closes the DCB, the system drops all connections between the DCB and
the data set. If you use BLDL, FIND by TTR, or POINT to connect to members,
you can disconnect those members before closing the DCB by issuing STOW DISC.
If you use DESERV to connect to members you can disconnect those members
before closing the DCB by issuing DESERV FUNC=RELEASE.
BLDL also searches a concatenated series of directories when (1) a DCB is supplied
that is opened for a concatenated PDS or (2) a DCB is not supplied, in which case
the search order begins with the TASKLIB, then proceeds to the JOBLIB or
STEPLIB (themselves perhaps concatenated) followed by LINKLIB.
| You can alter the sequence of directories searched if you supply a DCB and specify
| START= or STOP= parameters. These parameters allow you to specify the first and
| last concatenation numbers of the data sets to be searched.
You can improve retrieval time by directing a subsequent FIND macro to the BLDL
list rather than to the directory to locate the member to be processed.
Figure 77 on page 425 shows the BLDL list, which must begin with a 4-byte list
description that specifies the number of entries in the list and the length of each
entry (12 to 76 bytes). If you specify an option such as NOCONNECT,
| BYPASSLLA, START=, or STOP=, an 8-byte BLDL prefix must precede the 4-byte
list descriptor. The first 8 bytes of each entry contain the member name or alias.
The next 6 bytes contain the TTR, K, Z, and C fields. The minimum directory
length is 12 bytes.
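The list layout just described can be sketched in Python. This is an illustrative model of the byte layout only: on z/OS the names are EBCDIC (ASCII is used here for portability), the function name is invented, and BLDL itself fills in the TTR, K, Z, and C fields that this sketch leaves zeroed.

```python
import struct

def build_bldl_list(names, entry_len=14):
    """Sketch of the BLDL list layout: a 4-byte descriptor (entry count,
    entry length), then one entry per member: an 8-byte blank-padded
    name followed by fields BLDL fills in (TTR, K, Z, C for 14 bytes)."""
    assert 12 <= entry_len <= 76
    buf = struct.pack(">HH", len(names), entry_len)   # big-endian halfwords
    for name in names:
        entry = name.ljust(8).encode("ascii")         # EBCDIC on z/OS
        entry += bytes(entry_len - 8)                 # zeroed; set by BLDL
        buf += entry
    return buf

lst = build_bldl_list(["MEMBERA", "MEMBERB"])
print(len(lst))   # 4-byte descriptor + 2 entries of 14 bytes = 32
```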
Like a BSAM or QSAM read of the directory, the BLDL NOCONNECT option does
not connect the PDSE members. The BLDL NOCONNECT option causes the
system to use less virtual storage. The NOCONNECT option is appropriate when
BLDLs are issued for many members that might not be processed.
Do not use the NOCONNECT option if two applications will process the same
member. For example, if an application deletes or replaces a version of a member
and NOCONNECT was specified, that version is inaccessible to any application
that is not connected.
For PDSE program libraries, you can direct BLDL to search the LINKLST, JOBLIB,
and STEPLIB. Directory entries for load modules located in the link pack area
(LPA) cannot be accessed by the BLDL macro.
For variable blocked spanned (RECFM=VBS) records, the BSP macro backspaces to
the start of the first record in the buffer just read. The system does not backspace
within record segments. Issuing the BSP macro followed by a read always begins
the block with the first record segment or complete segment. (A block can contain
more than one record segment.)
If you write in a PDSE member and issue the BSP macro followed by a WRITE
macro, you destroy all the data of the member beyond the record just written.
All functions return results to the caller in areas provided by the invoker or areas
returned by DE services. All functions provide status information in the form of
return and reason codes. The IGWDES macro maps all the parameter areas.
FUNC=GET
DESERV GET returns SMDEs for members of opened PDSs or PDSEs or a
concatenation of PDSs or PDSEs. The data set can be opened for either input,
output, or update. The SMDE contains the PDS or PDSE directory. The SMDE is
mapped by the IGWSMDE macro and contains a superset of the information that
is mapped by IHAPDS. The SMDE returned can be selected by name or by BLDL
directory entry.
Input by Name List: If you want to select SMDEs by name, you supply a list of
names that must be sorted in ascending order, without duplicates. Each name
comprises a 2-byte length field followed by the characters of the name. When
searching for names with less than eight characters, the names are padded on the
right with blanks to make up eight characters. For each length field that contains a
value greater than eight, DE services ignores trailing blanks and nulls beyond the
eighth byte when doing the search.
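The name rules just described can be sketched as a small normalization step. This is an illustrative model of the stated rules only, not the DESERV interface; the function name is invented, and the sorted-ascending, no-duplicates requirement on the input list is the caller's responsibility.

```python
# Sketch of the DESERV GET input-name rules described above: names
# shorter than 8 characters are blank-padded to 8; for names longer
# than 8, trailing blanks and nulls beyond the eighth byte are ignored.

def normalize_deserv_name(name: str) -> str:
    if len(name) <= 8:
        return name.ljust(8)
    return name[:8] + name[8:].rstrip(" \x00")

print(repr(normalize_deserv_name("ABC")))            # 'ABC     '
print(repr(normalize_deserv_name("LONGNAME12  ")))   # 'LONGNAME12'
```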
All connections made through a single call to GET are associated with a single
unique connect identifier. The connect identifier may be used to release all the
connections in a single invocation of the RELEASE function. An example of
DESERV GET is shown in Figure 93.
FUNC=GET_ALL
The GET_ALL function returns SMDEs for all the member names in a PDS, a
PDSE, or a concatenation of PDSs and PDSEs. Member level connections can be
established for each member found in a PDSE. A caller uses the CONCAT
parameter to indicate which data set in the concatenation is to be processed, or
whether all of the data sets in the concatenation are to be processed.
If the caller requests that DESERV GET_ALL return all the SMDE directory entries
for an entire concatenation, the SMDEs are returned in sequence as sorted by the
SMDE_NAME field without returning duplicate names. As with the GET function,
all connections can be associated with a single connect identifier established at the
time of the call. This connect identifier can then be used to release all the
connections in a single invocation of the RELEASE function. See Figure 95 on page
460 for an overview of control blocks related to the GET_ALL function.
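The GET_ALL ordering can be sketched as a sorted merge of the concatenated directories. This is a conceptual model only: the text above says duplicates are suppressed but does not say which copy survives, so the choice here (the entry from the earliest data set, matching normal concatenation search order) is an assumption, and the function name is invented.

```python
# Conceptual model of GET_ALL over a concatenation: entries from all
# data sets returned sorted by name, duplicate names suppressed.
# ASSUMPTION: the earliest data set in the concatenation wins.

def concat_get_all(*directories):
    seen, out = set(), []
    for name, ds_index in sorted(
            (name, i) for i, d in enumerate(directories) for name in d):
        if name not in seen:
            seen.add(name)
            out.append((name, ds_index))
    return out

print(concat_get_all(["B", "A"], ["A", "C"]))
# → [('A', 0), ('B', 0), ('C', 1)]
```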
FUNC=GET_NAMES
The GET_NAMES function obtains a list of all the names, and the associated
application data, for a member of a PDSE. This function does not support
PDSs.
The caller provides a name or its alias name for the member as input to the
function. The buffer is mapped by the DESB structure and is formatted by
GET_NAMES. This function will return data in a buffer obtained by GET_NAMES.
The data structure returned in the DESB is the member descriptor structure
(DESD). The DESD_NAME_PTR field points to the member or alias name. The
DESD_DATA_PTR points to the application data. For a data member, the
application data is the user data from the directory entry. For a primary member
name of a program object, the application data is mapped by the PMAR and
PMARL structures of the IGWPMAR macro. For an alias name of a program object,
the application data is mapped by the PMARA structure of the IGWPMAR macro.
The DESB_COUNT field indicates the number of entries in the DESD, which is
located at the DESB_DATA field. The buffer is obtained in a subpool as specified
by the caller and must be released by the caller. If the caller is in key 0 and
subpool 0 is specified, the DESB will be obtained in subpool 250.
See Figure 96 on page 461 for an overview of control blocks related to the
GET_NAMES function.
FUNC=RELEASE
The RELEASE function can remove connections established through the GET and
GET_ALL functions. The caller must specify the same DCB which was passed to
DESERV to establish the connections. The connections established by the BLDL,
FIND, or POINT macro are unaffected.
The caller can specify which connections are to be removed in one of two ways:
by supplying either a connect identifier or a list of SMDEs. The function removes
all connections from a single request of the GET or GET_ALL functions if the
caller passes a connect identifier. Alternatively, if provided with a list of SMDEs, the function
removes the connections associated with the versions of the member names in the
SMDEs.
If all connections of a connect identifier are released based on SMDEs, the connect
identifier is not freed or reclaimed. Only release by connect identifier will cause DE
services to reclaim the connect id for future use. It is not an error to include
SMDEs for PDS data sets even though connections can’t be established. It is an
error to release an unused connect identifier. It is also an error to release a PDSE
SMDE for which there is no connection.
The DE services user does not need to issue the RELEASE function to release
connections, because all connections not explicitly released are released when the
DCB is closed. See Figure 97 on page 462 for an overview of control blocks related to the
RELEASE function.
FUNC=UPDATE
Update selected fields of the directory entry for a PDSE program object using the
DESERV UPDATE function. This lets the caller update selected fields of the PMAR.
The caller must supply a DCB that is open for output or update. The caller must
also supply a DESL that points to the list of SMDEs to be updated. The DESL is
processed in sequence, and a code indicates whether each update was successful
or unsuccessful. The SMDE (as produced by the GET function) contains the MLT and
concatenation number of the member as well as an item number. These fields will
be used to find the correct directory record to be updated. The DESL_NAME_PTR
is ignored. The caller should issue a DESERV GET function call to obtain the
SMDEs; modify the SMDEs as required; and issue a DESERV UPDATE function
call to pass the updated DESL.
The UPDATE function does not affect the directory entry imbedded in the program
object. This has implications for a binder inclusion of a PDSE program object as a
sequential file. The binder can use the directory entry in the program object rather
than the one in the directory.
The UPDATE function does not affect connections established by other DE services
invocations.
There are two ways you can direct the system to the right member when you use
the FIND macro. Specify the address of an area containing the name of the
member, or specify the address of the TTRk field of the entry in a BLDL list you
have created, by using the BLDL macro. k is the concatenation number of the data
set containing the member. In the first case, the system searches the directory of
the data set to connect to the member. In the second case, no search is required,
because the relative track address is in the BLDL list entry.
If the data set is open for output, close it and reopen it for input or update
processing before issuing the FIND macro.
If you have insufficient access authority (RACF execute authority), or if the share
options are violated, the FIND macro fails.
Related reading: See “Sharing PDSEs” on page 470 for a description of the share
options permitted for PDSEs.
If you are testing a single data set, use the CONCAT default, which is 0. The
CONCAT parameter is used only for partitioned concatenation, not sequential
concatenation. For sequential concatenation, the current data set is tested. The
return code in register 15 shows whether the function failed or is not supported on
the system.
ISITMGD can also be used to determine the type of library, either data or program.
Specifying the DATATYPE option on the ISITMGD macro will set the library type
in the parameter list. See constants ISMDTREC, ISMDTPGM, and ISMDTUNK in
macro IGWCISM for the possible data type settings.
If you issue the NOTE macro while pointing to within the PDSE directory, a TTRz
is returned that represents the location of the first directory record.
The TTRz returned from a NOTE for the first directory record is the only valid
TTRz that can be used for positioning by POINT while processing within the PDSE
directory.
Here are some examples of results when using NOTE with PDSEs. A NOTE:
v immediately following an OPEN returns a nonvalid address (X'00000000'). Also,
if a member is being pointed to using a FIND macro or by the member name in
the JCL, but no READ, WRITE, or POINT has been issued, NOTE returns a
nonvalid address of (X'00000000').
v immediately following a STOW ADD or STOW REPLACE returns the TTRz of
the logical end-of-file mark for the member stowed. If the member is empty (no
writes done), the value returned is the starting TTRz of the member stowed.
v following any READ after an OPEN returns the starting TTRz of the PDSE
directory if no member name is in the JCL, or the TTRz of the member if the
member name is in the JCL.
v following the first READ after a FIND or POINT (to the first record of a
member) returns the TTRz of the member.
v following the first WRITE of a member returns the TTRz of the member.
v following a later READ or WRITE returns the TTRz of the first logical record in
the block just read or written.
v issued while positioned in the middle of a spanned record returns the TTRz of
the beginning of that record.
v issued immediately following a POINT operation (where the input to the POINT
was in the form “TTR1”) will return a note value of “TTR0”.
v issued immediately following a POINT operation (where the input to the POINT
was in the form “TTR0”) will return a nonvalid note value (X'00000000').
Related reading: For information about the NOTE macro, see “Using the NOTE
Macro to Return the Relative Address of a Block” on page 515 and z/OS DFSMS
Macro Instructions for Data Sets.
positioning to the beginning of a member, the z byte in the TTR must be zero. The
POINT macro establishes a connection to the PDSE member (unless the connection
already exists).
The POINT macro positions to the first segment of a spanned record even if the
NOTE was done on another segment. If the current record spans blocks, setting the
z byte of the TTRz field to one lets you access the next record (not the next
segment).
You can position from one PDSE member to the first block of another member.
Then you can position to any record within that member. Attempting to position
from one member into the middle of another member causes the wrong record to
be accessed. Either data from the first member will be read, or an I/O error will
occur. When the PDSE is open for output, using the POINT macro to position to a
member other than the one being written results in a system ABEND.
If you have insufficient access authority (you have only RACF execute authority)
or if the share options are violated, the POINT macro fails with an I/O error. See
“Sharing PDSEs” on page 470.
Related reading: For more information about the POINT macro, see z/OS DFSMS
Macro Instructions for Data Sets and “Using the POINT Macro to Position to a
Block” on page 516.
Figure 100. Using NOTE and FIND to Switch Between Members of a Concatenated PDSE
This example uses FIND by TTR. Note that when your program resumes reading a
member, that member might have been replaced by another program. See “Sharing
PDSEs” on page 470.
You can also use the STOW macro to add, delete, replace, or change a member
name in the directory. The add and replace options also store additional
information in the directory entry. When you use STOW REPLACE to replace a
primary member name, any existing aliases are deleted. When you use STOW
DELETE to delete a primary member name, any existing aliases are deleted. STOW
ADD and REPLACE are not permitted against PDSE program libraries.
The STOW INITIALIZE function allows you to clear, or reset to empty, a PDSE
directory, as shown in Figure 101:
Issuing the STOW macro synchronizes the data to DASD. See “Using the
SYNCDEV Macro to Synchronize Data” on page 517 for more information about
synchronizing data, and “STOW—Update the Directory” on page 429 for more
information about using the STOW macro.
//PDSEDD DD ---,DSN=MASTFILE(MEMBERK),DISP=OLD
...
OPEN (INDCB) Open for input, automatic FIND
...
GET INDCB,INAREA Read member record
...
CLOSE (INDCB)
...
INAREA DS CL80 Area to read into
INDCB DCB ---,DSORG=PS,DDNAME=PDSEDD,MACRF=GM
When your program is run, OPEN searches the directory automatically and
positions the DCB to the member.
To retrieve several PDSE or PDS members without closing and reopening the data
set, use this procedure or the procedure shown in Figure 83 on page 433:
1. Code DSORG=PO in the DCB macro.
2. Specify the name of the PDSE in the DD statement by coding DSNAME=name.
3. Issue the BLDL macro to get the list of member entries you need from the
directory.
4. Repeat the following steps for each member to be retrieved.
a. Use the FIND or POINT macro to prepare for reading the member records.
Note that the POINT macro does not work in a partitioned concatenation.
b. The records can be read from the beginning of the member. If you want to
read out of sequential order, use the POINT macro to point to records
within the member.
c. Read and check the records until all those required have been processed.
d. Your end-of-data-set (EODAD) routine receives control at the end of each
member. At that time, you can process the next member or close the data
set.
Figure 103 shows the technique for processing several members without closing
and reopening. Figure 83 on page 433 shows a variation of retrieving members. It
gives better performance with a PDS or a concatenation of PDSs and PDSEs.
//PDSEDD DD ---,DSN=D42.MASTFILE,DISP=SHR
...
OPEN (INDCB) Open for input, no automatic FIND
...
BLDL INDCB,BLDLLIST Retrieve the relative disk locations
* of several user-supplied names in
* virtual storage.
LA BLDLREG,BLDLLIST+4 Point to the first entry in the list
...
* Begin a “MEMBER”, possibly in another concatenated data set
MVC TTRN(4),8(BLDLREG) Get relative disk address of member
FIND INDCB,TTRN,C Point to the member
...
READ DECBX,SF,INDCB,INAREA Read a block of the member
CHECK DECBX Wait for completion of READ
INAREA DS CL80
INDCB DCB ---,DSORG=PO,DDNAME=PDSEDD,MACRF=R,EODAD=EODRTN
TTRN DS F TTRN of the start of the member
BLDLREG EQU 5 Register to address BLDL list entries
BLDLLIST DS 0F List of member names for BLDL
DC H'10' Number of entries (10 for example)
DC H'14' Number of bytes per entry
DC CL8'MEMBERA' Name of member, supplied by user
DS CL3 TTR of first record (set by BLDL)
* The following 3 fields are set by BLDL
DS X K byte, concatenation number
DS X Z byte, location code
DS X C byte, flag and user data length
... one list entry per member (14 bytes each)
Sharing PDSEs
PDSE data sets and members can be shared. If allocated with DISP=SHR, the PDSE
directory can be shared by multiple writers and readers, and each PDSE member
can be shared by a single writer or multiple readers. Any number of systems can
have the same PDSE open for input. If one system has a PDSE open for output (to
create or replace members), that PDSE can be opened on other systems only if the
systems are using the PDSE extended sharing protocol. The storage administrator
can establish PDSE extended sharing protocol by using the PDSESHARING
keyword in the IGDSMSxx member of SYS1.PARMLIB as described in z/OS
DFSMSdfp Storage Administration Reference.
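As an illustration, a minimal sketch of the keyword as it might appear in an
IGDSMSxx member follows; the other parameters of a real IGDSMSxx member are
omitted here:

```
PDSESHARING(EXTENDED)
```

Specifying PDSESHARING(NORMAL), or omitting the keyword, selects normal
sharing.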
INPUT—A version of a member can be accessed by any number of DCBs open for
input.
OUTPUT—Any number of DCBs open for output can create members at the same
time. The members are created in separate areas of the PDSE. If the members being
created have the same name (specified in the STOW done after the data is written),
the last version stowed is the version that is seen by users, and the storage
occupied by the first version is added to the available space for the PDSE. You can
have:
v Multiple DCBs reading and creating new versions of the same member at the
same time. Readers continue to see the “old” version until they do a new BLDL
or FIND by name.
v A single DCB updating a version of a member while multiple DCBs are creating
new versions of the member. The user updating the data set continues to access
the “old” version until the application does a new BLDL or FIND by name.
Sharing Violations
Violation of the sharing rules, either within a computer system or across several
computer systems, can result in OPEN failing with a system ABEND.
Under some conditions, using the FIND or POINT macro might violate sharing
rules:
v The share options let only one user update at a time. Suppose you are updating
a PDSE and are using the FIND or POINT macros to access a specific member. If
someone else is reading that member at the same time, the first WRITE or PUTX
issued after the FIND or POINT fails. (The failure does not occur until the
WRITE or PUTX because you could be open for update but only reading the
member.) However, your FIND or POINT would succeed if the other user is
reading a different member of the same PDSE at the same time. A POINT error
simulates an I/O error.
v If the calling program has insufficient RACF access authority, the FIND or
POINT will fail. For example, if the calling program opens a PDSE for input but
only has RACF execute authority, the FIND will fail.
Related reading: See z/OS Security Server RACF Security Administrator’s Guide.
A shared-access user of a PDSE can read existing members and create new
members or new copies of existing members concurrently with other shared-access
users on the same system and on other systems. Shared access to a PDSE during
an update-in-place of a member is restricted to a single system. Programs on other
systems cannot open the data set.
Figure 105 shows the results of OPEN for UPDAT with positioning in a decision
table.
Figure 105. OPEN for UPDAT and Positioning to a Member Decision Table
Before using program packages that change the VTOC and the data on the
volume (for example, DFSMSdss full-volume and tracks RESTORE), it is
recommended that the volume be varied OFFLINE to all other systems.
Applications that perform these modifications to data residing on volumes without
using the PDSE API should specify in their usage procedure that the volume being
modified should be OFFLINE to all other systems, to help ensure there are no
active connections to PDSEs residing on the volume while performing the
operation.
Related reading: See z/OS DFSMSdfp Advanced Services for information about the
DFP share attributes callable service.
| If these volume assignment rules are not followed for PDSEs in a sysplex, data set
| accessibility or integrity may be impacted.
Rule: You also must have global resource serialization (GRS) or an equivalent
product running on your system.
To change the PDSE sharing option back to normal, follow these steps for each
z/OS system in your sysplex that is running with extended sharing:
1. Change the IGDSMSxx member in SYS1.PARMLIB to contain
PDSESHARING(NORMAL) or remove the PDSESHARING entry to allow the
system to default to normal sharing.
2. Re-IPL the system.
Rule: To ensure that the sysplex does not continue with extended sharing, you
must reset all systems at the same time.
Restriction: All systems that share a PDSE must operate in the same sharing mode
(either NORMAL or EXTENDED). To prevent damage to the shared
PDSE, the operating system negotiates the sharing rules when a
system joins the sysplex. A joining system whose sharing mode is
incompatible is not allowed to join the other systems that are in
the PDSE sharing sysplex.
Related reading: For more information on using the PDSESHARING keyword, see
the z/OS DFSMSdfp Storage Administration Reference.
| This SET SMS command establishes each system’s preference, and negotiation
| between the sysplex members takes place. When all members have agreed to
| extended sharing, the sysplex can switch to that level of sharing.
| Note: No systems change to extended sharing until they have all issued the SET
| SMS=xx command. You may see the following message on each system:
| IGW303I NORMAL PDSE SHARING FORCED, INCOMPATIBLE PROTOCOL FOUND
| In this case, you may have to issue the SET SMS=xx command a second time to trigger
| the switch from NORMAL to EXTENDED sharing. All the systems will issue
| message IGW306I when they migrate to EXTENDED sharing:
| IGW306I MIGRATION TO EXTENDED PDSE SHARING COMPLETE
|
Modifying a Member of a PDSE
The following sections discuss updating, rewriting, and deleting members of a
PDSE.
Updating in Place
A member of a PDSE can be updated in-place. Only one user can update at a time.
When you update-in-place, you read records, process them, and write them back to
their original positions without destroying the remaining records. The following
rules apply:
v You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, GET, PUTX,
CHECK, NOTE, POINT, FIND, BLDL, and STOW macros.
v You cannot update concatenated PDSEs.
v You cannot delete any record or change its length; you cannot add new records.
v You cannot use the large block interface (LBI).
With QSAM
You can update a member of a PDSE using the locate mode of QSAM (DCB
specifies MACRF=(GL,PL)) and using the GET and PUTX macros. The DD
statement must specify the data set and member name in the DSNAME parameter.
Using this method, only the member specified in the DD statement can be
updated.
When you rewrite the member, you must provide two DCBs, one for input and
one for output. Both DCB macros can refer to the same data set; that is, only one
DD statement is required.
Because space is allocated when the data set is created, you do not need to request
additional space. You do not need to compress the PDSE after rewriting a member
because the system automatically reuses the member’s space whenever a member
is replaced or deleted.
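The QSAM update-in-place sequence described above can be sketched in assembler
as follows. This is a minimal sketch: the labels UPDDCB, UPDDD, and EODRTN and
the record-modification step are illustrative assumptions, not taken from this
manual.

```
         OPEN  (UPDDCB,(UPDAT))        Open for update-in-place
LOOP     GET   UPDDCB                  Locate mode: R1 -> record
*        ... modify the record in the buffer, keeping its length ...
         PUTX  UPDDCB                  Return the record to its position
         B     LOOP                    Process the next record
EODRTN   CLOSE (UPDDCB)                EODAD routine: end of member
         ...
UPDDCB   DCB   DSORG=PS,DDNAME=UPDDD,MACRF=(GL,PL),EODAD=EODRTN
```

The UPDDD DD statement would name both the PDSE and the member, for example
DSN=D42.MASTFILE(MEMBERK),DISP=OLD.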
When the primary name is deleted, the system also deletes all aliases. If an alias is
deleted, the system deletes only the alias name and its directory entry.
A PDSE member is not actually deleted while in use. Any program connected to
the member when the delete occurs can continue to access the member until the
data set is closed. This is called a deferred delete. Any program not connected to
the member at the time it is deleted cannot access the member. It appears as
though the member does not exist in the PDSE.
Unlike a PDS, after a PDSE member is deleted, it cannot be accessed. (The pointer
to the member is removed so that the application can no longer access it. The data
can be overwritten by the creation of another member later.)
You can sequentially read the directories of a concatenation of PDSs and
PDSEs. However, you cannot sequentially read a UNIX directory. Such a
concatenation is considered to be a like sequential concatenation. To proceed to
each successive data set, you can rely on the system’s EOV function or you can
issue the FEOV macro.
Concatenating PDSEs
Two or more PDSEs can be automatically retrieved by the system and processed
successively as a single data set. This technique is known as concatenation. There
are two types of concatenation: sequential and partitioned. You can concatenate
PDSEs with sequential data sets and PDSs.
Sequential Concatenation
To process sequentially concatenated data sets, use a DCB that has DSORG=PS.
Each DD statement can include the following types of data sets:
v Sequential data sets, which can be on disk, tape, instream (SYSIN), TSO
terminal, card reader, and subsystem
v UNIX files
v PDS members
v PDSE members
For the rules for concatenating like and unlike data sets, see “Concatenating Data
Sets Sequentially” on page 391.
Restriction: You cannot use this technique to read a z/OS UNIX directory.
Partitioned Concatenation
To process partitioned concatenated data sets, use a DCB that has DSORG=PO.
When PDSEs are concatenated, the system treats the group as a single data set. A
partitioned concatenation can contain a mixture of PDSs, PDSEs, and UNIX
directories. Each PDSE is treated as if it had one extent, although it might have
multiple extents. You can use partitioned concatenation only when the DCB is
open for input.
Concatenated PDSEs are always treated as having like attributes, except for block
size. The concatenation uses only the attributes of the first data set, except for the
block size. BPAM OPEN uses the largest block size among the concatenated data
sets. For concatenated fixed-format data sets (blocked or unblocked), the logical
record length for each data set must be equal.
Process a concatenation of PDSEs in the same way that you process a single PDSE,
except that you must use the FIND macro to begin processing a member. You
cannot use the POINT (or NOTE) macro until after you issue the FIND macro for
the appropriate member. If two members of different data sets in the concatenation
have the same name, the FIND macro determines the address of the first one in the
concatenation. You would not be able to process the second data set in the
concatenation. The BLDL macro provides the concatenation number of the data set
to which the member belongs in the K field of the BLDL list. (See
“BLDL—Construct a Directory Entry List” on page 424.)
To copy one or more specific members using IEBCOPY, use the SELECT control
statement. In this example, IEBCOPY copies members A, B, and C from
USER.PDS.LIBRARY to USER.PDSE.LIBRARY.
//INPDS DD DSN=USER.PDS.LIBRARY,DISP=SHR
//OUTPDSE DD DSN=USER.PDSE.LIBRARY,DISP=OLD
//SYSIN DD *
COPY OUTDD=OUTPDSE
INDD=INPDS
SELECT MEMBER=(A,B,C)
This DFSMSdss COPY example converts all PDSs with the high-level qualifier of
“MYTEST” on volume SMS001 to PDSEs with the high-level qualifier of
“MYTEST2” on volume SMS002. The original PDSs are then deleted. If you use
dynamic allocation, specify INDY and OUTDY for the input and output volumes.
However, if you define the ddnames for the volumes, use the INDD and OUTDD
parameters.
COPY DATASET(INCLUDE(MYTEST.**) -
BY(DSORG = PDS)) -
INDY(SMS001) -
OUTDY(SMS002) -
CONVERT(PDSE(**)) -
RENAMEU(MYTEST2) -
DELETE
If you want the PDSEs to retain the original PDS names, use the TSO RENAME
command to rename each PDSE individually. (You cannot use pattern-matching
characters, such as asterisks, with TSO RENAME.)
RENAME (old-data-set-name) (new-data-set-name)
If you want to rename all the PDSEs at once, use the access method services
ALTER command in a batch job:
ALTER MYTEST2.* NEWNAME(MYTEST.*)
When copying members from a PDSE program library into a PDS, certain
restrictions must be considered. Program objects that exceed the limitations of
load modules, such as total module size or number of external names, cannot be
correctly converted to load module format.
Improving Performance
After many adds and deletes, the PDSE members might become fragmented. This
can affect performance. To reorganize the PDSE, use IEBCOPY or DFSMSdss COPY
to back up all the members. You can either delete and restore all members, or
delete and reallocate the PDSE. It is preferable to delete and reallocate the PDSE
because it usually uses less processor time and does less I/O than deleting every
member.
With z/OS V1R6, DFSMSdfp provides two address spaces for processing PDSEs:
SMSPDSE and SMSPDSE1. A z/OS system can have only the SMSPDSE address
space, or both the SMSPDSE and SMSPDSE1 address spaces. Some control blocks
that are associated with reading, writing, and loading PDSE members are still
located in the extended common service area (ECSA).
SMSPDSE A non-restartable address space for PDSE data sets that are in the
LNKLST concatenation. (The linklist and other system functions
use global connections.) The SMSPDSE address space cannot be
restarted because global connections cannot handle the interruption
and reconnection that are part of an address space restart
operation. SMSPDSE is the only PDSE address space for the z/OS
system when one of the following conditions exists:
v The IGDSMSxx initialization parameter, PDSESHARING, is set
to NORMAL.
v The IGDSMSxx initialization parameters in a sysplex coupled
systems environment are set as follows:
– PDSESHARING(EXTENDED)
– PDSE_RESTARTABLE_AS(NO)
SMSPDSE1 A restartable address space that provides connections to and
processes requests for those PDSEs that are not part of the
LNKLST concatenation. To create the SMSPDSE1 address space
during IPL in a sysplex coupled systems environment, set the
IGDSMSxx initialization parameters as follows:
v PDSESHARING(EXTENDED)
v PDSE_RESTARTABLE_AS(YES)
Related reading:
v For information on configuring the restartable SMSPDSE1 address space, see
Using the restartable PDSE address space in z/OS DFSMS Using the New
Functions.
v For information on analyzing and repairing PDSEs and restarting the SMSPDSE1
address space, see Diagnosing PDSE problems in z/OS DFSMSdfp Diagnosis.
Topic Location
Accessing the z/OS UNIX File System 481
Using HFS Data Sets 483
Creating z/OS UNIX Files 485
Managing UNIX Files and Directories 490
Reading UNIX Files Using BPAM 496
Concatenating UNIX Files and Directories 499
Figure: A hierarchical file system. The root directory contains files and
subdirectories; each subdirectory can contain further files and directories. The
PATH parameter identifies a file or directory by its position in this tree.
For more information, see z/OS UNIX System Services Planning and z/OS UNIX
System Services User’s Guide.
For additional information, see “Processing UNIX Files with an Access Method” on
page 20.
You can access the files in a hierarchical file system by using z/OS UNIX System
Services. UNIX provides a way for z/OS to access hierarchical file systems, and for
UNIX applications to access z/OS data sets. You can use many of the standard
BSAM, QSAM, BPAM, and VSAM interfaces to access files within a hierarchical file
system. Most applications that use these access methods can access HFS data sets
without reassembly or recompilation.
HFS data sets appear to the z/OS system much as a PDSE does, but the internal
structure is entirely different. HFS data sets can be SMS managed or non-SMS
managed. DFSMS accesses the data within the files. You can back up, recover,
migrate, and recall HFS data sets.
HFS data sets have the following processing requirements and restrictions:
v They must reside on DASD volumes and be cataloged.
v As whole data sets, they cannot be processed with UNIX system services calls
or with access methods. You can, however, process the individual files within
the file system with UNIX system services calls and with access methods.
v They can be created, renamed, and scratched using standard DADSM routines.
v They can be dumped, restored, migrated, recalled, and copied using DFSMShsm,
if you use DFSMSdss as the data mover. DFSMShsm does not process individual
files within an HFS data set.
v They cannot be copied using the IEBCOPY utility.
For more information about managing HFS data sets, see z/OS DFSMSdfp Advanced
Services and z/OS UNIX System Services Planning.
data class. If you do not specify the number of directory blocks, the
allocation fails, although the value that you specify has no effect.
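For example, an HFS data set might be allocated with JCL such as the following
sketch; the data set name, storage class, and space quantities are illustrative
assumptions. Note the third SPACE subparameter, which supplies the required (but
otherwise ignored) number of directory blocks:

```
//ALLOCHFS EXEC PGM=IEFBR14
//HFSDD    DD DSN=OMVS.USER.HFS,DISP=(NEW,CATLG),
//            DSNTYPE=HFS,SPACE=(CYL,(10,5,1)),
//            STORCLAS=STANDARD
```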
_______________________________________________________________
2. Define a data class for HFS data sets. Although you can create uncataloged
HFS data sets, they must be cataloged when they are mounted. These data sets
can expand to as many as 255 extents of DASD space on multiple volumes (59
volumes maximum with 123 extents per volume).
_______________________________________________________________
3. Log on as a TSO/E user and define additional directories, as described in
“Creating Additional Directories.”
_______________________________________________________________
The hierarchical file system can use first-in-first-out (FIFO) special files. To allocate
a FIFO special file in a z/OS UNIX file system, specify PIPE in the DSNTYPE
parameter and a path name in the PATH parameter.
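For example, a FIFO special file might be allocated with a DD statement such as
this sketch; the path name and the PATHMODE settings are illustrative
assumptions:

```
//FIFODD   DD PATH='/u/joe/myfifo',
//            PATHOPTS=(OCREAT,ORDWR),
//            PATHMODE=(SIRUSR,SIWUSR),
//            DSNTYPE=PIPE
```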
These directories can be used as mount points for additional mountable file
systems. You can also use an IBM-supplied program that creates directories and
device files. Users or application programs can then add files to those additional
file systems.
Any user with write access authority to a directory can create subdirectories in that
directory using the MKDIR command. Within the root directory, only superusers
can create subdirectories. Authorized users can use the MOUNT command to
mount file systems in a directory.
Before you begin: Be familiar with how to use JCL, TSO/E ALLOCATE, or SVC 99
to create a data set, and understand how to specify the FILEDATA and
PATHMODE parameters. For more information, see the following material:
v “JCL Parameters for UNIX Files” on page 488
v z/OS MVS JCL Reference
v z/OS TSO/E Command Reference
v z/OS MVS Programming: Authorized Assembler Services Guide
v z/OS UNIX System Services Command Reference
Perform the following steps to create the UNIX file and its directory, write the
records to the file, and create an entry in the directory:
1. Code DSORG=PS or DSORG=PSU in the DCB macro.
_______________________________________________________________
2. In the DD statement, specify that the data be stored as a member of a new
UNIX directory.
Specify PATH=pathname and PATHDISP=(KEEP,DELETE) in the DD statement.
For an example of creating a UNIX file or directory, see “Creating z/OS UNIX
Files.”
_______________________________________________________________
3. Process the UNIX file with an OPEN macro, a series of PUT or WRITE macros,
and the CLOSE macro. A STOW macro is issued automatically when the data
set is closed.
_______________________________________________________________
Figure 107 on page 486 shows an example of creating a UNIX file with QSAM. You
can use BSAM, QSAM, BPAM, or UNIX System Services to read this new UNIX
file.
Processing Restrictions
The following restrictions are associated with using BSAM, BPAM, and QSAM
with UNIX files:
_______________________________________________________________
6. Submit the job, or issue the SVC 99 or TSO ALLOCATE command.
_______________________________________________________________
7. Issue the ISHELL command in a TSO/E session to confirm that you have
successfully created the UNIX file or directory.
_______________________________________________________________
8. Use ISPF Option 3.4 to browse the new UNIX file.
_______________________________________________________________
Result: The ISHELL command displays all the directories and files in a UNIX
directory. The new file is empty until you run a program to write data into it.
Example: The following example shows how to create a UNIX file, paytime in the
xpm17u01 directory, using JCL. The new directory and file can be any type of
UNIX file system (such as HFS, NFS, zFS, or TFS).
//SYSUT2 DD PATH='/sj/sjpl/xsam/xpm17u01/paytime',
// PATHDISP=(KEEP,DELETE), Disposition
// PATHOPTS=(OCREAT,ORDWR),
// PATHMODE=(SIRUSR,SIWUSR, Owner can read and write file
// SIRGRP,SIROTH), Others can read the file
// FILEDATA=TEXT Removes trailing blanks in the file
Related reading: For more information on the JCL parameters for UNIX files, see
z/OS MVS JCL Reference. For more information on using UNIX files, see z/OS UNIX
System Services User’s Guide.
Before you begin: For more information on utilities for copying files, see z/OS
DFSMSdfp Utilities.
Tip: The assembler requires the macro file name to be all capitals. Other
programs such as a compiler might not require the filename to be all capitals.
_______________________________________________________________
2. Code other DD statements to copy additional PDS or PDSE members to UNIX
files. You also can copy an entire PDS, PDSE, or UNIX directory to a new
UNIX directory.
_______________________________________________________________
3. Use the macro library to browse or copy additional files.
In the following example, the system macro library, SYS1.MACLIB, is
concatenated with a UNIX directory that contains macros that were copied
from elsewhere.
// EXEC PGM=ASMA90 High-level assembler
//SYSPRINT DD SYSOUT=*
//SYSLIB DD DSN=SYS1.MACLIB,DISP=SHR
// DD PATH='/u/BIGPROG/macros/special',PATHOPTS=ORDONLY,
// FILEDATA=TEXT Recognize line delimiters
. . . (other DD statements)
_______________________________________________________________
Table 40 shows the UNIX permissions classes for UNIX files and directories. For
more information on setting UNIX file permissions, see z/OS UNIX System Services
Planning.
Owner class
The user ID of the file owner or creator.
Group class
The user IDs that belong to a specific UNIX group, such as the Information
Technology department.
Other class
Any user ID that is not in the owner or group class. The other class usually
has the most restrictive permissions.
Table 40. Access Permissions for UNIX Files and Directories
UNIX file type Security Settings
Owner Group Other
Directory search search search
write write write
read read read
no access no access no access
File (member) execute execute execute
write write write
read read read
no access no access no access
BPAM OPEN verifies that you have UNIX search authority to each UNIX directory.
The FIND and BLDL macros verify that you have UNIX read authority to each
UNIX file. FIND and BLDL call UNIX OPEN. If the open fails because you do not
have read authority to the UNIX file, FIND returns return code 8, reason code 20.
A UNIX directory can contain files for which you do not have read authority.
Ensure that the application program does not issue BLDL and FIND for those
UNIX files.
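A program can test for this condition after FIND. The following sketch assumes
the usual BPAM convention of a return code in register 15 and a reason code in
register 0; the labels FILENAME, FOUND, and NOAUTH are illustrative:

```
         FIND  INDCB,FILENAME,D       Search the concatenation by name
         LTR   R15,R15                Return code in register 15
         BZ    FOUND                  0 = positioned to the file
         C     R15,=F'8'              8 with reason code 20 (register 0)
         BE    NOAUTH                  can mean no read authority
```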
Related reading: For more information on using RACF with UNIX files, see z/OS
UNIX System Services Planning.
You can, for example, use ISHELL to list all the directories and files in a UNIX
directory. Use the Options menu choice to display all the fields for each file.
Figure 109 on page 493 shows the ISPF Shell panel.
- Press Enter.
- Select an action bar choice.
- Specify an action code or command on the command line.
Before you begin: To use UNIX files, you must have a home directory that
corresponds to your user ID, such as /u/joe, and a RACF identity. All UNIX
directory and file names are case sensitive.
You can get to a UNIX session from either TSO/E or ISPF. Once inside the UNIX
session, you can toggle between UNIX and TSO/E or ISPF. Perform the following
steps to establish a UNIX session and display UNIX files and directories:
1. In a TSO/E session, issue the OMVS command to establish a UNIX session
inside the TSO session.
a. For more information about using OMVS, press PF1 to display the online
help.
b. Select OMVS to get to the UNIX session.
_______________________________________________________________
2. Issue the ISHELL command to enter the ISPF shell, which allows you to work
with UNIX directories, files, FIFO special files, and symbolic links, and to
mount or unmount file systems.
a. Select File to display a UNIX file.
b. Select Directory to display a UNIX directory.
_______________________________________________________________
3. Press PF3 to exit the ISPF shell and return to the OMVS screen.
_______________________________________________________________
4. Use the Exit command to end the UNIX session and return to the TSO screen.
_______________________________________________________________
Related reading: For more information, see z/OS UNIX System Services Command
Reference.
Restriction: Although you can use IEBCOPY to copy a PDS or PDSE, you cannot
use IEBCOPY to copy a UNIX file.
Example: The example in Figure 110 uses OPUT to copy member MEM1 in
XMP17U36.PDSE01 to the UNIX file, MEM2 in the special directory.
Figure 110. Using OPUT to Copy Members of a PDS or PDSE to a UNIX File
Related reading: For the OPUT syntax, see z/OS UNIX System Services Command
Reference or the TSO/E Help.
Related reading: For more information on the OPUTX command, see z/OS UNIX
System Services Command Reference.
Related reading: For more information on the OCOPY command, see z/OS UNIX
System Services Command Reference.
Related reading: For more information on the OGET command, see z/OS UNIX
System Services Command Reference.
Related reading: For more information on the OGETX command, see z/OS UNIX
System Services Command Reference.
Related reading: For more information on these services and utilities, see z/OS
DFSMSdfp Advanced Services.
Although you cannot use ISPF Browse/Edit with UNIX files, you can use the
OBROWSE command.
SMF Records
CLOSE does not write SMF type 14, 15, or 60–69 records for UNIX files. DFSMS
relies on UNIX System Services to write the requested SMF records.
Restrictions:
v BPAM cannot write to UNIX files.
v BSAM and QSAM cannot sequentially read a UNIX directory.
v BPAM cannot store user data in UNIX directory entries.
v BPAM cannot use the DESERV macro for UNIX files.
v The BLDL macro creates simulated TTRs dynamically. You cannot compare them
with TTRs obtained in a different run of your program.
As with all access methods, you must issue the OPEN and CLOSE macros under the
same task.
Related reading: For more information on macros, see z/OS DFSMS Macro
Instructions for Data Sets.
The BLDL macro reads one or more UNIX directory entries into virtual storage.
Place UNIX file names in a BLDL list before issuing the BLDL macro. For each file
name in the list, BLDL returns a three-byte simulated relative track address (TTR).
This TTR is like a simulated PDS directory entry. Each open DCB has its own set
of simulated TTRs for the UNIX files. This TTR is no longer valid after the file is
closed.
| You can alter the sequence of directories searched if you supply a DCB and specify
| START= or STOP= parameters. These parameters allow you to specify the first and
| last concatenation numbers of the data sets to be searched.
If more than one filename exists in the list, the filenames must be in collating
sequence, regardless of whether the members are from the same or different UNIX
directories, PDSs, or PDSEs in the concatenation.
You can improve retrieval time by directing a subsequent FIND macro to the BLDL
list rather than to the directory to locate the file to be processed. The FIND macro
uses the simulated TTR to identify the UNIX file.
The BLDL list must begin with a 4-byte list descriptor that specifies the number of
entries in the list and the length of each entry (12 to 76 bytes). The first 8 bytes of
each entry contain the file name or alias. The next 6 bytes contain the TTR, K, Z,
and C fields.
Restriction: BLDL does not return user data or NOTE lists in the simulated PDS
directory entry.
the FIND. The FIND macro lets you search a concatenated series of UNIX, PDSE,
and PDS directories when you supply a DCB opened for the concatenated data
sets.
There are two ways that you can direct the system to the correct file when you use
the FIND macro:
v Specify the address of an area that contains the name of the file.
v Specify the address of the TTR field of the entry in a BLDL list that you have
created by using the BLDL macro.
In the first case, the system searches the directory of the data set for the relative
track address. In the second case, no search is required, because the TTR is in the
BLDL list entry.
When the application program issues FIND, BPAM opens the specified file and
establishes a connection. BPAM retains the logical connection until the program
issues STOW DISC or CLOSE or ends the task.
If you want to process only one UNIX file, you can specify DSORG=PS using
either BSAM or QSAM. You specify the name of the file that you want to process
and the name of the UNIX directory in the PATH parameter of the DD statement. When you
open the data set, the system places the starting address in the DCB so that a
subsequent GET or READ macro begins processing at that point.
Restriction: You cannot use the FIND, BLDL, or STOW macro when you are
processing one UNIX file sequentially.
If your program does not issue STOW DISC, the CLOSE macro automatically
issues STOW DISC for each connected file. If the file cannot be closed, STOW DISC
returns status code 4 and issues an error message. A possible cause of such
errors is that different tasks issued the FIND and STOW macros for the same file.
A UNIX file cannot be deleted from the time a program issues FIND or BLDL for
the file until the connection for the program ends and BPAM closes the file. For
programs that run for a long time or access many files, keeping this connection
open for a long time can be a processing bottleneck. The connections consume
virtual storage above the 16 MB line and might interfere with other programs that
are trying to update the files. The solution is for the application program to issue
the STOW DISC macro to close the file as soon as it is no longer needed.
To reaccess the UNIX file, the application program must reissue the BLDL or FIND
macro.
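The connect-and-disconnect sequence might look like the following sketch. The
labels are illustrative, and the exact format of the STOW list for DISC is
defined in z/OS DFSMS Macro Instructions for Data Sets:

```
         BLDL  INDCB,BLDLLIST         Get simulated TTRs for the files
         FIND  INDCB,TTRFIELD,C       Connect and position to one file
*        ... read the file with READ and CHECK ...
         STOW  INDCB,DISCLIST,DISC    Disconnect the file as soon as it
*                                     is no longer needed
```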
Sequential Concatenation
To process sequentially concatenated data sets and UNIX files, use a DCB that has
DSORG=PS. Each DD statement can specify any of the following types of data sets:
v Sequential data sets, which can be on disk, tape, instream (SYSIN), TSO/E
terminal, card reader, and subsystem (SUBSYS)
v UNIX files
v PDS members
v PDSE members
When a UNIX file is found within a sequential concatenation, the system forces the
use of the LRECL, RECFM, and BUFNO from the previous data set. (The unlike
attributes bit is not set in a like sequential concatenation.) Also, the system uses the
same NCP and BLKSIZE values as for any BSAM sequential like concatenation. For
QSAM, the system uses the value of BLKSIZE for each data set. For the rules for
concatenating like and unlike data sets, see “Concatenating Data Sets Sequentially”
on page 391.
Partitioned Concatenation
Concatenated UNIX directories are processed with a DSORG=PO in the DCB.
When UNIX directories are concatenated, the system treats the group as a single
data set. A partitioned concatenation can contain a mixture of PDSs, PDSEs, and
UNIX directories in any order. Partitioned concatenation is supported only when
the DCB is open for input.
//DATA01 DD DSN=XPM17U19.PDS001,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD DSN=XPM17U19.PDS001,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD DSN=XPM17U19.PDS001,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// . . .
// DD DSN=XPM17U19.PDSE01,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD DSN=XPM17U19.PDSE01,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD DSN=XPM17U19.PDSE01,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD DSN=XPM17U19.PDSE01,DISP=SHR,VOL=SER=1P0101,UNIT=SYSDA
// DD PATH=’/sj/sjpl/xsam/xpm17u01/’, # two UNIX directories
// PATHDISP=KEEP,FILEDATA=TEXT,
// PATHOPTS=(ORDONLY),
// RECFM=FB,LRECL=80,BLKSIZE=800
// DD PATH=’/sj/sjpl/xsam/xpm17u02/’,
// PATHDISP=KEEP,FILEDATA=TEXT,
// PATHOPTS=(ORDONLY),
// RECFM=FB,LRECL=80,BLKSIZE=800
Figure 111. A Partitioned Concatenation of PDSs, PDSEs, and UNIX directories
Concatenated UNIX directories are always treated as having like attributes. The
attributes of the first file are used for everything except the block size: BPAM
OPEN uses the largest block size among the concatenated files. All other
attributes of the first data set are used, even if they conflict with the block
size parameter specified.
Topic Location
Absolute Generation and Version Numbers 502
Relative Generation Number 503
Programming Considerations for Multiple-Step Jobs 503
Naming Generation Data Groups for ISO/ANSI Version 3 or Version 4 Labels 505
Creating a New Generation 506
Reclaiming Generation Data Sets 510
Retrieving a Generation Data Set 510
Building a Generation Data Group Index 511
You can catalog successive updates or generations of related data. They are called
generation data groups (GDGs). Each data set within a GDG is called a generation
data set (GDS) or generation. Within a GDG, the generations can have like or
unlike DCB attributes and data set organizations. If the attributes and
organizations of all generations in a group are identical, the generations can be
retrieved together as a single data set.
There are advantages to grouping related data sets. For example, the catalog
management routines can refer to the information in a special index called a
generation index in the catalog. Thus:
v All of the data sets in the group can be referred to by a common name.
v The operating system is able to keep the generations in chronological order.
v Outdated or obsolete generations can be automatically deleted by the operating
system.
Generation data sets have sequentially ordered absolute and relative names that
represent their age. The catalog management routines use the absolute generation
name. Older data sets have smaller absolute numbers. The relative name is a
signed integer used to refer to the latest (0), the next to the latest (−1), and so forth,
generation. For example, a data set name LAB.PAYROLL(0) refers to the most
recent data set of the group; LAB.PAYROLL(−1) refers to the second most recent
data set; and so forth. The relative number can also be used to catalog a new
generation (+1).
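For example, assuming that the GDG base LAB.PAYROLL already exists and that the
unit, space, and DCB values shown are illustrative, a job step can read the most
recent generation and catalog a new one by using relative numbers:
//OLDGEN  DD DSN=LAB.PAYROLL(0),DISP=OLD
//NEWGEN  DD DSN=LAB.PAYROLL(+1),DISP=(NEW,CATLG),
//           UNIT=SYSDA,SPACE=(TRK,(5,1)),
//           DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)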
A generation data group (GDG) base is allocated in a catalog before the generation
data sets are cataloged. Each GDG is represented by a GDG base entry. Use the
access method services DEFINE command to allocate the GDG base.
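The following sketch shows such a DEFINE; the name and limit are illustrative.
LIMIT specifies the maximum number of active generations, and NOEMPTY SCRATCH
causes only the oldest generation to be rolled off and scratched when the limit
is reached:
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
    (NAME(LAB.PAYROLL) -
     LIMIT(3) -
     NOEMPTY -
     SCRATCH)
/*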
Note: For new non-system-managed data sets, if you do not specify a volume and
the data set is not opened, the system does not catalog the data set. New
system-managed data sets are always cataloged when allocated, with the volume
assigned from a storage group.
See z/OS DFSMS Access Method Services for Catalogs for information about defining
and cataloging generation data sets in a catalog.
Notes:
1. A GDG base that is to be system managed must be created in a catalog.
Generation data sets that are to be system managed must also be cataloged in a
catalog.
2. Both system-managed and non-system-managed generation data sets can be
contained in the same GDG. However, if the catalog of a GDG is on a volume
that is system managed, the model DSCB cannot be defined.
3. You can add new non-system-managed generation data sets to the GDG by
using cataloged data sets as models without needing a model DSCB on the
catalog volume.
Restriction: Generation data sets cannot be PDSEs, UNIX files, or VSAM data sets.
The number of generations and versions is limited by the number of digits in the
absolute generation name; that is, there can be 9,999 generations. Each generation
can have 100 versions.
The version number lets you perform normal data set operations without
disrupting the management of the GDG. For example, if you want to update the
second generation in a 3-generation group, replace generation 2, version 0, with
generation 2, version 1. Only one version is kept for each generation.
You can catalog a generation using either absolute or relative numbers. When a
generation is cataloged, a generation and version number is placed as a low-level
entry in the GDG. To catalog a version number other than V00, you must use an
absolute generation and version number.
The value of the specified integer tells the operating system what generation
number to assign to a new generation, or it tells the system the location of an entry
representing a previously cataloged generation.
When you use a relative generation number to catalog a generation, the operating
system assigns an absolute generation number and a version number of V00 to
represent that generation. The absolute generation number assigned depends on
the number last assigned and the value of the relative generation number that you
are now specifying. For example, if in a previous job the generation
A.B.C.G0005V00 was the last generation cataloged, and you specify A.B.C(+1), the
generation now cataloged is assigned the number G0006V00.
Though any positive relative generation number can be used, a number greater
than 1 can cause absolute generation numbers to be skipped. For example, if you
have a single step job, and the generation being cataloged is a +2, one generation
number is skipped. However, in a multiple-step job, one step might have a +1 and
a second step a +2, in which case no numbers are skipped.
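For example, in the following two-step sketch (the step names and program names
are hypothetical), the first step catalogs the +1 generation and the second step
the +2 generation, so consecutive absolute numbers are assigned and none are
skipped:
//STEP1  EXEC PGM=MYPGM1
//OUT1   DD DSN=A.B.C(+1),DISP=(NEW,CATLG),UNIT=SYSDA,SPACE=(TRK,(1,1))
//STEP2  EXEC PGM=MYPGM2
//OUT2   DD DSN=A.B.C(+2),DISP=(NEW,CATLG),UNIT=SYSDA,SPACE=(TRK,(1,1))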
When you use a relative generation number to refer to a generation that was
previously cataloged, the relative number has the following meaning:
v A.B.C(0) refers to the latest existing cataloged entry.
v A.B.C(−1) refers to the next-to-the-latest entry, and so forth.
When cataloging is requested using JCL, all actual cataloging occurs at step
termination, but the relative generation number remains the same throughout the
job. The following results can occur:
v A relative number used in the JCL refers to the same generation throughout a
job.
v A job step that ends abnormally can be deferred for a later step restart. If the job
step successfully cataloged a generation data set in its GDG, you must change
all relative generation numbers in the next steps using JCL before resubmitting
the job.
For example, after such a restart, the relative generation numbers in the next
steps would have the following shifted meanings:
v A.B.C(+1) refers to the entry cataloged in the terminated job step.
v A.B.C(0) refers to the next-to-the-latest entry.
v A.B.C(−1) refers to the entry before A.B.C(0).
The only time that you can use absolute generation numbers is when you need
to run concurrent jobs that use the same GDG and at least one of the jobs uses a
disposition of NEW or MOD. Ensure that the jobs do not accidentally overlay a
generation data set that another job is using.
Restriction: Be careful when you update GDGs because two or more jobs can
compete for the same resource and accidentally replace the generation data set
with the wrong version in the GDG. To prevent two users from allocating the same
absolute generation data set, take one of the following actions:
v Specify DISP=OLD.
v Specify DISP=SHR and open the data set for output.
For Version 3 or Version 4 labels, the GDG naming convention imposes the
following specifications:
v Data set names whose last 9 characters are of the form .GnnnnVnn (n is 0 through
9) can only be used to specify GDG data sets. When a name ending in .GnnnnVnn
is found, it is automatically processed as a GDG. The generation number Gnnnn
and the version number Vnn are separated from the rest of the data set name and
placed in the generation number and version number fields.
v Tape data set names for GDG files are expanded from a maximum of 8
user-specified characters to 17 user-specified characters. (The tape label file
identifier field has space for 9 additional user-specified characters because the
generation number and version number are no longer contained in this field.)
v A generation number of all zeros is not valid, and is treated as an error during
label validation. The error appears as a “RANG” error in message IEC512I
(IECIEUNK) during the label validation installation exit.
v In an MVS system-created GDG name, the version number is always 0. (MVS
does not increase the version number by 1 for subsequent versions.) To obtain a
version number other than 0, you must explicitly specify the version number
(for example, A.B.C.G0004V03) when the data set is allocated. You must also
explicitly specify the version number to retrieve a GDG with a version number
other than 0.
v Because the generation number and version number are not contained on the
identifier of HDR1, generations of the same GDG have the same name.
Therefore, an attempt to place more than one generation of a GDG on the same
volume results in an ISO/ANSI standards violation in a system supporting
Version 3, and MVS enters the validation installation exit.
If you are using absolute generation and version numbers, DCB attributes for a
generation can be supplied directly in the DD statement defining the generation to
be created and cataloged.
If you are using relative generation numbers to catalog generations, DCB attributes
can be supplied:
1. By referring to a cataloged data set for the use of its attributes.
2. By creating a model DSCB on the volume on which the index resides (the
volume containing the catalog). Attributes can be supplied before you catalog a
generation, when you catalog it, or at both times.
Restriction: You cannot use a model DSCB for system-managed generation data
sets.
3. By using the DATACLAS and LIKE keywords in the DD statement for both
system-managed and non-system-managed generation data sets. The generation
data sets can be on either tape or DASD.
4. Through the assignment of a data class to the generation data set by the data
class ACS routine.
To refer to a cataloged data set for the use of its attributes, you can specify one of
the following on the DD statement that creates and catalogs your generation:
v DCB=(dsname), where dsname is the name of the cataloged data set.
v LIKE=dsname, where dsname is the name of the cataloged data set.
v REFDD=ddname, where ddname is the name of a DD statement that allocated
the cataloged data set.
The DCB attributes allocated to the new data set depend on the attributes defined
in data class ALLOCL01. Your storage administrator can provide information on
the attributes specified by the data classes available to your installation.
The new generation data set has the same attributes as the data set defined in the
first example.
You can also refer to an existing model DSCB for which you can supply overriding
attributes.
Restriction: You cannot use a model DSCB for system-managed generation data
sets.
You can provide initial DCB attributes when you create your model; however, you
need not provide any attributes now. Because only the attributes in the data set
label are used, allocate the model data set with SPACE=(TRK,0) to conserve direct
access space. You can supply initial or overriding attributes when creating and
cataloging a generation. To create a model DSCB, include the following DD statement in the
job step that builds the index or in any other job step that precedes the step in
which you create and catalog your generation:
//name DD DSNAME=datagrpname,DISP=(,KEEP),SPACE=(TRK,(0)),
// UNIT=yyyy,VOLUME=SER=xxxxxx,
// DCB=(applicable subparameters)
In the preceding example, datagrpname is the common name that identifies each
generation, and xxxxxx is the serial number of the volume that contains the
catalog. If you do not want any DCB subparameters initially, you need not code
the DCB parameter.
The model DSCB must reside on the catalog volume. If you move a catalog to a
new volume, you also need to move or create a new model DSCB on this new
volume. If you split or merge a catalog and the catalog remains on the same
volume as the existing model DSCB, you do not have to move or create a new
model DSCB.
The LIKE keyword specifies the allocation attributes of a new data set by copying
the attributes of a cataloged model data set. The cataloged data set referred to in
LIKE=dsname must be on DASD.
Recommendation: You can still use model DSCBs if they are present on the
volume, even if LIKE and DATACLAS are also used for a non-system-managed
generation data set. If you use model DSCBs, you do not need to change the JCL
(to scratch the model DSCB) when migrating the data to system-managed storage
or migrating from system-managed storage. If you do not specify DATACLAS and
LIKE in the JCL for a non-system-managed generation data set, and there is no
model DSCB, the allocation fails.
//DDNAME DD DSN=HLQ.----.LLQ(+1),DISP=(NEW,CATLG),LIKE=dsn
For more information on the JCL keywords used to allocate a generation data set,
see z/OS MVS JCL Reference.
The new generation data set is cataloged at allocation time, and rolled into the
GDG at the end-of-job step. If your job ends after allocation but before the
end-of-job step, the generation data set is cataloged in a deferred roll-in state. A
generation data set is in a deferred roll-in state when SMS does not remove the
temporary catalog entry and does not update the GDG base. You can resubmit
your job to roll the new generation data set into the GDG. For more information
about rolling in generation data sets see “Rolling In a Generation Data Set” on
page 509.
The attributes specified for the GDG determine what happens to the older
generations when a new generation is rolled in. The access method services command
DEFINE GENERATIONDATAGROUP creates a GDG. It also specifies the limit (the
maximum number of active generation data sets) for a GDG, and specifies whether
all or only the oldest generation data sets should be rolled off when the limit is
reached.
When a GDG contains its maximum number of active generation data sets, and a
new generation data set is rolled in at the end-of-job step, the oldest generation
data set is rolled off and is no longer active. If a GDG is defined using DEFINE
GENERATIONDATAGROUP EMPTY, and is at its limit, then, when a new
generation data set is rolled in, all the currently active generation data sets are
rolled off.
The access method services command ALTER LIMIT can increase or reduce the
limit for an existing GDG. If a limit is reduced, the oldest active generation data
sets are automatically rolled off as needed to meet the decreased limit. If a change
in the limit causes generations to be rolled off, then the rolled off data sets are
listed with their disposition (uncataloged, recataloged, or deleted). If a limit is
increased, and there are generation data sets in a deferred roll-in state, these
generation data sets are not rolled into the GDG. The access method services
command ALTER ROLLIN can be used to roll the generation data sets into the
GDG in active status.
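For example, the following IDCAMS sketch (the names and limit are illustrative)
increases the limit of a GDG and then rolls a generation that is in a deferred
roll-in state into the GDG in active status:
//ALTGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER LAB.PAYROLL LIMIT(5)
  ALTER LAB.PAYROLL.G0006V00 ROLLIN
/*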
For more information about using the access method services commands DEFINE
GENERATIONDATAGROUP and ALTER see z/OS DFSMS Access Method Services
for Catalogs.
You can retrieve a generation data set by using either relative generation numbers
or absolute generation and version numbers.
Refer to generation data sets that are in a deferred roll-in state by their relative
number, such as (+1), within the job that allocates them. Refer to generation data
sets that are in a deferred roll-in state by their absolute generation number
(GxxxxVyy) in subsequent jobs.
For example, job A creates A.B.C.G0009V00 but the roll-in does not occur because
the address space abnormally ends. Because generation G0009V00 did not get
rolled in, jobs that refer to A.B.C(+1) attempt to recreate G0009V00. SMS gets a
failure due to the duplicate data set name when it tries to catalog the new version
of G0009V00. However, SMS detects that this failure occurred because a previous
roll-in of G0009V00 did not occur. Consequently, SMS reuses the old version of
G0009V00. Any data that was written in this old version gets rewritten.
Warning: Usually, GDS reclaim processing works correctly when you rerun the
abending job. However, if you accidentally run another job before rerunning the
previous job, data loss might occur. If this situation occurs in your installation, you
might want to turn off automatic GDS reclaim processing. If you turn off GDS
reclaim processing, you will need to manually delete or use the IDCAMS ROLLIN
command to roll in the generation that did not get rolled in. Note that the option
to turn GDS reclaim processing on or off applies to the entire system; it cannot be
set differently for an individual job or step. Different systems in a sysplex can set
their own value for this option, but doing so can lead to unpredictable results.
Guideline: If GDS reclaim processing is turned off, use the access method services
ALTER command to delete, rename, or roll in the generation that did not get rolled
in. Otherwise, any attempt to create a new (+1) generation fails with error message
IGD17358I.
Related reading: For information on changing the setting for GDS reclaim
processing, see the z/OS DFSMSdfp Storage Administration Reference. For information
on the access method services commands for generation data sets, see the z/OS
DFSMS Access Method Services for Catalogs.
Rule: An alias name cannot be assigned to the highest level of a generation index.
The BLDG function of IEHPROGM builds the index. The BLDG function also
indicates how older or obsolete generations are to be handled when the index is
full. For example, when the index is full, you might want to empty it, scratch
existing generations, and begin cataloging a new series of generations. After the
index is built, a generation can be cataloged by its GDG name, and by either an
absolute generation and version number or a relative generation number.
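For example, the following sketch (the GDG name, entry count, and volume serial
are illustrative) builds an index that holds 10 generations; EMPTY specifies that
all entries are removed when the index is full, and DELETE specifies that the
removed generations are scratched:
//BLDINDEX EXEC PGM=IEHPROGM
//SYSPRINT DD SYSOUT=*
//DD1      DD UNIT=SYSDA,VOL=SER=xxxxxx,DISP=OLD
//SYSIN    DD *
  BLDG INDEX=A.B.C,ENTRIES=10,EMPTY,DELETE
/*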
Examples of how to build a GDG index are found in z/OS DFSMS Access Method
Services for Catalogs and in z/OS DFSMSdfp Utilities.
Topic Location
Using the CNTRL Macro to Control an I/O Device 513
Using the PRTOV Macro to Test for Printer Overflow 514
Using the SETPRT Macro to Set Up the Printer 514
Using the BSP Macro to Backspace a Magnetic Tape or Direct Access Volume 515
Using the NOTE Macro to Return the Relative Address of a Block 515
Using the POINT Macro to Position to a Block 516
Using the SYNCDEV Macro to Synchronize Data 517
When you use the queued access method, only unit record equipment can be
controlled directly. When using the basic access method, limited device
independence can be achieved between magnetic tape and direct access storage
devices. With BSAM you must check all read or write operations before issuing a
device control macro.
Backspacing moves the tape toward the load point; forward spacing moves the
tape away from the load point.
Restriction: The CNTRL macro cannot be used with an input data set containing
variable-length records on the card reader.
If you specify OPTCD=H in the DCB parameter field of the DD statement, you can
use the CNTRL macro to position VSE tapes even if they contain embedded
checkpoint records. The CNTRL macro cannot be used to backspace VSE 7-track
tapes that are written in data convert mode and contain embedded checkpoint
records.
If the device specified on the DD statement is not for a directly allocated printer,
no action is taken.
For printers that are allocated to your program, the SETPRT macro is used to
initially set or dynamically change the printer control information. For more
information about using the SETPRT macro, see z/OS DFSMS Macro Instructions for
Data Sets.
For printers that have a universal character set (UCS) buffer and optionally, a
forms control buffer (FCB), the SETPRT macro is used to specify the UCS or FCB
images to be used. Note that universal character sets for the various printers are
not compatible. The three formats of FCB images (the FCB image for the 3800
Printing Subsystem, the 4248 format FCB, and the 3211 format FCB) are
incompatible. The 3211 format FCB is used by the 3203, 3211, 4248, 3262 Model 5,
and 4245 printers.
IBM-supplied UCS images, UCS image tables, FCB images, and character
arrangement table modules are included in the SYS1.IMAGELIB at system
initialization time. For 1403, 3203, 3211, 3262 Model 5, 4245, and 4248 printers,
user-defined character sets can be added to SYS1.IMAGELIB.
Related reading:
v For a description of how images are added to SYS1.IMAGELIB and how band
names/aliases are added to image tables see z/OS DFSMSdfp Advanced Services.
v For the 3800 and 3900, user-defined character arrangement table modules, FCB
modules, graphic character modification modules, copy modification modules,
and library character sets can be added to SYS1.IMAGELIB as described for
IEBIMAGE in z/OS DFSMSdfp Utilities.
v For information on building a 4248 format FCB (which can also be used for the
IBM 3262 Model 5 printer), see z/OS DFSMSdfp Utilities.
The FCB contents can be selected from the system library (or an alternate library if
you are using a 3800 or 3900), or defined in your program through the exit list of
the DCB macro. For information about the DCB exit list see “DCB Exit List” on
page 535.
For a non-3800 or non-3900 printer, the specified UCS or FCB image can be found
in one of the following:
v SYS1.IMAGELIB
v Image table (UCS image only)
v DCB exit list (FCB image only)
If the image is not found, the operator is asked to specify an alternate image name
or cancel the request.
For a printer that has no carriage control tape, you can use the SETPRT macro to
select the FCB, to request operator verification of the contents of the buffer, or to
allow the operator to align the paper in the printer.
For a SYSOUT data set, the specified images must be available at the destination of
the data set, which can be JES2, JES3, VM, or other type of system.
The direction of movement is toward the load point or the beginning of the extent.
You cannot use the BSP macro if the track overflow option was specified or if the
CNTRL, NOTE, or POINT macro is used. The BSP macro should be used only
when other device control macros could not be used for backspacing.
Any attempt to backspace across the beginning of the data set on the current
volume results in return code X'04' in register 15, and your tape or direct access
volume is positioned before the first block. You cannot issue a successful backspace
command after your EODAD routine is entered unless you first reposition the tape
or direct access volume into your data set. CLOSE TYPE=T can position you at the
end of your data set.
You can use the BSP macro to backspace VSE tapes containing embedded
checkpoint records. If you use this means of backspacing, you must test for and
bypass the embedded checkpoint records. You cannot use the BSP macro for VSE
7-track tapes written in translate mode.
If a NOTE macro is issued after an automatic volume switch occurs, and before a
READ or WRITE macro is issued to the next volume, NOTE returns a relative
block address of zero except for extended format data sets.
For magnetic tape, the address is in the form of a 4-byte relative block address. If
TYPE=REL is specified or defaults, the address provided by the operating system
is returned in register 1. If TYPE=ABS is specified, the physical block identifier of a
data block on tape is returned in register 0. Later you can use the relative block
address or the block identifier as a search argument for the POINT macro.
For non-extended-format data sets on direct access storage devices, the address is
in the form of a 4-byte relative track record address. For extended format data sets,
the address is in the form of a block locator token (BLT). The BLT is essentially the
relative block number (RBN) within the current logical volume of the data set
where the first block has an RBN of 1. The user sees a multistriped data set as a
single logical volume; therefore, for a multistriped data set, the RBN is relative to
the beginning of the data set and incorporates all stripes. For PDSEs, the address is
in the form of a record locator token. The address provided by the operating
system is returned in register 1. For non-extended-format data sets and partitioned
data sets, NOTE returns the track balance in register 0 if the last I/O operation
was a WRITE, or returns the track capacity if the NOTE follows a READ or
POINT. For PDSEs, extended format data sets and HFS data sets, NOTE returns
X'7FFF' in register 0.
See “Using the NOTE Macro to Provide Relative Position” on page 465 for
information about using the NOTE macro to process PDSEs.
In a multivolume sequential data set you must ensure that the volume referred to
is the volume currently being processed. The user sees a multistriped
extended-format data set as a single logical volume; therefore, no special
positioning is needed. However, a single-striped multivolume extended-format
data set does require you to be positioned at the correct volume.
For disk, if a write operation follows the POINT macro, all of the track following
the write operation is erased, unless the data set is opened for UPDAT. Closing the
data set after such a write logically truncates the data set at that point. POINT is not meant
to be used before a WRITE macro when a data set is opened for UPDAT.
If you specify OPTCD=H in the DCB parameter field of the DD statement, you can
use the POINT macro to position VSE tapes even if they contain embedded
checkpoint records. The POINT macro cannot be used to backspace VSE 7-track
tapes that are written in data convert mode and that contain embedded checkpoint
records.
If you specify TYPE=ABS, you can use the physical block identifier as a search
argument to locate a data block on tape. The identifier can be provided from the
output of a prior execution of the NOTE macro.
When using the POINT macro for a direct access storage device that is opened for
OUTPUT, OUTIN, OUTINX, or INOUT, and the record format is not fixed
standard, the number of blocks per track might vary slightly.
When SYNCDEV completes successfully (return code 0), a value is returned that
shows the number of data blocks remaining in the control unit buffer. For PDSEs
and compressed format data sets, the value returned is always zero. For PDSEs
and compressed format data sets, requests for synchronization information or for
partial synchronization cause complete synchronization. Specify Guaranteed
Synchronous Write through storage class to ensure that data is synchronized to
DASD at the completion of each CHECK macro. However, this degrades
performance. This produces the same result as issuing the SYNCDEV macro after
each CHECK macro. See z/OS DFSMSdfp Storage Administration Reference for
information about how the storage administrator specifies guaranteed synchronous
write.
Topic Location
General Guidance 519
EODAD End-of-Data-Set Exit Routine 527
SYNAD Synchronous Error Routine Exit 528
DCB Exit List 535
Allocation Retrieval List 538
DCB ABEND Exit 539
DCB OPEN Exit 543
Defer Nonstandard Input Trailer Label Exit List Entry 544
Block Count Unequal Exit 544
EOV Exit for Sequential Data Sets 545
FCB Image Exit 546
JFCB Exit 547
JFCBE Exit 548
Open/Close/EOV Standard User Label Exit 549
Open/EOV Nonspecific Tape Volume Mount Exit 553
Open/EOV Volume Security and Verification Exit 556
QSAM Parallel Input Exit 558
User Totaling for BSAM and QSAM 558
General Guidance
You can identify user-written exit routines for use with non-VSAM access methods.
These user-written exit routines can perform a variety of functions for non-VSAM
data sets, including error analysis, requesting user totaling, performing I/O
operations for data sets, and creating your own data set labels. These functions are
not for use with VSAM data sets. Similar VSAM functions are described in
Chapter 16, “Coding VSAM User-Written Exit Routines,” on page 241.
The DCB and DCBE macros can be used to identify the locations of exit routines:
v The routine that performs end-of-data procedures (the EODAD parameter of
DCB or DCBE).
v The routine that supplements the operating system’s error recovery routine (the
SYNAD parameter of DCB or DCBE).
v The list that contains addresses of special exit routines (the EXLST parameter of
DCB).
The exit addresses can be specified in the DCB or DCBE macro, or you can
complete the DCB or DCBE fields before they are needed. Table 41 on page 520
summarizes the exits that you can specify either explicitly in the DCB or DCBE, or
implicitly by specifying the address of an exit list in the DCB.
Programming Considerations
Most exit routines described in this chapter must return to their caller. The only
two exceptions are the end-of-data and error analysis routines.
Exception codes are provided in the data control block (QISAM), or in the data
event control block (BISAM and BDAM). The data event control block is described
below, and the exception code lies within the block as shown in Table 42 on page
521. If a DCBD macro instruction is coded, the exception code in a data control
block can be addressed as two 1-byte fields, DCBEXCD1 and DCBEXCD2. QISAM
exception codes are described in Table 47 on page 529. The other exception codes
are described in Table 43 on page 522, Table 45 on page 525, and Table 47 on page
529.
Status indicators are available only to the error analysis routine designated by the
SYNAD entry in the data control block or the data control block extension. Or,
they are available after I/O completion from BSAM or BPAM until the next WAIT
or CHECK for the DCB. A pointer to the status indicators is provided either in the
data event control block (BSAM, BPAM, and BDAM), or in register 0 (QISAM and
QSAM). The contents of registers on entry to the SYNAD exit routine are shown in
Table 48 on page 531, Table 49 on page 532, and Table 50 on page 532. The status
indicators for BSAM, BPAM, BDAM, and QSAM are shown in Figure 112 on page
526.
For BISAM, exception codes are returned by the control program after the
corresponding WAIT or CHECK macro instruction is issued, as indicated in
Table 43 on page 522.
0111 1111 7F Channel program has terminated without error. (The status
indicators in Figure 112 on page 526 are valid.)
0100 0001 41 Channel program has terminated with permanent error. (The
status indicators in Figure 112 on page 526 are valid.)
0100 0011 43 Abend condition occurred in the error recovery routine. (The
status indicators in Figure 112 on page 526 are not valid.)
0100 1011 4B One of the following errors occurred during tape error
recovery processing:
v The CSW command address was zeros.
v An unexpected load point was encountered.
(The status indicators in Figure 112 on page 526 are not
valid.)
0100 1111 4F Error recovery routines have been entered because of direct
access error but are unable to read home addresses or record
0. (The status indicators in Figure 112 on page 526 are not
valid.)
0101 0000 50 Channel program terminated with error. Input block was a
VSE-embedded checkpoint record. (The status indicators in
Figure 112 on page 526 are not valid.)
Table 45 on page 525 shows the exception bit codes for BDAM.
Figure 112 lists status indicators for BDAM, BPAM, BSAM, and QSAM.
Offset in status indicator area (byte)   Bit   Meaning   Name
-12 - Word containing length that was read (valid only when reading with LBI)
+2 0 Command reject Sense byte 0
1 Intervention required
2 Bus-out check
3 Equipment check
4 Data check
5 Overrun
6,7 Device-dependent information;
see the appropriate device
manual
+3 0-7 Device-dependent information; Sense byte 1
see the appropriate device
manual
The ending CCW may have the indirect data addressing bit and/or data chaining bit on
+12 0 Attention Status byte 0
1 Status modifier (Unit)
2 Control unit end
3 Busy
4 Channel end
5 Device end
6 Unit check—must be on for
sense bytes to be significant
7 Unit exception
If the sense bytes are X'10FE', the control program has set them to this nonvalid
combination because the sense bytes could not be obtained from the device owing
to recurring unit checks.
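The unit-status and sense-byte relationships above can be sketched in C. This is an illustrative model only, not actual z/OS code; the masks follow IBM bit numbering, where bit 0 is the high-order bit of the byte, so unit check (bit 6) is mask 0x02.

```c
#include <stdint.h>

/* Illustrative model of the status area described in Figure 112:
   the unit-status byte at offset +12 and sense bytes 0 and 1 at +2/+3.
   IBM bit numbering: bit 0 is the high-order bit. */
#define UNIT_CHECK      0x02u   /* bit 6: must be on for sense bytes to matter */
#define UNIT_EXCEPTION  0x01u   /* bit 7 */

/* Returns 1 if the sense bytes are significant and usable. */
int sense_bytes_valid(uint8_t unit_status, uint8_t sense0, uint8_t sense1)
{
    if (!(unit_status & UNIT_CHECK))
        return 0;                       /* no unit check: sense not significant */
    if (sense0 == 0x10u && sense1 == 0xFEu)
        return 0;                       /* X'10FE': the control program's nonvalid
                                           sentinel; sense could not be read */
    return 1;
}
```

A SYNAD-style analysis routine would apply tests like these to the status indicators before trusting the sense data.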
Register Contents
Table 46 shows the contents of the registers when control is passed to the EODAD
routine.
Table 46. Contents of Registers at Entry to EODAD Exit Routine
Register Contents
0-1 Reserved
2-13 Contents before execution of GET, CHECK, FEOV, or EOV (EXCP)
14 Contains the return address after a GET or CHECK, because these macros generate a branch
and link to the access method routines. FEOV is an SVC, so register 14 contains whatever it
contained at the time the FEOV was issued.
15 Reserved
Programming Considerations
You can treat your EODAD routine as a subroutine (ending it by branching on
register 14) or as a continuation of the routine that issued the CHECK, GET, or
FEOV macro.
Generally, however, the EODAD routine is not treated as a subroutine. After
control passes to your EODAD routine, you can continue normal processing, such
as repositioning and resuming processing of the data set, closing the data set, or
processing another data set.
For BSAM, you must first reposition the data set that reached end-of-data if you
want to issue a BSP, READ, or WRITE macro. You can reposition your data set by
issuing a CLOSE TYPE=T macro instruction. If a READ macro is issued before the
data set is repositioned, unpredictable results occur.
For BPAM, you may reposition the data set by issuing a FIND or POINT macro.
(CLOSE TYPE=T with BPAM results in no operation performed.)
For QISAM, you can continue processing the input data set that reached
end-of-data by first issuing an ESETL macro to end sequential retrieval, and then
issuing a SETL macro to set the lower limit of sequential retrieval. You can then
issue GET macros to the data set.
Your task will abnormally end under either of the following conditions:
v No exit routine is provided.
v A GET macro is issued in the EODAD routine to the DCB that caused this
routine to be entered (unless the access method is QISAM).
For BSAM, BPAM, and QSAM, your EODAD routine is entered with the
addressability (24- or 31-bit) that was in effect when you issued the macro that
caused entry to EODAD. This typically is a CHECK, GET, or FEOV macro. DCB
EODAD identifies a routine that resides below the 16 MB line (RMODE 24). DCBE
EODAD identifies a routine that can reside above the line. If it resides above the
line, then all macros that might detect an end-of-data condition must be issued in
31-bit mode. If both the DCB and the DCBE specify EODAD, the DCBE routine is
used.
For BDAM, BSAM, BPAM, and QSAM, the control program provides a pointer to
the status indicators shown in Figure 112 on page 526. The block being read or
written can be accepted or skipped, or processing can be terminated.
Table 47 on page 529 shows the exception code bits for QISAM.
If a data set is being created (load mode), the SYNAD exit routine is given
control when the next PUT or CLOSE macro instruction is issued. If a failure to
write a data block occurs, register 1 contains the address of the output buffer,
and register 0 contains the address of a work area containing the first 16 bytes of
the IOB; for other errors, the contents of register 1 are meaningless. After
appropriate analysis, the SYNAD exit routine should close the data set or end
the job step. If records are to be subsequently added to the data set using the
queued indexed sequential access method (QISAM), the job step should be
terminated by issuing an abend macro instruction. (Abend closes all open data
sets. However, an ISAM data set is only partially closed, and it can be reopened
in a later job to add additional records by using QISAM.) Subsequent execution
of a PUT macro instruction would cause reentry to the SYNAD exit routine,
because an attempt to continue loading the data set would produce
unpredictable results.
If a data set is being processed (scan mode), the address of the output buffer in
error is placed in register 1, the address of a work area containing the first 16
bytes of the IOB is placed in register 0, and the SYNAD exit routine is given
control when the next GET macro instruction is issued. Buffer scheduling is
suspended until the next GET macro instruction is reissued.
v Block Could Not Be Reached (Input) condition is reported if the control
program’s error recovery procedures encounter an uncorrectable error in
searching an index or overflow chain. The SYNAD exit routine is given control
when a GET macro instruction is issued for the first logical record of the
unreachable block.
v Block Could Not Be Reached (Update): The control program’s error recovery
procedures encounter an uncorrectable error in searching an index or overflow
chain.
If the error is encountered during closing of the data control block, bit 2 of
DCBEXCD2 is set to 1 and the SYNAD exit routine is given control immediately.
Otherwise, the SYNAD exit routine is given control when the next GET macro
instruction is issued.
v Sequence Check: A PUT macro instruction refers to a record whose key has a
smaller numeric value than the key of the record previously referred to by a
PUT macro instruction. The SYNAD exit routine is given control immediately;
the record is not transferred to secondary storage.
v Duplicate Record: A PUT macro instruction refers to a record whose key
duplicates the record previously referred to by a PUT macro instruction. The
SYNAD exit routine is given control immediately; the record is not transferred to
secondary storage.
v Data Control Block Closed When Error Routine Entered: The control program’s
error recovery procedures encounter an uncorrectable output error during
closing of the data control block. Bit 5 or 7 of DCBEXCD1 is set to 1, and the
SYNAD exit routine is immediately given control. After appropriate analysis, the
SYNAD routine must branch to the address in return register 14 so that the
control program can finish closing the data control block.
v Overflow Record: The input record is an overflow record. The SYNAD exit
routine is entered only if bit 4, 5, 6, or 7 of DCBEXCD1 is also on.
v Incorrect Record Length: The length of the record as specified in the
record-descriptor word (RDW) is larger than the value in the DCBLRECL field of
the data control block.
Register Contents
Table 48 shows the register contents on entry to the SYNAD routine for BDAM,
BPAM, BSAM, and QSAM.
Table 48. Register Contents on Entry to SYNAD Routine—BDAM, BPAM, BSAM, and QSAM
Register Bits Meaning
0 0-7 Value to be added to the status indicator’s address to provide the address of the first
CCW (QSAM only). Value may be zero, meaning unavailable, if LBI is used.
8-31 Address of the associated data event control block for BDAM, BPAM, and BSAM unless
bit 2 of register 1 is on; address of the status indicators shown in Figure 112 on page 526
for QSAM. If bit 2 of register 1 is on, the failure occurred in CNTRL, POINT, or BSP and
this field contains the address of an internal BSAM ECB.
1 0 Bit is on for error caused by input operation.
1 Bit is on for error caused by output operation.
2 Bit is on for error caused by BSP, CNTRL, or POINT macro instruction (BPAM and
BSAM only).
3 Bit is on if error occurred during update of existing record or if error did not prevent
reading of the record. Bit is off if error occurred during creation of a new record or if
error prevented reading of the record.
4 Bit is on if the request was nonvalid. The status indicators pointed to in the data event
control block are not present (BDAM, BPAM, and BSAM only).
5 Bit is on if a nonvalid character was found in paper tape conversion (BSAM and QSAM
only).
6 Bit is on for a hardware error (BDAM only).
7 Bit is on if no space was found for the record (BDAM only).
8-31 Address of the associated data control block.
2-13 0-31 Contents that existed before the macro instruction was issued.
14 0-7 Reserved.
8-31 Return address.
15 0-31 Address of the error analysis routine.
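The register 1 flag bits in Table 48 can be decoded as in the following sketch. This is an illustrative model, not actual z/OS code; bit 0 is the high-order bit of the 32-bit register, and bits 8-31 carry the DCB address.

```c
#include <stdint.h>

/* Illustrative decode of register 1 on entry to a SYNAD routine (Table 48). */
#define R1_INPUT_ERROR   0x80000000u  /* bit 0: error caused by input operation  */
#define R1_OUTPUT_ERROR  0x40000000u  /* bit 1: error caused by output operation */
#define R1_CTRL_MACRO    0x20000000u  /* bit 2: BSP, CNTRL, or POINT failed      */
#define R1_NONVALID_REQ  0x08000000u  /* bit 4: request was nonvalid             */

/* Bits 8-31: address of the associated data control block. */
uint32_t dcb_address(uint32_t r1) { return r1 & 0x00FFFFFFu; }

int is_input_error(uint32_t r1)  { return (r1 & R1_INPUT_ERROR) != 0; }
int is_ctrl_macro_error(uint32_t r1) { return (r1 & R1_CTRL_MACRO) != 0; }
```

A real SYNAD routine would perform the equivalent tests in assembler (for example, with TM against the high-order byte of register 1).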
Table 49 on page 532 shows the register contents on entry to the SYNAD routine
for BISAM.
Table 50 shows the register contents on entry to the SYNAD routine for QISAM.
Table 50. Register Contents on Entry to SYNAD Routine—QISAM
Register Bits Meaning
0 0 Bit 0=1 indicates that bits 8-31 hold the address of the key that is out of sequence
(set only for a sequence error). Bit 0=0 indicates that bits 8-31 hold the address of
a work area.
1-7 Reserved.
8-31 Address of a work area containing the first 16 bytes of the IOB (after an uncorrectable
I/O error caused by a GET, PUT, or PUTX macro instruction; original contents destroyed
in other cases). If the error condition was detected before I/O was started, register 0
contains all zeros.
1 0-7 Reserved.
8-31 Address of the buffer containing the error record (after an uncorrectable I/O error
caused by a GET, PUT, or PUTX macro instruction while attempting to read or write a
data record; in other cases, this register contains 0).
2-13 0-31 Contents that existed before the macro instruction was issued.
14 0-7 Reserved.
8-31 Return address. This address is either an address in the control program’s CLOSE
routine (bit 2 of DCBEXCD2 is on), or the address of the instruction following the
expansion of the macro instruction that caused the SYNAD exit routine to be given
control (bit 2 of DCBEXCD2 is off).
15 0-7 Reserved.
8-31 Address of the SYNAD exit routine.
Programming Considerations
For BSAM, BPAM, and QSAM, your SYNAD routine is entered with the
addressability (24- or 31-bit) that was in effect when you issued the macro that
caused entry to SYNAD. This typically is a CHECK, GET, or PUT macro. DCB
SYNAD identifies a routine that resides below the 16 MB line (RMODE 24). DCBE
SYNAD identifies a routine that can reside above the line. If it resides above the
line, then all macros that might detect an I/O error must be issued in 31-bit mode.
If both the DCB and the DCBE specify SYNAD, the DCBE routine is used.
You can write a SYNAD routine to determine the cause and type of error that
occurred by examining:
v The contents of the general registers
v The data event control block (see “Status Information Following an
Input/Output Operation” on page 520)
v The exceptional condition code
v The standard status and sense indicators
You can use the SYNADAF macro to perform this analysis automatically. This
macro produces an error message. Your program can use a PUT, WRITE, or WTO
macro to print the message.
Your SYNAD routine can act as an exit routine and return to its caller, or it can
continue in your main program, with restrictions on the DCB. If the SYNAD
routine branches elsewhere in your program, then after the analysis is complete
you can return control to the operating system or close the data set. If you close
the data set, you cannot use the temporary close (CLOSE TYPE=T) option in the
SYNAD routine. To continue processing the same data set, you must first return
control to the control program with a RETURN macro. The control program then
transfers control to your processing program, subject to the conditions described
below. Never attempt to reread or rewrite the record, because the system has
already attempted to recover from the error.
Within the SYNAD routine, do not issue the FEOV macro against the data set for
which the SYNAD routine was entered.
These options are applicable only to data errors, because control errors result in
abnormal termination of the task. Data errors affect only the validity of a block of
data. Control errors affect information or operations necessary for continued
processing of the data set. These options are not applicable to a spooled data set, a
subsystem data set, or output errors, except output errors on a real printer. If the
EROPT and SYNAD fields are not complete, ABE is assumed.
Because EROPT applies to a physical block of data, and not to a logical record, use
of SKP or ACC may result in incorrect assembly of spanned records.
ISAM
If the error analysis routine receives control from the CLOSE routine when indexed
sequential data sets are being created (the DCB is opened for QISAM load mode),
bit 3 of the IOBFLAGS field in the load mode buffer control table (IOBBCT) is set
to 1. The DCBWKPT6 field in the DCB contains an address of a list of work area
pointers (ISLVPTRS). The pointer to the IOBBCT is at offset 8 in this list as shown
in the following diagram:
DCB (DCBWKPT6 at offset 248) ----> work area pointers (ISLVPTRS)
ISLVPTRS + 8: A(IOBBCT)      ----> IOBBCT (contains IOBFLAGS)
If the error analysis routine receives control from the CLOSE routine when indexed
sequential data sets are being processed using QISAM scan mode, bit 2 of the DCB
field DCBEXCD2 is set to 1.
For information about QISAM error conditions and the meanings they have when
the ISAM interface to VSAM is being used, see Appendix E, “Using ISAM
Programs with VSAM Data Sets,” on page 611.
The DCB exit list must begin on a fullword boundary and each entry in the list
requires one fullword. Each exit list entry is identified by a code in the high-order
byte, and the address of the routine, image, or area is specified in the 3 low-order
bytes. Codes and addresses (including the information location) for the exit list
entries are shown in Table 51.
| IBM provides an assembler macro, IHAEXLST, to define symbols for the exit list
| codes. Those symbols are in Table 51 on page 536. The macro also defines a
| four-byte DSECT with the following symbols:
| Offset  Length  Symbol    Meaning
| 0               EXLST     Name of DSECT
| 0       4       EXLENTRA  An entry in the DCB exit list
| 0       1       EXLCODES  Code. Last-entry bit and seven-bit code:
|                           1... ....  EXLLASTE  This is the last entry.
|                           .xxx xxxx  Seven-bit entry code; see values in the figure below.
| 1       3       EXLENTRB  Address or other value as documented for the entry code
|         4       EXLLENTH  Constant 4 that represents the length of each entry.
| For an example of coding a DCB exit list with IHAEXLST, see Figure 64 on page
| 367.
You can activate or deactivate any entry in the list by placing the required code in
the high-order byte. Care must be taken, however, not to destroy the last entry
indication. The operating system routines scan the list from top to bottom, and the
first active entry found with the proper code is selected.
You can shorten the list during execution by setting the high-order bit to 1, and
extend it by setting the high-order bit to 0.
Exit routines identified in a DCB exit list are entered in 24-bit mode even if the rest
of your program is executing in 31-bit mode. z/OS DFSMS Macro Instructions for
Data Sets has an example showing how to build a 24-bit routine in an area below
the 16 MB line that acts as a glue routine and branches to your 31-bit routine
above the line.
Register Contents
0 Variable; see exit routine description.
1 The 3 low-order bytes contain the address of the DCB currently being processed, except when
the user-label exits (X'01' - X'04' and X'0C'), user totaling exit (X'0A'), DCB abend exit (X'11'),
nonspecific tape volume mount exit (X'17'), or the tape volume security/verification exit (X'18')
is taken, when register 1 contains the address of a parameter list. The contents of the parameter
list are described in the explanation of each exit routine.
2-13 Contents before execution of the macro.
Note: These register contents are unpredictable if the exit is called during task termination. For
example, the system might call the DCB ABEND exit or the end-of-volume exit for QSAM
output.
14 Return address (must not be altered by the exit routine).
15 Address of exit routine entry point.
The conventions for saving and restoring register contents are as follows:
v The exit routine must preserve the contents of register 14. It need not preserve
the contents of other registers. The control program restores the contents of
registers 2 to 13 before returning control to your program.
v The exit routine must not use the save area whose address is in register 13,
because this area is used by the control program. If the exit routine calls another
routine or issues supervisor or data management macros, it must provide the
address of a new save area in register 13.
v The exit routine must not issue an access method macro that refers to the DCB
for which the exit routine was called, unless otherwise specified in the
individual exit routine descriptions that follow.
Serialization
During any of the exit routines described in this section, the system might hold an
enqueue on the SYSZTIOT resource. The resource represents the TIOT and DSAB
chain and holding it or being open to the DD are the only ways to ensure that
dynamic unallocation in another task does not eliminate those control blocks while
they are being examined. If the system holds the SYSZTIOT resource, your exit
routine cannot use certain system functions that might need the resource. Those
functions include LOCATE, OBTAIN, SCRATCH, CATALOG, OPEN, CLOSE,
FEOV, and dynamic allocation. Whether the system holds that resource is part of
system logic and IBM might change it in a future release. IBM recommends that
your exit routine not depend on the system holding or not holding SYSZTIOT. One
example of your exit routine depending on the system holding SYSZTIOT is your
routine testing control blocks for DDs outside the concatenation.
For more information about the RDJFCB macro, see z/OS DFSMSdfp Advanced
Services.
Programming Conventions
The allocation retrieval list must be below the 16 MB line, but the allocation return
area can be above the 16 MB line.
When you are finished obtaining information from the retrieval areas, free the
storage with a FREEMAIN or STORAGE macro.
You can use the IHAARL macro to generate and map the allocation retrieval list.
For more information about the IHAARL macro see z/OS DFSMSdfp Advanced
Services.
Restrictions
When OPEN TYPE=J is issued, the X'13' exit has no effect. The JFCB exit at X'07'
can be used instead (see “JFCB Exit” on page 547).
Not all of these options are available for each abend condition. Your DCB ABEND
exit routine must determine which option is available by examining the contents of
the option mask byte (byte 3) of the parameter list. The address of the parameter
list is passed in register 1. Figure 113 shows the contents of the parameter list and
the possible settings of the option mask when your routine receives control.
When your DCB ABEND exit routine returns control to the system control
program (this can be done using the RETURN macro), the option mask byte must
contain the setting that specifies the action you want to take. These actions and the
corresponding settings of the option mask byte are in Table 52.
Table 52. Option Mask Byte Settings
Decimal
Value Action
0 Abnormally terminate the task immediately.
4 Ignore the abend condition.
8 Delay the abend until the other DCBs being processed concurrently are opened
or closed.
12 Make an attempt to recover.
Your exit routine must inspect bits 4, 5, and 6 of the option mask byte (byte 3 of
the parameter list) to determine which options are available. If a bit is set to 1, the
corresponding option is available. Indicate your choice by inserting the appropriate
value in byte 3 of the parameter list, overlaying the bits you inspected. If you use a
value that specifies an option that is not available, the abend is issued immediately.
If the contents of bits 4, 5, and 6 of the option mask are 0, you must not change
the option mask. This unchanged option mask results in a request for an
immediate abend.
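The option-mask inspection described above can be sketched in C. This is an illustrative model only, not actual z/OS code; in IBM bit numbering, bit 4 of the byte is mask 0x08, bit 5 is 0x04, and bit 6 is 0x02, and the returned value is what the routine would store back into byte 3 of the parameter list.

```c
#include <stdint.h>

/* Illustrative model of byte 3 (the option mask) of the DCB abend exit
   parameter list. IBM bit numbering: bit 0 is the high-order bit. */
#define MAY_RECOVER  0x08u   /* bit 4: recovery attempt permitted */
#define MAY_IGNORE   0x04u   /* bit 5: ignore permitted           */
#define MAY_DELAY    0x02u   /* bit 6: delay permitted            */

/* Choose the least drastic permitted action (Table 52 values). */
uint8_t choose_action(uint8_t option_mask)
{
    if (option_mask & MAY_RECOVER) return 12;  /* attempt to recover */
    if (option_mask & MAY_IGNORE)  return 4;   /* ignore the abend   */
    if (option_mask & MAY_DELAY)   return 8;   /* delay the abend    */
    return 0;         /* bits 4-6 all zero: immediate abend required */
}
```

Note that returning a value whose option bit is off would cause the abend to be issued immediately, which is why the mask must be inspected first.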
If bit 5 of the option mask is set to 1, you can ignore the abend by placing a value
of 4 in byte 3 of the parameter list. Processing on the current DCB stops and bit
DCBOFOPN is off. There is no need to issue CLOSE. If you subsequently attempt
to use this DCB other than to issue CLOSE or FREEPOOL, the results are
unpredictable. If you ignore an error in end-of-volume, the DCB is closed and
control is returned to your program at the point that caused the end-of-volume
condition (unless the end-of-volume routines were called by the CLOSE routines).
If the end-of-volume routines were called by the CLOSE routines, an ABEND
macro is issued even though the IGNORE option was selected.
If bit 6 of the option mask is set to 1, you can delay the abend by placing a value
of 8 in byte 3 of the parameter list. All other DCBs being processed by the same
OPEN or CLOSE invocation will be processed before the abend is issued. For
end-of-volume, however, you can’t delay the abend because the end-of-volume
routine never has more than one DCB to process.
If bit 4 of the option mask is set to 1, you can attempt to recover. Place a value of
12 in byte 3 of the parameter list and provide information for the recovery attempt.
Table 53 on page 541 lists the abend conditions for which recovery can be
attempted. See z/OS MVS System Messages, Vol 7 (IEB-IEE); z/OS MVS System
Messages, Vol 8 (IEF-IGD); z/OS MVS System Messages, Vol 9 (IGF-IWM); z/OS MVS
System Messages, Vol 10 (IXC-IZP); and z/OS MVS System Codes.
Recovery Requirements
For most types of recoverable errors, you should supply a recovery work area (see
Figure 114 on page 542) with a new volume serial number for each volume
associated with an error.
If no new volumes are supplied for such errors, recovery will be attempted with
the existing volumes, but the likelihood of successful recovery is greatly reduced.
If you request recovery for system completion code 117, return code 3C, or system
completion code 214, return code 0C, or system completion code 237, return code
0C, you do not need to supply new volumes or a work area. The condition that
caused the abend is disagreement between the DCB block count and the calculated
count from the hardware. To permit recovery, this disagreement is ignored and the
value in the DCB is used.
If you request recovery for system completion code 237, return code 04, you don’t
need to supply new volumes or a work area. The condition that caused the abend
is the disagreement between the block count in the DCB and that in the trailer
label. To permit recovery, this disagreement is ignored.
If you request recovery for system completion code 717, return code 10, you don’t
need to supply new volumes or a work area. The abend is caused by an I/O error
during updating of the DCB block count. To permit recovery, the block count is not
updated. So, an abnormal termination with system completion code 237, return
code 04, may result when you try to read from the tape after recovery. You may
attempt recovery from the abend with system completion code 237, return code 04,
as explained in the preceding paragraph.
System completion codes and their associated return codes are described in z/OS
MVS System Codes.
The work area that you supply for the recovery attempt must begin on a halfword
boundary and can contain the information described in Figure 114. Place a pointer
to the work area in the last 3 bytes of the parameter list pointed to by register 1
and described in Figure 113 on page 539.
If you acquire the storage for the work area by using the GETMAIN macro, you
can request that it be freed by a FREEMAIN macro after all information has been
extracted from it. Set the high-order bit of the option byte in the work area to 1
and place the number of the subpool from which the work area was requested in
byte 3 of the recovery work area.
Only one recovery attempt per data set is permitted during OPEN, CLOSE, or
end-of-volume processing. If a recovery attempt is unsuccessful, you can not
request another recovery. The second time through the exit routine you may
request only one of the other options (if allowed): Issue the abend immediately,
ignore the abend, or delay the abend. If at any time you select an option that is not
permitted, the abend is issued immediately.
If recovery is successful, you still receive an abend message on your listing. This
message refers to the abend that would have been issued if the recovery had not
been successful.
When opening a data set for output and the record format is fixed or variable, you
can force the system to calculate an optimal block size by setting the block size in
the DCB or DCBE to zero before returning from this exit. The system uses DCB
block size if it is not using the large block interface (LBI). See “Large Block
Interface (LBI)” on page 328. If the zero value you supply is not changed by the
DCB OPEN installation exit, OPEN determines a block size when OPEN takes
control after return from the DCB OPEN installation exit. See “System-Determined
Block Size” on page 329.
This exit is mutually exclusive with the JFCBE exit. If you need both the JFCBE
and DCB OPEN exits, you must use the JFCBE exit to pass control to your
routines.
The DCB OPEN exit is intended for modifying or updating the DCB. System
functions should not be attempted in this exit before returning to OPEN
processing. In particular, dynamic allocation, OPEN, CLOSE, EOV, and DADSM
functions should not be invoked because of an existing OPEN enqueue on the
SYSZTIOT resources.
concatenation, the system calls your DCB OPEN exit at the beginning of each data
set and calls your EOV exit only for each volume of each disk or tape data set after
the first volume of the data set.
For an end-of-volume (EOV) condition, the EOV routine passes control to your
installation’s nonstandard input trailer label routine, whether or not this exit code
is specified. For an end-of-data condition when this exit code is specified, the EOV
routine does not pass control to your installation’s nonstandard input trailer label
routine. Instead, the CLOSE routine passes control to your installation’s
nonstandard input trailer label routine.
When the system reads or writes any kind of cartridge tape, it calls the
block-count-unequal exit if the DCB block count does not match the block count
calculated for the cartridge. The EOV and CLOSE functions perform these
comparisons for cartridges, even for unlabeled tapes and for writes. The result can
be a 117-3C or 237-0C ABEND, but the system calls your optional DCB ABEND
exit.
The routine is entered during EOV processing. The trailer label block count is
passed in register 0. You can gain access to the count field in the DCB by using the
address passed in register 1 plus the proper displacement, which is shown in z/OS
DFSMS Macro Instructions for Data Sets. If the block count in the DCB differs from
that in the trailer label when no exit routine is provided or your exit gives return
code 0, the system calls your optional DCB abend exit and possibly your
installation’s DCB abend exit. If these exits do not exist or they allow abnormal
end, the task is abnormally terminated. The routine must terminate with a
RETURN macro and a return code that indicates what action is to be taken by the
operating system, as shown in Table 54.
Table 54. System Response to Block Count Exit Return Code
Return Code System Action
0 (X'00') The task is to be abnormally terminated with system completion code 237,
return code 4.
4 (X'04') Normal processing is to be resumed.
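The decision made by a block count exit can be sketched in C. This is an illustrative model only, not actual z/OS code; a real routine receives the trailer label count in register 0 and locates the DCB count through register 1, then sets the Table 54 return code before issuing RETURN.

```c
#include <stdint.h>

/* Illustrative block count exit decision (Table 54 return codes):
   4 = resume normal processing, 0 = abnormally terminate (237-04). */
uint32_t block_count_exit(uint32_t trailer_count, uint32_t dcb_count,
                          int accept_mismatch)
{
    if (trailer_count == dcb_count)
        return 4;                       /* counts agree: resume */
    return accept_mismatch ? 4u : 0u;   /* policy decision on mismatch */
}
```

Whether a mismatch is acceptable is an installation policy choice; the `accept_mismatch` flag here is purely hypothetical.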
As with other exit routines, the contents of register 14 must be saved and restored
if any macros are used.
When you concatenate data sets with unlike attributes, no EOV exits are taken
when beginning each data set.
The system treats the volumes of a striped extended format data set as if they were
one volume. For such a data set your EOV exit is called only when the end of the
data set is reached and it is part of a like sequential concatenation.
When the EOV routine is entered, register 0 contains 0 unless user totaling was
specified. If you specified user totaling in the DCB macro (by coding OPTCD=T) or
in the DD statement for an output data set, register 0 contains the address of the
user totaling image area.
The routine is entered after the next volume has been positioned and all necessary
label processing has been completed. If the volume is a reel or cartridge of
magnetic tape, the tape is positioned after the tape mark that precedes the
beginning of the data.
You can use the EOV exit routine to take a checkpoint by issuing the CHKPT
macro (see z/OS DFSMSdfp Checkpoint/Restart). If a checkpointed job step
terminates abnormally, it can be restarted from the EOV checkpoint. When the job
step is restarted, the volume is mounted and positioned as on entry to the routine.
Restart becomes impossible if changes are made to the link pack area (LPA) library
between the time the checkpoint is taken and the job step is restarted. When the
EOV exit is entered, register 1 contains the address of the DCB. Registers 2 - 13
contain the contents when your program issued the macro that resulted in the EOV
condition. Register 14 has the return address. When the step is restarted, pointers
to EOV modules must be the same as when the checkpoint was taken.
The EOV exit routine returns control in the same manner as the DCB exit routine.
The contents of register 14 must be preserved and restored if any macros are used
in the routine. Control is returned to the operating system by a RETURN macro;
no return code is required.
Multiple entries in the exit list can define FCBs. The OPEN and SETPRT
routines search the exit list for requested FCBs before searching SYS1.IMAGELIB.
The first 4 bytes of the FCB image contain the image identifier. To identify the FCB,
this image identifier is specified in the FCB parameter of the DD statement, by
your JFCBE exit, by the SETPRT macro, or by the system operator in response to
message IEC127D or IEC129D.
For an IBM 3203, 3211, 3262, 4245, or 4248 Printer, the image identifier is followed
by the FCB image described in z/OS DFSMSdfp Advanced Services.
You can create, modify, and list FCB images in libraries with the IEBIMAGE utility
and the CIPOPS utility. IEBIMAGE is described in z/OS DFSMSdfp Utilities.
The system searches the DCB exit list for an FCB image only when writing to a
printer that is allocated to the job step. The system does not search the DCB exit
list with a SYSOUT data set. Figure 115 on page 547 shows one way the exit list
can be used to define an FCB image.
...
DCB ..,EXLST=EXLIST
...
EXLIST DS 0F
DC X'10' Flag code for FCB image
DC AL3(FCBIMG) Address of FCB image
DC X'80000000' End of EXLST and a null entry
FCBIMG DC CL4'IMG1' FCB identifier
DC X'00' FCB is not a default
DC AL1(67) Length of FCB
DC X'90' Offset print line
* 16 line character positions to the right
DC X'00' Spacing is 6 lines per inch
DC 5X'00' Lines 2-6, no channel codes
DC X'01' Line 7, channel 1
DC 6X'00' Lines 8-13, no channel codes
DC X'02' Line 14, channel 2
DC 5X'00' Lines 15-19, no channel codes
DC X'03' Line 20, channel 3
DC 9X'00' Lines 21-29, no channel codes
DC X'04' Line 30, channel 4
DC 19X'00' Lines 31-49, no channel codes
DC X'05' Line 50, channel 5
DC X'06' Line 51, channel 6
DC X'07' Line 52, channel 7
DC X'08' Line 53, channel 8
DC X'09' Line 54, channel 9
DC X'0A' Line 55, channel 10
DC X'0B' Line 56, channel 11
DC X'0C' Line 57, channel 12
DC 8X'00' Lines 58-65, no channel codes
DC X'10' End of FCB image
...
END
//ddname DD UNIT=3211,FCB=(IMG1,VERIFY)
/*
JFCB Exit
This exit list entry does not define an exit routine. It is used with the RDJFCB
macro and OPEN TYPE=J. The RDJFCB macro uses the address specified in the
DCB exit list entry at X'07' to place a copy of the JFCB for each DCB specified by
the RDJFCB macro.
The area is 176 bytes and must begin on a fullword boundary. It must be located
in the user’s address space, below 16 MB virtual. The DCB can be either open or
closed when the RDJFCB macro is run.
If RDJFCB fails while processing a DCB associated with your RDJFCB request,
your task is abnormally terminated. You cannot use the DCB abend exit to recover
from a failure of the RDJFCB macro. See z/OS DFSMSdfp Advanced Services.
JFCBE Exit
JCL-specified setup requirements for the IBM 3800 and 3900 Printing Subsystem
cause a JFCB extension (JFCBE) to be created to reflect those specifications. Your
JFCBE exists if BURST, MODIFY, CHARS, FLASH, or any copy group is coded on
the DD statement. The JFCBE exit can examine or modify those specifications in
the JFCBE.
Although use of the JFCBE exit is still supported, its use is not recommended.
Place the address of the routine in an exit list. The device allocated does not have
to be a printer. This exit is taken during OPEN processing and is mutually
exclusive with the DCB OPEN exit. If you need both the JFCBE and DCB OPEN
exits, you must use the JFCBE exit to pass control to your routines. Everything that
you can do in a DCB OPEN exit routine can also be done in a JFCBE exit. See
“DCB OPEN Exit” on page 543. When you issue the SETPRT macro to a SYSOUT
data set, the JFCBE is further updated from the information in the SETPRT
parameter list.
When control is passed to your exit routine, the contents of register 1 will be the
address of the DCB being processed.
The area pointed to by register 0 will contain a 176-byte JFCBE followed by the
4-byte FCB identification that is obtained from the JFCB. If the FCB operand was
not coded on the DD statement, this FCB field will be binary zeros.
If your exit routine modifies your copy of the JFCBE, you should indicate this by
turning on bit JFCBEOPN (X'80' in JFCBFLAG) in the JFCBE copy. On return to
OPEN, this bit indicates if the system copy is to be updated. The 4-byte FCB
identification in your area is used to update the JFCB regardless of the bit setting.
Checkpoint/restart also interrogates this bit to determine which version of the
JFCBE to use at restart time. If this bit is not on, the JFCBE generated by the restart
JCL is used.
The physical location of the labels on the data set depends on the data set
organization. For direct (BDAM) data sets, user labels are placed on a separate user
label track in the first volume. User label exits are taken only during execution of
the OPEN and CLOSE routines. Thus you can create or examine as many as eight
user header labels only during execution of OPEN and as many as eight trailer
labels only during execution of CLOSE. Because the trailer labels are on the same
track as the header labels, the first volume of the data set must be mounted when
the data set is closed.
For physical sequential (BSAM or QSAM) data sets on DASD or tape with IBM
standard labels, you can create or examine as many as eight header labels and
eight trailer labels on each volume of the data set. For ISO/ANSI tape label data
sets, you can create an unlimited number of user header and trailer labels. The
user label exits are taken during OPEN, CLOSE, and EOV processing.
To create or verify labels, you must specify the addresses of your label exit routines
in an exit list as shown in Table 51 on page 536. Thus you can have separate
routines for creating or verifying header and trailer label groups. Care must be
taken if a magnetic tape is read backward, because the trailer label group is
processed as header labels and the header label group is processed as trailer labels.
When your routine receives control, the contents of register 0 are unpredictable.
Register 1 contains the address of a parameter list. The contents of registers 2 to 13
are the same as when the macro instruction was issued. However, if your program
does not issue the CLOSE macro, or abnormally ends before issuing CLOSE, the
CLOSE macro will be issued by the control program, with control-program
information in these registers.
The first address in the parameter list points to an 80-byte label buffer area. The
format of a user label is described in “User Label Groups” on page 564. For input,
the control program reads a user label into this area before passing control to the
label routine. For output, your user label exit routine builds labels in this area and
returns to the control program, which writes the label. When an input trailer label
routine receives control, the EOF flag (high-order byte of the second word in the
parameter list) is set as follows:
Bit 0 = 0: Entered at EOV
Bit 0 = 1: Entered at end-of-file
Bits 1-7: Reserved
When a user label exit routine receives control after an uncorrectable I/O error has
occurred, the third word of the parameter list contains the address of the standard
status indicators. The error flag (high-order byte of the third word in the parameter
list) is set as follows:
Bit 0 = 1: Uncorrectable I/O error
Bit 1 = 1: Error occurred during writing of updated label
Bits 2-7: Reserved
The fourth entry in the parameter list is the address of the user totaling image
area. This image area is the entry in the user totaling save area that corresponds to
the last record physically written on the volume. (The image area is discussed in
“User Totaling for BSAM and QSAM” on page 558.)
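The flag bytes described above can be decoded bit by bit. The following Python sketch models that decoding; it is illustrative only (the helper names are invented, and a real exit routine would examine these bytes in assembler):

```python
def decode_eof_flag(flag_byte):
    """Interpret the high-order byte of the second parameter-list word."""
    # Bit 0 distinguishes end-of-file entry from end-of-volume entry.
    return "end-of-file" if flag_byte & 0x80 else "EOV"

def decode_error_flag(flag_byte):
    """Interpret the high-order byte of the third parameter-list word."""
    return {
        "uncorrectable_io_error": bool(flag_byte & 0x80),        # bit 0
        "error_writing_updated_label": bool(flag_byte & 0x40),   # bit 1
    }

# Both error bits on:
result = decode_error_flag(0xC0)
```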
Each routine must create or verify one label of a header or trailer label group,
place a return code in register 15, and return control to the operating system. The
operating system responds to the return code as shown in Table 55 on page 551.
You can create user labels only for data sets on magnetic tape volumes with IBM
standard labels or ISO/ANSI labels and for data sets on direct access volumes.
When you specify both user labels and IBM standard labels in a DD statement by
specifying LABEL=(,SUL) and there is an active entry in the exit list, a label exit is
always taken. Thus, a label exit is taken even when an input data set does not
contain user labels, or when no user label track has been allocated for writing
labels on a direct access volume. In either case, the appropriate exit routine is
entered with the buffer area address parameter set to 0. On return from the exit
routine, normal processing is resumed; no return code is necessary.
Table 55. System Response to a User Label Exit Routine Return Code

Input header or trailer label routine:
0 (X'00')   Normal processing is resumed. If there are any remaining labels in
            the label group, they are ignored.
4 (X'04')   The next user label is read into the label buffer area and control
            is returned to the exit routine. If there are no more labels in the
            label group, normal processing is resumed.
8¹ (X'08')  The label is written from the label buffer area and normal
            processing is resumed.
12¹ (X'0C') The label is written from the label buffer area, the next label is
            read into the label buffer area, and control is returned to the
            label processing routine. If there are no more labels, processing is
            resumed.

Output header or trailer label routine:
0 (X'00')   Normal processing is resumed; no label is written from the label
            buffer area.
4 (X'04')   User label is written from the label buffer area. Normal processing
            is resumed.
8 (X'08')   User label is written from the label buffer area. If fewer than
            eight labels have been created, control is returned to the exit
            routine, which then creates the next label. If eight labels have
            been created, normal processing is resumed.
Note:
1. Your input label routines can return these codes only when you are processing a
physical sequential data set opened for UPDAT or a direct data set opened for OUTPUT
or UPDAT. These return codes let you verify the existing labels, update them if
necessary, and request that the system write the updated labels.
Label exits are not taken for system output (SYSOUT) data sets, or for data sets on
volumes that do not have standard labels. For other data sets, exits are taken as
follows:
v When an input data set is opened, the input header label exit 01 is taken. If the
data set is on tape being opened for RDBACK, user trailer labels will be
processed.
v When an output data set is opened, the output header label exit 02 is taken.
However, if the data set already exists and DISP=MOD is coded in the DD
statement, the input trailer label exit 03 is taken to process any existing trailer
labels. If the input trailer label exit 03 does not exist, then the deferred input
trailer label exit 0C is taken if it exists; otherwise, no label exit is taken. For tape,
these trailer labels will be overwritten by the new output data or by EOV or
close processing when writing new standard trailer labels. For direct access
devices, these trailer labels will still exist unless rewritten by EOV or close
processing in an output trailer label exit.
v When an input data set reaches EOV, the input trailer label exit 03 is taken. If
the data set is on tape opened for RDBACK, header labels will be processed. The
input trailer label exit 03 is not taken if you issue an FEOV macro. If a defer
input trailer label exit 0C is present, and an input trailer label exit 03 is not
present, the 0C exit is taken. After switching volumes, the input header label exit
01 is taken. If the data set is on tape opened for RDBACK, trailer labels will be
processed.
v When an output data set reaches EOV, the output trailer label exit 04 is taken.
After switching volumes, output header label exit 02 is taken.
v When an input data set reaches end-of-data, the input trailer label exit 03 is
taken before the EODAD exit, unless the DCB exit list contains a defer input
trailer label exit 0C.
v When an input data set is closed, no exit is taken unless the data set was
previously read to end-of-data and the defer input trailer label exit 0C is present.
If so, the defer input trailer label exit 0C is taken to process trailer labels, or if
the tape is opened for RDBACK, header labels.
v When an output data set is closed, the output trailer label exit 04 is taken.
To process records in reverse order, a data set on magnetic tape can be read
backward. When you read backward, header label exits are taken to process trailer
labels, and trailer label exits are taken to process header labels. The system
presents labels from a label group in ascending order by label number, which is the
order in which the labels were created. If necessary, an exit routine can determine
label type (UHL or UTL) and number by examining the first four characters of
each label. Tapes with IBM standard labels and direct access devices can have as
many as eight user labels. Tapes with ISO/ANSI labels can have an unlimited
number of user labels.
After an input error, the exit routine must return control with an appropriate
return code (0 or 4). No return code is required after an output error. If an output
error occurs while the system is opening a data set, the data set is not opened
(DCB is flagged) and control is returned to your program. If an output error occurs
at any other time, the system attempts to resume normal processing.
Open or EOV calls this exit when either must issue mount message IEC501A or
IEC501E to request a scratch tape volume. Open issues the mount message if you
specify the DEFER parameter with the UNIT option, and either you did not specify
a volume serial number in the DD statement or you specified
'VOL=SER=SCRTCH'. EOV always calls this exit for a scratch tape volume request.
This user exit gets control in the key and state of the program that issued the
OPEN or EOV, and no locks are held. This exit must provide a return code in
register 15.
If OPEN or EOV finds that the volume pointed to by register 0 is being used either
by this or by another job (an active ENQ on this volume), it calls this exit again
and continues to do so until you either specify an available volume serial number
or request a scratch volume. If the volume you specify is available but is rejected
by OPEN or EOV for some other reason (I/O errors, expiration date, password
check, and so forth), this exit is not called again.
When this exit gets control, register 1 points to the parameter list described by the
IECOENTE macro. Figure 117 on page 554 shows this parameter list.
          Length or
Offset    Bit Pattern  Description
          .... ...1    OENTNTRY  Set to 1 if this is not the first time this
                                 exit was called because the requested tape
                                 volume is in use by this or another job
6         2            OENTRSVD  Reserved
16(X’10’) 4            OENTJFCB  Points to the OPEN or EOV copy of the JFCB.
                                 The high-order bit is always on, indicating
                                 that this is the end of the parameter list.
When this user exit is entered, the general registers contain the information in
Table 56 for saving and restoring.
Table 56. Saving and Restoring General Registers
Register Contents
0 Variable
1 Address of the parameter list for this exit
2-13 Contents of the registers before the OPEN, FEOV, or EOV was issued
14 Return address (you must preserve the contents of this register in this user
exit)
15 Entry point address to this user exit
You do not have to preserve the contents of any register other than register 14. The
operating system restores the contents of registers 2 through 13 before it returns to
OPEN or EOV and before it returns control to the original calling program.
Do not use the save area pointed to by register 13; the operating system uses it. If
you call another routine, or issue a supervisor or data management macro in this
user exit, you must provide the address of a new save area in register 13.
This user exit gets control in the key and state of the program that issued the
OPEN or EOV request, and no locks are held. This exit must provide a return code
in register 15.
OPEN abnormally terminates with a 913-34 ABEND code, and EOV terminates with a
937-44 ABEND code.
12 (X'0C') Use this volume without checking the data set’s expiration date. Password, RACF
authority, and data set name checking still occurs.
16 (X'10') Use this volume. A conflict with the password, label expiration date, or data set name
does not prevent the new data set from writing over the current data set if it is the first
one on the volume. To write over other than the first data set, the new data set must have
the same level of security protection as the current data set.
When this exit gets control, register 1 points to the parameter list described by the
IECOEVSE macro. The parameter list is shown in Figure 118 on page 557.
          Length or
Offset    Bit Pattern  Description
          .... ...1    OEVSFILE  Set to 0 if the first data set on the volume
                                 is to be written; set to 1 if this is not the
                                 first data set on the volume to be written.
                                 This bit is always 0 for INPUT processing.
6         2            OEVSRSVD  Reserved
When this user exit is entered, the general registers have the following contents.
Register Contents
0 Variable
1 Address of the parameter list for this exit.
2-13 Contents of the registers before the OPEN or EOV was issued
14 Return address (you must preserve the contents of this register in this user
exit)
15 Entry point address to this user exit
You do not have to preserve the contents of any register other than register 14. The
operating system restores the contents of registers 2 through 13 before it returns to
OPEN or EOV and before it returns control to the original calling program.
Do not use the save area pointed to by register 13; the operating system uses it. If
you call another routine or issue a supervisor or data management macro in this
user exit, you must provide the address of a new save area in register 13.
User totaling is ignored for extended format data sets and HFS data sets.
To request user totaling, you must specify OPTCD=T in the DCB macro instruction
or in the DCB parameter of the DD statement. The area in which you collect the
control data (the user totaling area) must be identified to the control program by
an entry of X'0A' in the DCB exit list. OPTCD=T cannot be specified for SYSIN or
SYSOUT data sets.
The user totaling area, an area in storage that you provide, must begin on a
halfword boundary and be large enough to contain your accumulated data plus a
2-byte length field. The length field must be the first 2 bytes of the area and
specify the length of the complete area. A data set for which you have specified
user totaling (OPTCD=T) will not be opened if either the totaling area length or
the address in the exit list is 0, or if there is no X'0A' entry in the exit list.
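As a rough model of the layout just described, the following Python sketch builds a totaling area whose first 2 bytes hold the length of the complete area. The function name is invented, and the big-endian byte order shown matches z/OS conventions; a real program would simply reserve this storage in assembler:

```python
import struct

def build_totaling_area(control_data: bytes) -> bytes:
    """Model of a user totaling area: 2-byte length field, then control data."""
    total_len = 2 + len(control_data)          # length field counts itself
    return struct.pack(">H", total_len) + control_data

# 4 bytes of accumulated control data -> a 6-byte totaling area.
area = build_totaling_area(b"\x00\x00\x12\x34")
assert struct.unpack(">H", area[:2])[0] == len(area) == 6
```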
The control program establishes a user totaling save area, where the control
program preserves an image of your totaling area, when an I/O operation is
scheduled. When the output user label exits are taken, the address of the save area
entry (user totaling image area) corresponding to the last record physically written
on a volume is passed to you in the fourth entry of the user label parameter list.
(This parameter list is described in “Open/Close/EOV Standard User Label Exit”
on page 549.) When an EOV exit is taken for an output data set and user totaling
has been specified, the address of the user totaling image area is in register 0.
When using user totaling for an output data set, that is, when creating the data set,
you must update your control data in your totaling area before issuing a PUT or a
WRITE macro. The control program places an image of your totaling area in the
user totaling save area when an I/O operation is scheduled. A pointer to the save
area entry (user totaling image area) corresponding to the last record physically
written on the volume is passed to you in your label processing routine. Thus you
can include the control total in your user labels.
When subsequently using this data set for input, you can collect the same
information as you read each record and compare this total with the one
previously stored in the user trailer label. If you have stored the total from the
preceding volume in the user header label of the current volume, you can process
each volume of a multivolume data set independently and still maintain this
system of control.
When variable-length records are specified with the totaling function for user
labels, special considerations are necessary. Because the control program
determines if a variable-length record fits in a buffer after a PUT or a WRITE is
issued, the total you have accumulated can include one more record than is really
written on the volume. For variable-length spanned records, the accumulated total
includes the control data from the volume-spanning record although only a
segment of the record is on that volume. However, when you process such a data
set, the volume-spanning record or the first record on the next volume will not be
available to you until after the volume switch and user label processing are
completed. Thus the totaling information in the user label cannot agree with that
developed during processing of the volume.
One way you can resolve this situation is to maintain, when you are creating a
data set, control data about each of the last two records and include both totals in
your user labels. Then the total related to the last complete record on the volume
and the volume-spanning record or the first record on the next volume would be
available to your user label routines. During subsequent processing of the data set,
your user label routines can determine if there is agreement between the generated
information and one of the two totals previously saved.
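The two-totals scheme above can be sketched as follows: while creating the data set, keep the running total as of each of the last two records written, so that whichever record actually ended the volume, a matching total is available for the user label. All names and the use of integer "totals" are illustrative:

```python
def totals_for_labels(record_values):
    """Return the running totals after the last two records written."""
    total = 0
    last_two = [0, 0]          # totals after the previous two records
    for value in record_values:
        total += value
        last_two = [last_two[1], total]
    return last_two

# After records contributing 5, 7, and 3, the two candidate totals
# to place in the user label are 12 and 15.
candidates = totals_for_labels([5, 7, 3])
```

During subsequent input processing, the label routine compares its generated total with either of the two saved values.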
When the totaling function for user labels is selected with DASD devices and
secondary space is specified, the total accumulated can be one less than the actual
written.
Topic Location
Direct Access Storage Device Architecture 561
Volume Label Group 562
Data Set Control Block (DSCB) 564
User Label Groups 564
As seen by software, each disk or tape is called a volume. Each volume can
contain one or more complete data sets and parts of data sets. Each complete or
partial data set on a DASD volume has a data set label. Each complete or partial
data set on a tape volume has a data set label only if the volume has IBM standard
labels or ISO or ANSI standard labels. For information about data sets and labels
on magnetic tapes, see “Magnetic Tape Volumes” on page 11.
Only standard label formats are used on direct access volumes. Volume, data set,
and optional user labels are used (see Figure 119 on page 562). In the case of direct
access volumes, the data set label is the data set control block (DSCB).
Related reading: For more information about tracks and records, see “Direct
Access Storage Device (DASD) Volumes” on page 8.
Figure 119 shows the physical layout of a direct access volume: track 0 of
cylinder 0 contains the IPL records and the volume label, optionally followed by
additional labels, and the VTOC contains the VTOC DSCB, free space DSCBs, and a
DSCB for each data set on the volume.
The format of the data portion of the direct access volume label group is shown in
Figure 120 on page 563.
The operating system identifies an initial volume label when, in reading the initial
record, it finds that the first 4 characters of the record are VOL1. That is, they
contain the volume label identifier and the volume label number. The initial
volume label is 80 bytes. The format of an initial volume label is described in
the following text.
Volume Label Number (1). Field 2 identifies the relative position of the volume
label in a volume label group. It must be written as X'F1'.
Volume Security. Field 4 is reserved for use by installations that want to provide
security for volumes. Make this field an X'C0' unless you have your own security
processing routines.
VTOC Pointer. Field 5 of direct access volume label 1 contains the address of the
VTOC in the form of CCHHR.
Reserved. Field 6 is reserved for possible future use, and should be left blank.
Owner Name and Address Code. Field 7 contains an optional identification of the
owner of the volume.
Each group can include as many as eight labels, but the space required for both
groups must not be more than one track on a direct access storage device. A
program becomes device-dependent (among direct access storage devices) when it
creates more than eight header labels or eight trailer labels.
User Header Label Group. The operating system writes these labels as directed by
the processing program recording the data set. The first four characters of the user
header label must be UHL1, UHL2, through UHL8; you can specify the remaining
76 characters. When the data set is read, the operating system makes the user
header labels available to the application program for processing.
User Trailer Label Group. These labels are recorded (and processed) as explained
in the preceding text for user header labels, except that the first four characters
must be UTL1, UTL2, through UTL8.
Label Identifier. Field 1 shows the kind of user header label. “UHL” means a user
header label; “UTL” means a user trailer label.
Label Number. Field 2 identifies the relative position (1 to 8) of the label within
the user label group. It is an EBCDIC character.
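A routine's classification of a user label by its first four characters can be modeled as follows. This Python sketch is illustrative only: on z/OS the label is EBCDIC text in the 80-byte label buffer area, modeled here as an ASCII byte string, and the function name is invented:

```python
def classify_label(label: bytes):
    """Return the label kind ("UHL" or "UTL") and its relative number (1-8)."""
    kind = label[:3].decode("ascii")   # field 1: label identifier
    number = int(chr(label[3]))        # field 2: relative position, one digit
    return kind, number

# An 80-byte user header label 1 and user trailer label 8:
uhl = classify_label(b"UHL1" + b" " * 76)
utl = classify_label(b"UTL8" + b" " * 76)
```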
Topic Location
DBCS Character Support 567
Record Length When Using DBCS Characters 567
Double-byte character set (DBCS) support lets you process characters in languages
that contain too many characters or symbols for each to be assigned a 1-byte
hexadecimal value. You can use DBCS to process languages, such as Japanese and
Chinese, that use ideographic characters. In DBCS, two bytes are used to describe
each character; this lets you describe more than 35 000 characters. When one byte
is used to describe a character, as in EBCDIC, it is called a single-byte character set
(SBCS).
When the data has a mixture of DBCS and SBCS strings, you must use two special
delimiters, SO (shift out) and SI (shift in), which designate where a DBCS string
begins and where it ends. SO tells you when you are leaving an SBCS string, and
SI tells you when you are returning to an SBCS string. Use the PRINT and REPRO
commands to insert the SO and SI characters around the DBCS data.
Fixed-Length Records
Because inserting of SO and SI characters increases the output record length, you
must define the output data set with enough space in the output record. The
record length of the output data set must be equal to the input data set’s record
length plus the additional number of bytes necessary to insert the SO and SI pairs.
Each SO and SI pair consists of 2 bytes. In the following example for a fixed-length
record, the input record length is 80 bytes and consists of one DBCS string
surrounded by an SO and SI pair. The output record length would be 82 bytes,
which is correct.
Input record length = 80; number of SO and SI pairs = 1
Output record length = 82 (correct length)
An output record length of 84 bytes, for example, would be too large and would
result in an error. An output record length of 80 bytes, for example, would be too
small because there would not be room for the SO and SI pair. If the output record
length is too small or too large, an error message is issued, a return code of 12 is
returned from IEBGENER, and the command ends.
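The fixed-length rule above amounts to a simple calculation: the output record length must equal the input record length plus 2 bytes for every SO/SI pair inserted. The following Python sketch shows it; the function name is invented:

```python
def required_output_lrecl(input_lrecl: int, so_si_pairs: int) -> int:
    """Exact output record length for a fixed-length DBCS record."""
    return input_lrecl + 2 * so_si_pairs   # each SO/SI pair adds 2 bytes

# The example from the text: 80-byte input, one SO/SI pair -> 82 bytes.
needed = required_output_lrecl(80, 1)
```

An output record length of 84 or 80 would differ from this exact value and cause the error described above.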
Variable-Length Records
Because insertion of SO and SI characters increases the output record length, you
must define the output data set with enough space in the output record. The input
data set’s record length plus the additional number of bytes necessary to insert the
SO and SI pairs must not exceed the maximum record length of the output data
set. Each SO and SI pair consists of 2 bytes. If the output record length is too
small, an error message will be issued, a return code of 12 will be returned from
IEBGENER, and the command will be ended.
In the following example for a variable-length record, the input record length is
50 bytes and consists of four DBCS strings, each surrounded by an SO and SI pair.
The output record length is 50 bytes, which is too small because the SO and SI
pairs add eight extra bytes to the record length. The output record length should
be at least 58 bytes.
Input record length = 50; number of SO and SI pairs = 4
Output record length = 50 (too small; should be at least 58 bytes)
Topic Location
Using the Basic Direct Access Method (BDAM) 569
Processing a Direct Data Set Sequentially 570
Organizing a Direct Data Set 570
Creating a Direct Data Set 571
Referring to a Record 573
Adding or Updating Records 574
Sharing DCBs 578
v The application program must synchronize all I/O operations with a CHECK or a
WAIT macro.
v The application program must block and unblock its own input and output
records. (BDAM only reads and writes data blocks.)
You can find data blocks within a data set with one of the following addressing
techniques:
v Relative track address technique. This locates a track on a direct access
storage device starting at the beginning of the data set.
v Relative block address technique. This locates a fixed-length data block
starting from the beginning of the data set.
If dynamic buffering is specified for your direct data set, the system will provide a
buffer for your records. If dynamic buffering is not specified, you must provide a
buffer for the system to use.
The discussion of direct access storage devices shows that record keys are optional.
If they are specified, they must be used for every record and must be of a fixed
length.
You can use direct addressing to develop the organization of your data set. When
you use direct addresses, the location of each record in the data set is known.
By Range of Keys
If format-F records with keys are being written, the key of each record can be used
to identify the record. For example, a data set with keys ranging from 0 to 4999
should be allocated space for 5000 records. Each key relates directly to a location
that you can refer to as a relative record number. Therefore, each record should be
assigned a unique key.
If identical keys are used, it is possible, during periods of high processor and
channel activity, to skip the desired record and retrieve the next record on the
track. The main disadvantage of this type of organization is that records might not
exist for many of the keys, even though space has been reserved for them.
By Number of Records
Space could be allocated based on the number of records in the data set rather
than on the range of keys. Allocating space based on the number of records
requires the use of a cross-reference table. When a record is written in the data set,
you must note the physical location as a relative block number, an actual address,
or as a relative track and record number. The addresses must then be stored in a
table that is searched when a record is to be retrieved. Disadvantages are that
cross-referencing can be used efficiently only with a small data set; storage is
required for the table, and processing time is required for searching and updating
the table.
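The cross-reference table described above can be modeled as a simple key-to-location map. This Python sketch is illustrative only; a real BDAM program would store relative track and record numbers, relative block numbers, or actual addresses, and would persist the table itself:

```python
xref = {}   # cross-reference table: record key -> physical location

def record_written(key: bytes, relative_track: int, record_number: int):
    """Note the physical location of a record when it is written."""
    xref[key] = (relative_track, record_number)

def locate(key: bytes):
    """Search the table when a record is to be retrieved."""
    return xref.get(key)

record_written(b"A001", 0, 1)
record_written(b"A002", 0, 2)
found = locate(b"A001")
```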
DSORG and KEYLEN can be specified through data class. For more information
about data class see Chapter 21, “Specifying and Initializing Data Control Blocks,”
on page 317.
If a direct data set is created and updated or read within the same job step, and
the OPTCD parameter is used in the creation, updating, or reading of the data set,
different DCBs and DD statements should be used.
Format-F records are written sequentially as they are presented. When a track is
filled, the system automatically writes the capacity record and advances to the next
track.
Rule: Direct data sets whose records are to be identified by relative track address
must be limited in size to no more than 65 536 tracks for the entire data set.
Example: In the example problem in Figure 122, a tape containing 204-byte records
arranged in key sequence is used to allocate a direct data set. A 4-byte binary key
for each record ranges from 1000 to 8999, so space for 8000 records is requested.
//DAOUTPUT DD DSNAME=SLATE.INDEX.WORDS,DCB=(DSORG=DA, C
// BLKSIZE=200,KEYLEN=4,RECFM=F),SPACE=(204,8000),---
//TAPINPUT DD ---
...
DIRECT START
...
L 9,=F’1000’
OPEN (DALOAD,(OUTPUT),TAPEDCB)
LA 10,COMPARE
NEXTREC GET TAPEDCB
LR 2,1
COMPARE C 9,0(2) Compare key of input against
* control number
BNE DUMMY
WRITE DECB1,SF,DALOAD,(2) Write data record
CHECK DECB1
AH 9,=H’1’
B NEXTREC
DUMMY C 9,=F’8999’ Have 8000 records been written?
BH ENDJOB
WRITE DECB2,SD,DALOAD,DUMAREA Write dummy
CHECK DECB2
AH 9,=H’1’
BR 10
INPUTEND LA 10,DUMMY
BR 10
ENDJOB CLOSE (TAPEDCB,,DALOAD)
...
DUMAREA DS 8F
DALOAD DCB DSORG=PS,MACRF=(WL),DDNAME=DAOUTPUT, C
DEVD=DA,SYNAD=CHECKER,---
TAPEDCB DCB EODAD=INPUTEND,MACRF=(GL), ---
...
Referring to a Record
You choose among three types of record addressing and you can choose other
addressing options.
Record Addressing
After you have determined how your data set is to be organized, you must
consider how the individual records will be referred to when the data set is
updated or new records are added. You refer to records using one of three forms of
addressing:
v Relative Block Address. You specify the relative location of the record (block)
within the data set as a 3-byte binary number. You can use this type of reference
only with format-F records. The system computes the actual track and record
number. The relative block address of the first block is 0.
v Relative Track Address. You specify the relative track as a 2-byte binary number
and the actual record number on that track as a 1-byte binary number. The
relative track address of the first track is 0. The number of the first record on
each track is 1.
Direct data sets whose records are to be identified by relative track address must
be limited in size to no more than 65 536 tracks for the entire data set.
v Actual Address. You supply the actual address in the standard 8-byte form,
MBBCCHHR. Remember that using an actual address might force you to specify
that the data set is unmovable. In that case the data set is ineligible to be system
managed.
In addition to the relative track or block address, you specify the address of a
virtual storage location containing the record key. The system computes the actual
track address and searches for the record with the correct key.
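For format-F records, the conversion from a relative block address to a relative track and record that the system performs can be modeled as follows. This Python sketch assumes a fixed number of blocks per track, which in practice depends on the device and block size; recall that the first track is 0 and the first record on each track is 1:

```python
def block_to_tt_r(relative_block: int, blocks_per_track: int):
    """Map a relative block number to (relative track, record on track)."""
    track = relative_block // blocks_per_track
    record = relative_block % blocks_per_track + 1   # records start at 1
    return track, record

# With an assumed 10 blocks per track: block 0 is track 0, record 1;
# block 23 is track 2, record 4.
first = block_to_tt_r(0, 10)
later = block_to_tt_r(23, 10)
```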
Extended Search
You request that the system begin its search with a specified starting location and
continue for a certain number of records or tracks. You can use the extended search
option to request a search for unused space where a record can be added.
To use the extended search option, you must specify in the DCB (DCBLIMCT) the
number of tracks (including the starting track) or records (including the starting
record) that are to be searched. If you specify a number of records, the system
might actually examine more than this number. In searching a track, the system
searches the entire track (starting with the first record); it therefore might examine
records that precede the starting record or follow the ending record.
If the DCB specifies a number equal to or greater than the number of tracks
allocated to the data set or the number of records within the data set, the entire
data set is searched in the attempt to satisfy your request.
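As a rough model of the DCBLIMCT track count, the following Python sketch (an assumption-laden illustration, not system code) lists which relative tracks are examined, assuming the search wraps from the end of the data set to the beginning, which is consistent with the entire data set being searched when the limit equals or exceeds the allocation:

```python
def tracks_searched(start_track: int, limit_tracks: int, total_tracks: int):
    """Relative track numbers examined under extended search when DCBLIMCT
    gives a track count. The count includes the starting track; a limit equal
    to or greater than the data set size searches every track."""
    n = min(limit_tracks, total_tracks)
    return [(start_track + i) % total_tracks for i in range(n)]
```

For example, a limit of 2 starting at track 3 examines tracks 3 and 4 only, while a limit larger than the data set examines every track once.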
Feedback Option
The feedback option specifies that the system is to provide the address of the
record requested by a READ or WRITE macro. This address can be in the same
form that was presented to the system in the READ or WRITE macro, or as an
8-byte actual address. You can specify the feedback option in the OPTCD
parameter of the DCB and in the READ or WRITE macro. If the feedback option is
omitted from the DCB, but is requested in a READ or WRITE macro, an 8-byte
actual address is returned to you.
If you add a record by passing a relative block address, the system converts the
address to an actual track address. That track is searched for a dummy record.
If a dummy record is found, the new record is written in place of it. If there is no
dummy record on the track, you are informed that the write operation did not take
place. If you request the extended search option, the new record will be written in
place of the first dummy record found within the search limits you specify. If none
is found, you are notified that the write operation could not take place.
In the same way, a reference by relative track address causes the record to be
written in place of a dummy record on the referenced track or the first within the
search limits, if requested. If extended search is used, the search begins with the
first record on the track. Without extended search, the search can start at any
record on the track. Therefore, records that were added to a track are not
necessarily located on the track in the same sequence they were written in.
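The dummy-record replacement described above can be sketched in Python. This is a simplified model (a hypothetical in-memory track layout, not BDAM itself), and it always scans each track from its first record, which matches the extended search behavior; without extended search the real search can start at any record on the track:

```python
DUMMY = object()  # stands in for a BDAM dummy (placeholder) record

def add_record(tracks, start_track, new_record, extended_search=False, limit=1):
    """Write new_record in place of the first dummy record found, starting at
    start_track. Without extended search only that track is examined; with it,
    up to `limit` tracks are. Returns (track, slot) on success, or None when
    no dummy record is found and the write does not take place."""
    n = limit if extended_search else 1
    for i in range(n):
        t = (start_track + i) % len(tracks)
        for slot, rec in enumerate(tracks[t]):
            if rec is DUMMY:
                tracks[t][slot] = new_record
                return (t, slot)
    return None
```

Because added records go wherever a dummy record happens to be, records added to a track are not necessarily in the sequence they were written in.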
You will have to retrieve the record first (using a READ macro), test for a dummy
record, update, and write.
To add a new record, use a relative track address. The system examines the
capacity record to see if there is room on the track. If there is, the new record is
written. Under the extended search option, the record is written in the first
available area within the search limit.
//DIRADD DD DSNAME=SLATE.INDEX.WORDS,---
//TAPEDD DD ---
...
DIRECTAD START
...
OPEN (DIRECT,(OUTPUT),TAPEIN)
NEXTREC GET TAPEIN,KEY
L 4,KEY Set up relative record number
SH 4,=H'1000'
ST 4,REF
WRITE DECB,DA,DIRECT,DATA,'S',KEY,REF+1
WAIT ECB=DECB
CLC DECB+1(2),=X'0000' Check for any errors
BE NEXTREC
The write operation adds the key and the data record to the data set. If the existing
record is not a dummy record, an indication is returned in the exception code of
the DECB. For that reason, it is better to use the WAIT macro instead of the
CHECK macro to test for errors or exceptional conditions.
//DIRECTDD DD DSNAME=SLATE.INDEX.WORDS,---
//TAPINPUT DD ---
...
DIRUPDAT START
...
OPEN (DIRECT,(UPDAT),TAPEDCB)
NEXTREC GET TAPEDCB,KEY
PACK KEY,KEY
CVB 3,KEYFIELD
SH 3,=H'1'
ST 3,REF
READ DECBRD,DIX,DIRECT,'S','S',0,REF+1
CHECK DECBRD
L 3,DECBRD+12
MVC 0(30,3),DATA
ST 3,DECBWR+12
WRITE DECBWR,DIX,DIRECT,'S','S',0,REF+1
CHECK DECBWR
B NEXTREC
...
KEYFIELD DS 0D
DC XL3’0’
KEY DS CL5
DATA DS CL30
REF DS F
DIRECT DCB DSORG=DA,DDNAME=DIRECTDD,MACRF=(RISXC,WIC), C
OPTCD=RF,BUFNO=1,BUFL=100
TAPEDCB DCB ---
...
There is no check for dummy records. The existing direct data set contains 25 000
records whose 5-byte keys range from 00 001 to 25 000. Each data record is 100
bytes long. The first 30 characters are to be updated. Each input tape record
consists of a 5-byte key and a 30-byte data area. Notice that only data is brought
into virtual storage for updating.
When you are updating variable-length records, you should use the same length to
read and write a record.
Sharing DCBs
BDAM permits several tasks to share the same DCB and several jobs to share the
same data set. It synchronizes I/O requests at both levels by maintaining a
read-exclusive list.
When several tasks share the same DCB and each asks for exclusive control of the
same block, BDAM issues a system ENQ for the block (or in some cases the entire
track). It reads in the block and passes it to the first caller while putting all
subsequent requests for that block on a wait queue. When the first task releases the
block, BDAM moves it into the next caller’s buffer and posts that task complete.
The block is passed to subsequent callers in the order the request was received.
BDAM not only synchronizes the I/O requests, but also issues only one ENQ and
one I/O request for several read requests for the same block.
Because BDAM processing is not sequential and I/O requests are not related, a
caller can continue processing other blocks while waiting for exclusive control of
the shared block.
Because BDAM issues a system ENQ for each record held exclusively, it permits a
data set to be shared between jobs, so long as all callers use BDAM. The system
enqueues on BDAM’s commonly understood argument.
BDAM supports multiple task users of a single DCB when working with existing
data sets. When operating in load mode, however, only one task can use the DCB
at a time. The following restrictions and comments apply when more than one task
shares the same DCB, or when multiple DCBs are used for the same data set.
v Subpool 0 must be shared.
v You should ensure that a WAIT or CHECK macro has been issued for all
outstanding BDAM requests before the task issuing the READ or WRITE macro
ends. In case of abnormal termination, this can be done through a STAE/STAI or
ESTAE exit.
v FREEDBUF or RELEX macros should be issued to free any resources that could
still be held by the terminating task. You can free the resources during or after
task termination.
Rule: OPEN, CLOSE, and all I/O must be performed in the same key and state
(problem state or supervisor state).
Topic Location
Using the Basic Indexed Sequential Access Method (BISAM) 579
Using the Queued Indexed Sequential Access Method (QISAM) 579
Processing ISAM Data Sets 580
Organizing Data Sets 580
Creating an ISAM Data Set 584
Allocating Space 587
Calculating Space Requirements 590
Retrieving and Updating 595
Adding Records 600
Maintaining an Indexed Sequential Data Set 603
Note: z/OS no longer supports indexed sequential (ISAM) data sets. Before
migrating to z/OS V1R7, convert your indexed sequential data sets to
key-sequenced data sets (KSDS). To ease the task of converting programs from ISAM to
VSAM, consider using the ISAM interface for VSAM. See Appendix E, “Using
ISAM Programs with VSAM Data Sets,” on page 611. The ISAM interface requires
24-bit addressing.
This chapter is written as if you were using ISAM to access real ISAM data sets.
Some parts of this chapter describe functions that the system no longer supports.
Appendix E, “Using ISAM Programs with VSAM Data Sets,” on page 611 clarifies
this chapter.
BISAM directly retrieves logical records by key, updates blocks of records in-place,
and inserts new records in their correct key sequence.
Your program must synchronize all I/O operations with a CHECK or a WAIT
macro.
Other DCB parameters are available to reduce I/O operations by defining work
areas that contain the highest level master index and the records being processed.
A data set processed with QISAM can have unblocked fixed-length records (F),
blocked fixed-length records (FB), unblocked variable-length records (V), or
blocked variable-length records (VB).
QISAM can create an indexed sequential data set (QISAM, load mode), add
additional data records at the end of the existing data set (QISAM, resume load
mode), update a record in place, or retrieve records sequentially (QISAM, scan
mode).
For an indexed sequential data set, you can allocate space on the same or separate
volumes for the data set’s prime area, overflow area, and cylinder/master index or
indexes. For more information about space allocation, see z/OS MVS JCL User’s
Guide.
QISAM automatically generates a track index for each cylinder in the data set and
one cylinder index for the entire data set. Specify the DCB parameters NTM and
OPTCD to show that the data set requires a master index. QISAM creates and
maintains as many as three levels of master indexes.
You can purge records by specifying the OPTCD=L DCB option when you allocate
an indexed sequential data set. With the OPTCD=L option, you flag the records
you want to purge with X'FF' in the first data byte of a fixed-length record or the
fifth byte of a variable-length record. QISAM ignores these flagged records during
sequential retrieval.
You can get reorganization statistics by specifying the OPTCD=R DCB option when
an indexed sequential data set is allocated. The application program uses these
statistics to determine the status of the data set’s overflow areas.
When you allocate an indexed sequential data set, you must write the records in
ascending key order.
The records in an indexed sequential data set are arranged according to collating
sequence by a key field in each record. Each block of records is preceded by a key
field that corresponds to the key of the last record in the block.
An indexed sequential data set resides on direct access storage devices and can
occupy as many as three different areas:
v The prime area, also called the prime data area, contains data records and
related track indexes. It exists for all indexed sequential data sets.
v The index area contains master and cylinder indexes associated with the data
set. It exists for a data set that has a prime area occupying more than one
cylinder.
v The overflow area contains records that overflow from the prime area when new
data records are added. It is optional.
The track indexes of an indexed sequential data set are similar to the card catalog
in a library. For example, if you know the name of the book or the author, you can
look in the card catalog and obtain a catalog number that enables you to locate the
book in the book files. You then go to the shelves and go through rows until you
find the shelf containing the book. Then you look at the individual book numbers
on that shelf until you find the particular book.
ISAM uses the track indexes in much the same way to locate records in an indexed
sequential data set.
As the records are written in the prime area of the data set, the system accounts
for the records contained on each track in a track index area. Each entry in the
track index identifies the key of the last record on each track. There is a track index
for each cylinder in the data set. If more than one cylinder is used, the system
develops a higher-level index called a cylinder index. Each entry in the cylinder
index identifies the key of the last record in the cylinder. To increase the speed of
searching the cylinder index, you can request that a master index be developed for
a specified number of cylinders, as shown in Figure 125 on page 582.
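The master index, cylinder index, and track index form a search hierarchy in which each entry carries the key of the last (highest) record it covers. The following Python sketch models that lookup with sorted (high_key, pointer) lists; it is a simplified single-level-master illustration, not the actual on-disk index format:

```python
import bisect

def index_search(index, key):
    """Return the pointer of the first entry whose high key is >= key.
    `index` is a list of (high_key, pointer) sorted by high_key, mirroring
    how each ISAM index entry identifies the key of the last record it covers."""
    keys = [k for k, _ in index]
    i = bisect.bisect_left(keys, key)
    if i == len(index):
        raise KeyError(key)
    return index[i][1]

def locate_track(master, cylinders, tracks, key):
    """Walk master index -> cylinder index -> track index to find the prime
    track that should hold `key`."""
    cyl_index = cylinders[index_search(master, key)]
    trk_index = tracks[index_search(cyl_index, key)]
    return index_search(trk_index, key)
```

Each level narrows the search, so a large cylinder index need not be scanned serially when a master index is present.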
Rather than reorganize the entire data set when records are added, you can request
that space be allocated for additional records in an overflow area.
Master Index
Cylinder Index
Prime Area
Records are written in the prime area when the data set is allocated or updated.
The last track of prime data is reserved for an end-of-file mark. The portion of
Figure 125 labeled cylinder 1 illustrates the initial structure of the prime area.
Although the prime area can extend across several noncontiguous areas of the
volume, all the records are written in key sequence. Each record must contain a
key; the system automatically writes the key of the highest record before each
block.
When the ABSTR option of the SPACE parameter of the DD statement is used to
generate a multivolume prime area, the VTOC of the second volume, and of all
succeeding volumes, must be contained within cylinder 0 of the volume.
Index Areas
The operating system generates track and cylinder indexes automatically. As many
as three levels of master index are created if requested.
Track Index
The track index is the lowest level of index and is always present. There is one
track index for each cylinder in the prime area; it is written on the first tracks of
the cylinder that it indexes.
The index consists of a series of paired entries, that is, a normal entry and an
overflow entry for each prime track. Figure 126 on page 583 shows the format of a
track index.
For fixed-length records, each normal entry points to record 0 or to the first data
record on a track shared by index and data records. (DCBFIRSH also points to it.)
For variable-length records, the normal entry contains the key of the highest record
on the track and the address of the last record.
The overflow entry is originally the same as the normal entry. (This is why 100
appears twice on the track index for cylinder 1 in Figure 125.) The overflow entry
is changed when records are added to the data set. Then the overflow entry
contains the key of the highest overflow record and the address of the lowest
overflow record logically associated with the track.
If all the tracks allocated for the prime data area are not used, the index entries for
the unused tracks are flagged as inactive. The last entry of each track index is a
dummy entry indicating the end of the index. When fixed-length record format has
been specified, the remainder of the last track of each cylinder used for a track
index contains prime data records, if there is room for them.
Each index entry has the same format as the others. It is an unblocked,
fixed-length record consisting of a count, a key, and a data area. The length of the
key corresponds to the length of the key area in the record to which it points. The
data area is always 10 bytes long. It contains the full address of the track or record
to which the index points, the level of the index, and the entry type.
Cylinder Index
For every track index created, the system generates a cylinder index entry. There is
one cylinder index for a data set that points to a track index. Because there is one
track index per cylinder, there is one cylinder index entry for each cylinder in the
prime data area, except for a 1-cylinder prime area. As with track indexes, inactive
entries are created for any unused cylinders in the prime data area.
Master Index
As an optional feature, the operating system creates a master index at your
request. The presence of this index makes long, serial searches through a large
cylinder index unnecessary.
You can specify the conditions under which you want a master index created. For
example, if you have specified NTM=3 and OPTCD=M in your DCB macro, a
master index is created when the cylinder index exceeds 3 tracks. The master index
consists of one entry for each track of cylinder index. If your data set is extremely
large, a higher-level master index is created when the first-level master index
exceeds three tracks. This higher-level master index consists of one entry for each
track of the first-level master index. This procedure can be repeated for as many as
three levels of master index.
Overflow Areas
As records are added to an indexed sequential data set, space is required to
contain those records that will not fit on the prime data track on which they
belong. You can request that a number of tracks be set aside as a cylinder overflow
area to contain overflows from prime tracks in each cylinder. An advantage of
using cylinder overflow areas is a reduction of search time required to locate
overflow records. A disadvantage is that there will be unused space if the
additions are unevenly distributed throughout the data set.
Instead of, or in addition to, cylinder overflow areas, you can request an
independent overflow area. Overflow from anywhere in the prime data area is
placed in a specified number of cylinders reserved solely for overflow records. An
advantage of having an independent overflow area is a reduction in unused space
reserved for overflow. A disadvantage is the increased search time required to
locate overflow records in an independent area.
If you request both cylinder overflow and independent overflow, the cylinder
overflow area is used first. It is a good practice to request cylinder overflow areas
large enough to contain a reasonable number of additional records, and an
independent overflow area to be used as the cylinder overflow areas are filled.
One-Step Method
To create an indexed sequential data set by the one-step method, take the
following actions:
1. Code DSORG=IS or DSORG=ISU and MACRF=PM or MACRF=PL in the DCB
macro.
2. Specify the following attributes in the DD statement:
v DCB attributes DSORG=IS or DSORG=ISU
v Record length (LRECL)
v Block size (BLKSIZE)
v Record format (RECFM)
v Key length (KEYLEN)
v Relative key position (RKP)
The records that comprise a newly created data set must be presented for writing
in ascending order by key. You can merge two or more input data sets. If you want
a data set with no records (a null data set), you must write at least one record
when you allocate the data set. You can subsequently delete this record to achieve
the null data set.
Recommendations:
v If you unload a data set so that it deletes all existing records in an ISAM data
set, at least one record must be written on the subsequent load. If no record is
written, the data set will be unusable.
v If the records are blocked, do not write a record with a hexadecimal value of FF
and a key of hexadecimal value FF. This value of FF is used for padding. If it
occurs as the last record of a block, the record cannot be retrieved. If the record
is moved to the overflow area, the record is lost.
v After an indexed sequential data set has been allocated, you cannot change its
characteristics. However, for added flexibility, the system lets you retrieve
records by using either the queued access technique with simple buffering or the
basic access method with dynamic buffering.
If you do not specify full-track-index write, the operating system writes each
normal overflow pair of entries for the track index after the associated prime data
track has been written. If you do specify full-track-index write, the operating
system accumulates track index entries in virtual storage until either (a) there are
enough entries to fill a track or (b) end-of-data or end-of-cylinder is reached. Then
the operating system writes these entries as a group, writing one group for each
track of track index. The OPTCD=U option requires allocation of more storage
space (the space in which the track index entries are gathered), but the number of
I/O operations required to write the index can be significantly decreased.
When you specify the full-track-index write option, the track index entries are
written as fixed-length unblocked records. If the available area of virtual storage is
not large enough, the entries are written as they are created, that is, in normal
overflow pairs.
Example: The example in Figure 127 shows the creation of an indexed sequential
data set from an input tape containing 60-character records.
//INDEXDD DD DSNAME=SLATE.DICT(PRIME),DCB=(BLKSIZE=240,CYLOFL=1, C
// DSORG=IS,OPTCD=MYLR,RECFM=FB,LRECL=60,NTM=6,RKP=19, C
// KEYLEN=10),UNIT=3380,SPACE=(CYL,25,,CONTIG),---
//INPUTDD DD ---
...
ISLOAD START 0
...
DCBD DSORG=IS
ISLOAD CSECT
OPEN (IPDATA,,ISDATA,(OUTPUT))
NEXTREC GET IPDATA Locate mode
LR 0,1 Address of record in register 1
PUT ISDATA,(0) Move mode
B NEXTREC
...
CHECKERR L 3,=A(ISDATA) Initialize base for errors
USING IHADCB,3
TM DCBEXCD1,X'04'
BO OPERR Uncorrectable error
TM DCBEXCD1,X'20'
BO NOSPACE Space not found
TM DCBEXCD2,X'80'
BO SEQCHK Record out of sequence
Error routine
The key by which the data set is organized is in positions 20 through 29. The
output records will be an exact image of the input, except that the records will be
blocked. One track per cylinder is to be reserved for cylinder overflow. Master
indexes are to be built when the cylinder index exceeds 6 tracks. Reorganization
information about the status of the cylinder overflow areas is to be maintained by
the system. The delete option will be used during any future updating.
Multiple-Step Method
To create an indexed sequential data set in more than one step, create the first
group of records using the procedure described in “One-Step Method”. This first group of
records must contain at least one data record. The remaining records can then be
added to the end of the data set in subsequent steps, using resume load. Each
group to be added must contain records with successively higher keys. This
method lets you allocate the indexed sequential data set in several short time
periods rather than in a single long one.
This method also lets you provide limited recovery from uncorrectable output
errors. When an uncorrectable output error is detected, do not attempt to continue
processing or to close the data set. If you have provided a SYNAD routine, it
should issue the ABEND macro to end processing. If no SYNAD routine is
provided, the control program will end your processing. If the error shows that
space in which to add the record was not found, you must close the data set;
issuing subsequent PUT macros can cause unpredictable results. You should begin
recovery at the record following the end of the data as of the last successful close.
The rerun time is limited to that necessary to add the new records, rather than to
that necessary to re-create the entire data set.
Resume Load
When you extend an indexed sequential data set with resume load, the disposition
parameter of the DD statement must specify MOD. To ensure that the necessary
control information is in the DSCB before attempting to add records, you should at
least open and close the data set successfully on a system that includes resume
load. This is necessary only if the data set was allocated on a previous version of
the system. Records can be added to the data set by resume load until the space
allocated for prime data in the first step has been filled.
During resume load on a data set with a partially filled track or a partially filled
cylinder, the track index entry or the cylinder index entry is overlaid when the
track or cylinder is filled. Resume load for variable-length records begins at the
next sequential track of the prime data set. If resume load abnormally ends after
these index entries have been overlaid, a subsequent resume load will result in a
sequence check when it adds a key that is higher than the highest key at the last
successful CLOSE but lower than the key in the overlaid index entry. When the
SYNAD exit is taken for a sequence check, register 0 contains the address of the
high key of the data set. However, if the SYNAD exit is taken during CLOSE,
register 0 will contain the IOB address.
Allocating Space
An indexed sequential data set has three areas: prime, index, and overflow. Space
for these areas can be subdivided and allocated as follows:
v Prime area—If you request only a prime area, the system automatically uses a
portion of that space for indexes, taking one cylinder at a time as needed. Any
unused space in the last cylinder used for index will be allocated as an
independent overflow area. More than one volume can be used in most cases,
but all volumes must be for devices of the same device type.
v Index area—You can request that a separate area be allocated to contain your
cylinder and master indexes. The index area must be contained within one
volume, but this volume can be on a device of a different type than the one that
contains the prime area volume. If a separate index area is requested, you
cannot catalog the data set with a DD statement.
If the total space occupied by the prime area and index area does not exceed one
volume, you can request that the separate index area be imbedded in the prime
area (to reduce access arm movement) by indicating an index size in the SPACE
parameter of the DD statement defining the prime area.
If you request space for prime and index areas only, the system automatically
uses any space remaining on the last cylinder used for master and cylinder
indexes for overflow, provided the index area is on a device of the same type as
the prime area.
v Overflow area—Although you can request an independent overflow area, it must
be contained within one volume and must be of the same device type as the
prime area. If no specific request for index area is made, then it will be allocated
from the specified independent overflow area.
To request that a designated number of tracks on each cylinder be used for
cylinder overflow records, you must use the CYLOFL parameter of the DCB
macro. The number of tracks that you can use on each cylinder equals the total
number of tracks on the cylinder minus the number of tracks needed for track
index and for prime data. That is:
Overflow tracks = total tracks − (track index tracks + prime data tracks)
When you allocate a 1-cylinder data set, ISAM reserves 1 track on the cylinder for
the end-of-file mark. You cannot request an independent index for an indexed
sequential data set that has only 1 cylinder of prime data.
When you request space for an indexed sequential data set, the DD statement must
follow several rules, as shown below and summarized in Table 57.
v Space can be requested only in cylinders, SPACE=(CYL,(...)), or absolute tracks,
SPACE=(ABSTR,(...)). If the absolute track technique is used, the designated
tracks must make up a whole number of cylinders.
v Data set organization (DSORG) must be specified as indexed sequential (IS or
ISU) in both the DCB macro and the DCB parameter of the DD statement.
v All required volumes must be mounted when the data set is opened; that is,
volume mounting cannot be deferred.
v If your prime area extends beyond one volume, you must specify the number of
units and volumes to be spanned; for example, UNIT=(3380,3),VOLUME=(,,,3).
v You can catalog the data set using the DD statement parameter DISP=(,CATLG)
only if the entire data set is defined by one DD statement; that is, if you did not
request a separate index or independent overflow area.
As your data set is allocated, the operating system builds the track indexes in the
prime data area. Unless you request a separate index area or an imbedded index
area, the cylinder and master indexes are built in the independent overflow area. If
you did not request an independent overflow area, the cylinder and master
indexes are built in the prime area.
You can accomplish the same type of allocation by qualifying your dsname with
the element indication (PRIME). The PRIME element is assumed if it is omitted. It
is required only if you request an independent index or an overflow area. To
request an imbedded index area when an independent overflow area is specified,
you must specify DSNAME=dsname(PRIME). To indicate the size of the imbedded
index, you specify SPACE=(CYL,(quantity,,index size)).
Use modulo-32 arithmetic when calculating key length and data length terms in
your equations. Compute these terms first, then round up to the nearest increment
of 32 bytes before completing the equation.
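The modulo-32 rule and the 3380 capacity equation used in the following examples can be sketched in Python. This is an illustrative restatement of the arithmetic shown in those examples (47968 usable bytes per track, 256 bytes of fixed overhead per record, and key and data terms each padded by 267 bytes before rounding), not a general device-capacity API:

```python
def round32(n: int) -> int:
    """Round up to the next multiple of 32 bytes (the modulo-32 rule)."""
    return ((n + 31) // 32) * 32

def records_per_track(key_len: int, data_len: int) -> int:
    """Unblocked records per 3380 track, using the capacity formula from the
    examples: 47968 / (256 + round32(key_len + 267) + round32(data_len + 267)),
    truncated to a whole number of records."""
    per_record = 256 + round32(key_len + 267) + round32(data_len + 267)
    return 47968 // per_record
```

With a 12-byte key this yields 55 overflow records per track for 30-byte data and 57 index entries per track for the 10-byte index data field, matching the worked examples.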
The ISAM load mode reserves the last prime data track for the file mark.
Prime data tracks required = (200000 records / 1300 records per track) + 1 = 155
Example: Approximately 5000 overflow records are expected for the data set
described in step 1. Because 55 overflow records will fit on a track, 91 overflow
tracks are required. There are 91 overflow tracks for 155 prime data tracks, or
approximately 1 overflow track for every 2 prime data tracks. Because the 3380
disk pack for a 3380 Model AD4 has 15 tracks per cylinder, it would probably be
best to allocate 5 tracks per cylinder for overflow.
Overflow records per track = 47968/(256+((12+267)/32)(32)+((30+267)/32)(32)) = 47968/864 = 55
Example: Again assuming a 3380 Model AD4 disk pack and records with 12-byte
keys, 57 index entries fit on a track.
Index entries per track = 47968/(256+((12+267)/32)(32)+((10+267)/32)(32)) = 47968/832 = 57
For variable-length records, or when a prime data record will not fit on the last
track of the track index, the last track of the track index is not shared with prime
data records. In this case, if the remainder of the division is less than or equal to 2,
drop the remainder. In all other cases, round the quotient up to the next integer.
Example: The 3380 disk pack from the 3380 Model AD4 has 15 tracks per cylinder.
You can fit 57 track index entries into one track. Therefore, you need less than 1
track for each cylinder.
Number of track index tracks per cylinder = (2(15 − 5) + 1) / (57 + 2) = 21/59
The space remaining on the track is 47968 − (21 (832)) = 30496 bytes.
This is enough space for 16 blocks of prime data records. Because the normal
number of blocks per track is 26, the blocks use 16/26ths of the track, and the
effective number of track index tracks per cylinder is therefore 1 − 16/26 or 0.385.
Space is required on the last track of the track index for a dummy entry to show
the end of the track index. The dummy entry consists of an 8-byte count field, a
key field the same size as the key field in the preceding entries, and a 10-byte data
field.
Example: If you set aside 5 cylinder overflow tracks, and you need 0.385ths of a
track for the track index, 9.615 tracks are available on each cylinder for prime data
records.
Prime data tracks per cylinder = 15 − 5 − (0.385) = 9.615
Example: You need 155 tracks for prime data records. You can use 9.615 tracks per
cylinder. Therefore, you need 17 cylinders for your prime area and cylinder
overflow areas.
Number of cylinders required = (155) / (9.615) = 16.121 (round up to 17)
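The last two steps can be combined into one small Python sketch. The 0.385 effective track-index fraction is taken directly from the worked example above (it depends on the block size and key length chosen there), so this is a restatement of that specific example rather than a general formula:

```python
import math

def cylinders_required(prime_tracks_needed: int,
                       tracks_per_cyl: int = 15,
                       overflow_tracks_per_cyl: int = 5,
                       index_track_fraction: float = 0.385) -> int:
    """Subtract the cylinder overflow tracks and the effective track-index
    fraction from each cylinder, then round the cylinder count up."""
    prime_per_cyl = (tracks_per_cyl - overflow_tracks_per_cyl
                     - index_track_fraction)          # 15 - 5 - 0.385 = 9.615
    return math.ceil(prime_tracks_needed / prime_per_cyl)
```

For the example's 155 prime data tracks this gives 17 cylinders, as computed above.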
Example: You have 17 track indexes (from Step 6). Because 57 index entries fit on a
track (from Step 3), you need 1 track for your cylinder index. The remaining space
on the track is unused.
Number of tracks required for cyl. index = (17 + 1) / 57 = 18 / 57 = 0.316 < 1
Every time a cylinder index crosses a cylinder boundary, ISAM writes a dummy
index entry that lets ISAM chain the index levels together. The addition of dummy
entries can increase the number of tracks required for a given index level. To
determine how many dummy entries will be required, divide the total number of
tracks required by the number of tracks on a cylinder. If the remainder is 0,
subtract 1 from the quotient. If the corrected quotient is not 0, calculate the number
of tracks these dummy entries require. Also consider any additional cylinder
boundaries crossed by the addition of these tracks and by any track indexes
starting and stopping within a cylinder.
If the cylinder index exceeds the NTM specification, an entry is made in the master
index for each track of the cylinder index. If the master index itself exceeds the
NTM specification, a second-level master index is started. As many as three levels
of master indexes are created if required.
The space requirements for the master index are computed in the same way as
those for the cylinder index.
If the number of cylinder indexes is greater than NTM, calculate the number of
tracks for a first level master index as follows:
# Tracks for first level master index =
(Cylinder track indexes + 1) / Index entries per track
If the number of first level master index tracks is greater than NTM, calculate the
number of tracks for a second level master index as follows:
# Tracks for second level master index =
(First level master index tracks + 1) / Index entries per track
If the number of second level master index tracks is greater than NTM, calculate the
number of tracks for a third level master index as follows:
# Tracks for third level master index =
(Second level master index tracks + 1) / Index entries per track
Example: Assume that your cylinder index will require 22 tracks. Because large
keys are used, only 10 entries will fit on a track. If NTM was specified as 2, 3
tracks will be required for a master index, and two levels of master index will be
created.
Number of tracks required for master indexes = (22 + 1) / 10 = 2.3
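The NTM test repeats level by level, which can be sketched as a loop. This is an illustrative Python sketch, not ISAM code; it reproduces the example's arithmetic, where (22 + 1) / 10 = 2.3 rounds up to 3 first-level tracks, and 3 still exceeds NTM=2, so a second level is created.

```python
import math

def master_index_tracks(index_tracks, entries_per_track, ntm, max_levels=3):
    # Tracks needed at each master-index level; a new level is built
    # while the previous index still occupies more than NTM tracks.
    levels = []
    tracks = index_tracks
    while tracks > ntm and len(levels) < max_levels:
        # +1 allows for the dummy entry that chains index levels together
        tracks = math.ceil((tracks + 1) / entries_per_track)
        levels.append(tracks)
    return levels

# 22-track cylinder index, 10 entries per track, NTM=2
print(master_index_tracks(22, 10, 2))  # [3, 1]: two master-index levels
```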
Using the data set allocated in Figure 127 on page 586, assume that you are to
retrieve all records whose keys begin with 915. Those records with a date
(positions 13 through 16) before the current date are to be deleted. The date is in
the standard form as returned by the system in response to the TIME macro, that
is, packed decimal 0cyyddds. Overflow records can be logically deleted even though
they cannot be physically deleted from the data set.
Figure 128 on page 596 shows how to update an indexed sequential data set
sequentially.
//INDEXDD DD DSNAME=SLATE.DICT,---
...
ISRETR START 0
DCBD DSORG=IS
ISRETR CSECT
...
USING IHADCB,3
LA 3,ISDATA
OPEN (ISDATA)
SETL ISDATA,KC,KEYADDR Set scan limit
TIME , Today’s date in register 1
ST 1,TODAY
NEXTREC GET ISDATA Locate mode
CLC 19(10,1),LIMIT
BNL ENDJOB
CP 12(4,1),TODAY Compare for old date
BNL NEXTREC
MVI 0(1),X’FF’ Flag old record for
deletion
PUTX ISDATA Return delete record
B NEXTREC
TODAY DS F
KEYADDR DC C’915’ Key prefix
DC XL7’0’ Key padding
LIMIT DC C’916’
DC XL7’0’
...
CHECKERR
Because the operations are direct, there is no anticipatory buffering. However, if ‘S’
is specified on the READ macro, the system provides dynamic buffering each time
a read request is made. (See Figure 129 on page 599.)
If an error analysis routine has not been specified and a CHECK is issued, and an
error situation exists, the program abnormally ends with a system completion code
of X'001'. For both WAIT and CHECK, if you want to determine whether the
record is an overflow record, you should test the exception code field of the DECB.
After you test the exception code field, you need not set it to 0. If you have used a
READ KU (read an updated record) macro, and if you plan to use the same DECB
again to rewrite the updated record using a WRITE K macro, you should not set
the field to 0. If you do, your record might not be rewritten properly.
When you are using scan mode with QISAM and you want to issue PUTX, issue
an ENQ on the data set before processing it and a DEQ after processing is
complete. ENQ must be issued before the SETL macro, and DEQ must be issued
after the ESETL macro. When you are using BISAM to update the data set, do not
modify any DCB fields or issue a DEQ until you have issued CHECK or WAIT.
If you specify DISP=SHR, you must also issue an ENQ for the data set before each
I/O request and a DEQ on completion of the request. All users of the data set
must use the same qname and rname operands for ENQ. For example, you might
use the data set name as the qname operand. For more information about using
ENQ and DEQ, see z/OS MVS Programming: Assembler Services Reference ABE-HSP
and z/OS MVS Programming: Assembler Services Guide.
Subtasking
For subtasking, I/O requests should be issued by the task that owns the DCB or a
task that will remain active while the DCB is open. If the task that issued the I/O
request ends, the storage used by its data areas (such as IOBs) can be freed, or
queuing switches in the DCB work area can be left on, causing another task
issuing an I/O request to the DCB to program check or to enter the wait state.
For example, if a subtask issues and completes a READ KU I/O request, the IOB
created by the subtask is attached to the DCB update queue. (READ KU means the
record retrieved is to be updated.) If that subtask ends, and subpool zero is not
shared with the subtask owning the DCB, the IOB storage area is freed and the
integrity of the ISAM update queue is destroyed. A request from another subtask,
attempting to use that queue, could cause unpredictable abends. As another
example, if a WRITE KEY NEW is in process when the subtask ends, a
'WRITE-KEY-NEW-IN-PROCESS' bit is left on. If another I/O request is issued to
the DCB, the request is queued but cannot proceed.
Exclusive control of the data set is requested, because more than one task might be
referring to the data set at the same time. Notice that, to avoid tying up the data
set until the update is completed, exclusive control is released after each block is
written.
Using FREEDBUF: Note the use of the FREEDBUF macro in Figure 129 on page
599. Usually, the FREEDBUF macro has two functions:
v To indicate to the ISAM routines that a record that has been read for update will
not be written back
v To free a dynamically obtained buffer.
In Figure 129, because the read operation was unsuccessful, the FREEDBUF macro
frees only the dynamically obtained buffer.
The first function of FREEDBUF lets you read a record for update, then decide not
to update it without performing a WRITE for update. You can use this function
even when your READ macro does not specify dynamic buffering, if you have
included S (for dynamic buffering) in the MACRF field of your READ DCB.
You can cause an automatic FREEDBUF merely by reusing the DECB; that is, by
issuing another READ or a WRITE KN to the same DECB. You should use this
feature whenever possible, because it is more efficient than FREEDBUF. For
example, in Figure 129, the FREEDBUF macro could be eliminated, because the
WRITE KN addressed the same DECB as the READ KU.
//INDEXDD DD DSNAME=SLATE.DICT,DCB=(DSORG=IS,BUFNO=1,...),---
//TAPEDD DD ---
...
ISUPDATE START 0
...
NEXTREC GET TPDATA,TPRECORD
ENQ (RESOURCE,ELEMENT,E,,SYSTEM)
READ DECBRW,KU,,’S’,MF=E Read into dynamically
* obtained buffer
WAIT ECB=DECBRW
TM DECBRW+24,X’FD’ Test for any condition
BM RDCHECK but overflow
L 3,DECBRW+16 Pick up pointer to
* record
MVC ISUPDATE-ISRECORD(20,3),UPDATE Update record
WRITE DECBRW,K,MF=E
WAIT ECB=DECBRW
TM DECBRW+24,X’FD’ Any errors?
BM WRCHECK
DEQ (RESOURCE,ELEMENT,,SYSTEM)
B NEXTREC
RDCHECK TM DECBRW+24,X’80’ No record found
BZ ERROR If not, go to error
* routine
FREEDBUF DECBRW,K,ISDATA Otherwise, free buffer
MVC ISKEY,KEY Key placed in ISRECORD
MVC ISUPDATE,UPDATE Updated information
* placed in ISRECORD
WRITE DECBRW,KN,,WKNAREA,’S’,MF=E Add record to data set
WAIT ECB=DECBRW
TM DECBRW+24,X’FD’ Test for errors
BM ERROR
DEQ (RESOURCE,ELEMENT,,SYSTEM) Release exclusive
* control
B NEXTREC
WKNAREA DS 4F BISAM WRITE KN work field
ISRECORD DS 0CL50 50-byte record from ISDATA DCB
DS CL19 First part of ISRECORD
ISKEY DS CL10 Key field of ISRECORD
DS CL1 Part of ISRECORD
ISUPDATE DS CL20 Update area of ISRECORD
ORG ISUPDATE Overlay ISUPDATE with TPRECORD
TPRECORD DS 0CL30 30-byte record from TPDATA DCB
KEY DS CL10 Key for locating ISDATA record
UPDATE DS CL20 Update information or new data
RESOURCE DC CL8’SLATE’
ELEMENT DC C’DICT’
READ DECBRW,KU,ISDATA,’S’,’S’,KEY,MF=L
ISDATA DCB DDNAME=INDEXDD,DSORG=IS,MACRF=(RUS,WUA), C
MSHI=INDEX,SMSI=2000
TPDATA DCB ---
INDEX DS 2000C
...
Using Other Updating Methods: For an indexed sequential data set with
variable-length records, you can make three types of updates by using the basic
access method. You can read a record and write it back with no change in its
length, simply updating some part of the record. You do this with a READ KU,
followed by a WRITE K, the same way you update fixed-length records.
Two other methods for updating variable-length records use the WRITE KN macro
and let you change the record length. In one method, a record read for update (by
a READ KU) can be updated in a manner that will change the record length and
be written back with its new length by a WRITE KN (key new). In the second
method, you can replace a record with another record having the same key and
possibly a different length using the WRITE KN macro. To replace a record, it is
not necessary to have first read the record.
In either method, when changing the record length, you must place the new length
in the DECBLGTH field of the DECB before issuing the WRITE KN macro. If you
use a WRITE KN macro to update a variable-length record that has been marked
for deletion, the first bit (no record found) of the exceptional condition code field
(DECBEXC1) of the DECB is set on. If this condition is found, the record must be
written using a WRITE KN with nothing specified in the DECBLGTH field.
Recommendation: Do not try to use the DECBLGTH field to determine the length
of a record read because DECBLGTH is for use with writing records, not reading
them.
If you are reading fixed-length records, the length of the record read is in
DCBLRECL, and if you are reading variable-length records, the length is in the
record descriptor word (RDW).
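The rule above (DCBLRECL for fixed-length records, the RDW for variable-length records) can be illustrated with a small sketch. Python is used here only as a stand-in for the assembler logic; the layout assumed is the standard RDW, a big-endian halfword length that includes the 4-byte RDW itself.

```python
import struct

def record_length(record, fixed_lrecl=None):
    # Fixed-length records: the length comes from DCBLRECL
    if fixed_lrecl is not None:
        return fixed_lrecl
    # Variable-length records: the RDW's first halfword holds the
    # record length, which includes the 4-byte RDW itself
    (length,) = struct.unpack(">H", record[:2])
    return length

# A variable-length record whose RDW says 30 bytes (4-byte RDW + 26 data)
rec = struct.pack(">HH", 30, 0) + b"x" * 26
print(record_length(rec))  # 30
```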
For this example, the maximum record length of both data sets is 256 bytes. The
key is in positions 6 through 15 of the records in both data sets. The transaction
code is in position 5 of records on the transaction tape. The work area
(REPLAREA) size is equal to the maximum record length plus 16 bytes.
Adding Records
You can use either the queued access method or the basic access method to add
records to an indexed sequential data set. To insert a record between existing
records in the data set, you must use the basic access method and the WRITE KN
(key new) macro. Records added to the end of a data set (that is, records with
successively higher keys), can be added to the prime data area or the overflow
area by the basic access method using WRITE KN, or they can be added to the
prime data area by the queued access method using the PUT macro.
//INDEXDD DD DSNAME=SLATE.DICT,DCB=(DSORG=IS,BUFNO=1,...),---
//TAPEDD DD ---
...
ISUPDVLR START 0
...
NEXTREC GET TPDATA,TRANAREA
CLI TRANCODE,2 Determine if replacement or
* other transaction
BL REPLACE Branch if replacement
READ DECBRW,KU,,’S’,’S’,MF=E Read record for update
CHECK DECBRW,DSORG=IS Check exceptional conditions
CLI TRANCODE,2 Determine if change or append
BH CHANGE Branch if change
...
...
* CODE TO MOVE RECORD INTO REPLAREA+16 AND APPEND DATA FROM TRANSACTION
* RECORD
...
MVC DECBRW+6(2),REPLAREA+16 Move new length from RDW
* into DECBLGTH (DECB+6)
WRITE DECBRW,KN,,REPLAREA,MF=E Rewrite record with
* changed length
CHECK DECBRW,DSORG=IS
B NEXTREC
CHANGE ...
...
* CODE TO CHANGE FIELDS OR UPDATE FIELDS OF THE RECORD
...
WRITE DECBRW,K,MF=E Rewrite record with no
* change of length
CHECK DECBRW,DSORG=IS
B NEXTREC
REPLACE MVC DECBRW+6(2),TRANAREA Move new length from RDW
* into DECBLGTH (DECB+6)
WRITE DECBRW,KN,,TRANAREA-16,MF=E Write transaction record
* as replacement for record
* with the same key
CHECK DECBRW,DSORG=IS
B NEXTREC
CHECKERR ... SYNAD routine
...
REPLAREA DS CL272
TRANAREA DS CL4
TRANCODE DS CL1
KEY DS CL10
TRANDATA DS CL241
READ DECBRW,KU,ISDATA,’S’,’S’,KEY,MF=L
ISDATA DCB DDNAME=INDEXDD,DSORG=IS,MACRF=(RUSC,WUAC),SYNAD=CHECKERR
TPDATA DCB ---
...
Figure 130. Directly Updating an Indexed Sequential Data Set with Variable-Length Records
Subsequent additions are written either on the prime track or as part of the
overflow chain from that track. If the addition belongs after the last prime record
on a track but before a previous overflow record from that track, it is written in the
first available location in the overflow area. Its link field contains the address of
the next record in the chain.
For BISAM, if you add a record that has the same key as a record in the data set, a
“duplicate record” condition is shown in the exception code. However, if you
specified the delete option and the record in the data set is marked for deletion,
the condition is not reported and the new record replaces the existing record. For
more information about exception codes, see z/OS DFSMS Macro Instructions for
Data Sets.
When you use the WRITE KN macro, the record being added is placed in the
prime data area only if there is room for it on the prime data track containing the
record with the highest key currently in the data set. If there is not sufficient room
on that track, the record is placed in the overflow area and linked to that prime
track, even though additional prime data tracks originally allocated have not been
filled.
When you use the PUT macro, records are added to the prime data area until the
space originally allocated is filled. After this allocated prime area is filled, you can
add records to the data set using WRITE KN, in which case they will be placed in
the overflow area. Resume load is discussed in more detail under “Creating an
ISAM Data Set” on page 584.
To add records with successively higher keys using the PUT macro:
v The key of any record to be added must be higher than the highest key
currently in the data set.
v The DD statement must specify DISP=MOD or specify the EXTEND option in
the OPEN macro.
v The data set must have been successfully closed when it was allocated or when
records were previously added using the PUT macro.
You can continue to add fixed-length records in this manner until the original
space allocated for prime data is exhausted.
When you add records to an indexed sequential data set using the PUT macro,
new entries are also made in the indexes. During resume load on a data set with a
partially filled track or a partially filled cylinder, the track index entry or the
cylinder index entry is overlaid when the track or cylinder is filled. If resume load
abnormally ends after these index entries have been overlaid, a subsequent resume
load will get a sequence check when adding a key that is higher than the highest
key at the last successful CLOSE but lower than the key in the overlaid index
entry. When the SYNAD exit is taken for a sequence check, register 0 contains the
address of the highest key of the data set. Figure 131 on page 603 graphically
represents how records are added to an indexed sequential data set.
Figure 131. Adding Records to an Indexed Sequential Data Set (track index, prime
data, and overflow entries shown before and after records 25, 101, 26, and 199 are
added)
v In one pass by writing it directly into another area of direct access storage. In
this case, the area occupied by the original data set cannot be used by the
reorganized data set.
The operating system maintains statistics that are pertinent to reorganization. The
statistics, written on the direct access volume and available in the DCB for
checking, include the number of cylinder overflow areas, the number of unused
tracks in the independent overflow area, and the number of references to overflow
records other than the first. They appear in the RORG1, RORG2, and RORG3 fields
of the DCB.
When creating or updating the data set, if you want to be able to flag records for
deletion during updating, set the delete code (the first byte of a fixed-length record
or the fifth byte of a variable-length record) to X'FF'. Figure 132 describes the
process for deleting indexed data set records, and how a flagged record will not be
rewritten in the overflow area after it has been forced off its prime track (unless it
has the highest key on that cylinder) during a subsequent update.
Figure 132. Deleting Records from an Indexed Sequential Data Set (the delete code
X'FF' occupies the first byte of a fixed-length record, or the fifth byte, following
the BDW and RDW, of a variable-length record)
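The delete-code convention described above can be sketched as follows. This is an illustrative Python sketch of the byte placement only, not ISAM code; the function name is an assumption.

```python
def flag_for_deletion(record, variable_length=False):
    # The delete code X'FF' goes in the first byte of a fixed-length
    # record, or in the fifth byte (just past the 4-byte RDW) of a
    # variable-length record
    offset = 4 if variable_length else 0
    record[offset] = 0xFF
    return record

fixed = flag_for_deletion(bytearray(10))
print(hex(fixed[0]))  # 0xff
```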
Similarly, when you process sequentially, flagged records are not retrieved for
processing. During direct processing, flagged records are retrieved the same as any
other records, and you should check them for the delete code.
Buffer Requirements
The only case in which you will ever have to compute the buffer length (BUFL)
requirements for your program occurs when you use the BUILD or GETPOOL
macro to construct the buffer area. If you are creating an indexed sequential data
set (using the PUT macro), each buffer must be 8 bytes longer than the block size
to allow for the hardware count field. That is:
BUFL = BLKSIZE + 8
One exception to this formula arises when you are dealing with an unblocked
format-F record whose key field precedes the data field; its relative key position is
0 (RKP=0). In that case, the key length must also be added:
BUFL = BLKSIZE + KEYLEN + 8
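The two rules just stated (8 bytes for the hardware count field, plus the key length for unblocked format-F records with RKP=0) can be combined into one sketch. This is illustrative Python, restating the prose; the function and parameter names are assumptions, with blksize, keylen, and rkp standing in for the DCB values.

```python
def isam_create_bufl(blksize, keylen=0, rkp=None, blocked=True):
    # Creating an ISAM data set with PUT: each buffer needs 8 extra
    # bytes for the hardware count field
    bufl = blksize + 8
    # Exception: unblocked format-F records whose key precedes the
    # data (RKP=0) also need room for the key
    if not blocked and rkp == 0:
        bufl += keylen
    return bufl

print(isam_create_bufl(800))                                  # 808
print(isam_create_bufl(80, keylen=10, rkp=0, blocked=False))  # 98
```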
The buffer requirements for using the queued access method to read or update
(using the GET or PUTX macro) an indexed sequential data set are discussed
below.
For fixed-length unblocked records when both the key and data are to be read, and
for variable-length unblocked records, padding is added so that the data will be on
a doubleword boundary.
Tip: When you use the basic access method to update records in an indexed
sequential data set, the key length field need not be considered in determining
your buffer requirements.
If you are reading only the data portion of fixed-length unblocked records or
variable-length records, the work area is the same size as the record.
The size of the work area needed varies according to the record format and the
device type. You can calculate it during execution using device-dependent
information obtained with the TRKCALC macro, DEVTYPE macro, and data set
information from the DSCB obtained with the OBTAIN macro. The TRKCALC,
DEVTYPE and OBTAIN macros are discussed in z/OS DFSMSdfp Advanced Services.
Restriction: You can use the TRKCALC or DEVTYPE macro only if the index and
prime areas are on devices of the same type or if the index area is on a device with
a larger track capacity than the device containing the prime area.
If you do not need to maintain device independence, you can precalculate the size
of the work area needed and specify it in the SMSW field of the DCB macro. The
maximum value for SMSW is 65 535.
For fixed-length blocked records, the size of the main storage work area (SMSW) is
calculated as follows:
SMSW = (DS2HIRPR) (BLKSIZE + 8) + LRECL + KEYLEN
The value for DS2HIRPR is in the index (format-2) DSCB. If you do not use the
MSWA and SMSW parameters, the control program supplies a work area using the
formula BLKSIZE + LRECL + KEYLEN.
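As a sketch, the formula and the 65 535-byte SMSW limit look like this. This is illustrative Python, not ISAM code; the parameter names mirror the DCB and format-2 DSCB fields named in the text.

```python
SMSW_MAX = 65535  # maximum value for the SMSW parameter

def smsw_fixed_blocked(ds2hirpr, blksize, lrecl, keylen):
    # SMSW = DS2HIRPR x (BLKSIZE + 8) + LRECL + KEYLEN
    smsw = ds2hirpr * (blksize + 8) + lrecl + keylen
    if smsw > SMSW_MAX:
        raise ValueError("work area exceeds the SMSW maximum of 65 535")
    return smsw

print(smsw_fixed_blocked(10, 800, 80, 10))  # 8170
```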
For variable-length records, SMSW can be calculated by one of two methods. The
first method can lead to faster processing, although it might require more storage
than the second method.
The second method yields a minimum value for SMSW. Therefore, the first method
is valid only if its application results in a value higher than the value that would
be derived from the second method. If neither MSWA nor SMSW is specified, the
control program supplies the work area for variable-length records, using the
second method to calculate the size.
In all the above formulas, the terms BLKSIZE, LRECL, KEYLEN, and SMSW are
the same as the parameters in the DCB macro (Trk Cap=track capacity). REM is the
remainder of the division operation in the formula and N is the first constant in
the block length formulas. (REM-N-KEYLEN) is added only if its value is positive.
The maximum value for SMSI is 65 535. If you do not use this technique, the index on the volume must
be searched. If the high-level index is greater than 65 535 bytes in length, your
request for the high-level index in storage is ignored.
The size of the storage area (SMSI parameter) varies. To allocate that space during
execution, you can find the size of the high-level index in the DCBNCRHI field of
the DCB during your DCB user exit routine or after the data set is open. Use the
DCBD macro to gain access to the DCBNCRHI field (see Chapter 21, “Specifying
and Initializing Data Control Blocks,” on page 317). You can also find the size of
the high-level index in the DS2NOBYT field of the index (format 2) DSCB, but you
must use the utility program IEHLIST to print the information in the DSCB. You
can calculate the size of the storage area required for the high-level index by using
the formula
SMSI = (Number of Tracks in High-Level Index) × (Number of Entries per Track) ×
(Key Length + 10)
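As a sketch of the SMSI formula and its limit (illustrative Python; per the text, a request to hold an index larger than 65 535 bytes in storage is ignored, which the sketch models by returning None):

```python
SMSI_MAX = 65535  # maximum value for the SMSI parameter

def smsi_high_level_index(index_tracks, entries_per_track, keylen):
    # SMSI = tracks x entries per track x (key length + 10)
    smsi = index_tracks * entries_per_track * (keylen + 10)
    # An index larger than the maximum cannot be held in storage
    return smsi if smsi <= SMSI_MAX else None

print(smsi_high_level_index(2, 57, 10))  # 2280
```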
The formula for calculating the number of tracks in the high-level index is in
“Calculating Space Requirements” on page 590. When a data set is shared and has
the DCB integrity feature (DISP=SHR), the high-level index in storage is not
updated when DCB fields are changed.
Device Control
An indexed sequential data set is processed sequentially or directly. Direct
processing is accomplished by the basic access method. Because you provide the
key for the record you want read or written, all device control is handled
automatically by the system. If you are processing the data set sequentially, using
the queued access method, the device is automatically positioned at the beginning
of the data set.
In some cases, you might want to process only a section or several separate
sections of the data set. You do that by using the SETL macro, which directs the
system to begin sequential retrieval at the record having a specific key. The
processing of succeeding records is the same as for normal sequential processing,
except that you must recognize when the last desired record has been processed.
At this point, issue the ESETL macro to end sequential processing. You can then
begin processing at another point in the data set. If you do not specify a SETL
macro before retrieving the data, the system assumes default SETL values. See the
GET and SETL macros in z/OS DFSMS Macro Instructions for Data Sets.
The key class is useful because you do not have to know the entire key of the first
record to be processed. A key class consists of all the keys that begin with identical
characters. The key class is defined by specifying the desired characters of the key
class at the address specified in the lower-limit address of the SETL macro and
setting the remaining characters to the right of the key class to binary zeros.
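Building the lower-limit argument for SETL KC can be sketched as follows. This is illustrative Python, not ISAM code; the values mirror the KEYADDR constant in the earlier example, a C'915' prefix padded with binary zeros to a 10-byte key.

```python
def key_class(prefix, keylen):
    # The key class is the desired leading characters, with the
    # remaining positions to the right set to binary zeros
    if len(prefix) > keylen:
        raise ValueError("key-class prefix longer than the key")
    return prefix + b"\x00" * (keylen - len(prefix))

print(key_class(b"915", 10))  # b'915\x00\x00\x00\x00\x00\x00\x00'
```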
To use actual addresses, you must keep a record of where the records were written
when the data set was allocated. The device address of the block containing the
record just processed by a PUT-move macro is available in the 8-byte data control
block field DCBLPDA. For blocked records, the address is the same for each record
in the block.
SETL KH—Start with the record whose key is equal to or higher than the specified
key.
SETL KC—Start with the first record having a key that falls into the specified key
class.
SETL I—Start with the record found at the specified direct access address in the
prime area of the data set.
Because the DCBOPTCD field in the DCB can be changed after the data set is
allocated (by respecifying the OPTCD in the DCB or DD statement), it is possible
to retrieve deleted records. Then, SETL functions as noted above.
When the delete option is specified in the DCB, the SETL macro options function
as follows.
SETL B—Start retrieval at the first undeleted record in the data set.
SETL K—Start retrieval at the record matching the specified key, if that record is
not deleted. If the record is deleted, an NRF (no record found) indication is set in
the DCBEXCD field of the DCB, and SYNAD is given control.
SETL KH—Start with the first undeleted record whose key is equal to or higher
than the specified key.
SETL KC—Start with the first undeleted record having a key that falls into the
specified key class or follows the specified key class.
SETL I—Start with the first undeleted record following the specified direct access
address.
Without the delete option specified, QISAM retrieves and handles records marked
for deletion as nondeleted records.
Regardless of the SETL or delete option specified, the NRF condition will be
posted in the DCBEXCD field of the DCB, and SYNAD is given control if the key
or key class:
v Is higher than any key or key class in the data set
v Does not have a matching key or key class in the data set
Note: If the previous SETL macro completed with an error, issue an ESETL macro
before issuing another SETL macro.
Topic Location
Upgrading ISAM Applications to VSAM 612
How an ISAM Program Can Process a VSAM Data Set 613
Conversion of an Indexed Sequential Data Set 617
JCL for Processing with the ISAM Interface 618
Restrictions on the Use of the ISAM Interface 620
| This appendix is intended to help you use ISAM programs with VSAM data sets.
| The system no longer supports use of indexed sequential (ISAM) data sets. The
| information in this appendix is shown to facilitate conversion to VSAM.
| Although the ISAM interface is an efficient way of processing your existing ISAM
| programs, all new programs that you write should be VSAM programs. Before you
| migrate to z/OS V1R7 or a later release, you should migrate indexed sequential
| data sets to VSAM key-sequenced data sets. Existing programs can use the ISAM
| interface to VSAM to access those data sets and need not be deleted. During data
| set conversion you can use the REPRO command with the ENVIRONMENT
| keyword to handle the ISAM “dummy” records. For information about identifying
| and migrating ISAM data sets and programs prior to installing z/OS V1R7, see
| z/OS Migration.
| The z/OS system no longer supports the creation or opening of indexed sequential
| data sets.
VSAM, through its ISAM interface program, allows a debugged program that
processes an indexed sequential data set to process a key-sequenced data set. The
key-sequenced data set can have been converted from an indexed-sequential or a
sequential data set (or another VSAM data set) or can be loaded by one of your
own programs. The loading program can be coded with VSAM macros, ISAM
macros, PL/I statements, or COBOL statements. That is, you can load records into
a newly defined key-sequenced data set with a program that was coded to load
records into an indexed sequential data set.
Figure 133 on page 612 shows the relationship between ISAM programs processing
VSAM data with the ISAM interface and VSAM programs processing the data.
Figure 133. Accessing a Key-Sequenced Data Set (converted from an indexed
sequential data set or loaded by VSAM) with ISAM Programs through the ISAM
Interface and with VSAM Programs Directly
There are some minor restrictions on the types of processing an ISAM program can
do if it is to be able to process a key-sequenced data set. These restrictions are
described in “Restrictions on the Use of the ISAM Interface” on page 620.
IBM provides the ISAM compatibility interface that allows you to run an ISAM
program against a VSAM key-sequenced data set. To convert your ISAM data sets
to VSAM, use the ISAM compatibility interface or IDCAMS REPRO.
Related reading: For more information, see Appendix E, “Using ISAM Programs
with VSAM Data Sets,” on page 611 and z/OS Migration.
The ISAM interface receives return codes and exception codes for logical and
physical errors from VSAM, translates them to ISAM codes, and routes them to the
processing program or error-analysis (SYNAD) routine through the ISAM DCB or
DECB. Table 58 shows QISAM error conditions and the meaning they have when
the ISAM interface is being used.
Table 58. QISAM Error Conditions
Byte and Offset | QISAM Meaning | Detected By | Request Parameter List Error Code | Interface/VSAM Meaning
DCBEXCD1, Bit 0 | Record not found | Interface | – | Record not found (SETL K for a deleted record)
 | | VSAM | 16 | Record not found
 | | VSAM | 24 | Record on nonmountable volume
DCBEXCD1, Bit 1 | Invalid device address | – | – | Always 0
DCBEXCD1, Bit 2 | Space not found | VSAM | 28 | Data set cannot be extended
 | | VSAM | 40 | Virtual storage not available
DCBEXCD1, Bit 3 | Invalid request | Interface | – | Two consecutive SETL requests
 | | Interface | – | Invalid SETL (I or ID)
 | | Interface | – | Invalid generic key (KEY=0)
 | | VSAM | 4 | Request after end-of-data
 | | VSAM | 20 | Exclusive use conflict
 | | VSAM | 36 | No key range defined for insertion
 | | VSAM | 64 | Placeholder not available for concurrent data-set positioning
 | | VSAM | 96 | Key change attempted
DCBEXCD1, Bit 4 | Uncorrectable input error | VSAM | 4 | Physical read error in the data component (register 15 contains a value of 12)
 | | VSAM | 8 | Physical read error in the index component (register 15 contains a value of 12)
 | | VSAM | 12 | Physical read error in the sequence set of the index (register 15 contains a value of 12)
Table 59 shows BISAM error conditions and the meaning they have when the
ISAM interface is being used.
If invalid requests occur in BISAM that did not occur previously and the request
parameter list indicates that VSAM is unable to handle concurrent data-set
positioning, the value specified for the STRNO AMP parameter should be
increased. If the request parameter list indicates an exclusive-use conflict,
reevaluate the share options associated with the data.
Table 59. BISAM Error Conditions
Byte and Offset | BISAM Meaning | Detected By | Request Parameter List Error Code | Interface/VSAM Meaning
DCBEXC1, Bit 0 | Record not found | VSAM | 16 | Record not found
 | | VSAM | 24 | Record on nonmountable volume
DCBEXC1, Bit 1 | Record length check | VSAM | 108 | Record length check
DCBEXC1, Bit 2 | Space not found | VSAM | 28 | Data set cannot be extended
DCBEXC1, Bit 3 | Invalid request | Interface | – | No request parameter list available
 | | VSAM | 20 | Exclusive-use conflict
Table 60 gives the contents of registers 0 and 1 when a SYNAD routine specified in
a DCB gets control.
Table 60. Register Contents for DCB-Specified ISAM SYNAD Routine
Register | BISAM | QISAM
0 | Address of the DECB | 0, or, for a sequence check, the address of a field containing the higher key involved in the check
1 | Address of the DECB | 0
You can also specify a SYNAD routine through the DD AMP parameter (see “JCL
for Processing with the ISAM Interface” on page 618). Table 61 gives the contents
of registers 0 and 1 when a SYNAD routine specified through AMP gets control.
Table 61. Register Contents for AMP-Specified ISAM SYNAD Routine
Register | BISAM | QISAM
0 | Address of the DECB | 0, or, for a sequence check, the address of a field containing the higher key involved in the check
1 | Address of the DECB | Address of the DCB
If your SYNAD routine issues the SYNADAF macro, registers 0 and 1 are used to
communicate. When you issue SYNADAF, register 0 must have the same contents
it had when the SYNAD routine got control and register 1 must contain the
address of the DCB.
When you get control back from SYNADAF, the registers have the same contents
they would have if your program were processing an indexed sequential data set:
register 0 contains a completion code, and register 1 contains the address of the
SYNADAF message.
The completion codes and the format of a SYNADAF message are given in z/OS
DFSMS Macro Instructions for Data Sets.
Table 62 shows abend codes issued by the ISAM interface when there is no other
method of communicating the error to the user.
Table 62. ABEND Codes Issued by the ISAM Interface
ABEND Code | Error Detected By | DCB/DECB Set By Module/Routine | ABEND Issued By | Error Condition
03B | OPEN | OPEN/OPEN ACB and VALID CHECK | OPEN | Validity check; either (1) access method services and DCB values for LRECL, KEYLEN, and RKP do not correspond, (2) DISP=OLD, the DCB was opened for output, and the number of logical records is greater than zero (RELOAD is implied), or (3) OPEN ACB error code 116 was returned for a request to open a VSAM structure.
031 | VSAM | SYNAD | SYNAD | SYNAD (ISAM) was not specified and a VSAM physical or logical error occurred.
 | VSAM | SCAN/GET and SETL | SYNAD | SYNAD (ISAM) was not specified and an invalid request was found.
 | LOAD | LOAD/RESUME LOAD | LOAD | SYNAD (ISAM) was not specified and a sequence check occurred.
 | LOAD | LOAD | LOAD | SYNAD (ISAM) was not specified and the RDW (record descriptor word) was greater than LRECL.
039 | VSAM | SCAN/EODAD | SCAN | End-of-data was found, but there was no EODAD exit.
001 | VSAM | – | SYNAD | I/O error detected.
If a SYNAD routine specified through AMP issues the SYNADAF macro, the
parameter ACSMETH can specify either QISAM or BISAM, regardless of which of
the two is used by your processing program.
Table 63 shows the DEB fields that are supported by the ISAM interface. Except as
noted, field meanings are the same as in ISAM.
Table 63. DEB Fields Supported by ISAM Interface
DEB Section Bytes Fields Supported
PREFIX 16 LNGTH
BASIC 32 TCBAD, OPATB, DEBAD, OFLGS (DISP ONLY), FLGS1
(ISAM-interface bit), AMLNG (104), NMEXT(2), PRIOR, PROTG,
DEBID, DCBAD, EXSCL (0-DUMMY DEB), APPAD
ISAM Device 16 EXPTR, FPEAD
Each volume of a multivolume component must be on the same type of device; the
data component and the index component, however, can be on volumes of devices
of different types.
When you define the key-sequenced data set into which the indexed sequential
data set is to be copied, you must specify the attributes of the VSAM data set for
variable and fixed-length records.
The level of sharing permitted when the key-sequenced data set is defined should
be considered. If the ISAM program opens multiple DCBs pointing to different DD
statements for the same data set, a share-options value of 1, which is the default,
permits only the first DD statement to be opened. See “Cross-Region Share
Options” on page 197 for a description of the cross-region share-options values.
JCL is used to identify data sets and volumes for allocation. Data sets can also be
allocated dynamically.
With ISAM, deleted records are flagged as deleted, but are not actually removed
from the data set. To avoid reading VSAM records that are flagged as deleted
(X'FF'), code DCB=OPTCD=L. If your program depends on a record’s only being
flagged and not actually removed, you might want to keep these flagged records
when you convert and continue to have your programs process these records. The
access method services REPRO command has a parameter (ENVIRONMENT) that
causes VSAM to keep the flagged records when you convert.
The DCB parameter in the DD statement that identifies a VSAM data set is
nonvalid and must be removed. If the DCB parameter is not removed,
unpredictable results can occur. Certain DCB-type information can be specified in
the AMP parameter, which is described later in this chapter.
When an ISAM processing program is run with the ISAM interface, the AMP
parameter enables you to specify:
v That a VSAM data set is to be processed (AMORG)
v The need for additional data buffers to improve sequential performance
(BUFND)
v The need for extra index buffers for simulating the residency of the highest
level(s) of an index in virtual storage (BUFNI)
v Whether to remove records flagged (OPTCD)
v What record format (RECFM) is used by the processing program
For a complete description of the AMP parameter and its syntax, see z/OS MVS
JCL Reference.
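For example, a DD statement might combine several of these AMP subparameters as follows; the data set name and buffer counts shown here are illustrative only, not values prescribed by this manual:

```jcl
//VSAMDD   DD  DISP=OLD,DSNAME=VSAMDATA,
//             AMP=('AMORG','BUFND=8','BUFNI=4','OPTCD=L','RECFM=FB')
```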
Sharing Restrictions:
v You can share data among subtasks that specify the same DD statement in their
DCB(s), and VSAM ensures data integrity. But, if you share data among subtasks
that specify different DD statements for the data, you are responsible for data
integrity. The ISAM interface does not ensure DCB integrity when two or more
DCBs are opened for a data set; not all fields in a DCB can be depended on to
contain valid information.
v Processing programs that issue concurrent requests requiring exclusive control
can encounter exclusive-use conflicts if the requests are for the same control
interval. For more information, see Chapter 12, “Sharing VSAM Data Sets,” on
page 191.
v When a data set is shared by several jobs (DISP=SHR), you must use the ENQ
and DEQ macros to ensure exclusive control of the data set. Exclusive control is
necessary to ensure data integrity when your program adds or updates records
in the data set. You can share the data set with other users (that is, relinquish
exclusive control) when reading records.
Additional restrictions:
v A program must run successfully under ISAM using standard ISAM interfaces;
the interface does not check for parameters that are nonvalid for ISAM.
v VSAM path processing is not supported by the ISAM interface.
v Your ISAM program (on TSO/E) cannot dynamically allocate a VSAM data set
(use LOGON PROC).
v CATALOG/DADSM macros in the ISAM processing program must be replaced
with access method services commands.
v ISAM programs will run, with sequential processing, if the key length is defined
as smaller than it actually is. This is not permitted with the ISAM interface.
v If your ISAM program creates dummy records with a maximum key to avoid
overflow, remove that code for VSAM.
v If your program counts overflow records to determine reorganization needs, its
results will be meaningless with VSAM data sets.
v For processing programs that use locate processing, the ISAM interface
constructs buffers to simulate locate processing.
v For blocked-record processing, the ISAM interface simulates unblocked-record
processing by setting the overflow-record indicator for each record. (In ISAM, an
overflow record is never blocked with other records.) Programs that examine
ISAM internal data areas (for example, block descriptor words (BDW) or the
MBBCCHHR address of the next overflow record) must be modified to use only
standard ISAM interfaces. The ISAM RELSE instruction causes no action to take
place.
v If your DCB exit list contains an entry for a JFCBE exit routine, remove it. The
interface does not support the use of a JFCBE exit routine. If the DCB exit list
contains an entry for a DCB open exit routine, that exit is taken.
v The work area into which data records are read must not be shorter than a
record. If your processing program is designed to read a portion of a record into
a work area, you must change the design. The interface takes the record length
indicated in the DCB to be the actual length of the data record. The record
length in a BISAM DECB is ignored, except when you are replacing a
variable-length record with the WRITE macro.
v If your processing program issues the SETL I or SETL ID instruction, you must
modify the instruction to some other form of the SETL or remove it. The ISAM
interface cannot translate a request that depends on a specific block or device
address.
v Although asynchronous processing can be specified in an ISAM processing
program, all ISAM requests are handled synchronously by the ISAM interface;
WAIT and CHECK requests are always satisfied immediately. The ISAM CHECK
macro does not result in a VSAM CHECK macro’s being issued but merely
causes exception codes in the DECB (data event control block) to be tested.
v If your ISAM SYNAD routine examines information that cannot be supported by
the ISAM interface (for example, the IOB), specify a replacement ISAM SYNAD
routine in the AMP parameter of the VSAM DD statement.
v The ISAM interface uses the same RPL over and over; thus, for BISAM, a READ
for update uses up an RPL until a WRITE or FREEDBUF is issued (when the
interface issues an ENDREQ for the RPL). (When using ISAM you can merely
issue another READ if you do not want to update a record after issuing a
BISAM READ for update.)
v The ISAM interface does not support RELOAD processing. RELOAD processing
is implied when an attempt is made to open a VSAM data set for output,
specifying DISP=OLD, and, also, the number of logical records in the data set is
greater than zero.
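As noted in the restriction above on SYNAD routines, a replacement routine is named through the AMP parameter of the VSAM DD statement. The following is a sketch of such a statement; MYSYNAD is a hypothetical load module name used here only for illustration:

```jcl
//VSAMDD   DD  DISP=OLD,DSNAME=VSAMDATA,
//             AMP=('AMORG','SYNAD=MYSYNAD')
```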
In the following example, ISAMDATA contains records flagged for deletion; these records are to be kept in
the VSAM data set. The ENVIRONMENT(DUMMY) parameter in the REPRO
command tells the system to copy the records flagged for deletion.
//CONVERT JOB ...
//JOBCAT DD DISP=SHR,DSNAME=USERCTLG
//STEP EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//ISAM DD DISP=OLD,DSNAME=ISAMDATA,DCB=DSORG=IS
//VSAM DD DISP=OLD,DSNAME=VSAMDATA
//SYSIN DD *
REPRO -
INFILE(ISAM ENVIRONMENT(DUMMY)) -
OUTFILE(VSAM)
/*
To drop records flagged for deletion in the indexed-sequential data set, omit
ENVIRONMENT(DUMMY).
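For example, the REPRO command of the preceding job, coded to drop the flagged records, would read:

```jcl
REPRO -
  INFILE(ISAM) -
  OUTFILE(VSAM)
```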
The use of the JOBCAT DD statement prevents this job from accessing any
system-managed data sets.
When the processing program closes the data set, the interface issues VSAM PUT
macros for ISAM PUT locate requests (in load mode), deletes the interface routines
from virtual storage, frees virtual-storage space that was obtained for the interface,
and gives control to VSAM.
Topic Location
Coded Character Sets Sorted by CCSID 625
Coded Character Sets Sorted by Default LOCALNAME 628
CCSID Conversion Groups 634
CCSID Decision Tables 637
Tables for Default Conversion Codes 642
In the table below, the CCSID Conversion Group column identifies the group, if
any, containing the CCSIDs which can be supplied to BSAM or QSAM for
ISO/ANSI V4 tapes to convert from or to the CCSID shown in the CCSID column.
For a description of CCSID conversion and how you can request it, see “Character
Data Conversion” on page 303.
See “CCSID Conversion Groups” on page 634 for the conversion groups. A blank
in the CCSID Conversion Group column indicates that the CCSID is not supported
by BSAM or QSAM for CCSID conversion with ISO/ANSI V4 tapes.
For more information about CCSIDs and LOCALNAMEs, see Character Data
Representation Architecture Reference and Registry.
CCSID | CCSID Conversion Group | Default LOCALNAME
[The entries of this multiple-column listing of coded character sets, their conversion groups, and their default LOCALNAMEs are not reproduced here; the column alignment of the original table was lost.]
USER
Refers to the CCSID which is specified by the CCSID parameter on the JOB or
EXEC statement. If the table entry contains a 0, it means that the CCSID
parameter was not supplied in the JCL. In this case, if data management
performs CCSID conversion, a system default CCSID of 500 is used.
TAPE (DD)
Refers to the CCSID which is specified by the CCSID parameter on the DD
statement, dynamic allocation, or TSO ALLOCATE. If the table entry contains a
0, it means that the CCSID parameter was not supplied. In this case, if data
management performs CCSID conversion, a system default CCSID of 367 is
used when the data set is opened for output and DISP=MOD is not specified.
Label
Refers to the CCSID which will be stored in the tape label during output
processing (not DISP=MOD), or which is found in an existing label during
DISP=MOD or input processing. Unless otherwise indicated in the tables, a
CCSID found in the tape label overrides a CCSID specified on the DD
statement on input.
Conversion
Refers to the type of data conversion data management will perform based on
the combination of CCSIDs supplied from the previous three columns.
v Default denotes that data management performs conversion using Default
Character conversion as described in “Character Data Conversion” on page
303. This conversion is used when CCSIDs are not supplied by any source.
An existing data set created using Default Character conversion cannot be
read or written (unless DISP=OLD) using CCSIDs.
v Convert a->b denotes that data management performs CCSID conversion
with a and b representing the CCSIDs used.
v No conversion denotes that data management performs no conversion on
the data.
v Fail denotes that the combination of CCSIDs is invalid. This results in an
ABEND513-14 during open.
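The TAPE (DD) source above is supplied through JCL. As an illustration, the CCSID parameter might be coded on a DD statement for an ISO/ANSI Version 4 tape as follows; the data set name, unit name, and CCSID value are placeholders, not values taken from this manual:

```jcl
//TAPEOUT  DD  DSNAME=OUT.DATA,UNIT=TAPE,DISP=(NEW,KEEP),
//             LABEL=(1,AL),CCSID=367
```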
Table 65 describes processing used when the data set is opened for output (not
EXTEND) with DISP=NEW or DISP=OLD. The label column indicates what
will be stored in the label.
Table 65. Output DISP=NEW,OLD
USER TAPE(DD) Label Conversion Comments
0 0 BLANK Default No CCSIDs specified. Use Default Character
Conversion.
0 Y Y Convert 500->Y USER default is 500.
0 65535 65535 No conversion No convert specified on DD. CCSID of data is
unknown.
X 0 367 Convert X->367 Default tape is 367.
X Y Y Convert X->Y USER is X. DD is Y.
X 65535 X No conversion User data assumed to be X.
65535 0 65535 No conversion No convert specified on JOB/EXEC. CCSID of
data unknown.
65535 Y Y No conversion User data assumed to be Y.
65535 65535 65535 No conversion No convert specified.
Table 66 describes processing used when the data set is opened for output with
DISP=MOD or when the OPEN option is EXTEND. This is allowed only for
IBM-created Version 4 tapes. Attempting to open a Version 4 tape that was not
created by IBM with DISP=MOD will result in ABEND513-10.
Table 66. Output DISP=MOD (IBM V4 tapes only)
USER TAPE(DD) Label Conversion Comments
0 0 BLANK Default No CCSIDs specified. Use Default Character
Conversion.
0 0 Z Convert 500->Z USER default is 500.
0 0 65535 Fail CCSID of tape data is unknown. Prevent
mixed user data.
0 Y BLANK Fail Blank in label means Default Character
Conversion but Y specified. CCSID mismatch.
0 Y Z Fail CCSID mismatch. Label says Z but DD says
Y.
0 Y Y Convert 500->Y USER default is 500. Label says Y and DD
says Y.
0 Y 65535 Fail DD says Y but CCSID of data is unknown.
CCSID mismatch.
0 65535 BLANK Fail Blank in label means Default Character
Conversion but unknown CCSID on tape.
0 65535 Z Fail DD says no convert. Label says Z and USER
CCSID not specified.
0 65535 65535 No conversion No convert specified. User must ensure data
is in correct CCSID.
X 0 BLANK Fail Blank in label means Default Character
Conversion but USER CCSID is X. No
interface to convert X to 7-bit ASCII.
X 0 Z Convert X->Z USER is X. Label is Z.
X 0 65535 Fail Label CCSID is unknown, but USER is X with
no convert specified. Potential mismatch.
X Y BLANK Fail Blank in label means Default Character
Conversion but DD says Y. CCSID mismatch.
X Y Z Fail DD says Y but label says Z. CCSID mismatch.
Table 67 describes processing used when the data set is opened for INPUT or
RDBACK.
Table 67. Input
USER TAPE(DD) Label Conversion Comments
0 0 BLANK Default No CCSIDs specified. Assume Default
Character Conversion.
0 0 Z Convert Z->500 USER default is 500. Label says Z.
0 0 65535 No conversion Label says no convert and no CCSIDs
specified.
0 Y BLANK Fail Fail if IBM V4 tape because blank in label
means Default Character Conversion but DD
says Y.
0 Y BLANK Convert Y->500 Allow if not IBM V4 tape because user is
indicating data on tape is Y via the DD.
0 Y Z Fail Label says Z but DD says Y. CCSID mismatch.
0 Y Y Convert Y->500 USER default is 500. DD says Y and label says
Y.
0 Y 65535 Convert Y->500 DD is saying tape data is Y. USER default is
500.
0 65535 BLANK No conversion DD specified no conversion.
0 65535 Z No conversion DD specified no conversion.
0 65535 65535 No conversion DD specified no conversion.
X 0 BLANK Fail Blank in label means Default Character
Conversion but USER specified CCSID.
CCSID mismatch.
X 0 Z Convert Z->X USER is X. Label is Z.
X 0 65535 No conversion Label says no conversion and no CCSID
specified on DD, therefore, no conversion.
X Y BLANK Fail Fail if IBM V4 tape because blank in label
means Default Character Conversion but DD
says Y. CCSID mismatch.
X Y BLANK Convert Y->X Allow if not IBM V4 tape because DD is
indicating data is Y. USER is X.
X Y Z Fail Label says Z but DD says Y. CCSID
mismatch.
X Y Y Convert Y->X Label and DD both specify Y. USER is X.
X Y 65535 Convert Y->X Label CCSID is unknown but DD says Y.
USER is X. Assume data is Y.
X 65535 BLANK Fail Fail if IBM V4 tape because blank in label
means Default Character Conversion but
USER says X. CCSID mismatch.
X 65535 BLANK No conversion Allow if not IBM V4 tape because DD
specified no convert.
X 65535 Z Fail Label says Z, USER says X but DD says no
convert. CCSID mismatch between USER and
label.
X 65535 X No conversion Label says X and USER says X, therefore,
allow no conversion.
X 65535 65535 No conversion No conversion specified, but tape data must
be X.
65535 0 BLANK No conversion USER specified no conversion indicating that
application can accept any data including
7-bit ASCII.
65535 0 Z No conversion USER specified no conversion indicating that
application can accept any data including Z.
When converting EBCDIC code to ASCII code, all EBCDIC code not having an
ASCII equivalent is converted to X'1A'. When converting ASCII code to EBCDIC
code, all ASCII code not having an EBCDIC equivalent is converted to X'3F'.
Because Version 3 ASCII uses only 7 bits in each byte, bit 0 is always set to 0
during EBCDIC to ASCII conversion and is expected to be 0 during ASCII to
EBCDIC conversion.
The following table shows the EBCDIC-to-ASCII translation; the row label gives
the first hexadecimal digit of the EBCDIC code and the column heading gives the
second.
0 1 2 3 4 5 6 7 8 9 A B C D E F
00-0F 000102031A091A7F 1A1A1A0B0C0D0E0F
10-1F 101112131A1A081A 18191A1A1C1D1E1F
20-2F 1A1A1A1A1A0A171B 1A1A1A1A1A050607
30-3F 1A1A161A1A1A1A04 1A1A1A1A14151A1A
40-4F 201A1A1A1A1A1A1A 1A1A5B2E3C282B21
50-5F 261A1A1A1A1A1A1A 1A1A5D242A293B5E
60-6F 2D2F1A1A1A1A1A1A 1A1A7C2C255F3E3F
70-7F 1A1A1A1A1A1A1A1A 1A603A2340273D22
80-8F 1A61626364656667 68691A1A1A1A1A1A
90-9F 1A6A6B6C6D6E6F70 71721A1A1A1A1A1A
A0-AF 1A7E737475767778 797A1A1A1A1A1A1A
B0-BF 1A1A1A1A1A1A1A1A 1A1A1A1A1A1A1A1A
C0-CF 7B41424344454647 48491A1A1A1A1A1A
D0-DF 7D4A4B4C4D4E4F50 51521A1A1A1A1A1A
E0-EF 5C1A535455565758 595A1A1A1A1A1A1A
F0-FF 3031323334353637 38391A1A1A1A1A1A
z/OS information
z/OS information is accessible using screen readers with the BookServer/Library
Server versions of z/OS books in the Internet library at:
www.ibm.com/servers/eserver/zseries/zos/bkserv/
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, in other countries, or both:
Microsoft®, Windows, Windows NT®, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product, and service names may be trademarks or service marks
of others.
automatic class selection (ACS) routine. A procedural set of ACS language statements. Based on a set of input variables, the ACS language statements generate the name of a predefined SMS class, or a list of names of predefined storage groups, for a data set.

automatic data set protection (ADSP). In z/OS, a user attribute that causes all permanent data sets created by the user to be automatically defined to RACF with a discrete RACF profile.

AVGREC. Average record scale (JCL keyword).

basic format. The format of a data set that has a data set name type (DSNTYPE) of BASIC. A basic format data set is a sequential data set that is specified to be neither large format nor extended format. The size of a basic format data set cannot exceed 65 535 tracks on each volume.

base configuration. The part of an SMS configuration that contains general storage management attributes, such as the default management class, default unit, and default device geometry. It also identifies the systems or system groups that an SMS configuration manages.

BCDIC. Binary coded decimal interchange code.

BCS. Basic catalog structure.

BDAM. Basic direct access method.

BDW. Block descriptor word.

BFALN. Buffer alignment (parameter of DCB and DD).

BFTEK. Buffer technique (parameter of DCB and DD).

BISAM. Basic indexed sequential access method.

BLKSIZE. Block size (parameter of DCB, DCBE and DD).

blocking. (1) The process of combining two or more records in one block. (2) Suspending a program process (UNIX).

BPAM. Basic partitioned access method.

BPI. Bytes per inch.

BSAM. Basic sequential access method.

BSM. Backspace past tape mark and forward space over tape mark (parameter of CNTRL).

BSP. Backspace one block (macro).

BSR. Backspace over a specified number of blocks (parameter of CNTRL).

BUFNO. Buffer number (parameter of DCB and DD).

BUFOFF. Buffer offset (length of ASCII block prefix by which the buffer is offset; parameter of DCB and DD).

C

CA. Control area.

catalog. A data set that contains extensive information required to locate other data sets, to allocate and deallocate storage space, to verify the access authority of a program or operator, and to accumulate data set usage statistics. A catalog has a basic catalog structure (BCS) and its related volume tables of contents (VTOCs) and VSAM volume data sets (VVDSs). See master catalog and user catalog. See also VSAM volume data set.

CBIC. See control blocks in common.

CBUF. Control block update facility.

CCHHR. Cylinder/head/record address.

CCSID. Coded Character Set Identifier.

CCW. Channel command word.

CDRA. See Character Data Representation Architecture (CDRA) API.

CF. Coupling facility.
class. See SMS class.

cluster. A named structure consisting of a group of related components. For example, when the data set is key sequenced, the cluster contains both the data and index components; when the data set is entry sequenced, the cluster contains only a data component.

collating sequence. An ordering assigned to a set of items, such that any two sets in that assigned order can be collated.

component. In systems with VSAM, a named, cataloged collection of stored records, such as the data component or index component of a key-sequenced file or alternate index.

compress. (1) To reduce the amount of storage required for a given data set by having the system replace identical words or phrases with a shorter token associated with the word or phrase. (2) To reclaim the unused and unavailable space in a partitioned data set that results from deleting or modifying members by moving all unused space to the end of the data set.

compressed format. A particular type of extended-format data set specified with the (COMPACTION) parameter of data class. VSAM can compress individual records in a compressed-format data set. SAM can compress individual blocks in a compressed-format data set. See compress.

concurrent copy. A function to increase the accessibility of data by enabling you to make a consistent backup or copy of data concurrent with the usual application program processing.

configuration. The arrangement of a computer system as defined by the characteristics of its functional units. See SMS configuration.

CONTIG. Contiguous space allocation (value of SPACE).

count-key data. A disk storage device for storing data in the format: count field normally followed by a key field followed by the actual data of a record. The count field contains, in addition to other information, the address of the record in the format: CCHHR (where CC is the two-digit cylinder number, HH is the two-digit head number, and R is the record number) and the length of the data. The key field contains the record’s key.

cross memory. A synchronous method of communication between address spaces.

CSA. Common service area.

CSW. Channel status word.

CYLOFL. Number of tracks for cylinder overflow records (parameter of DCB).

D

DA. Direct access (value of DEVD or DSORG).

DADSM. See direct access device space management.

DASD volume. Direct access storage device volume.

DATACLAS. Data class (JCL keyword).

data class. A collection of allocation and space attributes, defined by the storage administrator, that are used to create a data set.

data control block (DCB). A control block used by access method routines in storing and retrieving data.

data definition (DD) statement. A job control statement that describes a data set associated with a particular job step.

Data Facility Storage Management Subsystem (DFSMS). An operating environment that helps automate and centralize the management of storage. To
manage storage, SMS provides the storage administrator with control over data class, storage class, management class, storage group, and automatic class selection routine definitions.

Data Facility Storage Management Subsystem data facility product (DFSMSdfp). A DFSMS functional component and a base element of z/OS that provides functions for storage management, data management, program management, device management, and distributed data access.

Data Facility Storage Management Subsystem Transactional VSAM Services (DFSMStvs). An optional feature of DFSMS for running batch VSAM processing concurrently with CICS online transactions. DFSMStvs users can run multiple batch jobs and online transactions against VSAM data, in data sets defined as recoverable, with concurrent updates.

data integrity. Preservation of data or programs for their intended purpose. As used in this publication, data integrity is the safety of data from inadvertent destruction or alteration.

data management. The task of systematically identifying, organizing, storing, and cataloging data in an operating system.

data record. A collection of items of information from the standpoint of its use in an application, as a user supplies it to VSAM for storage. Contrast with index record.

data security. Prevention of access to or use of data or programs without authorization. As used in this publication, data security is the safety of data from unauthorized use, theft, or purposeful destruction.

data set. In DFSMS, the major unit of data storage and retrieval, consisting of a collection of data in one of several prescribed arrangements and described by control information to which the system has access. In z/OS non-UNIX environments, the terms data set and file are generally equivalent and sometimes are used interchangeably. In z/OS UNIX environments, the terms data set and file have quite distinct meanings. See also hierarchical file system (HFS) data set.

data synchronization. The process by which the system ensures that data previously given to the system through WRITE, CHECK, PUT, and PUTX macros is written to some form of nonvolatile storage.

DAU. Direct access unmovable data set (value of DSORG).

DBB. Dictionary building block.

DBCS. See double-byte character set.

DCB. Data control block name, macro, or parameter of DD statement. See also data control block.

DCBD. Data-control-block dummy section.

DCBE. Data control block extension.

DD. Data definition. See also data definition (DD) statement.

DDM. Distributed data management (DDM).

DEB. Data extent block.

DECB. Data event control block.

DEN. Magnetic tape density (parameter of DCB and DD).

DES. Data Encryption Standard.

DEVD. Device dependent (parameter of DCB and DCBD).

DFSMSdss. A DFSMS functional component or base element of z/OS, used to copy, move, dump, and restore data sets and volumes.

DFSMShsm. A DFSMS functional component or base element of z/OS, used for backing up and recovering data, and managing space on volumes in the storage hierarchy.

DFSMSrmm. A DFSMS functional component or base element of z/OS, that manages removable media.

DFSMStvs. See Data Facility Storage Management Subsystem Transactional VSAM Services.

dictionary. A table that associates words, phrases, or data patterns to shorter tokens. The tokens replace the associated words, phrases, or data patterns when a data set is compressed.

direct access. The retrieval or storage of data by a reference to its location in a data set rather than relative to the previously retrieved or stored data.

direct access device space management (DADSM). A collection of subroutines that manages space on disk volumes. The subroutines are Create, Scratch, Extend, and Partial Release.

direct data set. A data set whose records are in random order on a direct access volume. Each record is stored or retrieved according to its actual address or its address according to the beginning of the data set. Contrast with sequential data set.

directory entry services (DE Services). Directory Entry (DE) Services provides directory entry services for PDS and PDSE data sets. Not all of the functions will operate on a PDS however. DE Services is usable by authorized as well as unauthorized programs through the executable macro, DESERV.
DLF. Data lookaside facility. EXLST. Exit list (parameter of DCB and VSAM
macros).
double-byte character set (DBCS). A 2-byte value that
can represent a single character for languages that EXPDT. Expiration date for a data set (JCL keyword).
contain too many characters or symbols for each to be assigned a 1-byte value.

DSCB. Data set control block.

DSORG. Data set organization (parameter of DCB and DD and in a data class definition).

dummy storage group. A type of storage group that contains the serial numbers of volumes no longer connected to a system. Dummy storage groups allow existing JCL to function without having to be changed. See also storage group.

dynamic allocation. The allocation of a data set or volume using the data set name or volume serial number rather than using information contained in a JCL statement.

ECKD. Extended count-key-data.

ECSA. Extended common service area.

entry-sequenced data set (ESDS). A data set whose records are loaded without respect to their contents and whose RBAs cannot change. Records are retrieved and stored by addressed access, and new records are added at the end of the data set.

EOB. End-of-block.

EOD. End-of-data.

EODAD. End-of-data-set exit routine address (parameter of DCB, DCBE, and EXLST).

EOV. End-of-volume.

ESDS. See entry-sequenced data set.

export. To create a backup or portable copy of a VSAM cluster, alternate index, or user catalog.

extended format. The format of a data set that has a data set name type (DSNTYPE) of EXTENDED. The data set is structured logically the same as a data set that is not in extended format, but the physical format is different. Data sets in extended format can be striped, compressed, both, or neither. Data in an extended format VSAM KSDS can be compressed. The size of an extended format data set cannot exceed 65 535 tracks on each volume. See also striped data set and compressed format.

extent. A continuous space on a DASD volume occupied by a data set or portion of a data set.

file permission bits. Information about a file that is used, along with other information, to determine if a process has access permission to a file. The bits are divided into three parts: owner, group, and other. Each part is used with the corresponding file class of processes. These bits are contained in the file mode.

file system. In the z/OS UNIX HFS environment, the collection of files and file management structures on a physical or logical mass storage device, such as a diskette or minidisk. See also HFS data set.

first-in-first-out (FIFO). A queuing technique in which the next item to be retrieved is the item that has been in the queue for the longest time.

first-in-first-out (FIFO) special file. A type of file with the property that data written to such a file is read on a first-in first-out basis.
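The owner/group/other division described under "file permission bits" can be illustrated outside z/OS with a short sketch using POSIX-style mode bits. This is an illustrative example, not part of the manual; the constants come from the Python standard library `stat` module:

```python
import stat

def describe_mode(mode: int) -> str:
    """Render the nine permission bits as rwxrwxrwx (owner, group, other)."""
    parts = []
    for read, write, execute in (
        (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),  # owner part
        (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),  # group part
        (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH),  # other part
    ):
        parts.append(("r" if mode & read else "-") +
                     ("w" if mode & write else "-") +
                     ("x" if mode & execute else "-"))
    return "".join(parts)

# 0o640: owner read/write, group read, other no access
print(describe_mode(0o640))  # rw-r-----
```

Each part of the file mode is checked against the class of the requesting process, which is the mechanism the glossary entry describes.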
Glossary 651
format-D. ASCII or ISO/ANSI variable-length records.

format-DB. ASCII variable-length, blocked records.

format-DBS. ASCII variable-length, blocked spanned records.

format-DS. ASCII variable-length, spanned records.

format-F. Fixed-length records.

format-FB. Fixed-length, blocked records.

format-FBS. Fixed-length, blocked, standard records.

format-FBT. Fixed-length, blocked records with track overflow option.

format-FS. Fixed-length, standard records.

format-U. Undefined-length records.

format-V. Variable-length records.

free space. Space reserved within the control intervals of a key-sequenced data set for inserting new records into the data set in key sequence or for lengthening records already there; also, whole control intervals reserved in a control area for the same purpose.

FSM. Forward space past tape mark and backspace over tape mark (parameter of CNTRL).

FSR. Forward space over a specified number of blocks (parameter of CNTRL).

G

GCR. Group coded recording (tape recording).

GDG. See generation data group.

GDS. See generation data set.

generation data group (GDG). A collection of historically related non-VSAM data sets that are arranged in chronological order; each data set is called a generation data set.

generation data group base entry. An entry that permits a non-VSAM data set to be associated with other non-VSAM data sets as generation data sets.

generation data set (GDS). One of the data sets in a generation data group; it is historically related to the others in the group.

generic profile. A RACF profile that contains security information about multiple data sets, users, or resources that may have similar characteristics and require a similar level of protection. Contrast with discrete profile.

gigabyte. 2³⁰ bytes, or 1 073 741 824 bytes. This is approximately a billion bytes in American English.

GL. GET macro, locate mode (value of MACRF).

GM. GET macro, move mode (value of MACRF).

GRS. Global resource serialization.

GSR. Global shared resources.

GTF. Generalized trace facility.

header label. (1) An internal label, immediately preceding the first record of a file, that identifies the file and contains data used in file control. (2) The label or data set label that precedes the data records on a unit of recording media.

HFS. Hierarchical file system.

hierarchical file system (HFS) data set. A data set that contains a POSIX-compliant file system, which is a collection of files and directories organized in a hierarchical structure, that can be accessed using z/OS UNIX System Services. See also file system.

Hiperbatch. An extension to both QSAM and VSAM designed to improve performance. Hiperbatch uses the data lookaside facility to provide an alternate fast path method of making data available to many batch jobs.

Hiperspace. A high performance virtual storage space of up to 2 GB. Unlike an address space, a Hiperspace contains only user data and does not contain system control blocks or common areas; code does not execute in a Hiperspace. Unlike a data space, data in a Hiperspace cannot be referenced directly; data must be moved to an address space in blocks of 4 KB before it can be processed. Hiperspace pages can be backed by expanded storage or auxiliary storage, but never by central storage.
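The byte multiples defined in this glossary (gigabyte, and later terabyte and petabyte) are binary powers of two, which a quick check confirms. This sketch is illustrative only and is not part of the manual:

```python
# Binary multiples as defined in the glossary entries.
gigabyte = 2 ** 30
terabyte = 2 ** 40
petabyte = 2 ** 50

assert gigabyte == 1_073_741_824
assert terabyte == 1_099_511_627_776
assert petabyte == 1_125_899_906_842_624
print(gigabyte, terabyte, petabyte)
```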
J

JES. Job entry subsystem.

JFCB. Job file control block.

JFCBE. Job file control block extension.

K

KEYLEN. Key length (JCL and DCB keyword).

key-sequenced data set (KSDS). A VSAM data set whose records are loaded in ascending key sequence and controlled by an index. Records are retrieved and stored by keyed access or by addressed access.

M

MACRF. Macro instruction form (parameter of DCB and ACB).

management class. (1) A named collection of management attributes describing the retention and backup characteristics for a group of data sets, or for a group of objects in an object storage hierarchy. For objects, the described characteristics also include class transition. (2) In DFSMSrmm, if assigned by ACS routine to system-managed tape volumes, management class can be used to identify a DFSMSrmm vital record specification.
manual tape library. Installation-defined set of tape drives defined as a logical unit together with the set of system-managed volumes that can be mounted on the drives.

master catalog. A catalog that contains extensive data set and volume information that VSAM requires to locate data sets, to allocate and deallocate storage space, to verify the authorization of a program or operator to gain access to a data set, and to accumulate usage statistics for data sets.

MBBCCHHR. Module number, bin number, cylinder number, head number, record number.

media. The disk surface on which data is stored.

MEDIA2. Enhanced Capacity Cartridge System Tape.

MEDIA3. High Performance Cartridge Tape.

MEDIA4. Extended High Performance Cartridge Tape.

move mode. A transmittal mode in which the record to be processed is moved into a user work area.

MSHI. Main storage for highest-level index (parameter of DCB).

MSWA. Main storage for work area (parameter of DCB).

multilevel alias (MLA) facility. A function in catalog address space that allows catalog selection based on one to four data set name qualifiers.

MVS/DFP. An IBM licensed program that is the base for the storage management subsystem.

MVS/ESA. Multiple Virtual Storage/Enterprise Systems Architecture. A z/OS operating system environment that supports ESA/390.

MVS/ESA SP. An IBM licensed program used to control the MVS operating system. MVS/ESA SP together with DFSMS compose the base MVS/ESA operating environment.

NSR. Nonshared resources.

NUB. No user buffering.

NUP. No update.

O

object. A named byte stream having no specific format or record orientation.

object backup storage group. A type of storage group that contains optical or tape volumes used for backup copies of objects. See also storage group.

object storage group. A type of storage group that contains objects on DASD, tape, or optical volumes. See also storage group.

operand. Information entered with a command name to define the data on which a command operates and to control the execution of the command.

operating system. Software that controls the execution of programs; an operating system may provide services such as resource allocation, scheduling, input/output control, and data management.

OPTCD. Optional services code (parameter of DCB).

optical volume. Storage space on an optical disk, identified by a volume label. See also volume.

optimum block size. For non-VSAM data sets, optimum block size represents the block size that would result in the greatest space utilization on a device, taking into consideration record length and device characteristics.

OUTIN. Output and then input (parameter of OPEN).

OUTINX. Output at end of data set (to extend) and then input (parameter of OPEN).

P

page. (1) A fixed-length block of instructions, data, or both, that can be transferred between real storage and external page storage. (2) To transfer instructions, data, or both between real storage and external page storage.

page space. A system data set that contains pages of virtual storage. The pages are stored in and retrieved from the page space by the auxiliary storage manager.

paging. A technique in which blocks of data, or pages, are moved back and forth between main storage and auxiliary storage. Paging is the implementation of the virtual storage concept.

partitioned data set (PDS). A data set on direct access storage that is divided into partitions, called members, each of which can contain a program, part of a program, or data.

partitioned data set extended (PDSE). A system-managed data set that contains an indexed directory and members that are similar to the directory and members of partitioned data sets. A PDSE can be used instead of a partitioned data set.

password. A unique string of characters that a program, a computer operator, or a terminal user must supply to meet security requirements before a program gains access to a data set.

PDAB. Parallel data access block.

PDS. See partitioned data set.

PDS directory. A set of records in a partitioned data set (PDS) used to relate member names to their locations on a DASD volume.

PDSE. See partitioned data set extended.

PE. Phase encoding (tape recording mode).

petabyte. 2⁵⁰ bytes, or 1 125 899 906 842 624 bytes. This is approximately a quadrillion bytes in American English.

PL. PUT macro, locate mode (value of MACRF).

PM. PUT macro, move mode (value of MACRF).

PO. Partitioned organization (value of DSORG).

pointer. An address or other indication of location. For example, an RBA is a pointer that gives the relative location of a data record or a control interval in the data set to which it belongs.

pool storage group. A type of storage group that contains system-managed DASD volumes. Pool storage groups allow groups of volumes to be managed as a single entity. See also storage group.

portability. The ability to use VSAM data sets with different operating systems. Volumes whose data sets are cataloged in a user catalog can be demounted from storage devices of one system, moved to another system, and mounted on storage devices of that system. Individual data sets can be transported between operating systems using access method services.

POSIX. Portable operating system interface for computer environments.
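The "optimum block size" idea can be sketched abstractly: for fixed-length records, the best block size is the largest multiple of the record length that does not exceed the device's maximum block size. This is a simplified model for illustration — real DASD capacity calculations also involve track geometry — and the 27 998-byte limit used below is only an example figure:

```python
def optimum_block_size(lrecl: int, device_max: int) -> int:
    """Largest multiple of the record length that fits the device limit."""
    if lrecl > device_max:
        raise ValueError("record longer than maximum block size")
    return (device_max // lrecl) * lrecl

# 80-byte records against an assumed 27 998-byte half-track limit
print(optimum_block_size(80, 27998))  # 27920
```

Packing 349 records per 27 920-byte block wastes only 78 bytes of the limit, which is the "greatest space utilization" the glossary entry refers to.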
POU. Partitioned organization unmovable (value of DSORG).

primary space allocation. Amount of space requested by a user for a data set when it is created. Contrast with secondary space allocation.

primary key. One or more characters within a data record used to identify the data record or control its use. A primary key must be unique.

primary storage. A DASD volume available to users for data allocation. The volumes in primary storage are called primary volumes. See also storage hierarchy. Contrast with migration level 1 and migration level 2.

PRTSP. Printer line spacing (parameter of DCB).

PS. Physical sequential (value of DSORG).

PSU. Physical sequential unmovable (value of DSORG).

PSW. Program status word.

Q

QISAM. Queued indexed sequential access method.

QSAM. Queued sequential access method.

RACF. See Resource Access Control Facility.

record-level sharing. See VSAM Record-Level Sharing (VSAM RLS).

REFDD. Refer to previous DD statement (JCL keyword).

register. An internal computer component capable of storing a specified amount of data and accepting or transferring this data rapidly.

relative record data set (RRDS). A VSAM data set whose records have fixed or variable lengths, and are accessed by relative record number.

Resource Access Control Facility (RACF). An IBM licensed program that is included in z/OS Security Server and is also available as a separate program for the z/OS and VM environments. RACF provides access control by identifying and verifying the users to the system, authorizing access to protected resources, logging detected unauthorized attempts to enter the system, and logging detected accesses to protected resources.

RETPD. Retention period (JCL keyword).

reusable data set. A VSAM data set that can be reused as a work file, regardless of its old contents. It must not be a base cluster of an alternate index.

RKP. Relative key position (parameter of DCB).

RLS. Record-level sharing. See VSAM Record-Level Sharing (VSAM RLS).

RPL. Request parameter list.

sequence checking. The process of verifying the order of a set of records relative to some field's collating sequence.

SER. Volume serial number (value of VOLUME).

serialization. In MVS, the prevention of a program from using a resource that is already being used by an interrupted program until the interrupted program is finished using the resource.

service request block (SRB). A system control block used for dispatching tasks.

SETL. Set lower limit of sequential retrieval (QISAM macro).

SF. Sequential forward (parameter of READ or WRITE).

shared resources. A set of functions that permit the sharing of a pool of I/O-related control blocks, channel programs, and buffers among several VSAM data sets open at the same time. See also LSR and GSR.

SI. Shift in.

SK. Skip to a printer channel (parameter of CNTRL).

SL. IBM standard labels (value of LABEL).

slot. For a relative record data set, the data area addressed by a relative record number which may contain a record or be empty.

SMB. See system-managed buffering.

SMS. See Storage Management Subsystem and system-managed storage.

SMS class. A list of attributes that SMS applies to data sets having similar allocation (data class), performance (storage class), or backup and retention (management class) needs.

SMS configuration. A configuration base, Storage Management Subsystem class, group, library, and drive definitions, and ACS routines that the Storage Management Subsystem uses to manage storage. See also configuration, base configuration, and source control data set.

SMSI. Size of main-storage area for highest-level index (parameter of DCB).

SMS-managed data set. A data set that has been assigned a storage class.

soft link. See symbolic link.

source control data set (SCDS). A VSAM linear data set containing an SMS configuration. The SMS configuration in an SCDS can be changed and validated using ISMF.

SP. Space lines on a printer (parameter of CNTRL).

spanned record. A logical record whose length exceeds control interval length, and as a result, crosses, or spans, one or more control interval boundaries within a single control area.

SRB. See service request block.

SS. Select stacker on card reader (parameter of CNTRL).

storage administrator. A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies.

storage class. A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements.

storage group. A collection of storage volumes and attributes, defined by the storage administrator. The collections can be a group of DASD volumes or tape volumes, or a group of DASD, optical, or tape volumes treated as a single object storage hierarchy.

storage hierarchy. An arrangement of storage devices with different speeds and capacities. The levels of the storage hierarchy include main storage (memory, DASD cache), primary storage (DASD containing uncompressed data), migration level 1 (DASD containing data in a space-saving format), and migration level 2 (tape cartridges containing data in a space-saving format). See also primary storage, migration level 1, and migration level 2.

Storage Management Subsystem (SMS). A DFSMS facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements to the system through data class, storage class, management class, storage group, and ACS routine definitions.
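The "spanned record" entry describes a logical record split across control interval boundaries. The segmentation can be sketched as follows; this is illustrative only, and the uniform payload size is an assumption — it is not VSAM's actual CIDF/RDF control-interval layout:

```python
def segment_record(record: bytes, ci_payload: int) -> list[bytes]:
    """Split one logical record into control-interval-sized segments."""
    return [record[i:i + ci_payload] for i in range(0, len(record), ci_payload)]

# A 10 000-byte record spanning control intervals with 4 000 bytes of payload each
segments = segment_record(b"A" * 10_000, 4_000)
print([len(s) for s in segments])  # [4000, 4000, 2000]
```

Each segment occupies one control interval, so the record spans three of them within a single control area, as the definition describes.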
stripe. In DFSMS, the portion of a striped data set, such as an extended format data set, that resides on one volume. The records in that portion are not always logically consecutive. The system distributes records among the stripes such that the volumes can be read from or written to simultaneously to gain better performance. Whether it is striped is not apparent to the application program.

striped data set. An extended format data set that occupies multiple volumes. A software implementation of sequential data striping.

striping. A software implementation of a disk array that distributes a data set across multiple volumes to improve performance.

SYSOUT class. A category of output with specific characteristics and written on a specific output device. Each system has its own set of SYSOUT classes, designated by a character from A to Z, a number from 0 to 9, or a *.

sysplex. A set of z/OS systems communicating and cooperating with each other through certain multisystem hardware components and software services to process customer workloads.

system. A functional unit, consisting of one or more computers and associated software, that uses common storage for all or part of a program and also for all or part of the data necessary for the execution of the program.

Note: A computer system can be a stand-alone unit, or it can consist of multiple connected units.

system-managed data set. A data set that has been assigned a storage class.

system-managed buffering (SMB). A facility available for system-managed extended-format VSAM data sets in which DFSMSdfp determines the type of buffer management technique along with the number of buffers to use, based on data set and application specifications.

system-managed directory entry (SMDE). A directory that contains all the information contained in the PDS directory entry (as produced by the BLDL macro) as well as information specific to program objects, in the extensible format.

system-managed storage. Storage managed by the Storage Management Subsystem. SMS attempts to deliver required services for availability, performance, and space to applications.

system-managed tape library. A collection of tape volumes and tape devices, defined in the tape configuration database. A system-managed tape library can be automated or manual. See also tape library.

system management facilities (SMF). A component of z/OS that collects input/output (I/O) statistics, provided at the data set and storage class levels, which helps you monitor the performance of the direct access storage subsystem.

tape storage group. A type of storage group that contains system-managed private tape volumes. The tape storage group definition specifies the system-managed tape libraries that can contain tape volumes. See also storage group.

tape volume. A tape volume is the recording space on a single tape cartridge or reel. See also volume.

task control block (TCB). Holds control information related to a task.

TCB. See task control block.

terabyte. 2⁴⁰ bytes, or 1 099 511 627 776 bytes. This is approximately a trillion bytes in American English.

TIOT. Task input/output table.

TMP. Terminal monitor program.

trailer label. A file or data set label that follows the data records on a unit of recording media.

transaction ID (TRANSID). A number associated with each of several request parameter lists that define requests belonging to the same data transaction.

TRC. Table reference character.

TRTCH. Track recording technique (parameter of DCB and of DD statement).

TTR. Track record address. A representation of a relative track address.
UTL. User trailer label.

V

VBS. Variable blocked spanned.

VIO. Virtual input/output.

volume. The storage space on DASD, tape, or optical devices, which is identified by a volume label. See also DASD volume, optical volume, and tape volume.

volume positioning. Rotating the reel or cartridge so that the read-write head is at a particular point on the tape.

VSAM. Virtual storage access method.

z/OS Network File System. A base element of z/OS that allows remote access to z/OS host processor data from workstations, personal computers, or any other system on a TCP/IP network that is using client software for the Network File System protocol.

z/OS UNIX System Services (z/OS UNIX). The set of functions provided by the SHELL and UTILITIES, kernel, debugger, file system, C/C++ Run-Time Library, Language Environment, and other elements of the z/OS operating system that allow users to write and run application programs that conform to UNIX standards.

zSeries File System (zFS). A UNIX file system that contains one or more file systems in a data set. zFS is complementary with the hierarchical file system.
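The stripe and striping entries describe records distributed across volumes so that the volumes can be read or written in parallel. A minimal round-robin sketch of that distribution follows; it is illustrative only, and the volume serial names are hypothetical:

```python
def distribute(records: list[str], volumes: list[str]) -> dict[str, list[str]]:
    """Assign records to stripes round-robin, one stripe per volume."""
    stripes: dict[str, list[str]] = {v: [] for v in volumes}
    for i, rec in enumerate(records):
        stripes[volumes[i % len(volumes)]].append(rec)
    return stripes

# Seven records spread across three volumes (hypothetical volume serials)
layout = distribute([f"rec{i}" for i in range(7)], ["VOL001", "VOL002", "VOL003"])
print(layout["VOL001"])  # ['rec0', 'rec3', 'rec6']
```

As the glossary notes, the records within one stripe are not logically consecutive — each volume holds every third record here — which is what allows the transfers to proceed simultaneously.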
660 z/OS V1R7.0 DFSMS Using Data Sets
Index
Numerics

16 MB line
  above 348, 410
  above, below 341
  below 348
2 GB bar
  DCB central storage address 359
  DCBE central storage address 359
  real buffer address 348, 349
24-bit addressing mode 411
2540 Card Read Punch 316
31-bit addressing
  VSAM 263
31-bit addressing mode 349, 356, 411
  buffers above 16 MB 166
  keywords for VSAM 264
  multiple LSR pools 208
  OPEN, CLOSE (non-VSAM) 341
3211 printer 514
3262 Model 5 printer 514
3525 Card Punch
  opening associated data sets 381
  record format 316
3800 Model 3 printer, table reference character 294, 297, 314
4245 printer 514
4248 printer 514
64-bit address, coding 348
64-bit addressing mode, RLS 220
64-bit virtual storage
  for VSAM RLS buffers 225
7-track tapes, VSE (Virtual Storage Extended) 513

A

abend
  EC6-FF0D 496
ABEND
  001 320, 616
  002 297
  002-68 407
  013 347, 348, 386
  013-4C 347
  013-60 392
  013-DC 310
  013-FD 329
  013-FE 329
  013-FF 329
  031 616
  039 616
  03B 616, 620
  0C4 263
  117-3C 544
  213 342
  213-FD 378
  237-0C 544
  513-10 638
  513-14 637
  913 60
  913-34 556
  937-44 556
  D37 448
ABEND macro 540, 586
abnormal termination 151
ABS value 516
absolute generation name 501
absolute track allocation 29
ABSTR (absolute track) value for SPACE parameter
  absolute track allocation 29
  ISAM 582, 588
  SMS restriction 37
ACB (access control block) 341
ACB macro
  access method control block 135
  buffer space 137, 166
  improved control interval access 187
  MACRF parameter 194
  PASSWD parameter 56
  RMODE31 parameter 166
  storage for control block 140
  STRNO parameter 173
ACCBIAS subparameter 167, 169
access method services
  allocation examples 32, 34
  ALTER LIMIT command 509
  ALTER ROLLIN command 509
  commands 16
  cryptographic option 63
  DEFINE command (non-VSAM) 501
access methods 617
  above 2 GB 17
  basic 352, 359
  BDAM (basic direct access method) 569
  create control block 136
  data management 15
  EXAMINE command 235
  indexed sequential data set 579
  KSDS cluster analysis 235
  processing signals 496
  processing UNIX files 20
  queued 364, 368
  queued, buffer control 352
  selecting, defining 4, 8
  VSAM 73, 101
  VSAM (virtual storage access method) 18
  VSAM, non-VSAM 16
access rules for RLS 228
accessibility 643
accessing
  VSAM data sets using DFSMStvs 219
  z/OS UNIX files 7
ACS (automatic class selection) 29
  assigning classes 335
  data class 29
  distributed file manager (DFM) 28
  installation data class 324
  management class 29
  SMS configuration 27
  storage class 29
ACS routines 389
actual track address
  BDAM (basic direct access method) 569
  DASD volumes 10
  direct data sets 573
  ISAM 608
  using feedback option 574
add PDS members 422
address
  accessing a KSDS's index 275
  relative
    direct data sets 573
    directories 416, 418, 441
address spaces, PDSE 479
addressed access 96
addressed direct retrieval 146
addressed sequential retrieval 145
ADDVOL command 42
ADSP processing, cluster profiles 54
AL (ISO/ANSI standard label) 12, 55
alias name
  PDS
    creating 429
    deleting 429
    directory format 417
  PDSE
    creating 467
    deleting 467, 475
    differences from PDS 441
    directory format 441
    length 439
    program object 460
    renaming 476
    restrictions 445
    storage requirements 450
ALLOCATE command
  building alternate indexes 119
  creating data sets 389
  data set allocation 16, 30
  defining data sets 104
  examples 30, 32, 34, 130
  releasing space 340
  temporary data set names 105
  UNIX files 485
allocation
  data set
    definition 16, 30
    examples 30, 34, 336
    generation 506, 508
    partitioned 420, 423
    sequential 389, 390
    system-managed 336
    using access method services 32, 34
    VSAM 270
Index 663
buffering macros
  queued access method 356
BUFFERS parameter 210
BUFFERSPACE parameter 107, 120, 166, 176
BUFL parameter 315, 316, 319, 329, 350
BUFND parameter 166, 176
BUFNI parameter 166, 176
BUFNO
  buffer pool
    construct automatically 350
BUFNO (number of buffers) 319
BUFNO parameter 402, 411
BUFOFF parameter 402
  writing format-U or format-D records 329
BUFRDS field 210
BUFSP parameter 166, 176
BUILD macro 329, 347, 348, 351, 356, 605
  buffer pool 348
  description 349
BUILDRCD macro 299, 329, 347, 349, 352
  usage 300
BWD (backward) 95
bypass label processing (BLP) 12, 55
BYPASSLLA option 424, 455

C

CA (control area) 93
  read integrity options 231
caching VSAM RLS data 220
CANCEL command 407
candidate with space amount 110
capacity record 571
card
  reader (CNTRL macro) 513
catalog 53, 501
  BCS component 235
  control interval size 157
  description 23
  EXAMINE command 235
  protection 56
  structural analysis 235
  user, examining 236
catalog damage recovery 49
catalog management 18
CATALOG parameter 59, 107, 120, 132, 133
catalog search interface 24
catalog verification 50
cataloging
  data sets
    GDG 501
    tape, file sequence number 12
cataloging data sets 501, 503
CATLIST line operator 124
CBIC (control blocks in common) 188, 196
CBUF (control block update facility) 201
CCSID (coded character set identifier) 402, 625, 628, 634
CCSID parameter 303
  decision tables 637
  QSAM (queued sequential access method) 303, 365
CDRA 625
central storage address
  DCB, DCBE 359
CF (coupling facility) 219
CF cache for VSAM RLS data 220
chained
  scheduling
    channel programs 402
    description 401
    ignored request conditions 402
    non-DASD data sets 402
chaining
  RPL (request parameter list) 139
Change Accumulation 52
channel
  programs
    chained segments 402
channel programs
  number of (NCP) 361
channel status word 526
chapter reference
  control intervals, processing 179
character
  control
    chained scheduling 402
character codes
  DBCS 567
character special files 7
CHARS parameter 314
CHECK macro 151
  BDAM 569
  before CLOSE 338
  BPAM 359
  BSAM 359
  compressed format data set 409
  DECB 360
  description 362
  determining block length 404
  end-of-data-set routine 527
  I/O operations 19
  MULTACC 403
  PDSE synchronization 445
  performance 403
  read 404
  sharing data set 372
  SYNAD routine 534
  TRUNC macro 362
  unlike data sets 397
  update 398, 434
  VSAM 150, 151
  writing PDS 422
checkpoint
  shared data sets 203
  shared resource restrictions 216
checkpoint data set
  data sets supported 381
  security 381
checkpoint/restart 216
CHKPT macro 216, 545
CI (control interval)
  read integrity options 231
CICS (Customer Information Control System)
  recoverable data sets 224
  VSAM RLS 222
CICS transactional recovery
  VSAM recoverable data sets 224
CICS VSAM Recovery (CICSVR)
  description 52
CICSVR
  description 52
CIDF (control interval definition field) 181
  control information 74
CIMODE parameter 48
ciphertext 64
CIPOPS utility 546
class specifications 32
classes
  examples 336
  JCL keyword for 335
clear, reset to empty (PDSE directory)
  STOW INITIALIZE 468
CLOSE macro 318, 341, 533
  buffer flushing 343
  description
    non-VSAM 338, 344
    VSAM 151
  device-dependent considerations 401
  multiple data sets 338
  parallel input processing 366, 368
  PDS (partitioned data set) 429, 430
  SYNAD 338
  temporary close option 338, 344
  TYPE=T 338, 344
  volume positioning 338, 345
closing a data set
  non-VSAM 338, 344
  VSAM 151
CLUSTER parameter 106, 129
cluster verification 50
clusters 104
  define, naming 104
CNTRL macro 319, 401, 402, 513
CO (Create Optimized) 170, 172
  random record access 168
COBOL applications 79
COBOL programming language 332
CODE parameter 58
coded character sets
  sorted by CCSID 625
  sorted by default LOCALNAME 628
codes
  exception 526
coding VSAM user-written exit routines 241
common service area 188
COMPACTION option 408
completion check
  asynchronous requests 150
COMPRESS parameter 408
compressed control
  information field 76
compressed format data set
  specifying block size 328
compressed format data sets 36, 408, 409
  access method 313
  CHECK macro 409
data integrity
  enhanced for sequential data sets 374
  sharing DASD 374
  sharing data sets opened for output 371
data management
  description 3
  macros
    summary 15
  quick start 318
data mode 352
DATA parameter 106, 119, 129
data set
  closing
    non-VSAM 338, 344
  compatible characteristics 395
  concatenation
    like attributes 438
    partitioned 437
  concatenation, partitioned 477, 499
  control interval access 187
  conversion 478, 479
  description 314
  name sharing 192, 195
  processing 17
  RECFM 293, 311, 314
  resource pool, connection 211
  reusable 116
  security 53
  space allocation
    indexed sequential data set 594
  SYSIN 347
  SYSOUT 347
  temporary
    allocation 269
    names 269
  unlike characteristics 394, 437
  unopened 210
  VIO maximum size, SMS managed 37
data sets
  adding
    records 164
  allocating 16
  allocation types 451
  attributes, component 112
  buffers, assigning 347
  characteristics 3
  checkpoint (PDSE) 445
  checkpoint security 381
  compress 100
  compressed format 327
    UPDAT option 398
  concatenation
    like attributes 392
    unlike attributes 396
  conversion 421, 617
  copy 64
  DASD, erasing 61
  direct 569
  discrete profile 54
  DSORG parameter 333
  duplicate names 105
  encryption 63, 65
  exporting 48
  extended, sequential 44
  extents, VSAM 111
  free space, altering 165
  guaranteed space 38
  improperly closed 50
  ISMF (interactive storage management facility) 449
  KSDS structural analysis 235
  learning names of 24
  linear extended format 113
  loading (VSAM) 113, 116
  loading VSAM data sets 165
  maximum number of volumes 37
  maximum size 37
  maximum size (4 GB) 73
  minimum size 38
  multiple cylinders 110
  name hiding 55
  naming 22
  non-system-managed 32, 33
  nonspanned records 77
  open for processing 137
  options 317
  organization
    indexed sequential 580
  organization, defined 4
  password 56
  read sharing (recoverable) 225, 226
  read/write sharing (nonrecoverable) 227
  record loading
    REPRO command 113, 115
  recovery 49
  recovery, backup 45
  request access 141
  restrictions (SMS) 28
  routing 385, 387
  RPL access 138
  security 60
  sequential
    overlapping operations 398
  sequential (extend) 399
  sequential and PDS
    quick start 318
  sequential concatenation 391
  shared
    cross-system 205
  shared (search direct) 383
  sharing 191
  sharing DCBs (data control block) 578
  small 109
  space allocation 447
    indexed sequential data set 587
    PDS (partitioned data set) 419, 420
    specifying 35, 44
  space allocation (DASD volume) 421
  space allocation (direct) 570
  spanned records 78
  summary (VSAM) 87
  SYSIN 385, 387
  SYSOUT 385, 387
  SYSOUT parameter 386
  system-managed 31
  tape 55
  temporary (BDAM, VIO) 572
  type
    VSAM 78, 101
  VIO (virtual I/O) 21
  VSAM processing 135
data storage
  DASD volumes 8
  magnetic tape 11
  overview 3
data synchronization 517
data-in-virtual (DIV) 5
DATACLAS parameter 30, 389, 506, 508
DATATEST parameter 235, 236, 238
DATATYPE option 464
DB2 striping 113
DBB-based (dictionary building blocks) 408
DBCS (double-byte character set)
  character codes 567
  printing and copying 567
  record length
    fixed-length records 567
    variable-length records 568
  SBCS strings 567
  SI (shift in) 567
  SO (shift out) 567
DCB (data control block) 519
  ABEND exit
    description 539
  ABEND exit, options 539
  ABEND installation exit 543
  address 337
  allocation retrieval list 538
  attributes of, determining 317, 337
  changing 337
  creation 317
  description 317, 325
  dummy control section 337
  exit list 535
  fields 317
  Installation OPEN exit 544
  modifying 318
  OPEN exit 543
  parameters 327
  sequence of completion 324
  sharing 578
  sharing a data set 371
DCB (DCBLIMCT) 573
DCB macro 401, 421, 422, 467, 571
DCBBLKSI (without LBI) 405
DCBD macro 337, 434
DCBE macro
  DCBEEXPS flag 377
  IHADCBE macro 338
  LBI (large block interface) 434
  MULTACC parameter 403
  MULTSDN parameter 403
  non-VSAM data set 317
  number of stripes 410
  parameters 327
  PASTEOD=YES 381
  performance with BSAM and BPAM 403
  sharing 371
DCBEBLKSI (with LBI) 405
DCBEEXPS flag 377
discrete
  profile 54
DISP parameter 372, 429, 470, 489, 552, 602
  description 346
  passing a generation 509
  shared data sets
    indexed sequential 597
distributed file manager 4
DIV macro 5, 85, 96, 144
DLF (data lookaside facility) 178, 406
DLVRP macro
  delete a resource pool 211
DO (Direct Optimized) 170, 171
  random record access 168
DSAB chain 538
DSCB (data set control block)
  data set label 561, 565
  description 564
  index (format-2) DS2HTRPR field 608
  model 506, 507
  security byte 59
DSECT statement
  DCB 337
  DCBE 338
DSNAME parameter 130, 428
DSNTYPE parameter 31, 33, 334, 444, 450
DSORG parameter 333, 339, 420, 428, 430, 571
  indexed sequential data set 584
dummy
  control section 337
  records
    direct data sets 574
DUMMY option 114
Duplicate Record condition 530
Duplicate Record Presented for Inclusion

ECB (event control block)
  description 521, 523
  exception code bits 523
ECSA (extended common service area) 479
empty sequential data set 342
ENCIPHER parameter
  REPLACE parameter 65
enciphering data 63
encryption
  data encryption keys 66
  data using the REPRO ENCIPHER command 63, 64
  using ICSF 66
  VSAM data sets 65
end of
  sequential retrieval 528
end-of-file
  software 182
end-of-file mark 422
end-of-volume
  exit 393
  processing 396
ENDREQ macro 135, 141, 147, 148, 151, 195
enhanced data integrity
  applications bypassing 376
  diagnosing enhanced data integrity violations 377
  IFGEDI task 375, 376
  IFGPSEDI member 374, 376
  restriction, multiple sysplexes 376
  setting up 374
  synchronizing 376
ENQ macro 206, 372, 373, 574
entry-sequenced 182
ENVIRONMENT parameter 115
EOD (end-of-data)

ERASE option 61
ERASE parameter 108, 125
erase-on-scratch
  DASD data sets 60, 61
  RAMAC Virtual Array 61
EROPT (automatic error options)
  DCB macro 533
error
  analysis
    logical 253
    physical 256
    register contents 531, 532
    status indicators 521
    uncorrectable 528
  conditions 613, 614
  determinate 342
  handling 368
  handling deferred writes 214
  indeterminate 342
  KSDS (key-sequenced data set) 235
  multiple regions 202
  structural 235
error analysis
  exception codes 524
  options, automatic 533
  status indicators 526
error message
  IEC983I 375, 376
  IEC984I 377
  IEC985I 377
  IGD17358I 511
ERRORLIMIT parameter 237
ESDS (entry-sequenced data set)
  alternate index structure 99
  defined 6
  extent consolidation 111
  insert record 142
  processing 80
  record access 95
in the Data Set condition 523 EODAD (end-of-data-set) routine sequential (non-VSAM) data sets 79
DW (Direct Weighted) 169, 171 BSP macro 515 ESDS (entry-sequenced data sets) 79
DYNALLOC macro 30, 450 changing address 337 ESETL (end-of-sequential retrieval)
bypassing enhanced data concatenated data sets 438 macro 528
integrity 377 description 527 description 609
SVC 99 parameter list 318 EODAD routine entered 527 ESETL macro 527
dynamic indicated by CIDF 180 ESTAE exit 578
buffering processing 391 EVENTS macro 359, 362, 403
ISAM data set 585, 596 programming considerations 246, EXAMINE command 124, 235, 237, 239
dynamic allocation 34 527 example
bypassing enhanced data receives control 469 creating a temporary VSAM data set
integrity 377 register contents 245, 246, 527 with default parameter values 131
dynamic buffering specifications 527 defining a temporary VSAM data set
direct data set 570 user exit 153 using ALLOCATE 130
Dynamic Volume Count attribute 41, 42 EODAD (end-of-data) routine exception
exit routine calling the optional DCB OPEN exit
EXCEPTIONEXIT 246 routine 345
E JRNAD, journalizing
transactions 247
code 520, 526
exit routine
EBCDIC (extended binary coded decimal
EODAD parameter 519 I/O errors 246
interchange code)
EOV (end-of-volume) register contents 246
conversion to/from ASCII 361, 364,
defer nonstandard input trailer label exception code bits
365
exit 544 BDAM (basic direct access
data conversion 17
EODAD routine entered 527 method) 524
record format dependencies 314
forcing 346, 347 EXCEPTIONEXIT parameter 108
EBCDIC (extended binary coded decimal
processing 203, 344, 347 exchange buffering 402
Interchange code)
EOV function 476
label character coding 12
ERASE macro 135, 141, 147, 148
FREE command 449 GETMAIN macro 347, 542 IBM 3380 Direct Access Storage
free control interval 281 GETPOOL macro 329, 347, 349, 350, 351, drive 400
free space 356, 605 IBM standard label (SL) 55, 561
altering 165 buffer pool 348 ICFCATALOG parameter 129
DEFINE command 162 global resource serialization 197 ICI (improved control interval access)
determining 163 global shared resources 208 ACB macro 187
optimal control interval 164 GRS (global resource serialization) 197, APF (authorized program
performance 162 371 facility) 187
threshold 164 GSR (global shared resources) 94, 101, cross-memory mode 152
FREE=CLOSE parameter 344 149 extended format data sets 187
FREEBUF macro 352, 356, 597, 598 control block structure 196 MACRF option 196
buffer control 347 subpool 208 not for compressed data set 94
description 357 GTF (generalized trace facility) not for extended format data set 89
FREEDBUF macro 578, 598 extended-format 408 SHAREOPTIONS parameter 201
FREEMAIN macro 350, 538, 542 VSAM 178 UPAD routine 259
FREEPOOL macro 315, 316, 341, 348, guaranteed user buffering (UBF) 186
350, 351 SPACE attribute 40, 41 using 187
FREESPACE parameter 83, 108 DASD volumes 38 VSAM 152, 188
full access, password 138 synchronous write 517 ICKDSF (Device Support Facilities) 369,
full page increments 448 guaranteed space allocation 110 561
full-track-index write option 585 guaranteed space attribute 92 ICSF, for encrypting data 63
IDC01700I–IDC01724I messages 237
IDC01723I message 236
G H IDCAMS print 82
IDRC (Improved Data Recording
gaps header
Capability) 326
interblock 293 index record 279
IEBCOPY
GDG (generation data group) label, user 549, 552, 564
compress 439
absolute, relative name 501 HFS data sets
compressing PDSs 415, 416, 437
allocating data sets 506, 509 defined 481
convert PDS to PDSE 421
building an index 511 FIFO special files 484
convert PDSE to PDS 421
creating a new 506 planning 483
copying between PDSE and PDS 454
deferred roll-in 509, 511 requirements 483
fragmentation 450
entering in the catalog 501, 503 restrictions 483
PDS to PDSE 478
limits 509 type of UNIX file system 20, 481
PDSE back up 479
naming conventions 505 hierarchical file system
SELECT (member) 478
retrieving 504, 510 UNIX file system 20
space, reclaim 479
ROLLOFF/expiration 510 Hiperbatch
IEBIMAGE 514, 546
GDS (generation data set) DLF (data lookaside facility) 406
IEC034I message 395
absolute, relative name 501 not for extended-format data set 406
IEC127D message 546
activating 509 performance 406
IEC129D message 546
passing 509 QSAM 18
IEC161I message 171
reclaim processing 510 Hiperspace 18
IEC501A message 553, 556
roll-in 509, 511 buffer
IEC501E message 553, 556
GDS_RECLAIM keyword 511 LRU (least recently used) 171
IEC502E message 556
GENCB ACB macro 194 LSR 208
IECOENTE macro 553
GENCB macro 135, 137, 138, 140, 141, SMBHWT 168
IECOEVSE macro 556
173, 263 HOLD type connection 425
IEF630I message 451
general registers 554 horizontal pointer index entry 280
IEFBR14 job step 113
generation
IEHLIST program 608
index, name 501
IEHLIST utility 25, 588, 617
number
relative 501, 506
I IEHMOVE program 417, 418
I/O (input/output) IEHPROGM utility program
generic
buffers generation data group
profile 54
managing with shared building index 511
generic key 146
resources 212 muiltivolume data set creation
GET macro 207, 302, 346, 365, 393, 394,
sharing 207 error 588
397, 398, 404, 434, 436, 527, 530
space management 166 PROTECT command 59
description 364
control block sharing 207 SCRATCH control statement 61
parallel input 366
error recovery 369 tape, file sequence number 12
GET_ALL function 459
journaling of errors 214 IFGEDI task, starting 375, 376
GET-locate 352, 355
overlap 359 IFGPSEDI member
pointer to current segment 390
sequential data sets, overlapping excluding data sets 376
GETBUF macro 352, 356
operations 398 setting mode 374
buffer control 347
I/O data sets, spooling 385 IGDSMSxx PARMLIB member
description 356
I/O status indicators 520, 526 GDS_RECLAIM 511
GETIX macro
PDSESHARING 479
processing the index 275
JRNAD exit routine KSDS (key-sequenced data set) leading tape mark tape 12
back up data 48 (continued) LEAVE option 339, 346
building parameter list 250 insert record close processing 338
control interval splits 248 description 85 tape last read backward 345
deferred writes 214 sequential 142 tape last read forward 345
example 249 inserting records LERAD exit routine 153
exit, register contents 247 description 82 error analysis 253
journalizing transactions 137, 248 logical records 82 register contents 254
recording RBA changes 248 record (retrieval, storage) 177 level sharing
shared resources 214 sequential access 95 directory, member 470
transactions, journalizing 137 structural errors 235 like
values 214 VSAM (virtual storage access concatenation 393, 394
method) 408 BSAM block size 394
KU (key, update) data sets 392
K coding example 598
defined 361
DCB, DCBE 393
like concatenation 396
key
read updated ISAM record 597 LIKE keyword 31, 94, 409, 450, 506, 508
alternate 100
linear data sets 79, 96
compression 85, 284
processing 144
front 282
rear 282 L link field 605
link pack area (LPA) 456
control interval size 160 LABEL parameter 13, 59, 326, 345, 550
linkage editor
data encryption 66 label validation 343
note list 418
field label validation installation exit 327
LISTCAT command 16, 103, 112, 128,
indexed sequential data set 580 labels
160, 209
file, secondary 66 character coding 12
LISTCAT output
indexed sequential data set DASD, volumes 8
VSAM cylinders 111
adding records 600, 604 direct access
LISTCAT parameter 130
retrieving records 596, 601 DSCB 561, 565
load mode
RKP (relative key position) 585, format 561
BDAM (basic direct access
605 user label groups 564
method) 578
track index 580, 583 volume label group 562, 563
QISAM
key class exits 549, 552
description 580
key prefix 608 tape volumes 12
loading
key length large format data sets
VSAM data sets
reading a PDSE directory 445 allocating 31
REPRO command 113, 115
key-range data sets BSAM access 5
local locking, non-RLS access 229
restriction, extent consolidation 111 characteristics 412
local shared resources 208
key-sequenced 182 closing 413
locate
key-sequenced data sets free space 414
mode
ISAM compatibility interface 612 opening 413
parallel input 366
keyboard 643 processing 411
mode (QSAM) 475
keyed direct retrieval 146 QSAM access 6
LOCATE macro 24
keyed sequential retrieval 144 size limit 412
locate mode 78, 352
keyed-direct access 95, 97 last-volume
buffers 355
keyed-sequential access 95, 96, 97 extend 399
QSAM (queued sequential access
KEYLEN parameter 210, 313, 334, 445, LBI (large block interface)
method) 390
571 BLKSIZE parameter 327
records exceeding 32 760 bytes 300
KEYS parameter 107, 120, 132 block size merge 325
lock manager (CF based) 219
keyword parameters converting BSAM to LBI 434
locking unit 205
31-bit addressing, VSAM 263 DCB OPEN exit 319
logging for batch applications 52
KILOBYTES parameter 107, 129 determining BSAM block length 404
logical
KN (key, new) 598 JCL example 396
error analysis routine 253
KSDS (key-sequenced data set) 79, 163 like concatenation 396
logical block size 449
alternate index 97, 98 performance 401
logical end-of-file mark 444
buffers 173 recommendation 319
logical record
CI, CA splits 227 requesting 328
control interval 84
cluster analysis 235 system-determined block size 333,
length, SYSIN 386
control interval 403
LookAt message retrieval tool xviii
access 179 using larger blocks
lower-limit address 608
data component 146 BPAM 328
LPA (link pack area) 456
defined 6 BSAM 328
LRD (last record) 95
extent consolidation 111 QSAM 328
LRECL parameter 114, 130, 334, 365,
free space 160 writing short BSAM block 405
386, 585, 607
index LDS (linear data set)
coding in K units 310
accessing 275 allocating space for 110
records exceeding 32 760 bytes 300
processing 275 defined 6
index options 177 extent consolidation 111
NSR (nonshared resources) (continued) overflow (continued) PDS (partitioned data set) (continued)
NSR subparameter 149 PRTOV macro 514 description 416, 419
Sequential Optimized (SO) 171 records 584 processing macros 423
Sequential Weighted (SW) 169, 171 overflow area 581 reading 438
NSR subparameter 211 Overflow Record condition 523 directory (size limit) 416
NTM parameter 584 overlap directory, updating 429
NUB (no user buffering) 180 input/output extents 437
NUIW parameter 210 performance 402 locating members 423, 424
null data set 342 overlap I/O 435 macros 424, 429
null record segments overlapping operations 398 maximum number of volumes 37
PDSE (partitioned data set number of extents 38
extended) 447 processing 18
P quick start 318
retrieving members 430, 434, 469
padded record
O end-of-block condition 306
rewriting 436, 437
space allocation 419, 420
O/EOV (open/end-of-volume) variable-length blocks 307
structure 415
nonspecific tape volume mount page
updating member 434, 436
exit 553 real storage
PDS and PDSE differences 441
volume security/verification exit fixing 188
PDSDE (BLDL Directory Entry) 458
described 556, 558 page size
PDSE (partitioned data set extended)
OAM (object access method) 6, 27 physical block size 449
ABEND D37 448
object page space 123
address spaces 479
improved control interval access 187 PAGEDEF parameter 314
allocating 31, 33, 450
object access method 6 paging
block size 442
OBJECT parameter 141 excessive 177
block size, physical 449
OBROWSE command 82, 496 paging operations
concatenation 477, 499
OBTAIN macro 24, 82, 606 reduce 359
connection 454
offset reading 361 paper tape reader 316
convert 478
OPEN macro 398, 400, 421, 474 parallel
convert (PDS, PDSE) 478
connecting program to data set 317 input processing 365
converting 421
control interval processing 187 parallel data access blocks (PDAB) 366
creating 450, 454
data sets 341 Parallel Sysplex-wide locking 229
data set compression 449
description 323, 327 partial release 109
data set types 450
EXTEND parameter 325 partitioned concatenation
defined 5, 439
functions 323, 327 including UNIX directories 499
deleting 475
multiple 341 partitioned table spaces (DB2) 113
directory 456, 465
options 325, 327 PARTREL macro 61, 82, 340
BLDL macro 455
parallel input processing 366 PASS disposition 346
description 441, 442
protect key of zero 350 PASSWD parameter 56
indexed search 440, 442
resource pool, connection 211 password
reading 477
RLS rules 228 access 56
size limit 442
OPEN TYPE=J macro 24 authorization checking 60
directory (FIND macro) 463
tape, file sequence number 13 authorize access 138
directory structure 443
OPEN UPDAT LABEL parameter 59
directory, read 476
positioning 471 non-VSAM data sets 59
directory, update 467
OPTCD parameter 365, 401, 513, 558, prompting 58
DYNALLOC macro 450
571, 574, 585 protection precautions 57
extended sharing protocol 472
control interval access 180 VSAM data sets 56
extents 449, 477, 499
master index 584 PATH parameter 485, 489
fixed-length blocked records 445
OPTCD=B path verification 50
fragmentation 450
generate concatenation 397 PATHENTRY parameter 132
free space 449
OPTCD=C option 403 PATHOPTS parameter 489
full block allocation 448
OPTCD=H PC (card punch) record format 315, 316
integrated directory 448
VSE checkpoint records 402 PDAB (parallel data access block) 558
logical block size 449
optimal block size 329 PDAB (parallel data access blocks) 366
macros 455, 468
OUTDATASET parameter 64, 129, 133 PDAB (parallel input processing) 365
maximum number of volumes 37
OUTFILE parameter 64 PDAB macro 535
member
OUTIN option 44 work area 366
add record 475
OUTIN parameter 516 PDF directory entry 416
retrieving 468
OUTINX option 44 PDS (partitioned data set)
members
OUTINX parameter 516 adding members 422
adding, replacing 452
output buffer, truncate 356 concatenation 437, 438
members (multiple) 453
OUTPUT option 44 converting to and from PDSE 421,
multiple system sharing 471
output stream 385, 387 478
multiple-system environment 473
overflow creating 420, 423
NOTE macro (TTRz) 465
area 581, 584 defined 5, 415, 417
null segments 300
chain 602 directory 424
QSAM (queued sequential access RACF (Resource Access Control Facility) READ macro (continued)
method) (continued) (continued) direct data set 361
updating (continued) checkpoint data sets 381 existing records (ISAM data sets) 597
PDS member 436 control 53 spanned records, keys 361
user totaling 558 DASD data sets, erasing 61 read sharing
using buffers 352 DASDVOL authority 561 integrity across CI and CA splits 227
queued access method erase DASD data 60 recoverable data sets 226
buffer name hiding 55 read short block
control, pool 347 protection 54 extended-format data set 405
buffering macros 356 RACF command 55 READPW parameter 132
sequential data set 356 read 53 real buffer address 348, 349
queued indexed sequential access STGADMIN.IFG.READVTOC.volser real storage
method 5 facility class 55 for VSAM RLS buffers 226
queued sequential access method update 53 reblocking
description 6 z/OS Security Server 53 records
quick reference RAMAC Virtual Array 62 PDSE 440
accessing records 359 randomizing reblocking records
backup, recovery 45 indirect addressing 571 PDSE (partitioned data set
CCSIDs 625 RBA (relative byte address) 79, 80, 94, extended) 445
data control block (DCB) 317 95, 109, 115, 137, 139, 145, 146, 149, 151 RECATALOG parameter 107
data sets, introducing 3 JRNAD RECFM (record format)
DBCS, using 567 parameter list 214 fixed-length 306
direct access labels, using 561 recording changes 248 ISO/ANSI 306
direct access volume, space 35 locate a buffer pool 215 magnetic tape 313
direct data sets, processing 569 RBA (relative record number) parameter
generation data groups, slots 182 card punch 315, 316
processing 501 RBN (relative block number) 11, 409 card reader 315, 316
I/O device control macros 513 RD (card reader) 315, 316 sequential data sets 293
indexed sequential data sets 579 RDBACK parameter 325, 326, 552 sequential data sets 313
ISAM program, VSAM data sets 611 RDF (record definition field) 114 spanned variable-length 298, 301
JCL, VSAM 265 format 182 undefined-length 311
KSDS Cluster Errors 235 free space 85 variable-length 296, 307
KSDS, index 275 linear data set 85 RECFM parameter 334
magnetic tape volumes 11 new record length 85 RECFM subparameter 295
non-VSAM data sets, RECFM 293 records 75 reclaiming generation data sets
non-VSAM data sets, sharing 371 slot 86 overview 510
non-VSAM user-written exit structure 183, 185 procedure 511
routines 519 RDJFCB macro 24 recommendation
PDS, processing 415 allocation retrieval list 519, 538 extending data sets during EOV
PDSE, processing 439 BLKSZLIM retrieval 330 processing 345
protecting data sets 53 JFCB exit 547 recommendations
sequential data sets 389 tape, file sequence number 13 block size calculation 35
sharing resources 207 UNIX files 495 catalog, analyzing 237
spooling and scheduling data RDW (record descriptor word) 402 EXAMINE command, use VERIFY
sets 385 data mode exception before 237
UNIX, processing 481 spanned records 297 record
using 31-bit addressing, VSAM 263 description 297 access
using SMS 27 extended logical record interface 311 KSDS (key-sequenced data
VSAM data set prefix 307 set) 95, 97
define 103 segment descriptor word 300 access, password 138
examples 127 updating indexed sequential data access, path 149
organizing 73 set 600 adding to a data set 164
processing 135 variable-length records format-D 307, average length 420
sharing 191 309 block
VSAM performance 157 read boundaries 486
VSAM RLS 219 access, data set names 55 control characters 311
VSAM user-written exit routines 241 access, password 138 control interval size 158
quick start backward, truncated block 295 data set address 9
data sets forward definition field 75, 182
sequential, PDS 318 SF 400 deleting 147
integrity, cross-region sharing 198 descriptor word (see BDW) 297
integrity, VSAM data set 231, 233 direct data sets 576
R READ macro 302, 360, 373, 397, 398,
400, 403, 404, 434, 435, 527, 534, 598
direct retrieval 146
ESDS (entry-sequenced data set) 142
R0 record
basic access method 361 fixed-length
capacity record data field 571
block processing 359 full-track-index write option 585
RACF (Resource Access Control Facility)
description 361 parallel input 365
alter 53
restrictions (continued) RLS parameter security
PDSE (continued) CR subparameter 233 APF protection 53, 60, 62
processing 444 CRE subparameter 233 cryptographic 53, 62
PRINT command, input errors 125 NRI subparameter 233 O/EOV security/verification
sharing 620 RlsAboveTheBarMaxPoolSize exit 556, 558
sharing violations 470 keyword in IGDSMSxx PARMLIB password protection 53, 55, 60
SMS member 226 RACF protection 53
absolute track allocation 37 RLSE parameter 340, 449 security (USVR) 261
ABSTR value for SPACE RLSE subparameter 109 segment
parameter 37 RlsFixedPoolSize buffer 347
STEPCAT statement 47, 618 keyword in IGDSMSxx PARMLIB control code 299
system-managed data set 47, 618 member 226 descriptor word
tapes, Version 3 or 4 327 RLSWAIT exit 255 indicating a null segment 300
TRKCALC macro 607 RLSWAIT exit routine 254 spanned records 299
UNIX files 486 RLT (Record Locator Tokens) 443 null 300
UNIX files, simulated VSAM roll-in, generation PDSE restriction 300
access 81 ALTER ROLLIN command 509, 511 Selective Forward Recovery 52
UPDAT option, compressed-format reclaim processing 510 Sequence Check condition 530
data set 398 routine sequence set record
update-in-place, compressed-format exit index entries 278
data set 398 VSAM user-written 241 sequence-set record
VSAM data set processing 7 RPL (request parameter list) 149 format 279
VSAM data sets coding guidance 242 free-control-interval entry 280
concatenation not allowed in create 138 index entries 280
JCL 18 exit routine correction 243 RBA 280
VSAM, space constraint relief 42 parameter 180 sequential
resume load mode transaction IDs 213 access
extending RPL macro 135, 140 RRDS 96, 97
indexed sequential data set 587 RRDS (relative record data set) 143 processing control interval size 159
partially filled track or cylinder 587 defined 6 sequential access buffers 176
QISAM 580 free space 108 sequential bias 169
resume loading 584 hexadecimal values 186 sequential concatenation
retained locks, non-RLS access 230 variable-length 158 data sets 391
retention period 336 RRDS (relative-record data set) 79, 86, read directories sequentially
RETPD keyword 336 96 PDSs/PDSEs 477
retrieve extent consolidation 111 UNIX directories 499
sequential data sets 390 RRN (relative record number) 47, 86, UNIX files 499
retrieving 113 sequential data set
generation data set 504, 510 run-mode, VSAM RLS 231 concatenation 437
PDS members 430, 434 device
PDSE members 468, 469 control 513, 516
records
directly 596
S modify 398
queued access method 356
S99NORES flag 377
sequentially 595 update-in-place 398
SAA (Systems Application
RETURN macro 533, 544, 545 sequential data sets 389
Architecture) 21
reusable VSAM data sets 116 chained scheduling 401
SAM (sequential access method)
REUSE parameter 64, 107, 117, 120 device
buffer space 347
REWIND option 338, 346 independence 399, 401
null record segments 447
rewriting enhanced data integrity 374
SBCS (single-byte character set) 567
PDS (partitioned data set) 437 maximum (16 extents) 406, 412
scan mode 580, 597
RKP (relative key position) modify 399
SCHBFR macro
parameter 585, 605 number of extents 38
description 215
RLS (record level sharing) quick start 318
SCRATCH macro 82
access rules 228 read 403
IEHPROGM utility program 61
index trap 234 record
scratch tape requests
RLS (record-level sharing) 103, 117 retrieve 390, 391
OPEN or EOV routines 519
accessing 228 record length 404
SDR (sustained data rate) 92
accessing data sets 219 striping 39
SDW (segment descriptor word)
CF caching 220 sequential data striping
conversion 309
read integrity options 231 compared with VSAM striping 39
description 299
run-mode requirements 231 extended-format data sets 410
format-S records 309
setting up resources 219 migrating extended-format data
format-V records 299
specifying read integrity 233 sets 410
location in buffer 361
timeout value for lock requests 233 sequential insertion 142, 143, 144
secondary
RLS Above the 2–GB Bar sequential millisecond response 169
space allocation 38, 92, 110
ISMF Data Class keyword 225 serialization
storage devices 3
SYSZTIOT resource 538
secondary key-encrypting keys 66
striped SYNAD exit routine (continued) tape data sets (continued)
alternate index 93 synchronous creating with file sequence number >
CA (control area) 93 programming considerations 533 9999 13
data sets, SMS 44 temporary close restriction 338 TAPEBLKSZLIM keyword 330
extended-format sequential data SYNAD exit routines 535 tasks
sets 37 SYNAD parameter 519 <gerund phrase>
multivolume VSAM data sets 39 SYNAD routine steps for 492
VSAM data sets 38 CHECK routine 359 bypassing enhanced data integrity,
VSAM, space constraint relief 42 SYNADAF macro 329, 533, 534, 615 applications 376
striped data sets description 368, 369 copying PDS 421
extended-format data set 391 example 623 copying PDSE 454
multi 409 message format 368, 369 copying UNIX files
multistriped 410 SYNADRLS macro 534 OCOPY command 494
number of buffers 319 description 369 OGET command 494
partial release request 409 SYNCDEV macro 445, 479, 517 OGETX command 495
RLS (not supported) 230 synchronizing data 517 OPUT 494
sequential data 406 synchronous mode 150 OPUTX 494
single-striped 410 SYS1.DBBLIB 408 steps for 494
striped VSAM data set SYS1.IMAGELIB data set 514 creating a UNIX macro library
maximum number of extents 93 SYS1.PARMLIB steps for 489
striping IFGPSEDI member 374, 376 creating UNIX files
data (layering) 91 SYSIN data set 385 steps for 485
DB2 (partitioned table spaces) 113 input stream 386 diagnosing enhanced data integrity
guaranteed space attribute 41 logical record length 386 violations 377
Hiperbatch 406 routing data 387 displaying UNIX files and directories
sequential data 406 SYSOUT data set 385 steps for 492
sequential data (migrating control characters 311, 387 enhanced data integrity, setting
extended-format data sets) 410 routing data 385, 387, 515 up 374
space allocation 92 sysplex reclaiming generation data sets
VSAM data 89 volumes assignments for PDSEs 473 overview 510
STRMAX field 210 sysplex, enhanced data integrity 376 procedure 511
STRNO parameter 173, 176 system setting up enhanced data integrity
structural analysis, data sets 235 cross multiple systems 376
SUBALLOCATION parameter 109 sharing 205 overview 374
subgroup, point determined block size termination, QSAM 343, 401
note list 422 tape data sets 331 temporary
subpool enhanced data integrity 376 close option 338, 344
resources input stream 385, 387 data set names 269
shared 208 output stream 385, 387 TEMPORARY attribute 48
subpool 252 system-determined block size 329 temporary file system
user key storage 350 different data types 328 accessing 7
subpool, shared resources 217 SYSVTOC enqueue 345 defined 481
subtask sharing 192 SYSZTIOT resource 538 UNIX file system 20
SUL (IBM standard user label) 12 exit routines TESTCB ACB macro 194
suppression, validation 343 DCB OPEN exit 543 TESTCB macro 135, 140
SW (Sequential Weighted) 169, 171 request parameter list 213
switching from 24-bit addressing TFS (temporary file system) 7
mode 411
symbolic links, accessing 7
T TFS files
defined 481
tape
SYNAD (physical error exit) 153 type of UNIX file system 20
data set
SYNAD exit routine 136, 152, 385, 524, time sharing option 63
system-determined block size 331
526, 528, 529 timeout value for lock requests 233
density 313
add records TIOT chain 538
end 363
ISAM data set 602 TIOT option 391
exceptional conditions 363
analyzing errors 256 track
labels
changing address in DCB 337 capacity 158
identifying volumes 12
deferred, write buffers 214 format 8
library 3, 4, 27, 29
example 257 index
mark 15, 363
exception codes 530 entries 582
recording technique (TRTCH) 394
macros used in 368, 369 indexed sequential data set 581
to disk
programming considerations 256 resume load 587
create direct data sets 572
register contents 531 number on cylinder 587
update direct data sets 577
DCB-specified 615 overflow 9
to print 390
entry 256, 532 TRACKS parameter 107
volume positioning 338
SETL option 609 trailer label 397, 549
tape data sets
sharing a data set 373 transaction ID
creating with file sequence
relate action requests 213
number 13
volumes (continued) VSAM (virtual storage access method) VTOC (volume table of contents)
positioning (continued) (continued) (continued)
releasing volume 344 linear data sets pointer 563
security/verification exit 556, 558 description 85 reading data set names 55
separation data (index) 178 logical record retrieval 73 VTS (Virtual Tape Server) 27, 29
switching 362, 364, 391 lookaside processing 148 VVDS (VSAM volume data set) 107, 124
system-residence 60 mode (asynchronous, VVR (VSAM volume record) 125
unlabeled tape 14 synchronous) 150
VOLUMES parameter 107, 120, 129, 132, non-RLS access to data sets 227
274
VRRDS (variable-length RRDS) 87
number of extents 38
performance improvement 157
W
WAIT macro 359, 362, 365, 397, 403, 523,
VSAM (virtual storage access method) processing data sets 18
569, 576, 578, 598
31-bit addresses programming considerations 241
description 363
buffers above 16 MB 166 relative-record data sets
WAREA parameter 140
keywords 264 variable length records 87
WORKFILES parameter 121
multiple LSR pools 208 relative-record data set
write integrity, cross-region sharing 199
31-bit addressing 263 accessing records 96, 97
WRITE macro 302, 360, 373, 398, 400,
addressing mode (31-bit, 24-bit) 17 relative-record data sets
405, 418, 421, 422, 434, 435, 444, 456,
allocate data sets 32 fixed-length records 86, 87
470, 516, 527, 533, 534, 571, 575, 602, 604
allocating space for data sets 108 variable-length records 87
block processing 359
alternate index 97 reusing data sets 116
description 361
backup program 48 RLS
K (key) 597
buffer 157 timeout value for lock
READ request 574
catalog (generation data group requests 233
S parameter 329
base) 501 using 219
write validity check 335
CICS VSAM Recovery 52 RLS CF caching 220
WRITECHECK parameter 108
cluster (replacing) 48 sample program 153, 154
writing
control interval 107 Sequential Weighted (SW) 171
buffer 213
control interval size 157 shared information blocks 201
WRTBFR macro 212, 217
converting from ISAM to VSAM 617 specifying read integrity 233
deferred, writing buffers 213
Create Optimized (CO) 172 sphere 195
WTO macro 533
Create Recovery Optimized (CR) 172 string processing 166
data set striping 39, 113
access 94 structural analysis 235
types 101 temporary data set 269 X
data set (logical record retrieval) 73 UNIX files 485 XDAP macro 21
data sets upgrading alternate indexes 100 XLRI (extended logical record interface)
defining 103 user-written exit routines using 310
types 78 coding guidelines 242
description 6 functions 241
DFSMStvs access 219
Direct Weighted (DW) 171
volume data sets 107
VSAM data sets
Z
z/OS Security Server 53
entry-sequenced data set 95 Extended Addressability 38
z/OS UNIX files
description 79 striped 38, 39
accessing 7
entry-sequenced data sets VSAM user-written exit routines
defined 5
description 81 coding 241
processing 20
error analysis 136 data sets 243
zFS (zSeries file system) 7
EXAMINE command 235 guidelines for coding 241
zSeries File System
extending a data set 111 multiple request parameter lists 243
accessing 7
extending data 110 programming guidelines 242
defined 481
Hiperspace 18 return to a main program 243
type of UNIX file system 20
I/O buffers 166 VSE (Virtual Storage Extended)
UNIX file system 20
ICI (improved control interval embedded checkpoint records 514,
access) 188 516
ISAM programs for processing 613 chained scheduling 402
JCL DD statement 265 embedded checkpoint records
key-sequenced data set 95 (BSP) 515
examining for errors 235 embedded checkpoint records
index processing 275 (CNTRL) 513
key-sequenced data sets tapes 402
description 82, 85 VSI blocks
KSDS (key-sequenced data set) 408 cross-system sharing 201
levels, password data component 206
control 53 VTOC (volume table of contents) 8, 561
master 53 description 23
read 53 DSCB (data set control block) 564
update 53 ISAM data set 582