OPC Scheduler
Vasfi Gucer
Stefan Franke
Wilhelm Hillinger
Peter May
Jari Ypya
Paul Rodriguez
Henry Daboub
Sergio Juri
ibm.com/redbooks
SG24-6013-00
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix H, Special notices on page 395.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Job scheduling in the enterprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Overview and basic architecture of OPC . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 OPC overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 OPC concepts and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 OPC architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Overview and basic architecture of TWS . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 TWS overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.2 TWS concepts and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 TWS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Tivoli Enterprise product structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Tivoli Management Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 Tivoli Management Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 Toolkits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Benefits of integrating OPC 2.3 and TWS 7.0 . . . . . . . . . . . . . . . . . . . . . . 9
1.6 A glimpse into the future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
8.4.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
8.4.2 Submitting a Job Stream/Job/Command into TWS via CLI . . . . . . . . 248
8.4.3 Create jobs for TWS Extended Agent for OS/390 . . . . . . . . . . . . . . . 248
8.4.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8.4.5 Demonstration run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.5 User interaction with OPC and TWS . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
8.5.1 Network for the scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
8.5.2 The solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
8.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Figures
41. Client Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
42. Add Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
43. File Browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
44. Install Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
45. Client Install confirmation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
46. Patch installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
47. Install Patch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
48. Install Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
49. Media selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
50. Product selection for install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
51. Install TWS Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
52. Multiple TWS Connector instance scenario . . . . . . . . . . . . . . . . . . . . . . . 70
53. Tivoli Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
54. JSC Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
55. Select language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
56. Choose Install folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
57. Shortcut location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
58. Installation successful message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
59. JSC start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
60. Starting the JSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
61. JSC password prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
62. JSC release level notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
63. User preferences file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
64. Open Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
65. Starting with the JSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
66. Creating a Job Instance list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
67. Job instance properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
68. Choosing job instance status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
69. Listing errored jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
70. Making a Ready List for a JCL prep workstation . . . . . . . . . . . . . . . . . . . 94
71. Ready List for a JCL Prep Workstation . . . . . . . . . . . . . . . . . . . . . . . . . . 95
72. Job properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
73. Changing job status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
74. Creating a new Job Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
75. Job stream properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
76. Defining jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
77. Job (operation) details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
78. Job (operation) icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
79. Defining internal dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
80. Dependency arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
81. Scheduling the job stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
82. Using rules ISPF dialog panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
83. Using rules JSC dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Preface
The beginning of the new century sees the data center with a mix of work,
hardware, and operating systems previously undreamt of. Today's challenge
is to manage disparate systems with minimal effort and maximum reliability.
People experienced in scheduling traditional host-based batch work must
now manage distributed systems, and those working in the distributed
environment must take responsibility for work running on the corporate
OS/390 system.
This redbook considers how best to provide end-to-end scheduling using
Tivoli Operations Planning and Control, Version 2 Release 3 (OPC), and
Tivoli Workload Scheduler, Version 7 Release 0 (TWS).
In this book, we provide the information to install the necessary OPC and
TWS software components and configure them to communicate with each
other.
In addition to technical information, we will consider various scenarios that
may be encountered in the enterprise and suggest practical solutions. We will
describe how to manage work and dependencies across both environments
using a single point of control.
The team would like to express special thanks to Warren Gill and Cesare
Pagano for their major contributions to this book. They did a great job in
finding the contacts, reviewing the material, and, most importantly,
encouraging us throughout the whole redbook process.
Thanks also to Finn Bastrup Knudsen and Doug Specht from IBM for
reviewing the book for us.
We also would like to thank the following people for their invaluable
contributions to this project:
Maria Pia Cagnetta, Anna Dawson, Eduardo Esteban, Jamie Meador, Geoff
Pusey, Bob Rodriguez, Craig Sullivan, Jose Villa
Tivoli Systems
Caroline Cooper, Morten Moeller
International Technical Support Organization, Austin Center
Theo Jenniskens, Wolfgang Heitz, Ingrid Stey
IBM
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Please send us your
comments about this or other Redbooks in one of the following ways:
Fax the evaluation form found in IBM Redbooks review on page 417 to
the fax number shown on the form.
Use the online evaluation form found at ibm.com/redbooks
Send your comments in an Internet note to [email protected]
Chapter 1. Introduction
This redbook provides practical examples for using Tivoli Operations
Planning and Control, Version 2 Release 3 (OPC), and Tivoli Workload
Scheduler, Version 7 Release 0 (TWS 7.0), to provide end-to-end scheduling
in an industry environment.
We will discuss the basic architecture of the OPC and TWS products and
show how they work with the Tivoli Framework and the Job Scheduling
Console (JSC). The JSC is a common graphical user interface (GUI)
introduced in these versions of the products, which gives the user one
standard interface with which to manage work on both OPC and TWS.
We will recommend techniques for the installation of OPC, TWS, Tivoli
Framework, and JSC and provide troubleshooting tips. We will document how
to achieve a successful integration between the Tivoli Framework and JSC.
Job scheduling in the enterprise will be considered; several real-life scenarios
will be examined and best practice solutions recommended.
The solution is to use OPC and TWS working together across the enterprise.
The Job Scheduling Console provides a centralized control point with a single
interface to the workload regardless of whether that workload is running
under OPC or TWS.
Plans
OPC builds operating plans from your descriptions of the production
workload. First, a long-term plan (LTP) is created, which shows (usually, for
one or two months) the job streams (applications) that should be run each
day and the dependencies between job streams. Then, a more detailed
current plan is created. The current plan is used by OPC to submit and
control jobs (operations). You can simulate the effects of changes to your
production workload, calendar, and installation by generating trial plans.
Job streams
A job stream is a description of a unit of production work. It includes a list of
the jobs (related tasks) associated with that unit of work. For example, a
payroll job stream might include a manual task in which an operator prepares
a job, several computer-processing tasks in which programs are run to read a
database, update employee records, and write payroll information to an
output file, and a print task that prints paychecks.
Workstations
OPC supports a range of work process types, called workstations, that map
the processing needs of any task in your production workload. Each
workstation supports one type of activity. This gives you the flexibility to
schedule, monitor, and control any data center activity, including:
Job setup (manual and automatic)
Jobs
Started tasks
NetView communication
Print jobs
Manual preprocessing or postprocessing activity.
Special resources
You can use OPC special resources to represent any type of limited resource,
such as tape drives, communication lines, or a database. A special resource
can be used to serialize access to a dataset or to limit the number of file
transfers on a network link. The resource does not have to represent a
physical object in your configuration, although it often does.
Dependencies
Version 2.3 introduced a new Java GUI, the Job Scheduling Console (JSC).
The JSC provides a common interface to both OPC and TWS.
MONTHEND, containing a list of each last business day of the month for the
current year, and a calendar, named HOLIDAYS, containing a list of your
company's holidays. At the start of each processing day, TWS automatically
selects all the job streams that run on that day and carries forward incomplete
job streams from the previous day.
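As an illustration only, a job stream that uses these two calendars might be defined with the composer CLI roughly as shown below. The workstation MDM, the job stream name, and the job REPORTJOB are invented, and the exact keywords should be checked against the TWS documentation; the sketch assumes the classic composer schedule syntax, where ON selects run dates from a calendar and EXCEPT removes dates.

   schedule MDM#MONTHLYRPT
   on MONTHEND
   except HOLIDAYS
   :
   REPORTJOB
   end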
Workstations
A workstation is usually an individual computer on which jobs and job
streams are executed. A workstation definition is required for every computer
that executes jobs in the TWS network. Primarily, workstation definitions refer
to physical workstations.
However, in the case of extended agents and network agents, the
workstations are logical definitions that must be hosted by a physical TWS
workstation.
There are several types of workstations in a TWS network:
Job Scheduling Console Client
Any workstation running the Job Scheduling Console GUI can manage the
TWS plan and database objects. The Job Scheduling Console works like a
remote console and can be installed on a machine that does not have the
TWS engine installed.
Master Domain Manager
The Domain Manager in the topmost domain of a TWS network. It contains
the centralized database files used to document scheduling objects. It creates
the production plan at the start of each day and performs all logging and
reporting for the network.
Domain Manager
The management hub in a domain. All communications to and from the
agents in a domain are routed through the Domain Manager.
Backup Domain Manager
A backup domain manager is a fault-tolerant agent capable of assuming the
responsibilities of its Domain Manager.
Fault-tolerant Agent
1.4.3 Toolkits
Toolkits are supplied to enable customers to extend the functions of
applications and develop new applications using standard APIs.
events will be lost and the current plan will not be updated accordingly. See
also DOC APAR PQ07742.
If you are not sure what your MAS looks like, go to SDSF and enter:
/$D MEMBER
You will receive output similar to that shown in the following screen:
For JES2 exits, which have been reassembled, at least a JES2 hotstart must
be performed in order to fetch the new version.
must be two to four characters. All the Tivoli OPC subsystem names, as
defined in the SYS1.PARMLIB member IEFSSNnn, must be unique within a
GRS complex. Also, the Tivoli OPC subsystem names of the OPC Controllers
must be unique within your OPCplex/OPC network (both local and remote
systems). See also DOC APAR PQ19877 for more details.
module name is the name of the subsystem initialization module, EQQINITD
(for Tivoli OPC 2.3.0).
maxecsa defines the maximum amount of extended common service area
(ECSA) that is used to queue Tivoli OPC job-tracking events. The value is
expressed in kilobytes (1 KB equals 1024 bytes). The default is 4, which
means that a maximum of 4 KB (4096 bytes) of ECSA storage is needed to
queue Tivoli OPC job-tracking events. When a tracker subsystem is down, the
events are buffered in common storage until the tracker is restarted; so, if the
size of ECSA is not sufficient, you will lose events and message EQQZ035E
is written in the message log. The size of the ECSA depends on your
installation and the expected amount of possible buffered events. The
MAXECSA table, on page 53 of the Installation Guide, shows you the
relationship between the MAXECSA value and the number of events that can
be held in common storage.
suffix is the module name suffix for the EQQSSCM module that EQQINITD
loads. The suffix must be a single character. Because the name of the
module shipped with Tivoli OPC is EQQSSCMD, you should specify a suffix
value of D.
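Putting these pieces together, an IEFSSNnn entry for an OPC subsystem might look like the following sketch. It assumes the keyword form of the subsystem definition and uses the default maxecsa value of 4 and the suffix D described above; check your own Installation Guide sample for the exact statement used at your level.

   SUBSYS SUBNAME(OPCA)        /* 2- to 4-character OPC subsystem name */
          INITRTN(EQQINITD)    /* subsystem initialization module      */
          INITPARM('4,D')      /* maxecsa,suffix                       */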
before working with the OPC panel library. Table 1 lists the ISPF and OPC
datasets.
Table 1. ISPF and OPC dialog datasets

DDNAME     OPC use            created by / found in
SYSPROC    Clist library      SEQQCLIB
ISPPROF
ISPPLIB    panel library      SEQQPxxx
ISPMLIB    message library    SEQQMxxx
ISPSLIB                       EQQJOBS option 2
ISPTLIB    read tables        SEQQTBL0
Note
If you use the ISPF command table, EQQACMDS, invoke Tivoli OPC as a
separate ISPF application with the name, EQQA. If you want to use a
different ISPF application name, such as EQQB, create a command table
with the name EQQBCMDS. If necessary, you can modify or create an
ISPF command table using ISPF/PDF Option 3.9. Note that ISPF/PDF
Option 3.9 writes the created or modified table to the dataset allocated to
the ISPTABL.
If you notice that dialog commands, such as RIGHT or LEFT, no longer
work, the invoked ISPF application name does not match the command
table in use.
There is a useful TSO command, called ISRDDN, that you can use to check
the current dialog allocations. All you have to do is enter the TSO ISRDDN
command from any OPC panel. It might be useful to press the Help key to get
familiar with all the functions of ISRDDN. In addition, TSO ISRDDN can help
you easily find all the libraries in your allocation that contain a specific
member.
The next screen shows an example of TSO ISRDDN output.
                                                              Scroll ===> PAGE
Ddname    Disposition,Act    Data Set Name            List Actions: B E V F C I Q
ADMPCF    SHR,KEEP  >        GDDM.SADMPCF
ISPLLIB   SHR,KEEP  >        OPCESA.V2R3M0.SEQQLMD0
ISPLOG    MOD,KEEP  >        SFRA4.SPFTEMP.LOG
ISPMLIB   SHR,KEEP  >        OPCESA.V2R3M0.SEQQMSG0
          SHR,KEEP  >        VS4.ISPF.MLIB
          SHR,KEEP  >        ISP.SISPMENU
          SHR,KEEP  >        ISF.SISFMLIB
ISPPLIB   SHR,KEEP  >        OPCESA.V2R3M0.SEQQPENU
          SHR,KEEP  >        OPCESA.V2R3M0.SEQQPNL0
          SHR,KEEP  >        ISP.SISPPENU
          SHR,KEEP  >        ISF.SISFPLIB
          SHR,KEEP  >        CPAC.ISPPLIB
          SHR,KEEP  >        MK4.USER.PENU
must have a TCP/IP server installed. Figure 2 shows the dataflow from the
JSC to the Controller subsystem with all the necessary components.
Platform          Operating system
                  Microsoft Windows NT Version 4.0 with SP 3 or later
                  Microsoft Windows 95
                  Microsoft Windows 98
RS/6000           AIX
SPARC System      Sun Solaris
In addition, you need to have the following software installed on your machine
or a machine that you can access:
On Windows:
a. From the Start menu, select the Run option to display the Run
dialog.
b. In the Open field, enter F:\Install.
On AIX:
a. Type the following command:
jre -nojit -cp install.zip install
On Sun Solaris:
a. Change to the directory where you downloaded install.zip before
running the installer.
b. Enter sh install.bin.
The splash window, shown in the Figure 3, is displayed.
Note that you can also select languages other than English from this window.
On Windows:
Depending on the shortcut location that you specified during installation,
click the JS Console icon or select the corresponding item in the Start
menu.
You can also start the JSC console from the command line. Just type
runcon from the \bin\java subdirectory of the installation path.
On AIX:
Type ./AIXconsole.sh.
On Sun Solaris:
Type ./SUNconsole.sh.
A Tivoli Job Scheduling Console start-up window is displayed as shown in
Figure 4.
Type in the Login name and press Enter. Then, select Set & Close as shown
in Figure 7 on page 24.
4. Enter the name of the group. This field is used to determine the GID under
which many operations are performed. Then select Create and Close.
This is shown in Figure 8 on page 25.
You will see the administrator panel again with the newly-defined Tivoli
Administrator, as shown in Figure 9 on page 26.
Platform          Operating System
RS/6000           AIX
HP 9000           HP-UX 10.x and 11
SPARC System      Sun Solaris
where:
node is the name or the ID of the managed node on which you are
creating the instance. The name of the TMR server is the default.
engine_name is the name of the new instance.
address is the IP address of the OS/390 system where the OPC
subsystem to which you want to connect is installed.
port is the port number of the OPC TCP/IP server to which the OPC
Connector will connect.
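The command syntax itself did not survive in this copy, but judging from the worked example later in this chapter, creating an instance has roughly this shape (the angle-bracketed values are the parameters just described):

   wopcconn -create -h <node> -e <engine_name> -a <address> -p <port>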
You can also run the wopcconn utility in interactive mode. To do this, perform
the following steps:
1. At the command line, enter wopcconn with no arguments.
2. Select choice number 1 in the first menu.
Authorization Roles
To manage OPC Connector instances from a TMR server or Managed node,
you must be a Tivoli Administrator. To control access to OPC, the TCP/IP
server associates each Tivoli administrator to a RACF user. For this reason, a
Tivoli Administrator should be defined for every RACF user.
Each Tivoli administrator has one or more roles. To use or manage OPC
Connectors, you need the following roles:
user
- To use the instances
- To view instance settings
admin, senior, or super
- To perform all actions available to the user role
- To create and remove instances
- To change instance settings
- To start and stop instances
You also use the wopcconn utility to create, stop, start, restart, and remove
an instance, and to change instance settings, where:
node is the name or the object ID (OID) of the managed node on which
you are creating the instance. The TMR server name is the default.
engine_name is the name of the new or existing instance.
object_id is the object ID of the instance.
new_name is the new name for the instance.
address is the IP address of the OS/390 system where the OPC
subsystem to which you want to connect is installed.
port is the port number of the OPC TCP/IP server to which the OPC
Connector must connect.
trace_level is the trace detail level, from 0 to 5. trace_length is the
maximum length of the trace file. You can also use wopcconn in interactive
mode. To do this, just enter the command, without arguments, on the
command line.
Example
We used a Tivoli OPC V2R3 with the IP address 9.39.62.19. On this machine,
a TCP/IP Server uses port 3111. ITSO7 is the name of the TMR server
where we installed the OPC Connector. We called this new Connector
instance OPC.
The following command created our instance:
wopcconn -create -h itso7 -e opc -a 9.39.62.19 -p 3111
Name          : OPC
Object ID     : 1929225022.1.1289#OPC::Engine#
Managed node  : itso7
Status        : Active
OPC version   : 2.3.0
Trace length  : 524288
Trace level   : 3

0. Exit
//OPCSP   EXEC PGM=EQQSERVR,REGION=0M,TIME=1440
//*********************************************************************
//* THIS IS A SAMPLE STARTED TASK PROCEDURE FOR AN OPC TCP/IP SERVER
//* IT SHOULD BE REVIEWED AND MODIFIED AS REQUIRED
//* TO SUIT THE NEEDS OF THE INSTALLATION.
//*** MOD 210601 BY MICHAELA ***
//*********************************************************************
//STEPLIB DD DISP=SHR,DSN=OPCESA.V2R3M0.SEQQLMD0
//SYSTCPD DD DISP=SHR,DSN=TCPIP.IV4.TCPPARMS(TCPDATA)
//EQQMLIB DD DISP=SHR,DSN=OPCESA.V2R3M0.SEQQMSG0
//EQQMLOG DD DISP=SHR,DSN=OPC.V2R3M0.MLOGS
//EQQPARM DD DISP=SHR,DSN=OPC.V2R3M0.PARM(OPCSP)
//SYSMDUMP DD DISP=MOD,DSN=OPC.V2R3M0.SYSDUMPS
//EQQDUMP DD DISP=SHR,DSN=OPC.V2R3M0.EQQDUMPS
//*
Be aware that the calendar definition is required but missing in the OPC
documentation. The calendar definition should be as follows:
SERVOPTS SUBSYS(OPCA)        /* OPC Controller the server connects */
         USERMAP(MAP1)
         PROTOCOL(TCPIP)     /* Communication protocol */
         PORTNUMBER(3111)
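The member is cut off here by a page break. The complete EQQPARM member shown in Figure 10 later in this chapter ends with the following statements, so the calendar definition referred to above is presumably this INIT statement:

         CODEPAGE(IBM-037)
   INIT  CALENDAR(DEFAULT)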
Which solution you can use to map Tivoli administrators to RACF users
depends on your security product and its level:

Security Product          Level                      Solution                Prerequisite
Security Server (RACF)    OS/390 V2R6 or later       TMEADMIN                None (TMEADMIN class provided in OS/390 base)
Security Server (RACF)    RACF V2.2.0 on MVS V5R2.2  TMEADMIN                UW37652 (PTF for RACF V2.2.0)
Other SAF compliant                                  TMEADMIN                Manually define TMEADMIN class (using EQQ9RFE and EQQ9RF01 samples)
In every case                                        OPC ID mapping table
We want to point out that every RACF user who has update authority to this
usermap table may get access to the OPC subsystem. For your security
planning, the usermap table is therefore an important resource to protect.
We used the OPC ID mapping table in our environment because it is an easy
and convenient way to grant access to the OPC subsystem without the need
for deep RACF knowledge.
Figure 10 shows the relationship between the EQQPARM member and the
usermap.
EQQPARM(XXX)
SERVOPTS SUBSYS(OPCA)
USERMAP(MAP1)
PROTOCOL(TCPIP)
PORTNUMBER(3111)
CODEPAGE(IBM-037)
INIT CALENDAR(DEFAULT)
EQQPARM(MAP1)
We want to demonstrate the path from the JSC logon, through the Tivoli
Framework to the TCP/IP server:
1. A user, May, opens a JSC logon panel in order to log on to the OPC
Controller as shown in Figure 11.
3. The usermap on the OPC TCP/IP server maps the user Tivoli Framework
Administrator, May, to a RACF user ID, SFRA4; therefore, access is
granted.
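The usermap member itself is not reproduced here. A minimal sketch of what such a MAP1 entry might look like, assuming the USER ... RACFUSER(...) statement format and treating the administrator name and RACF group as purely illustrative values, is:

   /* EQQPARM member MAP1: map Tivoli administrators to RACF user IDs */
   USER 'May@itso7-region' RACFUSER(SFRA4) RACFGROUP(TIVOLI)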
2.16 Summary
In this chapter, we have covered the installation of OPC V2R3 Controller, Job
Scheduling Console (JSC), OPC Connector, and TCP/IP Server support. We
also gave examples of how to create a Tivoli Administrator, which is required
to access the JSC.
You may refer to the Tivoli OPC V2R3 Installation Guide, SH19-4379, for
more information on installing these modules.
On Windows NT:
Windows NT version 4.0 with Service Pack 4 or higher.
NTFS partition with approximately 200 MB free disk space is
recommended. The amount of space needed may increase if you have
many jobs and save log files for a long time.
TCP/IP network.
A TWS user account is required for proper installation. You can create the
account beforehand or have the Setup program create it for you.
On Unix:
At least 250 MB of free disk space is recommended. The amount of space
needed may increase if you have many jobs and save log files for a long time.
Master and Domain managers save more information to disks.
f. Log in to localhost using telnet and change the password. This change
must be made because AIX, by default, requires you to change the
user password at the first login:
$ telnet localhost
2. Log in as root, create the file system, and type smitty storage.
a. Select File Systems
b. Select Add / Change / Show / Delete File Systems
c. Select Journaled File Systems
d. Select Add a Journaled File System
e. Select Add a Standard Journaled File System
f. Select rootvg in the Volume Group name field and press enter as seen
in the following.
                     Add a Standard Journaled File System

                                                        [Entry Fields]
  Volume group name                                      rootvg
  SIZE of file system (in 512-byte blocks)              [500000]
  MOUNT POINT                                           [/opt/maestro]
  Mount AUTOMATICALLY at system restart?                 yes
  PERMISSIONS                                            read/write
  Mount OPTIONS                                          []
  Start Disk Accounting?                                 no
  Fragment Size (bytes)                                  4096
  Number of bytes per inode                              4096
  Allocation Group Size (MBytes)                         8

F1=Help      F2=Refresh    F3=Cancel    F4=List
Esc+5=Reset  F6=Command    F7=Edit      F8=Image
F9=Shell     F10=Exit      Enter=Do
5. Change the directory to TWS home and untar the installation package:
$ cd /opt/maestro
$ tar -xvf /cdrom/TIVOLI/AIX/MAESTRO.TAR
if [ -x /opt/maestro/StartUp ]
then
        echo "netman started..."
        /bin/su - maestro -c "/opt/maestro/StartUp"
fi
Or, to start the entire TWS process tree (typically, on the TWS Master):
if [ -x /opt/maestro/bin/conman ]
then
        echo "Workload Scheduler started..."
        /bin/su - maestro -c "/opt/maestro/bin/conman start"
fi
9. Configure the TWS Master. Log in to the TWS Master as maestro user.
Use the composer command to add TWS Workstations and SFinal Job
Stream to the database:
$ composer
- new
cpuname MDM
os UNIX
node 10.69.14.8
description "Master Domain Manager"
for Maestro
autolink on
resolvedep on
fullstatus on
end
10.Create a new symphony file that includes the Master Domain Manager
workstation definition. To do this, add the final job stream to the production
cycle. This job stream contains the Jnextday job, which automates the
creation of the symphony file.
$ composer add Sfinal
$ Jnextday
You can now begin defining additional scheduling objects in the CLI,
including workstations, jobs, and job streams, or you can continue to
install the TWS Connector and the Job Scheduling Console.
Note that new objects are not recognized until the Jnextday job runs in the
final job stream. By default, the final job stream runs at 5:59am. If you
want to incorporate new scheduling objects sooner, you can run Jnextday
manually as in the preceding step 11 or use the conman submit command.
For information about defining your scheduling objects, refer to the Tivoli
Workload Scheduler 7.0 Plus Users Guide, GC31-8401.
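As a sketch of what such definitions look like in the composer CLI, the following adds a job and a job stream that runs it every day at 01:00. The job name, script path, and times are invented for illustration, and job and job stream definitions can also be added in separate composer sessions if you prefer:

   $ composer
   - new
   $jobs
   MDM#DBBACKUP
    scriptname "/opt/maestro/scripts/dbbackup.sh"
    streamlogon maestro
    description "nightly database backup"
    recovery stop

   schedule MDM#NIGHTLY
   on everyday
   at 0100
   :
   DBBACKUP
   end

As noted above, the new objects are not picked up until Jnextday runs or until you submit them into the current plan with conman.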
1. Any oservs currently running on the systems you are about to install must
be shut down (only for previously installed versions of Tivoli Enterprise)
2. The names of the Tivoli Managed Nodes must be included in the
/etc/hosts file, the Windows NT LMHOSTS file, the NIS host map, or the
name server.
3. All potential Tivoli users must have at least read access to the Tivoli
directories.
If your company has not already installed Tivoli Framework, the following are
the detailed instructions to install the TMR Server on Windows NT. For more
information and instructions to install on UNIX platforms, you may refer to the
manual, TME 10 Framework 3.6 Planning and Installation Guide, SC31-8432.
If you will install Tivoli Management Framework only for TWS, OPC, and JSC,
you may perform the following steps:
1. Insert the Tivoli Framework 3.6 CD-ROM.
2. Select Start->Run from the Windows NT Desktop, type D:\setup.exe, and
press Enter where D:\ is the CD-ROM drive.
3. Click Next to start installation as shown in Figure 13.
5. Press Yes to add advanced user rights to the TME Administrator as shown
in Figure 15.
6. The window, shown in Figure 16, instructs you to log out and back in.
Logoff Windows and log in as Administrator. Execute D:\setup.exe again.
7. Type in the user information in the window shown in Figure 17 on page 45.
9. You can leave the following Remote User File Access window empty since
we do not intend to access remote file systems.
10.In the window, shown in Figure 20, select the Typical install option. You
can use the browse button to select a different installation directory if
necessary.
11.In the window, shown in Figure 21 on page 47, enter the TME 10 License
Key. If you do not have it already, you may request it. Along with your TME
10 software, you should have received a TME 10 License Key Request
Form. Complete this form and return it to your customer support provider.
12.Enter the location of the Tivoli database directory in the window shown in
Figure 22.
13.You will see an MS-DOS window that will report the database installation
status as shown in Figure 23 on page 48. Press any key after the
initialization completes.
14.Select Yes in the window, shown in Figure 24, to restart your computer
and then log in as Administrator user.
19.In the window, shown in Figure 27, you will see the Installation Complete
message.
20.After installing Tivoli Framework 3.6, you need to install 3.6.2 patches.
Insert the Tivoli Framework 3.6.2 Patch CD-ROM.
21.Start the Tivoli Desktop from Windows by selecting Start->Tivoli->Tivoli.
22.Log in to the local machine. Use your Windows IP address or host name
as shown in Figure 28.
24.You will probably get the error message, shown in Figure 30, because the
program could not find the Installation Images from the default directory.
Just click OK.
27.You will receive a window similar to the one in Figure 33 on page 54. Click
Continue Install.
2. The user login name must be the same as the TWS user in the host that
you are connecting through JSC. In the screen, shown in Figure 35 on
page 56, we entered maestro@itso8 where itso8 is our TWS Master.
5. Next, you have to set the Managed Resources for that Policy Region.
Right-click on the TWS Region and select Managed Resources as
shown in Figure 38.
7. Install Managed Node for your TWS Master. In this case, the node is
itso8.dev.tivoli.com. Log on to TWS Master, create a file system size of
200 MB, and mount it to the directory, /Tivoli. Refer to Section 3.4,
Installing TWS Engine 7.0 on UNIX on page 39, to see how to create a
file system.
8. Mount the Tivoli Framework 3.6 CD in the TMR Server.
9. Double click on the TWS Policy Region, and then select Create
ManagedNode as shown in Figure 39 on page 58.
10.Insert the root users password of the node you are installing, and then
select Add Clients as shown in Figure 41.
11.In the Add Clients window, shown in Figure 42, type itso8.dev.tivoli.com
for the node name, and then select Add & Close.
12.Next, you have to select the media from which to install. Click on your TMR
Server and the /cdrom directory as shown in Figure 43.
13.In the Install Options window, shown in Figure 44 on page 61, you need to
specify the correct directories for Tivoli Framework components. You can
see our settings there. If the directories that you specify do not exist, you
can activate the When installing, create Specified Directories if
missing checkbox.
You can also check the Arrange for start of the Tivoli daemon at
system (re)boot time checkbox if you want to start Tivoli Framework
automatically after a (re)boot.
Press Set and then Close after making your selections.
14.In the next window, shown in Figure 45 on page 62, click Continue Install
to begin the actual process.
15.You probably want to use Tivoli Framework 3.6.2; so, you have to install
patches. Unmount the Tivoli Framework 3.6 CD-ROM and mount the 3.6.2
TMF Patch CD-ROM.
Select Desktop->Install->Install Patch to install Tivoli Framework Patch
3.6.2 as shown in Figure 46 on page 63.
16.In the Install Patch window, shown in Figure 47 on page 64, select Tivoli
Management Framework 3.6.2. Maintenance Release, and then select
the clients to install the patch.
Click Install & Close after making your selections.
17.Next, you need to install Job Scheduling Services to the TMR Server and
the TWS Master. Unmount Tivoli Framework 3.6.2 Patch CD-ROM and
mount the TWS CD-ROM.
18.Select Desktop->Install->Install Product to install Job Scheduling
Services as shown in Figure 48 on page 65.
20.First, select Tivoli Job Scheduling Console. Install to both the TWS
Master and TMR Server nodes.
After making your selections, click Install & Close, as shown in Figure 50
on page 67. This will install the Tivoli Job Scheduling Console to the TWS
Master and the TMR Server.
21.Next, you need to install the TWS Connector to the TMR Server and TWS
Master. Select Desktop->Install->Install Product from the Tivoli
Desktop, and then choose Tivoli TWS Connector, as shown in Figure 51
on page 68. You will be prompted with an Install Options panel. Do not
create an instance in this phase because we install two clients at a time,
and instance names have to be unique. You can create instances later
from the command line using the wtwsconn.sh utility.
Press Set and then Close to finish the installation of TWS Connector.
Note
You can verify the installation of the TWS Connector using the Tivoli
Framework wlookup command. The TWS Connector creates the following
Framework classes, which are visible with the wlookup command.
MaestroEngine
MaestroPlan
MaestroDatabase
For example, you can run wlookup as follows:
wlookup -R | grep Maestro
MaestroDatabase
MaestroEngine
MaestroPlan
22.Next, you need to create a TWS Connector instance for each TWS engine
that you want to access with the Job Scheduling Console. To create TWS
Connector instances, you must be a Tivoli administrator with admin,
senior, or super authorization roles. To create an instance, use the
wtwsconn.sh utility on each node where you
installed the TWS Connector that you need to access through the Job
Scheduling Console.
The wtwsconn.sh command is invoked to create, remove, view, stop, and set
TWS Connector instances.
Log in to the TMR Server as root user and add an instance to the TWS
Master using the command shown in the following screen.
$. /etc/Tivoli/setup_env.sh
$wtwsconn.sh -create -h itso8 -n master -t /opt/maestro
Scheduler engine created
Created instance: itso8, on node: itso8
MaestroEngine 'maestroHomeDir' attribute set to: /opt/maestro
MaestroPlan 'maestroHomeDir' attribute set to: /opt/maestro
MaestroDatabase 'maestroHomeDir' attribute set to: /opt/maestro
1. Log in to TWS Master as maestro user and dump current security to a file:
$ dumpsec > mysec
2. Modify the security file. Add the TMF Administrator user into the file. In our
case, the user is Root_itso7-region. Execute vi mysec from a command
line and add Root_itso7-region as shown in the following screen.
USER MAESTRO
    CPU=@+LOGON=maestro,root,Root_itso7-region
BEGIN
    USEROBJ   CPU=@   ACCESS=ADD,DELETE,DISPLAY,MODIFY,ALTPASS
    JOB       CPU=@   ACCESS=ADD,ADDDEP,ALTPRI,CANCEL,CONFIRM,DELDEP,DELETE,DISPLAY,KILL,MODIFY,RELEASE,REPLY,RERUN,SUBMIT,USE
    SCHEDULE  CPU=@   ACCESS=ADD,ADDDEP,ALTPRI,CANCEL,DELDEP,DELETE,DISPLAY,LIMIT,MODIFY,RELEASE,REPLY,SUBMIT
    RESOURCE  CPU=@   ACCESS=ADD,DELETE,DISPLAY,MODIFY,RESOURCE,USE
    PROMPT            ACCESS=ADD,DELETE,DISPLAY,MODIFY,REPLY,USE
    FILE      NAME=@  ACCESS=CLEAN,DELETE,DISPLAY,MODIFY
    CPU       CPU=@   ACCESS=ADD,CONSOLE,DELETE,DISPLAY,FENCE,LIMIT,LINK,MODIFY,SHUTDOWN,START,STOP,UNLINK
    PARAMETER CPU=@   ACCESS=ADD,DELETE,DISPLAY,MODIFY
    CALENDAR          ACCESS=ADD,DELETE,DISPLAY,MODIFY,USE
END
~
~
"mysec" 13 lines, 699 characters
3. Stop the instances. Log in TWS Master as maestro user. Instances start
automatically when needed by JSC.
5. Use the makesec command to compile and install the new operational
Security file with the following command:
$ makesec mysec
Use dumpsec and makesec every time you want to modify the Security file.
Changes to the Security file take effect when TWS is stopped and restarted.
To have consistent security on all TWS Workstations, you should copy
/opt/unison/Security to the other TWS Engines. Remember to stop the
Connector instances after updating the Security file, and also stop and start
the TWS Engines, or wait until the next Jnextday run.
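A possible sequence for one such maintenance cycle is sketched below; it assumes rcp is available between the workstations and uses itso9 purely as an example fault-tolerant agent:

   $ conman "stop"                                  # stop TWS on the Master
   $ dumpsec > mysec
   $ vi mysec                                       # add or change entries
   $ makesec mysec
   $ rcp /opt/unison/Security itso9:/opt/unison/Security
   $ conman "start"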
Windows 98
AIX 4.2.1 or 4.3
Sun Solaris 2.6 or 2.7
HP-UX 10.x or 11.0
Perform the following steps to install the Job Scheduling Console:
1. Log in as root or Administrator.
2. Insert the Tivoli Job Scheduling Console CD-ROM into the system
CD-ROM drive, or mount the CD-ROM from a drive on a remote system.
For this example, the CD-ROM drive is drive D.
3. Run the installation command:
On Windows:
From the Start menu, select the Run... option to display the Run dialog.
Enter d:\jsgui\windows\install.exe in the Open field.
On AIX:
Set the DISPLAY variable if you are logged in from a remote machine as
shown in the following screen:
$ export DISPLAY=10.68.14.100:0
$ sh install.bin
InstallAnywhere is preparing to install...
Please choose a Java virtual machine to run this program.
(These virtual machines were found on your PATH)
--------------------------------------------------------
1. /usr/bin/java
2. /usr/bin/jre
3. Exit.
Please enter your selection (number): 1
4. In the next window, shown in Figure 53 on page 74, you can select the
language of JSC. The default is English. Press OK after making your
selection.
7. In the next window, shown in Figure 55 on page 75, you can select the
languages you want to install.
8. Next, you need to choose the install folder as shown in Figure 56.
10.The next window, shown in Figure 58, shows that the installation was
successful.
Modify the JAVAPATH to point to the directory in which your Java files
reside.
JAVAPATH=/usr/jdk_base
12.Start JSC:
In Windows:
- DoubleClick JSC icon.
In AIX:
- It is recommended that you create a shell script that starts JSC. Just
create a file, for example, one named JSC and add the lines in the
following screen.
$ vi /usr/bin/jsc
cd /opt/maestro/JSConsole/bin/java
./AIXconsole.sh
- Just enter jsc from a window, and the Job Scheduling Console will
start.
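If you create the wrapper script by hand as shown above, it also needs execute permission before the jsc command will work:

   $ chmod 755 /usr/bin/jsc
   $ jsc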
3.8 Summary
In this chapter, we covered the installation of our TWS environment. At a
minimum, the installation requires the following steps:
1. Install the TWS Master and Workstations.
2. Install the TMR Server (either on a machine separate from the Master or
on the same machine).
3. If your TWS Master is on a different machine than the TMR Server, create
a Managed Node on the host on which you will install the TWS Master.
4. Install Job Scheduling Services and TWS Connector on the TMR Server
and the TWS Master.
5. Create a Tivoli Administrator for TWS.
6. Set the security of TWS Workstations.
7. Install the Job Scheduling Console on all nodes where you need to control
production using a GUI.
Figure 59 on page 80 shows the JSC start window. The processors, known as
engines, being managed at the ITSO are listed in the left pane. They are:
OPC - OPC-controlled OS/390 work
itso7 - TWS-controlled AIX work
itso9 - TWS-controlled AIX work
master - TWS-controlled AIX work
OPC represents an OPC system running on an OS/390 processor in Mainz,
Germany. The others are TWS systems running on AIX processors in Austin,
Texas.
Clicking on the key symbol to the left of the name expands the menu.
As the JSC starts, you are asked to enter the location of the engine on which
the Tivoli Framework is running and your password as shown in Figure 61 on
page 82.
As the JSC starts, it briefly displays a window with the JSC level number, as
shown on Figure 62 on page 83.
The first time the JSC loads, the window shown in Figure 63 is displayed.
When you click OK, the window, shown in Figure 64 on page 84, is shown.
This lets you copy a pre-customized user profile, which could contain list
views appropriate to the user. It could also be used to turn off the showing of
the next window, shown in Figure 65.
This window gives you the option to read the online tutorial. Unless you turn
the future display of this window off by ticking Don't show this window
again, it will be shown every time you start the JSC. The tutorial encourages
you to create a workstation and then a job stream (OPC application).
It is very unlikely that you will want your users to do this in a production
database, and you may consider it worthwhile to load a pre-customized
standard profile file onto users' machines when you install the JSC. Our
recommendation is for you to configure your JSC lists to your data center
standard preferences with the intention of using these settings as the default
initialization file. Once you have set them up, save them and install the file on
users' machines when you install the JSC.
The file name is GlobalPreferences.ser, and it can be found in the JSC
directory, JSConsole/dat/.tmeconsole/USER@NODE_LANGUAGE, in
Windows or in $HOME/.tmeconsole/USER@NODE_LANGUAGE in UNIX
(where USER is the user name used to log into TWS; NODE is the node to
log in on, and LANGUAGE is your language settings in TWS). It will be about
20 KB in size, depending on the number of list views you have created.
(There is another GlobalPreferences.ser file in the tivoliconsole subdirectory
that is about 1 KB in size. It is not the one that contains the changed
preferences).
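For example, on UNIX the pre-customized profile could be copied into a user's profile directory like this; the staging path and the USER@NODE_LANGUAGE directory name are only illustrations of the pattern described above:

   $ cp /usr/local/jsc/GlobalPreferences.ser \
        $HOME/.tmeconsole/maestro@itso8_en_US/GlobalPreferences.ser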
Note
JSC client uses Windows regional settings to display dates and times.
Change the regional settings from the Control Panel according to your
country. After rebooting Windows, dates and times will be shown in the
selected format.
4.2.2.1 Database
Access the calendar, period, operator instruction, Event-triggered Tracking,
and JCL variable databases.
4.2.2.2 Long Term Plan
All Long Term Plan related tasks.
4.2.2.3 Plan - JCL Edit.
Editing jobs or correcting JCL on failed jobs. Using a JCL preparation
workstation to prep jobs.
4.2.2.4 Plan - New Occurrence
Add a new occurrence (job stream instance) into the plan.
4.2.2.5 Plan - Ended in Error list
A view of operations in error status can be created on the JSC, but richer
functionality is still available from the ISPF Ended in Error panel. This
includes JCL edit, application rerun from a selected operation, step-level
restart, joblog browse, automatic recovery attempt, remove, delete, and
complete from group.
4.2.2.6 Batch jobs
Invoke OPC batch jobs; so, you can extend or modify the Long Term and
Current Plans.
4.2.2.7 Service Functions
Access OPC Service Functions; so, you can stop or start Job Submission,
Automatic Recovery, and Event-Triggered Tracking, or do a Long Term Plan
refresh.
The following table maps OPC terminology to the equivalent JSC terms:

OPC                                JSC                      Explanation
Application description            Job stream               A sequence of jobs, including the
                                                            resources and workstations that
                                                            support them, and scheduling
                                                            information.
Application Group
Current plan                       Plan
External Dependency                External job
In-effect date for run cycles      Valid from
Input arrival time
Negative run cycle
Occurrence                         Job stream instance
OPC Controller                     Engine
Operation                          Job
Operation number                   Job identifier
Operations in the current plan     Job instances
Out-of-effect date for run cycles  Valid to
Run cycle with offsets             Offset-based run cycle
Run cycle with rules               Rule-based run cycle
Special resources                  Logical resources
Status: Complete                   Successful
Status: Delete                     Canceled
Status: Started                    Running
Task                               Task
Click on the icon that represents your OPC system in the Job Scheduling
pane on the left of the JSC window. Then, use the right mouse button and
click on Default Plan Lists. Select Create Plan List on that menu, and Job
Instance on the next menu as shown in Figure 66.
Change the name and select the internal status radio button to display OPC
statuses. Select error status as shown in Figure 68 on page 92.
You can use filter fields to limit the error list display to one workstation, and
you can choose to define a periodic refresh rate. Here, we have chosen to
refresh every 120 seconds. Save the new list with the OK button.
Note
An automatic periodic refresh time interval is available for each query filter
and may also be defined at the topmost level, to be used by all query
filters.
Now, the new Ended-in-Error list appears in the Job Scheduling pane, and
when you click on it, the right pane displays the job streams containing jobs in
error as shown in Figure 69.
Use the same process to make the Ready Lists. Select the relevant statuses
and workstation as shown in the Figure 70 on page 94.
The JCL prep workstation identifier is GX, and the statuses selected are
Arriving, Ready, and Interrupted. Once it is saved, when you click on the list
in the Job Scheduling pane, you see the same jobs you would see on the
OPC Ready List for that workstation as shown in the Figure 71 on page 95.
To change the status, use the right mouse button and click on the operation.
You will see a window like the one shown in Figure 72 on page 96.
Select Set Status, and you will see the window shown in Figure 73 on page
97.
You can change the status to C and, thus, release successor jobs, but you
cannot edit JCL from the JSC.
4.2.4.2 Creating a Job Stream (Application)
Using the JSC to create a new Job Stream in the OPC Database should
cause no problems for anyone who can create an application using the OPC
ISPF dialog panels. The same information is required, but the look and
position of the fields are different. The JSC offers benefits of a graphical
interface, while the text-based ISPF dialogs may be faster for people who are
familiar with them. However, the JSC does have two major facilities that are
not available with the ISPF dialogs.
You can view all planned occurrences of a job stream with multiple
run-cycles. Whereas the ISPF dialog command, GENDAYS, calculates only
the effect of one rules-based run cycle, the JSC shows the combined
effect of all run cycles, both rules- and period/offset-based.
You can create the jobs (operations) and make internal and external
dependencies using a graphical view.
To create a job stream, right-click on the icon for the OPC engine in the left
pane, and then select New Job Stream as shown in Figure 74.
Selecting New Job Stream produces two windows as shown in the Figure 75
on page 99.
Most of the operation details can be entered in the window shown in Figure
77, except for what is perhaps the single most important one. To enter the job
name, you must click on Task in the menu on the left.
Note
If you do not set an operation start time in Time Restrictions, the time is
defaulted to 00:00 on day 0 when the job stream is saved. This will give
you a warning icon in the time line view indicating time inconsistencies. To
prevent this, after saving the job stream, you should immediately update
the job stream and tick the Follow Job Stream Rules box, which will
remove the 00:00 setting.
Once you save this new operation, an icon appears in the frame as shown in
Figure 78 on page 102.
More operations may be created in the same way. When there is more than
one job in a job stream (operation in an application), all must be linked
together with dependencies. This is easily done by clicking on the appropriate
icon on the menu bar as shown in Figure 79 on page 103.
To schedule the application to run Every Wednesday in the Year, only one
selection needs to be made in the Frequency section. The JSC requires that,
in addition to Only or Every, a frequency must be selected as shown in Figure
83 on page 107.
The rule cannot be saved and the OK button remains greyed out until at least
one selection is ticked in each of the Periods, Available days, and Available
types of days. To get the same result as in the ISPF dialog, choose 1st.
The JSC will display all calculated run days on either a monthly calendar, as
shown in Figure 84 on page 108, or an annual calendar.
The run days are shown colored pale blue. A yearly calendar can be selected
by clicking on the tab at the top and is shown in Figure 85 on page 109.
If there are multiple run cycles, the display will show all of the calculated run
days, and, by clicking on the run cycle name on the left, the days it has
scheduled are highlighted with a dark blue bar as shown in Figure 86 on page
110.
This display is a very useful resource for the application builder and the
scheduler, but it must always be remembered that this is a calculation of run
days. Actual scheduling is done by the Long Term Plan batch job.
The command line interface (CLI) is used for certain advanced features.
Some of the capabilities of the CLI are not available in the Job Scheduling
Console. The Job Scheduling Console and CLI are independent and can be
run simultaneously to manipulate TWS data. Sometimes, the CLI is simply
faster, and you do not need any graphical displays to use it.
$ conman
MAESTRO for UNIX (AIX)/CONMAN 7.0 (1.4) (C) Tivoli Systems Inc. 1998
Installed for group 'DEFAULT'.
Locale LANG set to "en_US"
Schedule (Exp) 06/08/00 (#82) on MASTER. Batchman LIVES. Limit: 200, Fence: 0, Audit Level: 1
%sc
CPUID    RUN NODE          LIMIT FENCE DATE     TIME  STATE  METHOD   DOMAIN
MASTER    82 *UNIX MASTER    200     0 06/08/00 15:48 I J             MASTERDM
AIXMVS    82  OTHR X-AGENT    10     0 06/08/00 15:48 LXI JX mvsopc   MASTERDM
ITSO6     82  UNIX FTA        10     0 06/08/00 15:49 LTI JW          MASTERDM
ITSO7     82  UNIX FTA        10     0 06/08/00 15:49 LTI JW          MASTERDM
R3XA1     82  OTHR X-AGENT     0     0 06/08/00 15:49 LHI JH r3batch  MASTERDM
ITSO11    82  WNT  MANAGER    10     0                LT              NTDOMAIN
%sbd master#"test";alias="testi23"
Submitted MASTER#test to batchman as MASTER#JOBS.TESTI23
%sj master#jobs.@
                                      (Est)  (Est)
CPU      Schedule Job      State Pr Start  Elapse  Dependencies
MASTER  #JOBS
In database views, you can work with:
- Workstations
- Workstation Classes
- Domains
- Job Streams
- Job Definitions
- Resources
- Prompts
- Parameters
- Users
- Calendars
In plan views, you can:
Create, delete, and modify groups and lists of customized views
Monitor status of:
- Workstations
- Job streams
- Jobs
- Resources
- Files
- Prompts
- Domains
Submit:
- Jobs
- Job streams
- Ad hoc jobs
Confirm jobs
Change jobs to be confirmed
Release dependencies of job streams
View job output
Cancel jobs
Kill running jobs
Rerun jobs
View history. In legacy GUI and CLI, the command is Set Symphony.
View the job streams coded in scripts using the internal TWS scheduling
commands. Sometimes, it is useful to see how a job stream is created with
one quick look. Looking at pure code could be more efficient than
navigating the GUI interfaces. Use the CLI for this.
Modify TWS security. The only way to do this is through the command
shell with the commands dumpsec and makesec (a sketch follows this list).
Use wildcards to submit jobs and job streams.
Print reports. This is done through the command shell. There are several
commands for generating reports.
Arrange objects in plan or database lists in ascending or descending
order.
Execute console commands directly.
Save a job stream that needs resources from other workstations.
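For the security item above, the following is a minimal sketch of the procedure from the command shell; the temporary file name is our own choice, and the dumped definitions should be reviewed carefully before being compiled back:

dumpsec > /tmp/tws_security.txt    # dump the current security definitions to a text file
vi /tmp/tws_security.txt           # edit the user definitions as required
makesec /tmp/tws_security.txt      # compile the edited file and install it as the new Security file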
A status-mapping table relates Job Scheduling Console statuses to internal statuses: READY corresponds to READY, SUCCESSFUL to SUCC, ERROR to ABEND and FAIL, UNDECIDED to ERROR and EXTRN, and BLOCKED to SUSP; the WAITING, RUNNING, CANCELED, and HELD statuses are also covered.
4.4 Summary
In this chapter, we discussed the Job Scheduling Console Graphical User
Interface (JSC GUI), which is the standard interface for Tivoli Workload
Scheduler (TWS) and Operations Planning and Control (OPC). It is possible
to manage work running under TWS and OPC from the same GUI; so,
customers who are using OPC and TWS together as an end-to-end
scheduling solution will find JSC especially handy since operators need to be
educated only on one user interface.
We also covered functions that must be performed exclusively from either the
OPC ISPF interface, the TWS legacy GUI or the CLI. It is expected that new
releases of the JSC will include some or all of these functions.
Keyword      Meaning
ABEND        Abnormal end
ABENDU       Abnormal end with a user abend code
DOC          Documentation
LOOP         Loop
WAIT         Wait
MSG          Message
PERFM        Performance
INCORROUT    Incorrect output
ABEND
Choose the ABEND keyword when the Tivoli OPC program comes to an
abnormal end with a system abend code. You should also use ABEND when
any program that services Tivoli OPC (for example, VTAM) terminates it, and
one of the following symptoms appears:
An abend message at an operator console. The abend message contains
the abend code and is found in the system console log.
A dump is created in a dump dataset.
ABENDU
Choose the ABENDU keyword when the Tivoli OPC program comes to an
abnormal end with a user abend code and the explanation of the abend code
states that it is a program error. Also, choose this keyword when a user abend
(which is not supposed to signify a program error) occurs when it should not
occur, according to the explanation. If a message was issued, use the MSG
keyword to document it.
DOC
Choose the DOC keyword when one or more of the following symptoms
appears:
There is incomplete or inaccurate information in a Tivoli OPC publication.
The published description of Tivoli OPC does not agree with its actual
operation.
INCORROUT
Choose the INCORROUT keyword when one or more of these symptoms
appears:
You received unexpected output, and the problem does not appear to be a
loop.
The output appears to be incorrect or incomplete.
The output is formatted incorrectly.
The output comes from damaged files or from files that are not set up or
updated correctly.
LOOP
Choose the LOOP keyword when one or more of the following symptoms
exists:
Part of the program (other than a message) is repeating itself.
A Tivoli OPC command has not completed after an expected period of
time, and the processor usage is at higher-than-normal levels.
The processor is used at higher-than-normal levels, a workstation operator
experiences terminal lockout, or there is a high channel activity to a Tivoli
OPC database.
MSG
Choose the MSG keyword to specify a message failure. Use this keyword
when a Tivoli OPC problem causes a Tivoli OPC error message. The
message might appear at the system console or in the Tivoli OPC message
log, or both. The messages issued by Tivoli OPC appear in the following
formats:
EQQFnnnC
EQQFFnnC
EQQnnnnC
The message identifier is followed by the message text. The variable components
represent:
F or FF - The Tivoli OPC component that issued the message
nn, nnn, or nnnn - The message number
C - The severity code: I (information), W (warning), or E (error)
The following are examples of message numbers:
EQQN008E
EQQW110W
EQQF008I
If the message that is associated with your problem does not have the EQQ
prefix, your problem is probably not associated with Tivoli OPC, and you
should not use the MSG keyword.
PERFM
Choose the PERFM keyword when one or more of the following symptoms
appears:
Tivoli OPC event processing or commands, including commands entered
from a terminal in session with Tivoli OPC, take an excessive amount of
time to complete.
Tivoli OPC performance characteristics do not meet explicitly stated
expectations. Describe the actual and expected performances and the
explicit source of the performance expectation.
WAIT
Choose the WAIT keyword when one or more of the following symptoms
appears:
The Tivoli OPC program, or any program that services this program, has
suspended activity while waiting for a condition to be satisfied without
issuing a message to indicate why it is waiting.
file. In most situations, Tivoli OPC will also snap the data that it considers to
be in error.
Trace Information
Tivoli OPC maintains an internal trace that makes it possible to see the order
in which its modules were invoked prior to an abend. The trace is
wraparound with an end mark after the last trace entry added. Each entry
consists of two 8-byte character fields: The module name field and the reason
field. The end mark consists of a string of 16 asterisks (X'5C'). For most
abnormal terminations, a trace table is written in the diagnostic file
(EQQDUMP). These trace entries are intended to be used by IBM staff when
they are diagnosing Tivoli OPC problems. A trace entry with the reason
PROLOG is added upon entry to the module. Similarly, an entry with EPILOG
is added at the exit from the module. When trace entries are added for other
reasons, the reason is provided in the reason field.
//SYSMDUMP DD DISP=MOD,DSN=OPC.V2R3M0.DMP
Please note that specifying SYSOUT=* instead destroys the internal format of the
dump and renders it useless. If you experience an abend and find no dumps in your
dump datasets, check your DAE (Dump Analysis and Elimination) setup. DAE can be
used to suppress the creation of certain kinds of dumps. See also the
OS/390 V2R9.0 MVS Initialization and Tuning Guide, SC28-1751.
To become familiar with obtaining a console dump, see also Section 5.4.8,
Preparing a console dump on page 133.
Document instruction addresses from within the loop, if possible.
Provide a description of the situation leading up to the problem.
UNIX system services (USS) in full function mode. EQQTTOP uses C coding
in order to use the new C socket interface. New messages are implemented,
some of them pointing to other S/390 manuals.
Example:
Error number:      1036
Message name:      EIBMNOACTIVETCP
Error description: TCP/IP is not active

/F procname,STATUS,SUBTASK
/* where procname is the subsystem name of the Controller or Tracker */
F OPCA,STATUS,SUBTASK
EQQZ207I NORMAL MODE MGR   IS ACTIVE
EQQZ207I JOB SUBMIT TASK   IS ACTIVE
EQQZ207I DATA ROUTER TASK  IS ACTIVE
EQQZ207I TCP/IP TASK       IS ACTIVE
EQQZ207I EVENT MANAGER     IS ACTIVE
EQQZ207I GENERAL SERVICE   IS INACTIVE
EQQZ207I JT LOG ARCHIVER   IS ACTIVE
EQQZ207I EXTERNAL ROUTER   IS ACTIVE
EQQZ207I WS ANALYZER       IS ACTIVE
The above screen shows that the general service task has an inactive status.
To find more details, look in the MLOG. The modify commands are
described in the TME 10 OPC Quick Reference, GH19-4374.
Specify any messages that were sent to the Tivoli OPC message log or to
the system console.
Obtain a dump using the MVS DUMP command. Check if the dump
options include RGN and GRSQ.
A wait state in the system is similar to a hang; however, the processing is
suspended. Usually, it is recognized by poor dialog response or no job submission.
A probable cause is that one task holds a resource while other tasks wait
until the owning task releases it. Such resource contention happens
frequently but is not serious if it is resolved in a short time. If you
experience a long wait or hang, you can display any resource contention
from SDSF; the display looks like the following:
JOBNAME   ASID
OPCA      003F
SFRA4CC   002C
As you can see, two tasks are trying to get exclusive access (a lock) on one
resource. Exclusive means that no other task can get the lock at the same
time; an exclusive lock usually means update access. The second task has to
wait until the first, which is currently the owner, releases it. Message ISG343I
returns two fields, called the major and minor name. In our example,
SYSZDRK is the major name, and OPCATURN2 is the minor name.
SYSZDRK represents the active OPC current plan, while the first four characters of
the minor name represent your OPC subsystem name. With this information,
you can search for known problems in the software database. If you find no
hint, your IBM support representative may ask you for a console dump.
storage to be dumped. For waits or hangs, the GRSQ option must be turned
on.
The following panel shows the display of the current dump options.
SDUMP indicates the options for the SYSMDUMP, which is the preferred type
of dump. The options shown are sufficient for almost every dump in OPC. For
a detailed explanation, refer to OS/390 MVS System Commands, which describes
the options for the SDUMP types. If one of these options is missing, you can
change it with the change dump (CD) command. For GRSQ, as an example:
CD SET,SDUMP=(GRSQ)
You need to be sure that the dump datasets provided by the OS/390
installation are free to be used. The example in the previous screen shows
that all three can be used for console dumps. If one cannot, you can clear it,
after making sure that nobody needs its contents anymore. The
DD CLEAR,DSN=00 command, for example, clears SYS1.DUMP00 and makes it
eligible for a new dump.
5.4.8.1 Dump the failing system
Now, you are ready to obtain the console dump for further analysis.
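As a rough illustration (OPCA is the Controller job name used in our examples, the dump title is arbitrary, and nn is the reply ID the system assigns to the outstanding WTOR), the console dump can be requested along the following lines:

DUMP COMM=('OPCA HANG')
R nn,JOBNAME=(OPCA),SDATA=(RGN,GRSQ),END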
The tracking event types are: reader events, start events, step-end events (3S), job-end events (3J), print events (3P), and purge events.
The events are prefixed with either A (for JES2) or B (for JES3). At least the
set of type 1, 2, 3J, and 3P events is needed to correctly track the several
stages of a job's life. The creation of step-end events (3S) depends on the
value you specify in the STEPEVENTS keyword of the EWTROPTS
statement. The default is to create a step-end event only for abending steps
in a job or started task. The creation of print events depends on the value you
specify in the PRINTEVENTS keyword of the EWTROPTS statement. By
default, print events are created.
If you find that the current plan status of a job does not reflect the status in
JES, you may have missing events. A good starting point is to run the OPC
AUDIT package for the affected occurrence to see which events were
processed by the Controller and which are missing, or you can browse your
event datasets for the job name and job number to verify which events were not
written.
Problem determination depends on which event is missing and whether the
events are created on a JES2 or JES3 system. In Table 11 on page 138, the
first column refers to the event type that is missing, and the second column
tells you what action to perform. The first entry in the table applies when all
event types are missing (when the event dataset does not contain any tracking
events).
Table 11. Problem determination of tracking events - the rows cover the missing event types ALL, A1, B1, A2/B2, A3S/B3S, A3J/B3J, A3P, B3P, A4/B4, A5, and B5, each with the problem determination actions to perform.
The following screen lists the types of data that can be traced. Ask your IBM
support center for the right settings related to your problem.
SERVERFLA(X'01000000')    * BUFFERS
SERVERFLA(X'02000000')    * CONVERSATION
SERVERFLA(X'04000000')    * TCPIP SOCKET CALLS
SERVERFLA(X'00800000')    * PIF
SERVERFLA(X'00010000')    * SECURITY
SERVERFLA(X'03000000')    * BUFFERS AND CONVERSATION
SERVERFLA(X'05000000')    * BUFFERS AND TCPIP
SERVERFLA(X'06000000')    * CONNECTION AND TCPIP
SERVERFLA(X'07000000')    * CONNECTION BUFFERS AND TCPIP
SERVERFLA(X'FFFFFFFF')    * ALL
SERVERFLA(X'FF7FFFFF')    * ALL WITHOUT PIF
Depending on the trace level, the traced data includes errors, called methods, connections, IDs, filters, PIF requests, and the number of elements returned in queries.
The default length of the trace is 512 KB for each instance. When the length
is exceeded, the trace is wrapped around. If an unexpected error occurs, the
trace must be copied as soon as possible. You will find the trace in
$DBDIR/OPC/engine_name.log.
The trace can be activated either from the command line with wopcconn or in
interactive mode. To check the current settings, issue the following
command:
wopcconn -view -e engine_name | -o object_id
******** OPC Connector manage program ********
Select instance menu
 1. OPC
 0. Exit

OPC
1929225022.1.1771#OPC::Engine#
itso7
Active
OPC version : 2.3.0

 1. Stop the OPC Connector
 2. Start the OPC Connector
 3. Restart the OPC Connector
 4. View/Change attributes
 5. Remove instance
 0. Exit
OPC
1929225022.1.1771#OPC::Engine#
itso7
Active
OPC version : 2.3.0
2. Name : OPC
        : 524288
        : 0
0. Exit

2. Name : OPC
        : 524288
        : 1
0. Undo changes
1. Commit changes
The changes take effect the next time you start the OPC Connector.
6. Commit your changes and restart the OPC Connector to activate it.
REM ---------- Section to be customized ---------
REM Change the following lines to adjust trace settings
set TRACELEVEL=0
set TRACEDATA=0
REM ------ End of section to be customized -------
Change the value of the variable, TRACELEVEL, to activate the control flow
trace at different levels. Change the value of the variable, TRACEDATA, to
activate the data flow trace at different levels. Acceptable values range from 0
to 3. TRACELEVEL also admits the value, -1, which completely disables the
trace as shown in Table 13.
Table 13. Tracelevel values (a Tracelevel of -1 disables the trace completely). A companion table lists the Tracedata values; at the lowest setting, no data is traced.
Always try to bring TWS down properly via the conman command line
on the Master or FTA, using the following commands:
unlink <FTA name>
stop <FTA name>; wait
shut <FTA name>; wait
If the mailman read corrupted data, try to bring TWS down normally.
If this is not successful, kill the mailman process with the following
steps.
Using the UNIX command line:
1. Use ps -ef | grep maestro to find the process ID.
2. Issue kill -9 <process id> to kill the mailman process.
or with the NT command line (commands are in the unsupported directory):
1. Use listproc to find the process ID.
2. Run killproc <process id> to kill the mailman process.
Batchman hung
Try to bring TWS down normally. If not successful, kill the mailman
process as explained in the previous bullet.
If the writer process for FTA is down or hung on the Master, it means that:
- FTA was not properly unlinked from the Master
- The writer read corrupted data
- Multiple writers are running for the same FTA
Use ps -ef | grep maestro to check which writer processes are running. If there is
more than one process for the same FTA, perform the following steps:
1. Shut down TWS normally.
2. Check the processes for multiple writers again.
3. If there are multiple writers, kill them.
netman process hung
- If multiple netman processes are running, try shutting down netman
properly first. If this is not successful, kill netman using the following
commands:
Using the UNIX command line:
1. Use ps -ef | grep maestro to find the running processes.
2. Issue kill -9 <process id> to kill the netman process.
or with the NT command line (commands are in the unsupported directory):
If the user definition existed previously, you can use the altpass
command to change the password for the current production day.
Jobs not running on NT or UNIX
- batchman down
See the section, Batchman not up or will not stay up (batchman down)
on page 151.
- Limit set to 0
To change the limit via the conman command line:
for a single FTA: lc <FTA name>;10
for all FTAs: lc @;10;noask
- Fence set above the job priority
To change the fence via the conman command line:
for all FTAs: f @;10;noask
- If dependencies are not met, it could be for the following reasons:
Start time not reached yet, or UNTIL time has passed
OPENS file not present yet
Job FOLLOW not complete
The JOBMAN process launches the method script to perform one of the
following tasks:
Launch a job
Manage a job
Check for the existence of a file to satisfy an OPENS dependency
Get the status of an external job
The syntax of the method execution is as follows:
methodname -t task options -- taskstring
where:
task is one of the following:
LJ (Launch a job)
MJ (Manage a previously launched job)
CF (Check availability of a file OPENS dep)
The following are the options (related to the job's properties) that should be
passed to the access method:
Workstation/Host/Master
Workstation's node definition
Workstation's port definition
The current production run number
The job's run number
The job's schedule name
The date of the schedule (in two formats)
The user ID (logon) for the job
The path to the output (stdlist) file for the job
The job ID
The job number
The string from the SCRIPTNAME or DOCOMMAND entry in the job
definition
The following screen shows an example method invocation where job TEST is
executed on the X-agent workstation, ITSO6, using the user ID, itso, and the
method name, wgetmethod.
wgmethod -t LJ
-c ITSO6,ITSO7,ITSO7
-n ITSO6
-p 31111 -r 143,143 -s MACDSH91
-d 20000410,955381986 -l itso
-o /opt/maestro/stdlist/2000.04.10/O13676.1053
-j TEST,13676 -- /home/itso//batchjobs/killer
In addition to the TWS-provided X-agents, you can always write your own
X-agents for platforms or applications not yet supported by TWS, using
the open API documented in the X-Agent Programmer's Reference.
7.2 What are the benefits of using TWS agent for SAP R/3?
By using the Tivoli Workload Scheduling Agent for SAP R/3, you can enhance
R/3 job scheduling in the following ways:
Eliminate the need in R/3 to repeatedly define jobs, even if the content of
the job does not change. With TWS, you define the job in R/3 once and
then reuse it.
Manage jobs that have dependencies with other R/3 instances or even
other resources or jobs originating from other sources on different
systems, such as Oracle Applications, PeopleSoft, and Baan IV. You can
create dependencies that allow you to structure your workflow and model
your business process.
Define recovery jobs in case your R/3 job fails. This helps in network
automation and helps you take quick action to recover from a failed job.
Define interdependencies between R/3 jobs and jobs that run on other
platforms including UNIX, Windows NT, MVS, and HP MPE.
To launch a job on an SAP R/3 X-agent, the TWS host executes r3batch,
passing it information about the job. Using the Extended Agent's workstation
name as a key, r3batch looks up the corresponding entry in the r3options file
to determine which instance of R/3 will run the job. r3batch makes a copy of
the template, marks the job in SAP R/3 as able to run, and sets its
start time to now. It then monitors the job through completion, writing job
progress and status information to the job's standard list file.
R3batch uses Remote Function Call (RFC) methods to handle R/3 jobs. RFC
is provided by SAP to call R/3 functions from external programs. RFC is
implemented as a platform-dependent library.
Note
R3batch 4.0 gained SAP certification for R/3 releases 4.0 and 4.5.
TWS job state   R/3 job state
INTRO           n/a
WAIT            ready
EXEC            active
SUCC            finished
ABEND           cancelled
The INTRO state indicates that TWS is in the process of introducing the job,
but, in R/3, the job has not yet entered the ready state. Because it takes some
time to get a job queued and into the ready column, the INTRO state may last
a few minutes if the R/3 system is particularly busy.
Although a job may be finished in R/3, TWS will keep it in the EXEC state if its
BDC sessions are not complete and you have not selected the Disable BDC
Wait option.
Note
The BDC acronym stands for Batch Data Collector. With the BDC wait
option, you can specify that an R/3 job launched by TWS will not be
considered complete until all of its BDC sessions have completed. This
prevents other TWS jobs that are dependent on the R/3 job from being
launched until all of the related BDC sessions for the R/3 job have
completed.
7.5 Installing of the TWS Extended Agent for SAP R/3 on AIX
We installed the Tivoli Workload Scheduling Agent for SAP R/3 on one of our
AIX-systems, as most SAP-systems being controlled by TWS run on AIX. For
a detailed description of the installation, its use with Tivoli Workload
Scheduler as well as the setup of SAP R/3 versions other than 4.6 B, refer to
the Tivoli Scheduling Agent for SAP R/3, GC31-5147.
7.5.2 Installation
1. Log in as root and change to TWShome, in our case, /opt/maestro.
2. Extract the software from the SAP.TAR file, which can be found either on
the installation CD-ROM or on your fileserver.
tar xvf path/SAP.TAR
This creates two files in the current directory: r3setup and r3btar.Z
3. Execute the r3setup script to decompress the file, r3btar.Z, perform the
initial setup, and create the r3options file.
/bin/sh r3setup -new | -update
where:
-new
batch administration privileges. When R/3 jobs are released by TWS, the
job log output in R/3 is found under this user name.
Password for RFC-User - The password for Userid. The R/3 user should
be given a password that does not expire.
Short Interval - The minimum interval in seconds for R/3 job status
updates. The default is 30.
Long Interval - The maximum interval in seconds for R/3 job status
updates. The default is 300.
Audit level - The Audit level is used to log TWS activities on R/3. A higher
level means more messages are logged. Valid values are 0 to 5. For R/3
versions earlier than 3.1G, enter a value of 0.
The following is a sample r3options file entry for an SAP R/3 Extended
Agent workstation called sap001:
These steps require R/3 BASIS Administrator authority. You also need
SAP expertise to complete these steps; otherwise, consult your SAP
Administrator.
7.5.5.1 Overview of required steps
These procedures add new ABAP/4 function modules to your R/3 system,
and several new internal tables as well. No existing R/3 system objects are
modified.
Here is an overview of the procedure:
1. Create the authorization profile.
2. Create the TWS RFC user ID.
3. Copy the correction and transport files from the TWS server to the SAP
R/3 server.
4. Import correction and transport files into SAP R/3.
5. Verify the installation.
7.5.5.2 Creating the authorization profile for SAP R/3 V 4.6B
Before you create an RFC user ID for TWS batch processing, you need to
create the profile of the authorizations that the TWS user requires. The
authorizations required differ depending on your version of SAP R/3. The
SAP-defined authorizations are all found under the Object Class Basis:
Administration.
Object       Text                                         Authorization
S_RFC        Authorization check for RFC access           S_RFC_ALL
S_XMI_PROD   Authorization for external management        X_XMI_ADMIN
             interfaces (XMI)
S_BTCH_ADM                                                S_BTCH_ADM
S_BTCH_NAM                                                S_BTCH_ALL
S_BTCH_JOB   Batch processing: Operations on batch jobs   S_BTCH_ALL
S_XMI_LOG    Internal access authorization for XMI log    S_XMILOG_ADM
S_SPO_DEV    Spool: Device authorization                  S_SPO_DEV_AL
The names of the control file and data file vary from release to release.
They are usually called K0000xx.tv1 (the control file) and R0000xx.tv1 (the
data file). In our case, the names were K000119.tv1 and R000119.tv1.
7.5.5.5 Import ABAP/4 Modules into SAP R/3
This procedure generates, activates, and commits new ABAP/4 function
modules to your SAP R/3 system and several new internal tables. No existing
R/3 system objects are modified.
1. Change to the following directory:
cd /usr/sap/trans/bin
where transport is the transport request and sid is your SAP R/3 system
ID. The name of the transport is tv1k0000xx.
3. Execute the tp tst command to test the import.
tp tst transport sid
After you have run this command, examine the log files in the
/user/sap/trans/log directory for error messages. Warnings of severity level
4 are normal.
If you have errors, check with a person experienced in Correction and
Transport, or try using unconditional modes to do the import.
4. Execute the following command to import all the files in the buffer:
tp import transport sid
This command generates the new ABAP/4 modules and commits them to
the SAP R/3 database. They automatically become active.
After you have run this command, examine the log files in the
/user/sap/trans/log directory for error messages. Warnings of severity level
5. When the import has completed, check the log files to verify that the
import was successful. The log files are in the /usr/sap/trans/log directory.
7.5.5.6 Troubleshooting the R/3 connection
If you are unable to submit SAP R/3 jobs using TWS after the SAP R/3
configuration, perform the following tests:
1. Make sure you can ping the SAP R/3 system from the TWS system. This
will show basic network connectivity.
2. Execute the following telnet command to verify connectivity:
telnet systemname 33xx
Option 1
For a UNIX Tivoli Workload Scheduler Extended Agent host, log in as root
and run the following command:
r3setup -maintain
Option 2
1. In UNIX, log in as root to the system where TWS is installed; for Windows
NT, log in as an administrator and start a DOS shell on the system where
TWS is installed.
2. Generate an encrypted version of the new password. To do this, use the
utility command, enigma, in TWShome/bin.
3. In a command shell, type:
enigma newpass
where newpass is the new password for the TWS RFC user ID. The enigma
command prints an encrypted version of the password.
4. Copy this encrypted password into the r3options file. The r3options file is
located in the TWShome/methods directory. The file can be edited with
any text editor. Be sure to copy the password exactly, preserving
upper/lower case and punctuation. The encrypted password will look
something like:
#TjM-pYm#-z82G-rB
If the encrypted password is mistyped, TWS will not be able to start or
monitor R/3 batch jobs.
3. Fill in the necessary fields for creating a new SAP R/3 Extended Agent
workstation as shown in Figure 98 on page 176.
In the name field, enter the workstation name that corresponds to the
entry in the r3options file
In the node field, enter null.
In the TCP Port field, enter any value between 1 and 65535. This field is
not valid for an SAP Extended Agent but requires a number greater than
zero to be entered.
4. In the All Workstations window, shown in Figure 99, you can see the
newly-defined SAP R/3 Extended Agent Workstation.
Figure 99. Displaying the newly-defined SAP R/3 Extended Agent Workstation
Remember that this workstation will be available for scheduling only after
the Jnextday procedure has run.
The Select a Task Type window is displayed as shown in Figure 101 on page
178.
3. From the Task Type scroll down menu, select the SAP job type and click
OK. The Properties - Job Definition window, shown in Figure 102 on page
179, is displayed.
a. In the Name field, enter a Tivoli Workload Scheduler name for the SAP
R/3 job. The name can contain up to eight alphanumeric characters (or up
to 40 alphanumeric characters if you are using the expanded database
option). The job name must start with a letter.
b. In the Workstation field, use the ellipsis (...) button to open a find
window to search for and select an available workstation.
c. In the Description field, enter a description for the job. This field is an
optional text description of the job and can consist of up to 64
alphanumeric characters.
d. In the Login field, enter the TWS login ID that is used to execute the
job. Click the Add Parameter... button to add any predefined
parameters to the login ID.
e. In the Recovery Options fields, specify any recovery options for the
SAP R/3 job. Refer to the Tivoli Workload Scheduler User's Guide for
information about recovery options.
4. Click the Task tab. Next to the Job Name field, click the ellipsis (...) button.
This opens the SAP Pick List window as shown in Figure 103 on page 180.
5. Use the SAP Pick list to find and select an SAP job, and click the OK
button. The job information will be propagated to the Task tab. These can
be seen in Figure 104 and Figure 105 on page 181.
Figure 104. Showing all SAP jobs on SAP R/3 application server for user maestro
Figure 105. Propagated Information from SAP Pick List in Task tab
6. Click the OK button in Figure 105 on page 181 to save the job definition in
the TWS database. You can see the newly-defined job in Figure 106.
For R/3 jobs controlled by TWS, the job log output in R/3 is found under the
user name defined in the r3user option of the r3options file.
3. From the Task Type scroll-down menu, select the SAP job type and click
OK. The Properties - Job Definition window, shown in Figure 109 on page
185, is displayed.
a. In the Name field, enter a Tivoli Workload Scheduler name for the SAP
R/3 job. The name can contain up to eight alphanumeric characters (or up
to 40 alphanumeric characters if you are using the expanded database
option). The job name must start with a letter.
b. In the Workstation field, use the ellipsis (...) button to open a find
window to search for and select an available workstation.
c. In the Description field, enter a description for the job. This field is an
optional text description of the job and can be up to 64 alphanumeric
characters long.
d. In the Login field, enter the TWS login ID that is used to execute the
job. Click the Add Parameter... button to add any predefined
parameters to the login ID.
e. In the Recovery Options fields, specify any recovery options for the
SAP R/3 job. Refer to the Maestro UNIX User's Guide V6.0,
GC31-5136, for information about recovery options.
4. Click the Task tab from the window shown in Figure 110 on page 186.
5. Click the New... button. This opens the SAP Job Definition window. On the
SAP Job tab, enter information about the SAP job you are creating:
a. In the Job Name field, enter an SAP job name.
b. In the Target Host field, enter the name of the target workstation where
this job is executed. In our case, since we executed just an external
dummy program (sleep), this was left empty. It will be specified in the
next step.
6. On the Steps tab, you must add an ABAP or External program to the SAP
job:
a. In the Type field, select ABAP or External step from the drop down
menu.
b. Press the Add step button (the green plus sign).
c. In the Name field, enter an ABAP name or a fully-qualified path and
filename for an External program.
d. In the User field, enter the name of the SAP user who executes this
step.
e. In the Var/Parm field, enter a variant name or a parameter, if
necessary. Variants are used with ABAPs, and parameters are used
with External programs, but both are optional. Not all ABAPs require
variants, and not all External programs require parameters.
f. In the Target Host field, enter the SAP workstation where this step
executes. Target Hosts are only required for External programs.
g. Select any print parameters (for ABAPs) or control flags (for External
Programs). Refer to the corresponding SAP User Manuals for more
information.
These settings are shown in Figure 112 on page 188.
7. When you have completed creating the ABAP or External program steps,
click the OK button. The SAP job definition is saved to the R/3 database,
and the window is closed. If the SAP job was successfully saved to the R/3
database, the Task tab for the Properties - Job Definition window should
display an SAP job ID.
This is shown in Figure 113 on page 189.
Figure 113. Job Definition Window (Job-ID after saving to R/3 database)
3. The Job Stream Editor and Properties - Job Stream windows are opened.
We will first show the Properties - Job Stream window because the Job
stream Editor contains data only after the first job within the job stream is
defined.
4. In the Name field of the Properties - Job Stream window, enter a name for
the job stream. The name can contain eight alphanumeric characters, or, if
you are using the expanded database option, it can contain 16
alphanumeric characters. The job stream name must start with a letter.
5. In the Workstation field, enter the name of the SAP R/3 Extended Agent
workstation that executes this job stream, or use the ellipsis (...) button to
search for an SAP workstation.
6. Click OK to close the Properties - Job Stream window after you have
completed all the desired fields. All fields and tabs on the Properties - Job
Stream window, other than Name and Workstation, are optional.
Depending on your general setup, you usually define a certain priority for a
job stream, as well as for a job, to put it into a certain execution window.
These settings can be seen in Figure 115 on page 191.
7. Add the SAP R/3 jobs that you want to be part of this job stream:
a. In the Job Stream Editor, select the Actions > Add Job > Job
Definition menu. After double-clicking with the left mouse button in the
white area of the window, the Properties - Job Definition window,
shown in Figure 116, is displayed.
b. In the Name field, enter the Tivoli Workload Scheduler name of the
SAP R/3 job you want to add to the job stream.
c. Repeat these steps to add all the desired SAP R/3 jobs.
This is shown in Figure 117.
8. The defined jobs show up as icons in the Job Stream Editor as shown in
Figure 118 on page 193.
9. If you add two or more jobs, you can specify dependencies between these
jobs, such as adding links or specifying external job / external job stream
dependencies. This is shown in Figure 119 and Figure 120 on page 194.
10.Create run cycles that determine the days that this job stream will execute.
11.Save the job stream by clicking Save from the File menu.
7.7.1.1 r3options
It is found on the Host FTA as defined in the X-agent workstation definition.
This file contains the connection parameters for each defined X-agent to
make an RFC connection to an R3 Host FTA. The r3batch method reads
down the r3options file until it finds a matching X-agent name and uses the
parameters found on that line.
7.7.1.2 r3batch.opts
This file must be created according to the manual and stored on the Host
FTA. The file defines the users authorized to run the r3batch method and
retrieve job information from the R3 Host FTA. In addition, it is also required
to invoke the new r3batch job definition window.
7.7.1.3 r3batch
This is the TWS method that runs and passes job information between TWS
and its RFC connection to the R3 Host FTA.
You may want to consult the SAP Basis Administrator if you do not know
some of the information that follows.
1. What version of SAP X-agent are you using?
The following command will tell you what version is installed:
r3batch -v
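A representative picklist invocation, built from the flags described below with the X-agent name and the R/3 login as placeholders and an empty task string assumed, might look like the following:

r3batch -t PL -c SAP001 -l twsrfcuser -j "*" -- ""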
where:
-t is a flag for one of the three tasks that r3batch can call. PL refers to the
picklist task.
-c is a flag for any of the X-agent names as defined in the Maestro CPU
database.
-l is a flag for the r3login name, the login set up on the R/3 system for
TWS.
-j is a flag for the job name. The " * " means any job (you may need to add
double quotes on some systems).
-- (dash dash) is a delimiter.
XMI errors
XMI errors are usually due to incorrect XMI authorization setup on the R/3
system side. The R/3 Basis Administrator must be involved to help fix this.
The correct authorizations for all the SAP versions we support are in the SAP
Xagent release notes. Giving the SAP_ALL authorization to the user is a
quick way to test if the problem is strictly due to authorizations.
Mixed versions of the SAP Application and Kernel can be a problem for
r3batch to interpret. Early SAP Application versions did not have the XMI
function modules, which were only introduced in Version 3.1i. When r3batch
queries the R/3 system for its version, it is given the Kernel version. If the
Kernel version is 3.1i or later, r3batch will assume that the SAP Application is
also, in which case r3batch will make calls to non-existent functions in the SAP Application
Note
The TWS Extended Agent works with SAP only through RFC (remote
function calls) commands. With SAP Versions 3.1f to 4.0b, the TWS
r3batch method is required to use the SAP SXMI_XBP_* RFC commands
for certification. With SAP Versions 4.5 to 4.6b, the TWS r3batch method is
required to use the BAPI_XBP_X* RFC commands, again for certification
with SAP. While using the SAP RFC commands ensures certification and
stability, they limit the flexibility and features available for the r3batch
method. For example, when the r3batch method tries to run a class A job in
R3, it makes a copy of a job already in a scheduled state and runs the copy.
The method uses the RFC command that copies the job; however, the SAP
RFC interface defaults to class C. There is no provision for the RFC
command to copy a class A job, nor for the r3batch to run an R3 job directly
outside of the SAP BAPI_XBP interface. The TWS r3batch method runs
copies of jobs in a scheduled state to eliminate redefining jobs in R3 every
time a job needs to be run. This makes running jobs multiple times through
TWS very quick and easy. If TWS used its own ABAPs to do the job copy,
we would lose certification and the reliability that our code would always
work with changes that occur in SAP. The best solution is to make the SAP
BAPI_XBP interface more flexible. For instance, allow class A jobs to be
copied via the RFC interface. As it stands right now, a copy of a job via the
RFC interface is not the same as creating a job via sm36.
Codepage errors
Errors of this type pertain to different language support, typically Japanese,
Chinese, and Korean. These problems require the creation of codepage
conversion tables to convert instructions from one language to another and
back to the original language. For example, 1100 (English) to 6300 (Chinese),
and 6300 (Chinese) back to 1100 (English).
To create a codepage conversion table, perform the following steps:
1. Log in to R/3 as user SAP*, client 000.
2. Call SM59 menu option RFC -> Generate "conv. tab", or start the report
RSRFCPUT using SE38. Include the following parameters:
- the source code page, for example "1100"
- the target code page, for example "8500"
- the path, which must end with a path identifier (UNIX = "/" ; NT = "\").
This generates a <Source-code-page><Target-code-page>.CDP file, such as
11008500.CDP.
In addition to 11008500.CDP, generate a code page for each of the following:
85001100.CDP
01008500.CDP
85000100.CDP
In the .CDP file format, each line contains a number in Hex format. For
example, during a conversion in which all characters are transferred
identically, each line contains the line number in Hex format as shown in the
following:
0x00 (Where "Character 0x00=Line 0" is changed to character "0x00")
0x01
0x02
....
0xFF
3. Modify the "$HOME/.cshrc" file and add the following line:
set PATH_TO_CODEPAGE $HOME/codepage/
7.8 Summary
In this chapter, we covered the Extended Agent concept, which is used to
extend TWS scheduling capabilities to foreign platforms and applications.
As a sample X-agent, we discussed the implementation of Tivoli Workload
Scheduling Agent for SAP R/3. We covered the installation and gave
examples of scheduling SAP R/3 jobs using the Tivoli Workload Scheduling
Agent for SAP R/3. We also discussed some troubleshooting tips and
techniques.
Using the Tivoli Workload Scheduling Agent for SAP R/3, you will have the
benefits of eliminating the need in R/3 to repeatedly define jobs, managing
jobs that have dependencies with other R/3 instances or other
systems/applications, and being able to define recovery jobs in case your R/3
job fails.
Note
For running the Job Scheduling Console only, you just need one
workstation running the Tivoli Framework (TMR Server), Job Scheduling
Services, and the appropriate Connectors for TWS and OPC. In addition, if
the system running the TWS Master is different from the TMR Server, you have
to make the TWS Master a Managed Node and install the Job Scheduling
Services and the TWS Connector onto it as well.
However, if you intend to use the Tivoli Plus Module for TWS, you will have
to install additional products on the TMR Server running the Tivoli
Framework, such as the TEC Server, TEC Console, Distributed Monitoring,
and Software Distribution, additional Tivoli products on the TWS Master, and
TMA endpoints on all FTAs you want to be monitored. You also need
access to a database instance.
In our case, we have installed all the components required to be able to use
the TWS Plus module.
itso8:
TWS MASTER, running TWS Engine V 7.0, TWS Extended Agent for
OS/390
Tivoli Framework 3.6.2 as Managed node, Job Scheduling Services, TWS
connector, TMA Endpoint
itso7:
FTA, running TWS Engine V 7.0
Tivoli Management Region SERVER, running Framework 3.6.2, TEC
Server 3.6.2, TEC Console 3.6.2., Log File Adapter 3.6.2, DM 3.6.1, SWD
3.6.2, Endpoint Gateway, TMA Endpoint, Job Scheduling Services, TWS
connector, and OPC connector
itso6:
FTA, running TWS Engine V 7.0, OPC Tracker Agent, TWS Extended
Agent for SAP R/3
TMA Endpoint
cypress.rtp.lab:
SAP R/3 V4.6B system. We just needed a working TCP/IP connection to
itso6, which is running the TWS Extended Agent for SAP R/3. In a
customer environment, you usually install the FTA and the Extended
Agent on the system running the SAP application; so, for monitoring the
SAP system, you would need at least the TMA Endpoint on it.
8.1.1.2 NT systems
These systems, named itso9, itso11, itso12 and itso13, were running NT 4.0
with Service Pack 5. Itso11 was the manager of the TWS domain,
NTDOMAIN, with its members, itso12 and itso13. Itso9 was a member of the
TWS MASTERDM.
8.1.1.3 OS/390 and OPC Environment
This is a two-member SYSPLEX running OS/390 Rel 2.6, JES2, and Tivoli
OPC 2.3 and is located in Mainz, Germany.
The challenge is to get OPC to change the TWS resource availability, and
TWS to change the resource availability on OPC. In this section, we will
describe how we achieved this.
The name of the resource is required and is enclosed in single quote marks.
SUBSYS is usually required because it identifies the tracker that will process
the command; unless specified, the default name, OPCA, will be used. We
can use AVAIL to make the entire resource available or unavailable, and we
2. Since this resource will be used solely for communication from TWS to
OPC, you need only to give it a name and description as shown in Figure
128 on page 214.
3. The default is a tick in the box indicating that the resource is available; you
should click the tick box to make it unavailable.
Important
You must change the supplied Used For default value Planning to Planning
and Control. If you leave the default value, OPC will ignore the availability
of the resource when submitting jobs. This JSC default differs from the
ISPF dialog default, which is Both, meaning Planning and Control.
4. OPC resources are logical; so, the name you use is your choice. Since
this is used solely for TWS-to-OPC communication, it could be named
TWS2OPC as a reminder. We have called our resource MICHAELA
because that is the name of the AIX box.
You need to define this special resource in the OPC job that is dependent on
the completion of a TWS job. Click on Resources in the left pane of the job
menu. A list of resources is displayed. You have to click on Target
Resources at the top, and select the Logical Resources (Special
Resources) pull-down menu as shown in Figure 129 on page 215.
5. Click the green cross on the logical resources panel to create a new
resource dependency as shown in Figure 130 on page 216.
6. Define that this job needs the logical resource that you previously created
as shown in Figure 131 on page 217.
The job is shown on the last line with an internal (OPC) status of Arriving. The
status details column shows the reason, Waiting for resources, as shown in
Figure 133 on page 219.
10.Click the right mouse button on the selected resource line, and select List
Jobs => Waiting for Resource from the pull-down menus as shown in
Figure 135.
The jobs waiting for this resource are listed with the reason they are waiting.
In Figure 136, the reason given is UNAVAL.
This operation will not be submitted by OPC until the logical resource is made
available. This job should not be started until a job controlled by TWS
completes; when it does, TWS will cause the availability status to be
reset, as we describe next.
2. On OS/390, create a job that will issue the SRSTAT command. In Figure
138 on page 223, we use the program IKJEFT01 to provide the TSO
environment for the SRSTAT command. Instead, you could use the program
EQQEVPGM.
//YARI     JOB MSGLEVEL=(1,1),MSGCLASS=K,
//         CLASS=A
//STEP01   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//EQQMLIB  DD DISP=SHR,DSN=OPCESA.V2R3M0.SEQQMSG0
//EQQMLOG  DD DISP=SHR,DSN=OPC.V2R3M0.MLOGA
//SYSTSIN  DD *
//* Change resources in next line
  SRSTAT 'MICHAELA' AVAIL(YES) SUBSYS(MSTR)
/*
4. Create a new job on TWS that launches the JCL script on OS/390. Call it
LAUNCHYARI. See Figure 139 on page 224 and Figure 140 on page 224.
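In composer syntax, an equivalent job definition might look like the following sketch; the workstation name AIXJES and the logon maestro are taken from other examples in this chapter, and the remaining values are illustrative only:

$JOBS
AIXJES#LAUNCHYARI
 SCRIPTNAME "OPC.V2R3M0.JOBLIB(YARI) = 0004"
 STREAMLOGON maestro
 DESCRIPTION "Launch the JCL member that issues SRSTAT on OS/390"
 RECOVERY STOP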
6. Select the TWS Master, right click, and select Submit->Job as shown in
Figure 141.
1. Create an OPC Job Stream called OPCYARI. This will contain one job,
called YARIJOB.
2. Create a member called YARIJOB in the OPC job library. This member
should have the JCL that runs the program EQQEVPGM, which will
issue the SRSTAT command. The SRSTAT command can be coded to change
OPC special resources.
3. Our job modifies the special resource called MICHAELA to make it
available, as shown in the following screen.
5. Create a TWS job call that submits the OPC Job Stream, OPCYARI, as
shown in Figure 145 on page 228 and Figure 146 on page 228.
$ Jnextday
7. Submit the TWS job, TWSYARI, from TWS as shown in Figure 147 and
Figure 148.
2. Log in to the X-agent host as user maestro and create a JCL script. The
following script changes the state of the resource in OPC. The resource
must be predefined in OPC.
$ vi script.jcl
//YARI     JOB MSGLEVEL=(1,1),MSGCLASS=K,
//         CLASS=A
//STEP01   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//EQQMLIB  DD DISP=SHR,DSN=OPCESA.V2R3M0.SEQQMSG0
//EQQMLOG  DD DISP=SHR,DSN=OPC.V2R3M0.MLOGA
//SYSTSIN  DD *
//* Change resources in next line
  SRSTAT 'MICHAELA' AVAIL(YES) SUBSYS(MSTR)
/*
3. In the maestro user's home directory on UNIX, create a .netrc file that
automates the FTP command. This automation affects only the user
maestro and this specific IP address. The file, yari.jcl, is transferred from
UNIX to OS/390. In this example, the dataset is OPC.V2R3M0.JOBLIB,
and the member is YARI.
The following is the .netrc file syntax:
- machine - IP address of OS/390 node
After the macdef init command, you can add normal FTP commands.
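For illustration only, a .netrc along the following lines would log in and transfer yari.jcl into OPC.V2R3M0.JOBLIB(YARI); the TSO user ID and password are placeholders, and the macro definition must end with a blank line:

machine 9.39.62.19 login TSOUSER password xxxxxxxx
macdef init
put yari.jcl 'OPC.V2R3M0.JOBLIB(YARI)'
quit
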
4. Modify .netrc file attributes:
$ chmod 600 .netrc
5. Create a new shell script that starts an FTP transfer and submits a JCL
script on OS/390:
$ vi opcresources.sh
#!/bin/ksh
# Connect to the OS/390 host. The initialization commands are taken from the .netrc file.
ftp 9.39.62.19
# Test whether the ftp command succeeded.
if [ $? != 0 ]
then
  exit 1
fi
# Submit the OPC job through TWS and give it a random alias with the prefix "res".
conman "sbd aixjes#'opc.v2r3m0.joblib(yari) = 0004';alias=res$$"
7. Modify the .jobmanrc file to get your profile settings from the .profile file:
$ vi .jobmanrc
#!/bin/ksh
. ./.profile
9. Create a job in TWS that launches the opcresources.sh script. Call it
changeres. See Figure 151 and Figure 152 on page 234.
11.Submit the job from TWS. See Figure 153 on page 235 and Figure 154 on
page 235.
Important
At least Version 1.4.6 of the X-agent on OS/390 is required to get the job exit
code correctly with the MVSJES method.
8.2.4 Conclusion
For the preceding examples, we assumed that your OS/390 system had the
TWS Extended Agent for OS/390 installed. Refer to Chapter 11,
TWS Extended Agent for MVS and OS/390 on page 331, for the installation of the
TWS Extended Agent for MVS and OS/390.
We implemented three different solutions to synchronize resources between
OPC and TWS. Each solution has its merits and we believe that the decision
to choose which method to use depends on the specifics of the environment.
We will give you some guidelines below:
Solution 1 is very simple to implement on the TWS side. It requires that you
predefine jobs on OS/390 with hardcoded SRSTAT commands, which cannot
be controlled dynamically from scripts on the TWS side. If you know exactly
what and how resources should be changed, this solution will be appropriate.
Solution 2 is also easy to implement from the TWS side. It also requires that
you predefine jobs on OS/390 with hardcoded SRSTAT commands, which
cannot be controlled dynamically from scripts on the TWS side. You could
have a job that makes the resource available and another job that makes it
unavailable. In this case these jobs are under OPC control and thus this
solution can also be used to make dependency links with other OPC jobs. It
also gives you an example of how to integrate with an OPC Jobstream.
Solution 3 gives you the ability to control resource changes completely from
TWS. This is the most flexible solution of the three because it allows the
SRSTAT command to be coded on the TWS side. You can also influence other
OPC jobs if you use the MVS OPC X-agent method. The main disadvantage is
possible problems when checking the FTP script exit code. The FTP client
provides no way to get an exit code if the transfer fails. This can be
circumvented if your script uses grep to parse error texts from the FTP output.
One alternative would be to use the confirm feature in the FTP job stream,
requiring an operator to check the output manually. The operator could then
set the job stream status to success or abend.
Note
Keep in mind that in all cases, if something goes wrong during execution of
conman submit you may not get error information through exit codes. You
may need to create your own scripts if you want to account for this
possibility.
The modres script takes input from command line parameters and modifies an
existing resource's status in the TWS database. First, the script checks whether
the specified resource is in the plan and invokes the conman resource command
to change its value. If the resource is not in the plan, composer replace is
used. You can run this script only on the TWS Master or on a TWS
Workstation connected to the TWS Database via NFS. Modres does not
create new resources; so, make sure that the resource you are using is
already defined in the TWS Database.
#!/bin/ksh
# modres - change the value of an existing TWS resource
RESOURCE=$1        # resource name in the form CPU#RESOURCE
RESNUM=$2          # new number of available units (0-1024)
RESOPT=$3          # optional "r": also update the database when the resource is in the plan
umask 000
. /opt/maestro/.profile

# Update the resource definition in the TWS database with composer
function composerreplace {
   composer "create /tmp/res.txt from resources"
   grep "${RESOURCE}" /tmp/res.txt >/dev/null
   if [ $? != 0 ]
   then
      echo "Resource $RESOURCE was not found!"
      exit 1
   fi
   echo $RESOURCE $RESNUM
   cat /tmp/res.txt |sed "s/^\($RESOURCE\)[ 0-9 ]*/\1 $RESNUM /g" | tee /tmp/res.txt
   composer "replace /tmp/res.txt"
}

# Update the resource in the current plan with conman
function conmanresource {
   conman "resource $RESOURCE;$RESNUM;noask"
}

if [ -n "$RESOURCE" ] && [ -n "$RESNUM" ]
then
   # Count how many lines of conman output mention the resource;
   # two matching lines indicate that the resource is in the plan
   RE=`conman "sr $RESOURCE" 2>&1|grep $RESOURCE |wc -l`
   RE=`expr $RE`
   if (( $RE==2 )) && [[ $RESOPT = "r" ]]
   then
      RE=3
   fi
   case $RE in
      1) composerreplace
         ;;
      2) conmanresource
         ;;
      3) conmanresource
         composerreplace
         ;;
   esac
else
   echo "Need parameters!\nexample: modres {CPU#RESOURCE} {0-1024} [r]"
   exit 1
fi
exit 0
Note that there is one tab character and no spaces immediately after $RESOURCE,
before the backslash (\), in the sed command. You can enter this character by
pressing Ctrl-I.
1. Save the script as /opt/maestro/scripts/modres.
3. Run the modres command only as the maestro user or as a user who has
sufficient privileges. This command is case-sensitive. You need at least
TWS modify access rights to the TWS database.
Command usage:
modres [CPU]#[RESOURCE] [0-1024]
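For example, to set a hypothetical resource named TAPES on the MASTER
workstation to four available units (workstation, resource name, and value are
illustrative):

modres MASTER#TAPES 4

Adding the optional r parameter would update both the plan and the resource
definition in the database.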
The SMIT panels show the /opt/maestro/mozart and /opt/unison directories
being exported through NFS with read-write access, and NFS being started
(Start NFS, mode both).
Now, you can simply change the command line parameters and affect any
resource in TWS from OPC. You cannot create new resources with the modres
script.
8.4.1 Requirements
Each country has its own IT architecture (host-centric or distributed) and
software products. Each country has a central office that controls all the
national branch offices. Branch offices have several HP, SUN, NT, and AIX
servers running Oracle databases. In the country's central office, TWS controls
the workload in the country's local environment.
The company HQ controls the central offices of all the countries.
The central marketing department wants to receive detailed sales reports
from the central offices of all countries whenever a remarkable growth or
decline in business is registered, whether this occurs in one country or in a
number of countries.
The following process is used to produce this report:
1. Every Saturday, OPC schedules a jobstream, ACCOUNT#01, which is an
accounting application that runs in each national central office. This is
done by having the OPC Tracker Agent execute the script, submit1, which
submits a TWS jobstream to the MASTER workstation. This jobstream,
STATISTC, consists of two jobs.
2. The first job, sales_data, updates the sales data.
3. The second job, calculate, calculates the variation in the country's
business. This process has to be finished by 0800 hours of the following
Monday. When a variation exceeding a defined threshold is detected,
notification is sent to the central office. In this case, the TWS job starts the
application, COLLECT#01, in OPC by submitting the job, NOTIFY, to the
Extended Agent for OS/390 workstation, AIXMVS.
4. This NOTIFY job starts an OPC application, called COLLECT#01, which
starts a collection application running Monday night after 1900 hours and
which is to be finished before 2300 hours.
In Figure 158 on page 247, you will find a process flowchart of our scenario
showing the interaction between OPC and TWS.
When submitting jobs or job streams through the CLI with conman, be aware
that conman does not return the correct return code if the submit failed; so,
you have to provide the script with additional code that searches the output
of the conman submit... command for strings, such as error, failed, or similar
words that cover situations where submissions failed.
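The following is a minimal sketch of such a wrapper in ksh (the job stream
name is the one used later in this scenario; the error strings to scan for may
need to be extended for your environment):

#!/bin/ksh
# Submit a job stream and capture all conman output
OUTPUT=`conman "submit sched=MASTER#statistic" 2>&1`
echo "$OUTPUT"
# conman may exit 0 even if the submit failed, so scan its output
echo "$OUTPUT" | egrep -i "error|failed" >/dev/null
if [ $? -eq 0 ]
then
   # A failure string was found in the conman output
   exit 1
fi
exit 0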
Figure 162. General Job definition properties for Extended Agent Job, NOTIFY
This usually ends the definition of an Extended Agent job. You can include
this job in any jobstream you like and make it a predecessor or successor
of other jobs or jobstreams.
8.4.4 Implementation
The following is a description of how we implemented the solution.
On the OPC side, we created the accounting application as a job stream and
called it ACCOUNT#01. We used the rule-based run cycle to insert the job
stream into the plan on Saturday only as shown in Figure 164.
The accounting application must update the sales data in TWS, which means
that OPC must submit a TWS job stream. This can be realized with the help
of an OPC Tracker Agent, connected to the OPC Controller, where the
executed script accesses the TWS engine through its command line interface
(CLI).
You can either let the OPC Controller transfer the whole script to the tracker
agent or just have the Controller execute the script already residing on the
agent as we did. We used the explicit path name to execute the submit1
script.
# SCRIPT THAT RESIDES ON AN AIX TA, WHICH ACCESSES THE CLI OF TWS
# AND SUBMITS A JOB STREAM
/opt/maestro/scripts/submit1
We used the above script, invoked by OPC via the Tracker Agent, to submit
the STATISTIC schedule to the MASTER. The following screen contains the
contents of the script, /opt/maestro/scripts/submit1.
#!/bin/ksh
#
# submit job stream STATISTIC
su - maestro -c "conman submit sched=MASTER#statistic"
rc=$?
exit $rc
On the TWS side, we defined a job stream, called STATISTIC, running on the
MASTER. It includes a sales_data job, which is the predecessor of a
calculate job as shown in Figure 168.
The sales_data job is a dummy job and just does a sleep for 10 seconds.
The calculate job checks the variation against a defined threshold. When the
threshold is reached, it sends a notification to OPC, which, in turn, starts a
country-wide collection application. A possible solution could be that the
calculate job submits a TWS job on an Extended Agent for OS/390
workstation when the variation exceeds a certain level. This extended agent
job adds an OPC job stream into the current plan.
We have simulated this behavior by having the CALCULATE job execute the
shell-script threshold, which submits a job on the Extended Agent for OS/390
workstation in TWS if the variation exceeds a certain limit. A detailed
description follows.
Figure 169 and Figure 170 on page 256 show the General and Task
Properties of the job, CALCULATE, as defined in TWS.
Figure 171. Deadline of CALCULATE job as defined in Job properties job stream
#!/bin/ksh
#
# script used for job "calculate"
# if the (simulated) variation is higher than 5, then
# submit job "notify" on AIXMVS (Ext. Agent for OS/390)
#
# echo "dummy job - calculate variation VAR - "
VAR=6
if [ "$VAR" -gt "5" ]
then
   echo "Variation > 5, submitting job AIXMVS#NOTIFY"
   /opt/maestro/bin/conman submit job=AIXMVS#NOTIFY
fi
rc=$?
exit $rc
Figure 173 and Figure 174 on page 259 describe the NOTIFY job as defined
in TWS.
As you can see in Figure 174 on page 259, the job, NOTIFY, starts the
COLLECT#01 application in OPC with an input arrival time of 19:00h, a
deadline time of 23:00h, and the highest priority.
Back in OPC, the OPC job stream, COLLECT#01, must run Monday night
after 19:00 and be finished before 23:00. We use a special resource, called
AMI_MAX, which is only available between 19:00 and 23:00. The job stream,
COLLECT#01, depends on this special resource. Figure 175 and Figure 176
on page 260 show the details of this special resource.
Figure 179 shows the jobs, sales_data and calculate, of the STATISTIC
jobstream finished and the NOTIFY job waiting.
Figure 179. Status of the STATISTIC jobstream and job notify on TWS
Why is the notify job waiting? This job runs on the Extended Agent for
OS/390 workstation, AIXMVS, and submits the OPC job stream, COLLECT#01,
which is waiting on a special resource, AMI_MAX, that is only available on
Monday between 1900 and 2300 hours (see Figure 176 on page 260); so,
both the TWS job notify and the OPC job stream, COLLECT#01, are
waiting for this resource to become available. In real life, this resource will
eventually become available. In our test case, since it was not Monday
evening, we made it available manually.
Figure 180 and Figure 181 on page 263 show the status of the OPC
jobstream, COLLECT#01, as well as the OPC special resource, AMI_MAX,
before the special resource was made available.
Figure 180. Status of COLLECT#01 OPC jobstream waiting for special resource
After making the special resource available, both the OPC jobstream and the
TWS job will finish successfully as shown in Figure 182.
The MVS extended agent is running on the TWS Master, and events on OPC
are passed from the TSSERVER and TSDSPACE started tasks, thus
enabling monitoring of OPC jobs from JSC. The NT machine where one of the
TWS jobs runs is itso12, and the UNIX jobs are running on its07 and its08,
which is the TWS Master.
As you move the cursor down into the frame, the pointer changes to a cross.
Click where you want the icon for the predecessor to be placed, and a dialog
box will appear as shown in Figure 187.
To identify the OPC job stream, you can use the application name and input
arrival time, as we did here. The application name is required, and you can
use any combination of the following:
JOBNAME - This is the OPC operation name.
OPNO - This is the operation sequence number.
IA - This is the input arrival date and time in a yymmddhhmm format. It is
unlikely that you will use this parameter because it hardcodes the
dependency to one specific date.
IATIME - This is the input arrival time in an hhmm format.
The TWS workstation that represents OPC must also be named. Click on the
box at the right of the Network agent field, and a Find workstation dialog
window opens as shown in Figure 188.
Click on the predecessor icon, and pull the dependency arrow to the
successor icon to create a link as shown in the completed job stream in
Figure 190 on page 271.
You can realign the icons by dragging them into position, or you can
right-click in the window and select Arrange Icons from the pop-up menu as
shown in Figure 191 on page 272.
The job stream is now complete as shown in Figure 192 on page 273.
We chose this view by selecting All Scheduled Job Streams from the OPC
Plan Lists on the menu in the left pane. By selecting All Scheduled Job
Streams from the Master Plan Lists, on the same menu, we can see the TWS
job stream shown in Figure 194 on page 275.
The TWS job stream is in Ready status. It will not start until the OPC job
stream successfully completes.
8.5.2.4 Conclusion
The JSC provides a simple and effective mechanism for making a TWS job
dependent on an OPC job, and the JSC allows us to monitor the progress of
each job stream.
In this first JSC release, the dependency link works only one way. At this
moment, you cannot similarly make an OPC job depend on a TWS job, but
we expect future releases to allow this.
8.6 Summary
In this chapter, we put together three different end-to-end scheduling
scenarios using OPC and TWS:
Resource synchronization between OPC and TWS
Synchronization between OPC and TWS
Creating dependencies between OPC and TWS job streams with the JSC
TCPIPJOBNAME TCPIP4
HOSTNAME MCEVS4
DOMAINORIGIN NCS.MAINZ.IBM.COM
NSPORTADDR 53
RESOLVEVIA UDP
RESOLVERTIMEOUT 300
RESOLVERUDPRETRIES 1
TRACE RESOLVER
DATASETPREFIX TCPIP
In our example, the DNS name of the machine where the Controller resides is
MCEVS4.NCS.MAINZ.IBM.COM.
Trying to ping the Controller from your AIX box, you should receive output
similar to that shown in the following screen.
/ ping mcevs4.ncs.mainz.ibm.com
PING mcevs4.ncs.mainz.ibm.com: (9.39.62.19): 56 data bytes
64 bytes from 9.39.62.19: icmp_seq=0 ttl=63 time=25 ms
64 bytes from 9.39.62.19: icmp_seq=1 ttl=63 time=15 ms
64 bytes from 9.39.62.19: icmp_seq=2 ttl=63 time=12 ms
64 bytes from 9.39.62.19: icmp_seq=3 ttl=63 time=19 ms
64 bytes from 9.39.62.19: icmp_seq=4 ttl=63 time=17 ms
64 bytes from 9.39.62.19: icmp_seq=5 ttl=63 time=19 ms
64 bytes from 9.39.62.19: icmp_seq=6 ttl=63 time=14 ms
64 bytes from 9.39.62.19: icmp_seq=7 ttl=63 time=19 ms
If you want to ping the AIX box from the Controller side, you first have to
allocate the dataset that contains the TCP/IP parameter to the DD name
SYSTCPD. You can do it in ISPF Option 6, the TSO command processor.
Then, you can ping the AIX box in the usual way and double check that you
get a successful reply.
OPC internally reserves space for only one code page. So, if you are using
multiple code pages, the last one defined will be used by OPC.
TCP (destination,...,destination) - This keyword specifies the network
addresses of all TCP/IP-connected Tracker Agents that are able to
communicate with the Controller for job-tracking purposes. Each destination
consists of a Tracker Agent destination name and its IP address, as shown in
the following ROUTOPTS example.
ROUTOPTS TCP(TWS1:146.84.32.100)
TCPIPID(TCPIP4)
TCPIPPORT(3112)
CODEPAGE(IBM-037)
Tivoli OPC communicates with AIX machines running the Tracker Agent.
The communication method is TCP/IP. Operations that specify a computer
workstation with this destination are routed to the Tracker Agent.
JTOPTS WSFAILURE(LEAVE,REROUTE,IMMED)
WSOFFLINE(LEAVE,REROUTE,IMMED)
HIGHRC(0)
With HIGHRC(0), the highest return code that a job can generate without the
operation being set to error status is 0.
The default is 4. For non-MVS Tracker Agents, specify 0. You can also specify
this return code for each operation in the AUTOMATIC OPTIONS section of
the Application Description dialog.
When you want to see the standard list (std) output via the job log retrieval
function in the OPC dialog, you have to use catalog management with the
storelog parameter. Both parameters have to be defined in the OPCOPTS
Controller statement. See also the job_log parameter of the Tracker Agent
parameter file.
OPCOPTS example
OPCOPTS CATMGT(YES)
STORELOG(ALL)
Create user    For...    Group      Recommended home directory
tracker        OPC       default    /u/tracker
Even if you plan to run several instances of the Tracker Agent for the same
type of Controller on the same machine, you should run them all under the
same user ID.
You must use port numbers above 1024 to avoid running the Tracker Agent
with root authority. You should use port numbers much higher than this (for
example, above 5000) to avoid conflict with other programs.
A Tivoli OPC Controller needs one port, TCPIPPORT, which is
also the tracker's Controller port. The default port number is 424. Each
Tracker Agent needs two ports:
The tracker's Controller port, which is also the Controller's tracker port
(TCPIPPORT).
The tracker's local port, which must be unique for each machine. Tracker
Agents on different machines can have the same local port number. See
Figure 195 on page 283.
Figure 195 shows the OPC Controller using TCP/IP port 424 and two UNIX
Tracker Agents on different machines, each with Controller port 424 and
local port 5006.
cd /usr/sys/inst.images
ftp control
user opc
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXAIX)' tracker.image.aix
quit
In these examples, the Controller machine is control. You can receive the
image to any directory. /usr/sys/inst.images is the recommended and default
directory. tracker.image is an installp image.
If the Tracker Agent image is stored in another directory, use that directory
instead of /usr/sys/inst.images.
If the installation is being performed from tape, enter the device name. For
example:
/dev/rmt0
6. Position the pointer on the SOFTWARE to install line, and press F4.
The SMIT Install Software panel shows the INPUT device / directory for
software set to /usr/sys/inst.images and SOFTWARE to install set to
_all_latest; the remaining entry fields are left at their default values.
7. Select the required features from the list, position the pointer beside the
package, and press F7.
The SOFTWARE to install selection list shows the package to select:
tracker                                                     ALL
  + 2.3.0.11  OPC Tracker Agent for AIX - Fix Level 11 English
COMMAND STATUS
Command: OK            stdout: yes            stderr: no
+-----------------------------------------------------------------------------+
                      Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
[MORE...56]
After the installation finished, the directory structure where you installed the
Tracker Agent should look like the information contained in the following
screen.
-rwxr-xr-x   1 opc      1460 Nov 24 1999  EQQPARM
dr-xr-x---   2 opc       512 Jun 06 16:48 bin
dr-xr-x---   2 opc       512 Jun 06 16:48 catalog
-rw-r--r--   1 system    322 Jun 06 16:48 copyright.master
drwxr-xr-x   2 system    512 Jun 06 16:49 deinstl
dr-xr-x---   2 opc       512 Jun 06 16:48 doc
drwxrwxr-x   2 opc       512 Jun 06 17:00 etc
drwxrwxrwx   2 opc      1024 Jun 20 12:01 log
dr-xr-x---   2 opc       512 Jun 06 16:48 methods
dr-xr-xr-x   4 opc       512 Jun 06 16:48 nls
-rw-r--r--   1 system     37 Jun 06 16:48 productid
drwxrwxr-x   2 opc       512 Jun 06 16:48 samples
drwxrwxrwx   3 opc       512 Jun 19 14:34 tmp
A detailed explanation for every directory can be found in the book, Tivoli
Operations Planning and Control V2R3 Tracker Agent, SH19-4484.
To set the environment variables EQQHOME and EQQINSTANCE in the Bourne
shell (sh), enter:
EQQHOME=/u/tracker
export EQQHOME
EQQINSTANCE=myconfig.file
export EQQINSTANCE
You can also specify the name of the configuration file using the -f flag on the
eqqstart script or when you start the Tracker Agent directly. The -f flag takes
precedence over the EQQINSTANCE environment variable.
PATH=$PATH:/u/tracker/bin:
PS1='$PWD '
export PS1
export EQQHOME=/u/tracker
export EQQINSTANCE=aix1.cnf
/u/tracker/bin/eqqinit -tracker
Run the script from the tracker user ID. Use the -tracker parameter to create
the links.
Keywords
The following are the keywords and their definitions:
Controller_ipaddr= Controller_IP_Address - Specifies the IP addresses
for the systems where the Controllers are running. There can be up to 10
addresses, each in the format nnn.nnn.nnn.nnn, where nnn is in the range
1-254, or a host name. It is a required keyword; there is no default value.
Separate the addresses with commas. The Tracker Agent tries the first
address in the list at startup. If it is unable to make a connection, it tries
the next address in the list, and so on. Only one Controller can be
connected at any one time.
Controller_portnr= Controller_Port_Number - Specifies the port number
to which the Tracker Agent TCP-Writer connects, or a services name.
local_ipaddr= IP_Address - Specifies the IP address for the machine
where the Tracker Agent is running. It must be in the format
nnn.nnn.nnn.nnn, where nnn is in the range 1-254, or an environment
variable such as $HOST, or a host name.
local_portnr= Port_Number - Specifies the port number to which the
Tracker Agent TCP-Reader binds a socket.
- eqqtr - This is the interval that Tracker Agent will wait before attempting
to communicate with the Controller if a TCP read attempt fails.
- eqqtw - This is the interval that Tracker Agent will wait before
attempting to communicate with Controller if a TCP write attempt fails.
- eqqdr - This is the interval that Tracker Agent will wait before
attempting to connect and revalidate the connection to the controlling
system.
- sub nn - This is the interval that the nn submittor will wait before
retrying an operation (for example, because the number of processes
had reached the limit).
Example: The Tracker Agent has been installed on AIX 4.3.3. The IP interface
of the box can be reached through the address, 146.84.32.100. The OPC
Controller on OS/390 binds sockets with port number 3112 and IP address
9.39.62.19.
Controller_ipaddr = 9.39.62.19
Controller_portnr = 3112
local_ipaddr = 146.84.32.100
local_portnr = 1967
Controller_type = opc
eqqfilespace = 200
event_logsize = 1000
num_submittors = 1
job_log = delayed,keep
ew_check_file = ewriter_check
local_codepage = ISO8859-1
sub01_workstation_id = tws1
sub01_check_file = tws1.chk
Configure the log and temporary directories on a local file system (or with a symbolic link to a local file system)
to improve log performance. If you run several instances of the Tracker Agent,
and they share the same directory, use variables in the configuration
parameter file to ensure that they do not use the same checkpoint and log
files. If the Tracker Agent is running from an NFS-mounted file system, it is
recommended that the log and temporary directories be configured on the
local file system. The eqqinit command (see Section 9.19, The Tracker
Agent utilities on page 304) initializes a directory on the local machine:
eqqinit -v
This command must be run as root. This local file system must have write
privileges for everyone including a root user who is logged in across the
network. The recommended name of the local directory is /var/tracker. You
might need administrator privileges to create the /var directory (if it does not
exist) and to create the links.
The /tmp directory is not suitable because this file system is frequently
cleaned when booting the system, and this would cause the Tracker Agent to
be unable to re-create its internal status. There can be a problem if a job
writes too much output: this can fill up the allocated space. To protect the
system, use a logical volume for the tmp and log directories (where this is
supported) or set up a separate file system for them. If this fills up, the Tracker
Agent will stop submitting jobs, but the operating system will continue to
work. You can use SMIT to create a logical volume. The log directory includes
an event writer checkpoint file, a message log (eqqmsglog), a trace log
(EQQtrc.log) for each Tracker Agent instance, and a submittor checkpoint file
for each submittor instance.
To update the TCP Reader to run without root authority, enter the following
command as root:
chown tracker $EQQHOME/bin/eqqtr
The generic and LoadLeveler submit processes must also run as root if the
user ID under which submitted jobs should be started is supplied by the
Controller. If the Tracker Agent is not required to run jobs under other user
IDs, the submitters can also be updated to run with normal user authority.
To update the submittor processes to run without root authority, enter the
following commands as root:
chown tracker $EQQHOME/bin/eqqls
chown tracker $EQQHOME/bin/eqqgssub
To set these programs back to root ownership with the setuid bit (so that they
can run with root authority), enter the following commands as root:
chown root $EQQHOME/bin/eqqtr
chmod u+s $EQQHOME/bin/eqqtr
chown root $EQQHOME/bin/eqqls
chmod u+s $EQQHOME/bin/eqqls
chown root $EQQHOME/bin/eqqgssub
chmod u+s $EQQHOME/bin/eqqgssub
chown root $EQQHOME/bin/eqqgmeth
chmod u+s $EQQHOME/bin/eqqgmeth
NFS restrictions
When running the Tracker Agent on NFS-mounted directories, the user ID
running the Tracker Agent must have write access to the file system. If the
Tracker Agent is running on an NFS-mounted file system, the superuser must
have write access to the file system.
Number of processes per user
If the Tracker Agent is running under a user ID other than root, or if many jobs
are run under one user ID, the number of processes per user ID should be
increased.
The following example is given for AIX only.
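As a minimal sketch (assuming root authority on AIX; the value shown is
illustrative), the per-user process limit can be checked and raised with the
following commands or through the equivalent SMIT panel:

# Display the current maximum number of processes per user
lsattr -E -l sys0 -a maxuproc
# Raise the limit to an illustrative value of 200
chdev -l sys0 -a maxuproc=200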
Consider, for example, a script that contains the following commands:
date
touch /tmp/file
date
If the touch command fails, the return code in the shell is set. On the next
command, date, the return code is reset to 0; so, the return code from the
touch is gone and the script will return 0 (job successful). If you want to verify
each step in the script, add tests after each call in the script to verify the shell
return code:
date
(test rc) - if rc nonzero exit with rc
touch /tmp/file
(test rc) - if rc nonzero exit with rc
date
(test rc) - if rc nonzero exit with rc
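The following is a minimal sketch of such tests in ksh, using the same
commands as the example above:

#!/bin/ksh
# Exit immediately with the failing return code so that the Tracker
# Agent reports it to the Controller
date
rc=$?; if [ $rc -ne 0 ]; then exit $rc; fi
touch /tmp/file
rc=$?; if [ $rc -ne 0 ]; then exit $rc; fi
date
rc=$?; if [ $rc -ne 0 ]; then exit $rc; fi
exit 0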
The test depends on the shell used to run the job; the syntax for /bin/sh is
different than for /bin/csh.
If you want to monitor a single command directly, specify the command as a
single-line script. In the case of scripts that are only one line long, the
submittor monitors the actual command, and the return code sent to the
Controller is the return code from the execution of the command. In this case,
the command is run using a standard UNIX exec; so, you cannot use shell
syntax. Of course, testing return codes will only work if the command returns
a bad code when it fails and a zero code when it works. If you are not sure, try
the command from the UNIX command line, and echo the return code from
the shell.
If the script is more than one line, the generic submittor submits the script
and monitors the shell for a return code. This means that, in very rare cases,
the script can have run without error, but an error in the shell can result in an
error return code.
It is important to note that, if more than 256 error codes are produced from
the execution of a command or program, they are processed modulus 256.
This means that return code multiples of 256 are treated as return code zero.
For example, return code 769 (256*3 + 1) is treated as return code 0001, and
so on.
Make sure that the correct code page is set for your terminal emulator. If the
code page is incorrect, such characters as , $, and # in scripts sent from the
Tivoli OPC Controller might be mistranslated, causing jobs to not run correctly.
You can use the -f flag to specify the configuration file. This overrides the
EQQINSTANCE environment variable. You must also set the EQQHOME
environment variable to point to the home directory. The following screen
appears when starting the Tracker Agent:
/u/tracker eqqstart
Starting tracker from /u/tracker EQQINSTANCE=aix1.cnf
/usr/lpp/tracker
06/21/00 10:51:38 /**************************************************/
06/21/00 10:51:38 /* Licensed Materials - Property of IBM            */
06/21/00 10:51:38 /* 5695-007 (C) Copyright IBM Corp. 1994, 1999.    */
06/21/00 10:51:38 /* 5697-OPC (C) Copyright IBM Corp. 1997, 1999.    */
06/21/00 10:51:38 /* All rights reserved.                            */
06/21/00 10:51:38 /* US Government Users Restricted Rights           */
06/21/00 10:51:38 /*   Use, duplication or disclosure restricted     */
06/21/00 10:51:38 /*   by GSA ADP Schedule Contract with IBM Corp.   */
06/21/00 10:51:38 /**************************************************/
06/21/00 10:51:38 Starting OPC Tracker Agent
06/21/00 10:51:38 FIX LEVEL 11, 1999/11/10
06/21/00 10:51:38 Hostname is itso6 AIX 4 3 000681704C00
06/21/00 10:51:38 ****************************************************
06/21/00 10:51:38 ***         TRACKER DAEMON STARTING UP           ***
06/21/00 10:51:38 ***         STARTED BY tracker                   ***
06/21/00 10:51:38 ****************************************************
06/21/00 10:51:38 REAL      UID : 7
06/21/00 10:51:38 EFFECTIVE UID : 7
The method for automatically starting the Tracker Agent depends on your
operating system. Edit the /etc/rc.tcpip file. This file is processed at startup to
initiate all TCP/IP-related processes. To add the Tracker Agent to the
/etc/rc.tcpip file, perform the following steps:
1. Log in as root.
2. Edit /etc/rc.tcpip, using an editor, such as vi.
3. At the bottom of the file add the following section:
EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME
/u/tracker/bin/eqqstart
/u/tracker eqqstop
Process KEY : 58871
EQQPARM configuration variable is /u/tracker/etc/aix1.cnf.
Eqqmon option is 5.
Sent SIGTERM to daemon process.
/u/tracker
- Owner tracker
- Group opc
- Size 6048
Remove the segment using the ipcrm -m <identifier> command.
Perform the first step again to ensure that the segment is no longer listed.
/u/tracker eqqclean
Cleaning tracker from /u/tracker.
./EQQ5ab8.ENV
./EQQ5ab8.PGM
./EQQ5ab8.TRC
./eqqmsglog
./EQQtrc.log
./EQQ56dc.ENV
./EQQ56dc.PGM
./EQQ56dc.TRC
./CLISCR___000614_151913_00000003.OUT
./CLISCR___000614_151913_00000003.TMP
./LS_______000619_114505_0000000A.OUT
./LS_______000619_114714_0000000B.OUT
./CLISCR___000619_142652_0000000C.OUT
./CLISCR___000619_143418_0000000D.OUT
./CLISCR___000619_143418_0000000D.TMP
/u/tracker
TWS Agents provide fault tolerance and a rich set of scheduling functionality
in the distributed arena. Therefore, new development is likely to further move
in that direction.
On the other hand, master-slave agents like OPC tracker agents have
inherent design constraints that make them less flexible, and less likely to be
developed further.
So, today, if you have an OPC environment and are considering which agent
to use, the following guidelines could be useful to you:
The following is the rationale for using TWS Agents:
You need fault tolerance at the Agent and Domain level.
You do not need to use applications with some jobs on the mainframe and
some jobs on the distributed site right away.
Note
Remember that, today, it is possible to schedule and control TWS and OPC
jobs from the same console (JSC). You can also trigger the execution of jobs
from each scheduling engine on the other. This is shown in Chapter 8,
Enterprise scheduling scenarios on page 203. However, you need to have
two different application definitions (one on the mainframe and one on the
distributed environment) and link them.
You need to support platforms that are not currently supported by OPC
Tracker Agents but are supported by TWS Agents.
The following is the rationale for staying with the OPC Trackers:
You do not need fault tolerance.
You do not need any platforms other than the ones currently supported by
OPC Tracker Agent, and you will not need any others in the near future.
You want to mix distributed and mainframe operations (jobs) on the same
OPC application right now in the easiest way possible.
10.1 OPC
OPC is a mature stable scheduling platform used worldwide by thousands of
companies. One would expect to find them making similar use of OPC, but we
have observed wide variations in practices between companies. This section
discusses some considerations in making the best use of OPC.
The JSC uses the question mark symbol (?) to replace one alpha character.
The percent sign (%) replaces one numeric character, and an asterisk (*)
replaces zero or more alphanumeric characters. In the ISPF dialog panels,
the percent sign (%) replaces one alphanumeric character. Whatever naming
standard you choose, it should be consistent and simple to understand.
10.1.3.1 Internal documentation
OPC has many description fields that can be used for documentation within
the application, run cycle, and so on. Full use should be made of these fields,
especially for items, such as special resources and JCL variables. The name
of the responsible person or department will be useful; so, in the future, you
can check whether the item is still needed.
Making an operation time dependent means OPC will not submit the job before the input arrival time, which is, by
default, the same as the application input arrival time. You can define for an
operation a different, later, time, but you should only do this for
time-dependent operations. The operation input arrival time will be used
when making external dependency links to this operation.
When the calendar has been updated, the operation status is manually
changed to complete, and the application is not seen for another year.
Even in such a small job stream, the dependencies are getting complicated.
By inserting a non-reporting workstation, as shown in Figure 200 on page
316, you can simplify the dependencies.
Some data centers always use a non-reporting workstation as the very first
operation from which all other operations in the application are dependent,
and the very last operation is a non-reporting workstation. These are used to
provide the only connection points for external dependency links and are
intended to simplify coding inter-application dependencies.
The operator discovers that the job requires a special resource that is set as
unavailable and decides that it is no longer a reason to hold back this job.
Because it is fast and easy to use, the EX command is issued. A right-mouse
click against the job on the plan view gives the window shown in Figure 203
on page 320.
OPC submits the job, but what was not apparent is that the job was also
waiting on parallel servers. OPC only shows one reason why the operation
has not been submitted.
It is best to let OPC decide when best to submit jobs, and to change resource
settings in the plan when necessary. Use EX only as the last resort.
10.1.12 Education
To realize the most effective use of OPC, you should book your schedulers,
operation staff, and others that may use OPC and the JSC on training
courses. IBM/Tivoli regularly schedule public OPC courses worldwide. An
alternative is to request a private course for your company, to be run at the
nearest IBM/Tivoli location or on your premises. It is beneficial to send one or
two key personnel on a public class when you first start the project to move to
OPC and arrange to train the rest of the staff when you install OPC. Set up a
separate OPC system to be used for in-house training and testing so staff can
practice and familiarize themselves with using the product.
Information about IBM and Tivoli classes can be found on the Web at:
www.ibm.com/services/learning
www.tivoli.com
Courses are run worldwide and you can find the training courses nearest you
by selecting your country. Information about an OPC course on the UK
Learning Services Web-site is shown in Figure 204 on page 321.
The main mailman process (also known as the blank ("") serverID) is critical to
TWS processing on the Master. If it is also serving FTAs or Standard Agents,
the Master is more vulnerable to outages due to network or machine hangs out
on the TWS network. It is important to note that serverIDs should only be
assigned to FTAs, Standard Agents, or Domain Managers. They should never
be assigned to Extended Agents.
10.2.1.1 Small - Less than eight agents with one Master
Two mailman servers can handle 6 - 10 agents, all in the same domain. Each
agent should be attached to a mailman process other than the Master
mailman process. To enable this, the serverID field should contain a character
(A-Z) or a number (0-9). In cases where agents are running large numbers of
short jobs, more mailman servers can be added to distribute the resulting
message load.
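The following is a minimal sketch of such a workstation definition in legacy
composer syntax (the workstation name, node, and domain are illustrative);
the server A line attaches the agent to mailman server A on its host:

# Create the definition in a temporary file and add it with composer
cat > /tmp/fta001.txt <<'EOF'
cpuname FTA001
  description "Agent served by mailman server A"
  os UNIX
  node fta001.example.com
  domain MASTERDM
  for maestro
    type fta
    autolink on
    server A
end
EOF
composer "add /tmp/fta001.txt"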
10.2.1.2 Medium - 8 to 100 agents with one Master
Each group of eight agents in the TWS network should have a dedicated
mailman server. All agents are still in a single domain.
10.2.1.3 Large - More than 100 agents with one Master
Domain Managers should be used with each hosting 60-100 agents. Each
Domain Manager should have a dedicated mailman server on the Master.
Multiple mailman servers must be running on each Domain Manager. There is
one server ID for every eight agents hosted by the Domain Manager.
10.2.1.4 serverID
The Tivoli Workload Scheduler documentation recommends that each
mailman server process should serve eight agents. This is only intended as a
guide, and the tuning of this number can have a significant influence over the
performance of the TWS network. Values in the range of 4 - 12 are
reasonable, and which one is right for a particular network depends on many
factors including hardware size, network performance and design, and TWS
workload.
As a TWS network grows, this ratio should be tuned to minimize the
initialization time of the network. Initialization time is defined as the time from
the beginning of the Jnextday job until all functional agents are fully-linked.
Maestro V6.0 is not appropriate for any TWS production networks, and TWS
customers should migrate to V6.1 or V7.0 as soon as possible. TWS V6.0 has
problems that do not exist in later releases and no further patches will be
released for it.
Customers have not commonly reported this failure on Windows NT.
This failure produces an error message similar to the following:
MAILMAN:08:44/+ Too many open files on events file line = 1400. [1106.2]
The number of file descriptors that are pre-allocated for this purpose is a
tunable operating system parameter. A TWS administrator can use the UNIX
utility lsof to determine the approximate number of files that TWS is using.
lsof binaries for a wide variety of UNIX operating systems can be obtained at
the following URL:
ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/binaries/
Note
Before running any of these commands, make sure that you have a verified
and available backup of at least the TWS file system(s).
More information about rmstdlist, composer build, Jnextday, and the
~maestro/schedlog and ~maestro/stdlist directories can be found in the Tivoli
Workload Scheduler 7.0 Reference Guide, GC32-0424.
10.2.4.5 Inodes
TWS can consume large numbers of inodes when storing large numbers of
job output on UNIX systems in ~twsuser/stdlist. Inodes are not an issue on
Microsoft Windows operating systems. On an FTA, which runs large numbers
of jobs, inode consumption can grow quickly. If TWS aborts due to a lack of
inodes, production data may be lost. Inode usage can be checked using the
UNIX df command. Consult your operating system documentation for
information on using this command.
Where nnnnnnn is the new size of the message file. This change will remain
until the file is deleted and re-created. To change the default creation size for
.msg files, add the following line to ~maestro/StartUp and ~maestro/.profile:
export EVSIZE=nnnnnnn
Where, again, nnnnnnn is the size at which the .msg files are created after
being deleted.
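As a sketch (assuming the evtsize utility shipped with TWS; the file name and
size are illustrative), an existing message file can be enlarged like this:

# Resize Mailbox.msg to approximately 20 MB
# Run this while the TWS processes on the workstation are stopped
evtsize Mailbox.msg 20000000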
BATCHMAN:05:17/* **************************************************************
BATCHMAN:05:17/* Too many jobs are scheduled for BATCHMAN to handle [2201.12]
BATCHMAN:05:17/* **************************************************************
son bin/mailman -parm value
10.2.7 Timeouts
False timeouts can occur in large TWS networks when mailman disconnects
from a remote writer process because no response has been received from
the remote node over a time interval. The agent may have responded, but the
message has become caught in traffic in the .msg file and does not arrive
before the timeout expires.
The MAESTRO log in ~twsuser/stdlist/(date) would contain messages, such
as the following:
MAILMAN:06:15/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MAILMAN:06:15/+ WARNING: No incoming from <cpu> - disconnecting. [2073.25]
MAILMAN:06:15/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
these processes running concurrently on the Master can affect the overall
performance of the TWS network.
For Tivoli Workload Scheduler V6.1 implementations, TWS Remote Console
V6.1 fully patched and running on Windows remote workstations provides a
much more efficient way to give users access to TWS on the Master. Remote
Console does not provide composer functionality, and, in cases where it is
needed, gcomposer should be used. gcomposer does not contain an
auto-refresh cycle and, therefore, does not consume the amounts of CPU and
network bandwidth that gconman does. Be sure to use filters to reduce the
amount of unnecessary data that is transmitted across the connection.
For Tivoli Workload Scheduler V7.0 implementations, the Job Scheduling
Console can be used to reduce the amount of resources on the Master that
are consumed by console monitoring activity. Limit views to only what the
operator or administrator needs or wants to see in the JSC. Views of
excessively large numbers of objects can cause delays in GUI operation.
10.2.11 Monitoring
Automated monitoring is an essential part of a successful TWS
implementation. Tivoli Distributed Monitoring and Tivoli Enterprise Console
(TEC) are good examples of products that can be linked to TWS to monitor
status and events from TWS. Distributed Monitoring can be used to monitor
critical systems and resources that TWS depends on to complete its workload.
10.2.13 Deployment
Tivoli products, such as the Tivoli Management Agent and Tivoli Software
Distribution, can assist in the deployment of large numbers of TWS agents.
Such a solution is built on the unattended install mode provided by InstallShield.
versions:
Table 18. Software requirements
Software requirements        Version
TWS
Operating system             MVS/ESA OS/390
TCP/IP version
tar parameter   Description
cd_folder       The pathname of your CD drive or folder
Platform        HPUX (Hewlett-Packard), AIX (IBM), MIPS (MIPS-based),
                INTEL (Intel-based), SOLARIS (Sun Solaris), SUNOS (SunOS),
                DGUX (Data General UNIX), DECUX (Digital UNIX)
where: xxxxxxx is the volume serial number where the load library is located.
or:
SETPROG APF,ADD,DSN=YOURID.UNISON.LOADLIB,VOL=SMS
/* LIB: CPAC.PARMLIB(IKJTSO00)                                       */
/* DOC: THIS MEMBER IS USED AT IPL TIME TO DEFINE THE AUTHORIZED     */
/*      COMMAND LIST, THE AUTHORIZED PROGRAM LIST, THE NOT           */
/*      BACKGROUND COMMAND LIST, THE AUTHORIZED BY THE TSO SERVICE   */
/*      FACILITY LIST, AND TO CREATE THE DEFAULTS THE SEND COMMAND   */
/*      WILL USE.                                                    */
/*                                                                   */
AUTHCMD NAMES(                     /* AUTHORIZED COMMANDS            */ +
        TSITCP00                   /* FRANKE                         */ +
        BINDDATA BDATA             /* DMSMS COMMANDS                 */ +
        LISTDATA LDATA             /*                                */ +
Modify the following JCL for TSDSPACE to suit your installation. It is
recommended that this job be a started task, rather than a submitted job
stream. It is important that the job not be canceled.
//TSDSPACE PROC MEMBER=TSPARMS
//IEFPROC  EXEC PGM=TSITCP02,
//              REGION=4M,
//              TIME=NOLIMIT
//STEPLIB  DD   DSN=SFRA4.UNISON.LOADLIB,DISP=SHR
//SYSTCPD  DD   DISP=SHR,DSN=TCPIP.IV4.TCPPARMS(TCPDATA)
//SYSTSIN  DD   DSN=SFRA4.UNISON.CNTL(&MEMBER),DISP=SHR
//SYSTSPRT DD   SYSOUT=*
The SYSTSIN allocation points to the parameter library for both started tasks.
See Table 20 for a description of OPC-related systsin parameters.
Table 20. OPC-related SYSTSIN parameters
Variable                    Description
Debug(no)
MAXWAIT
MCSSTORAGE
OPCMSGCLASS(*)
OPCSUBSYSTEM(OPCS)
PEERADDRESS(0 0 0 0)
PORT(5000)
PUTLINE(YES)
SUBSYS(UNIS)
SVCDUMP(NO)
TCPNAME(TCPIP)
TCPIPSTACK(IBM)
TERMINATOR(X25)
WTP(NO)
3. Select the relevant operating system. Use Other for the MVS and OS/390
agent.
4. Enter the Domain or select it from the Master.
5. Enter the Time Zone and Description (optional).
6. In the Options area, select the Extended Agent. The screen is defaulted
with Autolink checked.
7. Enter access method MVSOPC, MVSCA7, or MVSJES. Enter the host
name. Click on OK and then File->Save. Close the window.
Workstation definitions are described in Table 22 on page 340.
Table 22. Workstation definitions
Field                    Description
Name                     The TWS workstation name of the Extended Agent for MVS.
Node                     The node name or IP address of the MVS system. This can be the
                         same for more than one Extended Agent for MVS.
TCP Port                 (Appears in the JS Console only.) The TCP address (port number)
                         of the MVS gateway on the MVS system. Enter the same value as
                         the PORT parameter described in the SYSTSIN variable table.
Operating System         Select Other.
Domain                   Use masterdm.
Time Zone
Description
Workstation type
Resolve dependencies     Not used
Full status              Not used
Autolink                 Not used (ignored)
Server                   Not used
Host
You need to run the Jnextday script to activate the new workstation. The
workstation must then be in active status.
For Windows NT create the following files (assuming TWS is installed in the
path, C:\WIN32APP\maestro):
C:\WIN32APP\maestro\METHODS\MVSOPC.OPTS
Table 23 on page 342 shows the parameters for the method options file:
Table 23. Option files
Option                  Description
LJuser=name
Cfuser=name
Gsuser=name
Checkinterval=min
Blocktime=min
Retrycount=count
11.7 Defining internetwork dependencies for OPC with the new JSC
TWS job definitions are required for each MVS and OS/390 job you intend to
schedule and launch with TWS. They are defined similarly to other TWS jobs
and include job name, user name, special script name options, and optional
recovery options. There are two possibilities to define internetwork
dependencies for OPC methods: One launches OPC jobstreams and waits for
its completions, and the other monitors only a predefined OPC jobstream until
completion. Both complete events can be used to start successors at the
TWS side.
2. Select Extended Agent Task for the task type as shown in Figure 209 on
page 344.
3. Define the OPC parameter in the task field as shown in Figure 210. At
least, the application name is required.
where:
appl is the name of the OPC application to be inserted into the current plan.
IA is the input arrival date and time in the form yymmddhhmm.
IATIME is the input arrival time in the form hhmm.
DEADLINE is the deadline arrival date and time in the form yymmddhhmm.
DEADLINETIME is the deadline arrival time in the form hhmm.
PRIORITY is the priority (1-9) at which to run the application.
CPDEPR is the current plan dependency resolution selection:
- Y - Add all successor and predecessor dependencies
- N - Do not add any dependencies (default)
- P - Add predecessor dependencies
- S - Add successor dependencies
Note
All TWS Jobs that launch OPC applications must run on the extended
agent workstation.
3. Define the OPC parameter in the dependency field as shown in Figure 213
on page 347.
Note
where:
application is the name of the OPC application in the current plan.
IA is the input arrival date and time.
IATIME is the input arrival time.
JOBNAME is the MVS job name.
OPNO is the operation number (1-99). If included, the job is considered
completed when it reaches this operation number.
You can only add a link in the direction in which the internetwork dependency
becomes a predecessor as shown in Figure 214 on page 348.
5. Select Internetwork and then the green cross in the upper right corner as
shown in Figure 216 on page 349.
Parameters have been added to, or changed in, the JOBOPTS statement to
handle the new Data Store options:
JOBLOGRETRIEVAL - A new value, DELAYEDST, has been added to this
keyword for specifying that the job log is to be retrieved by means of the
OPC Data Store.
DSTCLASS - A new parameter to define the reserved held class that is to
be used by the OPC Data Store associated with this tracker.
DSTFILTER - A new parameter to specify whether the job-completion
checker (JCC) requeues, to the reserved Data Store classes, only the
sysouts belonging to those classes.
Parameters have been added to, or changed in, the OPCOPTS statement to
be able to handle the new catalog management functions:
DSTTASK - Specifies whether or not the OPC Data Store is to be used.
JCCTASK - A new DST value has been added to specify that the JCC
function is not needed but that the Data Store is used.
A parameter has been added to the OPCOPTS and the SERVOPTS
statements:
ARM activates automatic restart (with the Automatic Restart Manager) of a
failed OPC component.
A parameter has been added to the OPCOPTS statement for the Workload
Manager (WLM) support:
WLM defines the WLM options. That is, it defines the generic profile for a
critical job. The profile contains the WLM service class and policy.
Appendix B.
Command line    Job Scheduling Console    Definition
Schedule (1)    Job Stream                A unit of work consisting of a set of jobs
                                          and their dependencies.
Schedule (2)                              The occurrence of a job stream in the plan.
Job (1)         Job                       An executable file, task, or command, and
                                          its attributes. It is scheduled to run as
                                          part of a job stream.
Job (2)         Job Instance              The occurrence of a job in the plan.
CPU             Workstation               A logical processor, typically a computer,
                                          that runs jobs. Types of workstations include
                                          Domain Managers, Backup Domain Managers,
                                          Fault-Tolerant Agents, Standard Agents, and
                                          Extended Agents.
Command line            Job Scheduling Console    Definition
Mozart Database Files   Database                  A collection of scheduling objects
                                                  including jobs, job streams, workstations,
                                                  workstation classes, prompts, parameters,
                                                  users, domains, calendars, and resources.
                                                  These files were modified by gcomposer.
Symphony File           Plan                      The scheduled activity for a period,
                                                  typically 24 hours. The plan is continuously
                                                  updated to show the current status of all
                                                  TWS activities. This file was modified by
                                                  gconman.
AT Time                 Start Time
UNTIL Time              Deadline Time
ON and EXCEPT Dates     Run Cycles
From the left panel, you select a list icon and click the Load List button (a
green arrow) to display the list. The right side of the window displays the list
results. You can also select to detach the list into a separate window using
the Detach list command available in the pop up menu of commands on a list
icon.
When you first start TWS, there are a number of default lists for you to use.
You can modify these lists or create your own groups and lists.
From the Job Scheduling Console, you can view both the configuration of
objects in the database and the status of objects in the plan.
Database lists
A database list displays objects that have been defined in the TWS database.
These can be jobs, job streams, workstations, workstation classes,
parameters, prompts, resources, domains, and users. In legacy Maestro,
these correspond to objects in the mozart database files that are modified
using the composer.
Plan lists
A plan list displays objects that have been scheduled and are included in
today's plan file. In legacy Maestro, these correspond to objects in the
Symphony file that are modified using conman.
maestro_plan
maestro_database
On UNIX, you can check whether the Connectors are running by executing the
following command at a shell prompt:
ps -ef | grep "maestro_"
The process names listed above are displayed if the Connectors are active.
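Purely as an illustration, the output might look similar to the following when
both Connector processes are active; the user name, process IDs, times, and
installation path are hypothetical:

   tivoli  20514      1   0 09:12:44      -  0:03 /usr/local/Tivoli/bin/aix4-r1/Maestro/bin/maestro_plan
   tivoli  20776      1   0 09:12:45      -  0:02 /usr/local/Tivoli/bin/aix4-r1/Maestro/bin/maestro_database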
Time zones are disabled by default at the installation of the product. If the
timezone enable entry is missing from the globalopts file, time zones are
disabled.
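A minimal sketch of the globalopts entry that turns time zones on; the
option = value layout and the yes value are assumptions to verify against your
level of the product:

   # TWShome/mozart/globalopts (excerpt)
   timezone enable = yes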
B.8 Auditing
An auditing option has been implemented to track changes to the database
and the plan:
For the database, all user modifications are logged. However, the delta of
the modifications, or the before image and after image, will not be logged.
If an object is opened and saved, the action will be logged even if no
modification has been done.
For the plan, all user modifications to the plan are logged. Actions are
logged whether they are successful or not.
The auditing logs are created in the following directories:
TWShome/audit/plan
TWShome/audit/database
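Auditing is typically switched on through entries in the globalopts file as
well; the option names and values below are commonly documented for TWS
auditing but should be treated as assumptions and verified for your release:

   # TWShome/mozart/globalopts (excerpt)
   # 1 = log user modifications, 0 = auditing off
   plan audit level = 1
   database audit level = 1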
Audit files are logged to a flat text file on each machine in the TWS network.
This minimizes the risk of audit failure due to network issues and keeps the
logging mechanism simple. The log format is generally the same for both the
plan and the database. Each log consists of a header portion, which is the
same for all records, an action ID, and a section of data that varies
according to the action type. All data is kept in clear text and formatted to
be readable and editable from a text editor, such as vi or Notepad.
Note
For modify commands, two entries are made in the log for resources,
calendars, parameters, and prompts. The modify command appears in the log as
a delete command followed by an add command.
GMT Time - This field displays the GMT time the action was performed.
The format is hhmmss where hh is the hour, mm is the minutes, and ss is
the seconds.
Local Date - This field displays the local date the action was performed.
The local date is defined by the time zone option of the workstation. The
format is yyyymmdd where yyyy is the year, mm is the month, and dd is
the day.
Local Time - This field displays the local time the action was performed.
The local time is defined by the time zone option of the workstation. The
format is hhmmss where hh is the hour, mm is the minutes, and ss is the
seconds.
Object Type - This field displays the type of the object that was affected
by an action. The object type will be one of the following:
- DATABASE - Database definition
- DBWKSTN - Database workstation definition
- DBWKCLS - Database workstation class definition
- DBDOMAIN - Database domain definition
- DBUSER - Database user definition
- DBJBSTRM - Database job stream definition
- DBJOB - Database job definition
- DBCAL - Database calendar definition
- DBPROMPT - Database prompt definition
- DBPARM - Database parameter definition
- DBRES - Database resource definition
- DBSEC - Database security
- PLAN - Plan
- PLWKSTN - Plan workstation
- PLDOMAIN - Plan domain
- PLJBSTRM - Plan job stream
- PLJOB - Plan job
- PLPROMPT - Plan prompt
- PLRES - Plan resource
- PLFILE - Plan file
Action Type - This field displays what action was taken against the object.
The appropriate values for this field are dependent on the action being
taken. For the database, the Action Type can be ADD, DELETE, MODIFY,
EXPAND, or INSTALL. TWS will record ADD, DELETE, and MODIFY
actions for workstation, workstation classes, domains, users, jobs, job
streams, calendars, prompts, resources, and parameters in the database.
The Action Type field also records the installation of a new security file.
When makesec is run, TWS will record it as an INSTALL action for a
Security definition object. When dbexpand is run, it will be recorded as an
EXPAND action for the DATABASE object. LIST and DISPLAY actions for
objects are not logged. For fileaid, TWS will only log the commands that
result in the opening of a file. For parameters, the command line with
arguments is logged.
Workstation Name - This field displays the TWS workstation from which
the user is performing the action.
User ID - This field displays the logon user who performed the particular
action. On Win32 platforms, it will be the fully-qualified domain name,
domain\user.
Framework User - This field displays the Tivoli Framework-recognized
user ID. This is the login ID of the Job Scheduling Console user.
Object Name - This field displays the fully-qualified name of the object.
The format of this field will depend on the object type as shown here:
- DATABASE - N/A
- DBWKSTN - workstation
- DBWKCLS - workstation_class
- DBDOMAIN - domain
- DBUSER - [workstation#]user
- DBJBSTRM - workstation#jobstream
- DBJOB - workstation#job
- DBCAL - calendar
- DBPROMPT - prompt
- DBPARM - workstation#parameter
- DBRES - workstation#resource
- DBSEC - N/A
- PLAN - N/A
- PLWKSTN - workstation
- PLDOMAIN - domain
- PLJBSTRM - workstation#jobstream_instance
- PLJOB - workstation#jobstream_instance.job
- PLPROMPT - [workstation#]prompt
- PLRES - workstation#resource
- PLFILE - workstation#path(qualifier)
Action Dependent Data - This field displays the action-specific data
fields. The format of this data is dependent on the Action Type field.
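To make the field descriptions concrete, here is a purely hypothetical plan
audit entry, assembled in the order the fields are described above and using a
"|" separator between fields; the record layout, times, workstation name, user
IDs, and object name are all illustrative assumptions, not an excerpt from a
real log:

   151243|20001107|101243|PLJOB|MODIFY|JAMUNA|RIVERS\pyasa||JAMUNA#TESTJS.TESTJOB|<action-dependent data>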
Job Scheduling Console / OPC / TWS Command Line

Canceled / Status: Delete / -
   The job or job stream has been deleted from the plan.

Database / Databases / Mozart Database Files
   JSC and TWS: A collection of scheduling objects including jobs, job streams, workstations, workstation classes, prompts, parameters, users, domains, calendars, and resources. These files were modified by gcomposer.
   OPC: A definition of the data center, including application and job descriptions, periods, workstations, calendars, and resources.

Deadline Time / Time by which an application or operation should be completed / UNTIL Time
   The latest time a job or job stream will begin execution.

Engine / Controller / -
   The OPC component that runs on the controlling system and contains the OPC tasks that manage the OPC plans and databases.

Exclusionary run cycle / - / -
   Specifies when a job stream must not be run.

External Job / External Dependency / -

Job / Operation / Job (1)
   JSC and TWS: An executable file, task, or command, and its attributes. It is scheduled to run as part of a job stream.
   OPC: A task performed at a workstation.

Job identifier / Operation number / -
   A number used to uniquely identify the jobs in the job stream.

Job Instance / Operation in the current plan (operation occurrence) / Job (2)
   The occurrence of a job in the plan.

Job Stream / Application Description / Schedule (1)
   A unit of work consisting of a set of jobs and their dependencies.

Job Stream Instance / Occurrence / Schedule (2)
   The occurrence of a job stream in the plan.

Job stream template / Application Group / -
   A grouping of job streams that provides scheduling information, such as a calendar, a free-day rule, and run cycles, that can be inherited by all the job streams that have been created using the template.

Logical resource / Special resource / -
   A logical representation of a resource, such as tape drives, communication lines, databases, or printers, that is needed to run a job.

Offset-based run cycle / - / -
   Includes a user-defined period and an offset, such as the 3rd day in a 90-day cycle.

Plan / Current Plan / Symphony File
   The scheduled activity for a period, typically 24 hours. The plan is continuously updated to show the current status of all TWS activities. This file was modified by gconman.

Rule-based run cycle / - / -
   Includes a rule, such as the first Friday of March or the second workday of the week.

Run Cycles / Run-cycle / ON and EXCEPT Dates

Running / Status: Started / -
   The job has started (jobs only).

Start Time / - / AT Time
   The scheduled start time.

Successful / Status: Complete / -
   The job or job stream has successfully completed.

Valid from / In-effect date / -

Valid to / Out-of-effect date / -

Workstation / Workstation / CPU
   JSC and TWS: A logical processor, typically a computer, that runs jobs. Types of workstations include Domain Managers, Backup Domain Managers, Fault-Tolerant Agents, Standard Agents, and Extended Agents.
   OPC: A logical place where OPC-controlled work runs. Typically, a computer that runs jobs, but it can also be a printer or a representation of a manual task or a WTO. Types of workstations are Computer, Printer, and General.
9. Review and document modifications that have been made to the TWS
configuration files: localopts, globalopts, and Netconf. Save copies of
these files as they will be replaced with defaults during upgrade.
10.Review the output of the Jnextday job from the production master. If there
are warning messages about the workload, identify the cause and fix the
offending object definitions.
11.Examine the production workload and look for occurrences of the "
character in job definitions. A script called quoter, which automates this
check, can be obtained from Tivoli Customer Support Level 2. Although this
character did not cause a problem in TWS 5.2, its use in a TWS 6.1 job
definition will cause a failure. For more information about this issue,
contact Tivoli Customer Support Level 2.
12.Export the production workload to text files using the composer create
commands, and copy these files to a safe place. For security reasons,
passwords contained in TWS User definitions are not exported in this
process. A list of active accounts, including passwords, should be
compiled in case they are needed later.
13.Import these created files into the TWS 6.1 test environment using the
composer add and composer replace commands. Correct any errors or
warnings that occur during the import. Run the schedulr and compiler
commands to verify that the workload is fully compatible with TWS 6.1.
Correct any errors or warnings produced. You can find more information
about these commands in the Maestro UNIX User's Guide V6.0,
GC31-5136.
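The following sketch shows what the export in step 12 and the import and
verification in step 13 might look like; the file names are placeholders, and
the exact composer create selection syntax varies by object type, so confirm
it in the Maestro documentation before use:

   # On the production master (step 12): export the workload to text files
   composer "create scheds.txt from sched=@"   # repeat for jobs, cpus, and the other object types
   # On the TWS 6.1 test master (step 13): import the files, then run schedulr and compiler to verify
   composer add scheds.txt
   composer replace scheds.txt                 # use replace when re-importing objects that already exist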
14.The upgrade of the production system will require that Jnextday be run
after the TWS 6.1 code is installed; therefore, the upgrade must be
performed at the production day rollover. On the day of the upgrade,
cancel the Jnextday job.
15.Verify that a good backup of the TWS file system exists on the production
master and that it is accessible to the TWS administrator performing the
upgrade.
16.Ensure that the file system on which TWS is installed in the production
environment has plenty of free space (150 to 200 MB).
17.Before starting the upgrade, set the CPU limit to 0 for the entire
production TWS network using the conman limit CPU command. Raise the fence
on all CPUs to 101 (GO) using the conman fence command. You can find more
information about these commands in Chapter 9 of the Maestro UNIX
User's Guide V6.0, GC31-5136.
18.Stop all agents using the conman stop @;noask command, which is run on
the master. Unlink all agents using the conman unlink @;noask command,
which is run on the TWS Master.
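The quiesce described in steps 17 and 18 might be driven from the master along
these lines; the stop and unlink commands are the ones quoted in step 18,
while the limit and fence argument forms are assumptions to verify in Chapter
9 of the Maestro UNIX User's Guide:

   conman "limit cpu=@;0"      # assumed syntax: set the CPU limit to 0 on every CPU
   conman "fence @;101"        # assumed syntax: raise the job fence to 101 (GO) on every CPU
   conman "stop @;noask"       # stop all agents (run on the master)
   conman "unlink @;noask"     # unlink all agents (run on the master)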
3. Copy and expand the TWS 6.1 install image, MAESTRO.TAR, into the TWS
home directory on the production master. See the Maestro NT/UNIX
Installation Guide V6.0, SC31-5135, for instructions.
4. Run the /bin/sh customize -new command. See the Maestro NT/UNIX
Installation Guide V6.0, SC31-5135, to determine which options to specify
for this command.
5. Apply all general availability (GA) patches for TWS 6.1 before starting
TWS in this environment. TWS patches can be found at the following URL:
ftp://ftp.tivoli.com/support/patches.
6. Import the previously saved text files into the new TWS 6.1 database using
the composer add and composer replace commands. Remember that, for
security reasons, the passwords in TWS User definitions are not exported
into these files. These passwords need to be updated manually.
Check the standard list files, including
~maestro/stdlist/YYYY.MM.DD/NETMAN,
for errors or warnings and take corrective action if possible. Use the Tivoli
Customer Support Web site at http://www.tivoli.com/support to search for
solutions. Contact Tivoli Customer Support via electronic support or
telephone for any issues that require their assistance.
It is highly recommended that the composer build commands and the
rmstdlist command be run on a regular basis in any TWS environment. The
builds should be run on the TWS Master during off-shift hours every two to
four weeks. The rmstdlist command should be run on every FTA and
S-AGENT, and on the master, every 7 to 21 days. You can find more
information about these commands in the Maestro UNIX User's Guide V6.0,
GC31-5136.
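As one possible way to automate the stdlist cleanup, the TWS user's crontab
could run rmstdlist on a weekly cycle; the installation path, the 14-day
retention, and the assumption that rmstdlist takes an age in days are
illustrative, so confirm them against the Maestro UNIX User's Guide:

   # crontab entry for the TWS user (illustrative)
   # Every Sunday at 06:00, remove standard list files older than 14 days
   0 6 * * 0 /usr/lib/maestro/bin/rmstdlist 14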
This approach does not take any longer and is no more complex than an upgrade,
and it ensures that the new TWS 6.1 FTA does not inherit any problems from the
5.x FTA that existed previously. A detailed procedure for removing TWS from a
Windows NT system is provided in the following section.
4. Use the NT Task Manager to verify that all TWS tasks and services have
terminated.
5. Use the Control Panel->Add/Remove Programs dialog to uninstall TWS
from the system.
6. Start a registry editor and remove all keys containing the string unison or
the string netman.
7. Delete the TWS user's home directory and the Unison directory. By default
these are c:\win32app\maestro and c:\win32app\unison.
8. Reboot the machine.
The Nordic GSE OPC site is a good OPC forum for information, but it is a
members-only area. The Nordic GSE OPC site can be accessed at the following
URL:
http://www.gse.nordic.org/
Record Code   Explanation
Ad
Aj
As
BI
Cf
Cj
Co
Cs
Da
DA
Dd
DD
Dj
Dr            Dependency, release
RD            Dependency, release
Es
Ej
Fy
Fn
Go
Gs
Hi
In            Initialization record
Jc
Jf
Jr
Jt
Kj
Lj
Lk            LINK_CPU
Lm            Limit command
Mj
My
Mr            Modify resource
Ms
Nj            New Job
Qt
Rd
Re
Rf
Rj
Rp
Rr            Release resource
Rs
Sc
Sr
Ss            Schedule Done
St            Schedule Stop
Su
To            Tellop message
Uk
Us
Ua            User Action
Parallel Sysplex
RS/6000
SP
System/390
VTAM
Collection Kit Number

IBM System/390 Redbooks Collection                                   SK2T-2177
IBM Networking Redbooks Collection                                   SK2T-6022
IBM Transaction Processing and Data Management Redbooks Collection   SK2T-8038
IBM Lotus Redbooks Collection                                        SK2T-8039
Tivoli Redbooks Collection                                           SK2T-8044
IBM AS/400 Redbooks Collection                                       SK2T-2849
IBM Netfinity Hardware and Software Redbooks Collection              SK2T-8046
IBM RS/6000 Redbooks Collection                                      SK2T-8043
IBM Application Development Redbooks Collection                      SK2T-8037
IBM Enterprise Storage and Systems Management Solutions              SK3T-3694
ftp://ftp.tivoli.com/support/patches
http://www.asapuser.com/index.cfm
The ASAP user group focuses on TWS and OPC. You need a membership to
access this forum, which can be reached at this Web address.
http://www.egroups.com/group/maestro
The EGROUPS Maestro-L list is a very valuable resource for answers and
other information about TWS, and it can be accessed from this Web address.
http://www.gse.nordic.org/
The Nordic GSE OPC site is a good OPC forum for information, but it is a
members-only area. The Nordic GSE OPC Web site can be accessed from this
Web address.
http://www.tivoli.com/support/faqs/Tivoli_OPC.html
You can access the OPC Web site from this Web address.
http://www.tivoli.com/support/faqs/OPC_ESA.html
The OPCESA Web site can be accessed from this Web address.
http://www.ibm.com/services/learning/
There are courses given for OPC and TWS in the IBM Learning Center,
which you can visit at this Web address.
http://www.ibm.com/solutions/softwaremigration/tivmigteam.html
This Web site provides information about the Software Migration Project
Office (SMPO) service offering.
[email protected]
Contact information is in the How to Order section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
Telephone Orders
United States (toll free)     1-800-879-2755
Canada (toll free)            1-800-IBM-4YOU
Outside North America         Country coordinator phone number is in the How to Order
                              section at this site:
                              http://www.elink.ibmlink.ibm.com/pbl/pbl
Fax Orders
United States (toll free)     1-800-445-9269
Canada                        1-403-267-4455
Outside North America         Fax phone number is in the How to Order section at this site:
                              http://www.elink.ibmlink.ibm.com/pbl/pbl
This information was current at the time of publication, but is continually subject to change. The latest
information may be found at the Redbooks Web site.
IBM Intranet for Employees
IBM employees may register for information on workshops, residencies, and Redbooks by accessing
the IBM Intranet Web site at http://w3.itso.ibm.com/ and clicking the ITSO Mailing List button.
Look in the Materials repository for workshops, presentations, papers, and Web pages developed
and written by the ITSO technical professionals; click the Additional Materials button. Employees may
access MyNews at http://w3.ibm.com/ for redbook, residency, and workshop announcements.
API        Application Program Interface
BDC        Batch Data Collector
CORBA      Common Object Request Broker Architecture
DMTF       Desktop Management Task Force
GID        Group Identification Definition
IBM        International Business Machines Corporation
ITSO       International Technical Support Organization
JSC        Job Scheduling Console
MAS
OMG        Object Management Group
OPC        Operations, Planning and Control
PSP        preventive service planning
PTF        program temporary fix
RFC
TMR        Tivoli Management Region
TWS        Tivoli Workload Scheduler
X-agent    Extended Agent
Index
Symbols
.msg file size 329
.msg files 325
.toc file 285
/etc/services 199
Numerics
0130 patch set 381
24/7 availability 1
3480 tape cartridge 333
A
ABAP/4 Modules 171
abend 123
abend code 123
abend message 124
Abended jobs 117
Abendu 124
abnormal termination 127
access method 157, 158
account name 333
adhoc jobs 326
aged job output 324
AIX/6000 4
AIXconsole.sh 76
altpass 153
API 9
APPC 11
Application Builder 308
Application Description dialog 281
application program interface
See API
AS/400 1
ascending order 116
ascii 232
AT Time 358
audit feature 364
Audit Level 168
Audit log header 368
auditing option 364
authorization profile 169
Authorization Roles 27
Autolink 340
automatic job tailoring 277
automatic recovery 277
B
BAAN 161
Baan 7
Backup Master 329
Basis Administrator 197
Batch Data Collector
See BDC
batch job execution 5
Batch jobs 87
BDC 163
BDC sessions 163
Best Practices
OPC 307
TWS 321
Blocktime 342
boot time 324
build commands 324
C
C runtime library 30
CA7 160
cache 38
Catalog Management 12
central point of control 273
centralized database 6
Checkinterval 342
chown 298
clock values 299
Closing MS-DOS window 82
CODEPAGE 31
codepage 279, 281
command translation 334
common interface 7
Common Object Request Broker Architecture
See CORBA
compiled security template 70
composer 41
composer replace 239
configuration profile 8
conman resource 239
console 134
console dump 134
control library 333
D
data center 2
Data Router 128
data set 15
Data Space 334
Data Store 11, 12
Data Store options 355
Database 358
Database applications 324
database errors 324
database installation 47
database object 362
dbexpand 365
Deadline Time 358
DECUX 332
Default Database lists 359
Default Plan lists 359
default time-out 280
dependency object 5
descending order 116
Desktop Management Task Force
See DMTF
destination name 280
df command 325
DGUX 332
Diagnose Flags 142
Digital OpenVMS 4
Digital UNIX 277
Disable BDC Wait 162
disk activity 327
disk I/O 328
Disk Space 329
Disk space 324
disk usage 324
DISP=MOD parameter 129
Distributed Monitoring 204, 328
DMTF 8
DNS name 278
Domain Manager 7
downgrade 147
dummy job 254
dump dataset 124
dumpsec 71
E
earliest time 358
EBCDIC-ASCII data conversion 334
e-commerce 1
Education 320
Endpoint 8
endpoint communications 9
Endpoint manager 9
end-to-end scheduling 1
EPILOG 129
Event Triggered Tracking 212
evtsize 325
EX Command 318
EXCEPT Date 358
EXIT11 12
EXIT7 13
export 284
Extended Agent 157
extended agent 161
Extended Agent Workstations 250
external program 162
F
False timeout 326
FAQ sites 387
fast I/O 38
fault-tolerant 9
Fault-tolerant agent
See FTA
file descriptor 324
File locks 324
File Manager 333
file permissions 297
filesystem 324
filesystem size 39
filter 328
foreign platforms 157
Forums 387
Framework classes 68
Framework User 367
free days 4
Free Inodes 329
G
Gateway 8
gateway methods 8
gateway program 334
gconman 358
General workstation 317
generic object database 8
Generic subtask 297
GID 24, 241
global refresh rate 361
globalopts 363
graceful shutdown 334
Greenwich Mean Time 299
H
Hot standby controller 11
HP 1
HP MPE 161
HP ServiceGuard 329
HP-UX 4
I
I/O bottlenecks 328
IBM 1
IBM HACMP 329
IBM Learning Center 388
IBM Ultrastar 328
idle time 5
IEFU84 exit 334
IFuser 197
Incorrect output 124
INCORROUT 125
initialization statements 354
Inodes 325
Installation
Extended Agent for OS390 332
Job Scheduling Services 64
JSC (TWS) 72
JSC(OPC) 19
OPC connector 26
OPC Tracker Agent 282
OPC V2R3 11
TCP/IP server 30
Tivoli Framework 42
Tivoli Patch 52
TWS 37
X-agent for SAP R/3 165
installation exits 354
InstallShield 329
InstallShield options 330
instance settings 28
INTEL 332
Interlink 3.1 331
internetwork dependencies 343
Interpreted script file 300
inter-process communication 325
IP address 278, 280, 281
ISPF application 16
ISPF command table 16
ISPF panels 4
ISPF user 351
ISPF/PDF 16
IT console 329
J
Java 112
Java Development Kit 19
Java version 1.1.8 72
JCC
See job-completion checker
JCL procedures 15
JCL variable substitution 354
JES 12
JES control blocks 13
JES exits 13
JES2 7, 12
JES2 hotstart 14
JES3 7
Jnextday 41
Job 3
Job FOLLOW 153
job ID 160
Job identifier 88
Job instance 89
Deadline time 115
File dependencies 115
Priority 115
Prompt dependencies 115
Repeat range 115
Resource dependencies 115
job number 160
Job scheduling 1
Job Scheduling Console
See JSC
Job Scheduling Editor 99
Job Scheduling Services 37, 38, 42
Job setup 3
Job stream
Carry forward 115
Deadline time 115
File dependencies 115
Limit 115
Priority 115
Prompt dependencies 115
Resource dependencies 115
Starting time 115
Time zone 115
job-completion checker 277
joblogretrieval exit 12
jobman.exe 158
jobmanrc 157
jobmon.exe 158
jobs waiting 220
Jobstream 3
JSC 1, 5, 79, 351
client 85
dataflow 18
Error List 89
initial window 83
Job status mapping 116
performance 112
pre-customized profile 85
preferences 85
Ready Lists 89
Starting 84
JSC Installation
On AIX 19
On Sun Solaris 20
On Windows 19
JTOPTS 281
K
keyword 123
keyword string 123
L
lack of inodes 325
latest time 358
legacy GUI 110
legacy systems 1
library 162
License Key 46
listproc 150
LJuser 197
LMHOSTS 43
load library 333
LoadLeveler 297
local port 282
localization 358
log format 365
logfile adapter 329
logical model 2
logical processor 357
Logical resources 89
Long Interval 168
long-term plan 3
Loop 124
LOOP procedure 130
lsof binaries 324
M
machine hang out 322
maestro_database 363
maestro_engine 362
maestro_plan 363
MaestroDatabase 68
MaestroEngine 68
MaestroPlan 68
mailman 150
Managed Node 37, 69
mapping file 21
MAS 12
Mass Update utility 353
Master 321
master copy 196
Master Domain Manager 7
maximum return code 281
MAXWAIT 336
member name 284
message file size 325
method.opts 158
Microsoft 325
migration 381
Migration checklist 381
Migration from 5.x to 6.1
checklist 381
Post installation 385
N
name server 43
netstat 151
NetView communication 3
Network Agent 7
Network connectivity 325
network hang out 322
Network status 329
network traffic 327
New terminology 371
NFS connections 241
NFS filesystems 243
NFS restrictions 298
NIS 43, 282
NOLIMIT parameter 334
non-IDRC format 333
non-MVS platforms 277
O
Object Management Group
See OMG
offset 89
offset based run cycle 105
OMG/CORBA 8
OMVS segment 31
ON Date 358
one way link 275
OPC xvii, 1, 5, 7, 9, 79
address spaces 15
architecture 4
Business processing cycles 4
calendar 4
Calendar maintenance 314
commands 17
controller 4
Current plan length 314
database 4
Deadline time 313
Dependencies 3
dialogs 79
distribution tape 12
Input arrival time 312
installation 17
long term plan 9
Naming standards 309
Plans 3
Special resources 3
subsystems 14
tracker 4
trends 10
Workstations 3
OPC Connector 21, 26, 42
Creating instance 26
installing 26
OPC ID mapping 33
OPC PLEX 129
OPC Tracker Agent 277
AIX 277
OPC Tracker Agent for AIX/6000
P
PEERADDRESS 336
PeopleSoft 7, 161
PERFM procedure 132
physical disk 324
PIF
See Program Interface
ping 278
Plan 358
Policy Region 56
PORT 336
port definition 160
port number 297
POSIX shell 353
postprocessing 3
preprocessing 3
preventive service planning
See PSP
Print operations 3
Problem analysis 123, 127
problem-type keywords 124
process id 150
Process Status 329
Processes 323
production run number 160
profile distribution 8
ProfileManager 58
Program Directory 12
Program Interface 4, 352
program temporary fix
See PTF
PROLOG 129
Prompts waiting reply 117
PSP 12
PTF 12
put command 232
PUTLINE 336, 337
R
R/3
Client Number 167
Instance 167
System Name 167
Userid 167
r3batch 161
R3batch 4.0 162
r3batch.opts 168
r3debug file 197
r3gateway 199
r3login name 196
R3OPT.EXE 168
r3options 161, 167, 168
RACF 32
RACF user ID 32
RAID array 328
record mode 330
recovery jobs 161
redbooks 387
reliability 324
Remote 158
remote clients 285
remote console 6, 328
remote file system 45
S
SAF 33
same paradigms 10
samples directories 304
SAP certification 162
SAP R/3 4, 7
Application version 195
Batch 161
configuration 169
database 171
environment 163
Hot Package 8 165
Job Class 198
Job states 162
Kernel version 195
RFC 162
Versions 165
X-agent 162
X-agent installation 165
SAP R/3 Extended Agent 161
benefits 161
define job 177
Sun 1
Sun Solaris 4
SunOS 4
SVCDUMP 337
Swap Space 329
Symphony File 358
SYS1.PARMLIB 14
SYSMDUMP 129
SYSPLEX 205
System abends 128
system administrator 323
System dump 129
System Load Avg 329
T
tar files 332
TCP Reader 298
TCP/IP 3.3 31
TCP/IP network 38
TCP/IP Parms 278
TCP/IP Server 17, 142
Diagnose Flags 142
installation 30
Trace 142
TCP/IP stack 30
TCPIPID 280
TCPIPPORT 280
TCPIPSTACK 337
TCPNAME 337
TCPTIMEOUT 280
TEC-Console 204
TEC-Server 204
Terminology changes
JSC 371
TWS 357
Terminology translation 371
third-party vendors 8
three-tiered management 9
threshold 254
Time zones 363
Timeout 326
timeout expire 326
timezone enable 363
Tivoli Administrator 22
Creating 21
Setting Logins 22
Tivoli API 8
Tivoli Customer Support 325
Tivoli Desktop 37
Tivoli Enterprise 8
Tivoli Framework 37, 112, 358
Tivoli Maestro 5.x 381
Tivoli Management Agent 8
Tivoli Management Framework 8
Tivoli Management Server
See TMR Server
Tivoli Object Dispatcher 48
Tivoli Plus Module for TWS 204
Tivoli Scheduling Agent for SAP R/3 163
Tivoli Software Distribution 329
Tivoli support database 124
Tivoli Workload Scheduler
See TWS
TMR Server 9, 37
Toolkits 9
touch 301
Trace
JSC 145
OPC Connector 143
TCP/IP Server 142
TRACEDATA 146
TRACELEVEL 146
tracker user ID 288
transaction support 8
transformation 10
transport files 171
trends and directions 10
Troubleshooting
Job Scheduling Console 145
OPC 123
OPC connector 143
TCP/IP Server 142
TSDSPACE 334
TSO command 4, 16
TTY consoles 303
TWS 5, 6, 7, 9, 79
architecture 5, 7, 321
Backup Domain Manager 6
Calendars 5
Composer 10
Conductor 10
Connector 37
Connector instance 68
database 5
database files 7
Deployment 329
deployment 323
Domain Manager 6, 39
Engine 37
Extended Agent 7
Fault-tolerant Agent 6
FTA 322
Hardware 328
High availability 329
home directory 325
installing 39
Job Streams 5
JSC 110
JSC Client 6
Master 39, 55, 158, 321
Master Domain Manager 6
network 6, 322
overview 5
plan 5
Resources 237
RFC user 170
security 70, 116, 330
Standard Agent 7, 322
Terminology changes 357
trends 10
user account 38
valid versions 322
Workstations 6
TWS 5.2 165
TWS 6.1 381
TWS Connector 42
installation 67
multiple instances 69
start up 70
stopping 70
verify installation 68
TWS Extended Agent for MVS and OS/390 331
TWS OS/390 Extended Agent
OS/390 installation 333
Unix installation 332
Windows NT installation 333
TWS system requirements
on all operating systems 38
on Unix 38
on Windows NT 38
TWS versus SAP Job states 162
TWSHOME 332
TWS-MVS methods 332
U
UID 22, 241
Ultra2 328
unattended install 329, 330
uncompressed format 333
Unison 386
Unix command shell 116
UNIX Local 160
UNIX Remote Shell 160
unlink 327
unnecessary data 328
UNTIL Time 358
US keyboard mapping 48
User abends 127
user groups 387
Usermap 31, 33
utilities 304
V
VTAM 124
VTAM links 11
W
Windows 2000 72
Windows 98 351
Windows NT 4, 351
Windows NT 4.0 Service Pack 4 37
Windows NT 4.0 Service Pack 5 37
Windows NT Clusters 329
Windows NT start menu 81
wmaeutil 365
work days 4
Workload 1
workload 321
Workload Manager 352
Workstation 357
Change fence 115
Change limit 115
Link/unlink 115
Start/stop 115
wtwsconn 69
wtwsconn.sh utility 67
X
X/Open 8
X-agent
See Extended Agent
End-to-End Scheduling with OPC and TWS
Mainframe and Distributed Environments

Use the Job Scheduling Console to integrate OPC and TWS

Model your environment using realistic scheduling scenarios

Implement SAP R/3 workload scheduling

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support
Organization. Experts from IBM, Customers and Partners from around the world
create timely technical information based on realistic scenarios. Specific
recommendations are provided to help you implement IT solutions more
effectively in your environment.

ISBN 0738418609