IBM DB2 10.5 for Linux, UNIX, and Windows
Developing Embedded SQL Applications
SC27-4550-00
Note
Before using this information and the product it supports, read the general information under Appendix B, Notices, on
page 217.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at http://www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at http://www.ibm.com/planetwide/
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Copyright IBM Corporation 1993, 2013.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Chapter 1. Introduction to embedded SQL . . . . . . . . . . . . 1
Embedding SQL statements in a host language . . . . . . . . . . 2
   Embedded SQL statements in C and C++ applications . . . . . . 2
   Embedded SQL statements in FORTRAN applications . . . . . . . 4
   Embedded SQL statements in COBOL applications . . . . . . . . 5
   Embedded SQL statements in REXX applications . . . . . . . . . 6
Supported development software for embedded SQL applications . . 8
Setting up the embedded SQL development environment . . . . . . 8
Index . . . . . . . . . . . . . . . 221
Prompting the user for key parts of an SQL statement, such as the names of the tables and
columns to be searched, is a good example of a situation suited for dynamic SQL.
Related information:
Installing and configuring Optim Performance Manager Extended Insight
The following guidelines and rules apply to the execution of embedded SQL
statements in C and C++ applications:
v You can begin the SQL statement string on the same line as the EXEC SQL
statement initializer.
v Do not split the EXEC SQL between lines.
v You must use the SQL statement terminator. If you do not use it, the
precompiler will continue to the next terminator in the application. This can
cause indeterminate errors.
v C and C++ comments can be placed before the statement initializer or after the
statement terminator.
v Multiple SQL statements and C or C++ statements may be placed on the same
line. For example:
EXEC SQL OPEN c1; if (SQLCODE >= 0) EXEC SQL FETCH c1 INTO :hv;
v Carriage returns, line feeds, and TABs can be included within quoted strings.
The SQL precompiler will leave these as is.
v Do not use the #include statement to include files containing SQL statements.
SQL statements are precompiled before the module is compiled. The precompiler
will ignore the #include statement. Instead, use the SQL INCLUDE statement to
import the include files.
v SQL comments are allowed on any line that is part of an embedded SQL
statement, with the exception of dynamically issued statements.
The format for an SQL comment is a double dash (--), followed by a string of
zero or more characters, and terminated by a line end.
Do not place SQL comments after the SQL statement terminator. These SQL
comments cause compilation errors because compilers interpret them as C or
C++ syntax.
You can use SQL comments in a static statement string wherever blanks are
allowed.
The use of C and C++ comment delimiters /* */ is allowed in both static
and dynamic embedded SQL statements.
The use of //-style C++ comments is not permitted within static SQL
statements.
v SQL string literals and delimited identifiers can be continued over line breaks in
C and C++ applications. To do this, use a back slash (\) at the end of the line
where the break is desired. For example, to select data from the NAME column
in the staff table where the NAME column equals 'Sanders' you could do
something similar to the following sample code:
EXEC SQL SELECT "NA\
ME" INTO :n FROM staff WHERE name='Sa\
nders';
Any new line characters (such as carriage return and line feed) are not included
in the string that is passed to the database manager as an SQL statement.
v Substitution of white space characters, such as end-of-line and TAB characters,
occurs as follows:
When they occur outside quotation marks (but inside SQL statements),
end-of-lines and TABs are substituted by a single space.
When they occur inside quotation marks, the end-of-line characters disappear,
provided the string is continued properly for a C program. TABs are not
modified.
Note that the actual characters used for end-of-line and TAB vary from platform
to platform. For example, UNIX and Linux based systems use a line feed.
v Use host variables exactly as declared when referencing host variables within an
SQL statement.
v Substitution of white space characters, such as end-of-line and TAB characters,
occurs as follows:
When they occur outside quotation marks (but inside SQL statements),
end-of-lines and TABs are substituted by a single space.
When they occur inside quotation marks, the end-of-line characters disappear,
provided the string is continued properly for a FORTRAN program. TABs are
not modified.
Note that the actual characters used for end-of-line and TAB vary from platform
to platform. For example, Windows-based platforms use the Carriage
Return/Line Feed for end-of-line, whereas UNIX and Linux based platforms use
just a Line Feed.
SQL statements can be continued onto more than one line. Each part of the
statement should be enclosed in single quotation marks, and a comma must
delimit additional statement text as follows:
CALL SQLEXEC 'SQL text',
     'additional text',
          .
          .
          .
     'final text'
In this example, the SQLCODE field of the SQLCA structure is checked to determine
whether the update was successful.
The following rules apply to embedded SQL statements in REXX applications:
v The following SQL statements can be passed directly to the SQLEXEC routine:
CALL
CLOSE
COMMIT
CONNECT
CONNECT TO
CONNECT RESET
DECLARE
DESCRIBE
DISCONNECT
EXECUTE
EXECUTE IMMEDIATE
FETCH
FREE LOCATOR
OPEN
PREPARE
RELEASE
ROLLBACK
SET CONNECTION
Other SQL statements must be processed dynamically using the EXECUTE
IMMEDIATE, or PREPARE and EXECUTE statements in conjunction with the
SQLEXEC routine.
v You cannot use host variables in the CONNECT and SET CONNECTION
statements in REXX.
v Cursor names and statement names are predefined as follows:
c1 to c100
Cursor names, which range from c1 to c50 for cursors declared without
the WITH HOLD option, and c51 to c100 for cursors declared using the
WITH HOLD option.
The cursor name identifier is used for DECLARE, OPEN, FETCH, and
CLOSE statements. It identifies the cursor used in the SQL request.
s1 to s100
Statement names, which range from s1 to s100.
In general, group membership is considered for dynamic SQL statements, but it is not
considered for static SQL statements. The exception to this general rule occurs
when privileges are granted to PUBLIC: these privileges are considered when static SQL
statements are processed.
Consider two users, PAYROLL and BUDGET, who need to perform queries against
the STAFF table. PAYROLL is responsible for paying the employees of the
company, so it needs to issue a variety of SELECT statements when issuing
paychecks. PAYROLL needs to be able to access each employee's salary. BUDGET
is responsible for determining how much money is needed to pay the salaries.
BUDGET should not, however, be able to see any particular employee's salary.
Because PAYROLL issues many different SELECT statements, the application you
design for PAYROLL could probably make good use of dynamic SQL. The
dynamic SQL would require that PAYROLL have SELECT privilege on the STAFF
table. This requirement is not a problem because PAYROLL requires full access to
the table.
However, BUDGET should not have access to each employee's salary. This means
that you should not grant SELECT privilege on the STAFF table to BUDGET.
Because BUDGET does need access to the total of all the salaries in the STAFF
table, you could build a static SQL application to execute a SELECT
SUM(SALARY) FROM STAFF, bind the application and grant the EXECUTE
privilege on your application's package to BUDGET. This enables BUDGET to
obtain the required information, without exposing the information that BUDGET
should not see.
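As a minimal sketch of this approach (the STAFF table and its SALARY column are the ones used elsewhere in this book; the application, package, and authorization ID names are illustrative), the static statement embedded in the BUDGET application could look like this:

   /* budget.sqc -- static SQL sketch; only the package binder needs
      SELECT privilege on STAFF, the BUDGET user does not */
   EXEC SQL BEGIN DECLARE SECTION;
     double totalSalary;
   EXEC SQL END DECLARE SECTION;

   EXEC SQL SELECT SUM(SALARY) INTO :totalSalary FROM STAFF;

After you precompile and bind the application, a statement such as GRANT EXECUTE ON PACKAGE budget TO USER budget lets BUDGET run the package without holding SELECT privilege on the STAFF table.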
The statement text is not processed when an application is precompiled. In fact, the
statement text does not have to exist at the time the application is precompiled.
Instead, the SQL statement is treated as a host variable for precompilation
purposes and the variable is referenced during application execution.
Dynamic SQL support statements are required to transform the host variable
containing SQL text into an executable form. Also, dynamic SQL support
statements operate on the host variable by referencing the statement name. These
support statements are:
EXECUTE IMMEDIATE
Prepares and executes a statement that does not use any host variables.
Use this statement as an alternative to the PREPARE and EXECUTE
statements.
For example, consider the following statement in C:
strcpy (qstring,"INSERT INTO WORK_TABLE SELECT *
FROM EMP_ACT WHERE ACTNO >= 100");
EXEC SQL EXECUTE IMMEDIATE :qstring;
PREPARE
Turns the character string form of the SQL statement into an executable
form of the statement, assigns a statement name, and optionally places
information about the statement in an SQLDA structure.
EXECUTE
Executes a previously prepared SQL statement. The statement can be
executed repeatedly within a connection.
DESCRIBE
Places information about a prepared statement into an SQLDA.
For example, consider the following statement in C:
strcpy(hostVarStmt, "DELETE FROM org WHERE deptnumb = 15");
EXEC SQL PREPARE Stmt FROM :hostVarStmt;
EXEC SQL DESCRIBE Stmt INTO :sqlda;
EXEC SQL EXECUTE Stmt;
Note: The content of dynamic SQL statements follows the same syntax as static
SQL statements, with the following exceptions:
v The statement cannot begin with EXEC SQL.
v The statement cannot end with the statement terminator. An exception to this is
the CREATE TRIGGER statement, which can contain a semicolon (;).
[Table: comparing static and dynamic SQL. Each consideration, such as the nature of the query (random versus permanent), is paired with a likely best choice of Static, Dynamic, or Either.]
earlier bind time. Note that if your transaction takes less than a couple of seconds
to complete, static SQL will generally be faster. To choose which method to use,
you should prototype both forms of binding.
Note: Static and dynamic SQL each come in two types: statements that use host
variables and statements that do not. These types are:
1. Static SQL statements containing no host variables
This is an unlikely situation which you might see only for:
v Initialization code
v Simple SQL statements
Such statements perform well because there is no runtime overhead, and the DB2
optimizer capabilities can be fully realized.
2. Static SQL containing host variables
Static SQL statements which make use of host variables are considered as the
traditional style of DB2 applications. The static SQL statement avoids the
runtime resource usage associated with the PREPARE and catalog locks
acquired during statement compilation. Unfortunately, the full power of the
optimizer cannot be used because the optimizer does not know the entire SQL
statement. A particular problem exists with highly non-uniform data
distributions.
3. Dynamic SQL containing no parameter markers
This is typical of interfaces such as the CLP, which is often used for executing
on-demand queries. From the CLP, SQL statements can only be issued
dynamically.
4. Dynamic SQL containing parameter markers
The key benefit of dynamic SQL statements is that the presence of parameter
markers allows the cost of the statement preparation to be amortized over the
repeated executions of the statement, typically a select, or insert. This
amortization is true for all repetitive dynamic SQL applications. Unfortunately,
just like static SQL with host variables, parts of the DB2 optimizer will not
work because complete information is unavailable.
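For example, a minimal sketch of the fourth case (the table and column names are illustrative): a statement containing a parameter marker is prepared once and executed repeatedly, so the preparation cost is paid only once.

   EXEC SQL BEGIN DECLARE SECTION;
     char  stmtTxt[100];
     short deptHv;
   EXEC SQL END DECLARE SECTION;

   strcpy(stmtTxt, "UPDATE staff SET salary = salary * 1.05 WHERE dept = ?");
   EXEC SQL PREPARE updStmt FROM :stmtTxt;     /* compiled once             */
   for (deptHv = 10; deptHv <= 90; deptHv += 10)
   {
     EXEC SQL EXECUTE updStmt USING :deptHv;   /* only execution cost here  */
   }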
Packages contain access plans, which define how the database manager will most efficiently issue SQL
statements. Over time, the database schema and data might change, rendering the
runtime access plans suboptimal. This can lead to degradation in application
performance.
For this reason, it is important to periodically refresh the information that the access
plans are based on so that the package runtime access plans remain well maintained.
The RUNSTATS command is used to collect current statistics on tables and indexes,
especially if significant update activity has occurred or new indexes have been
created since the last time the RUNSTATS command was issued. This provides the
optimizer with the most accurate information with which to determine the best
access plan.
Performance of Embedded SQL applications can be improved in several ways:
v Run the RUNSTATS command to update database statistics.
v Rebind application packages to the database to regenerate the run time access
plans (based on the updated statistics) that the database will use to physically
retrieve the data on disk.
v Use the REOPT bind option in your static and dynamic programs.
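For example, the first two of these actions might be performed from the command line processor as follows (the schema, table, and package names are illustrative):

   db2 CONNECT TO sample
   db2 RUNSTATS ON TABLE myschema.staff AND INDEXES ALL
   db2 REBIND PACKAGE myschema.budget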
v Linux on x64
v Linux on POWER
v Linux on System z
v Windows on x64 (when using the Windows for x64 install image)
v Windows on IPF
v Linux on IPF
DB2 database systems support running 32-bit applications and routines on all
supported 64-bit operating system environments except Linux IA64 and Linux
System z.
For each host language, some host variable data types are better suited to 32-bit
platforms, to 64-bit platforms, or to both. Check the supported data types for each of the
programming languages.
The following trigraph sequences can be used in C and C++ source files when the
corresponding characters are not available directly:
??)   Right bracket ']'
??<   Left brace '{'
??>   Right brace '}'
??/   Backslash '\'
??'   Caret '^'
??!   Vertical bar '|'
??-   Tilde '~'
If you are not familiar with terms relating to the development of multi-threaded
applications (such as critical section and semaphore), consult the programming
documentation for your operating system.
A DB2 embedded SQL application can execute SQL statements from multiple
threads using contexts. A context is the environment from which an application
runs all SQL statements and API calls. All connections, units of work, and other
database resources are associated with a specific context. Each context is associated
with one or more threads within an application. Developing multi-threaded
embedded SQL applications with thread-safe code is only supported in C and C++.
It is possible to write your own precompiler that, along with features supplied by
the language, allows concurrent multithreaded database access.
For each executable SQL statement in a context, the first run-time services call
always tries to obtain a latch. If it is successful, it continues processing. If not
(because an SQL statement in another thread of the same context already has the
latch), the call is blocked on a signaling semaphore until that semaphore is posted,
at which point the call gets the latch and continues processing. The latch is held
until the SQL statement has completed processing, at which time it is released by
the last run-time services call that was generated for that particular SQL statement.
The net result is that each SQL statement within a context is executed as an atomic
unit, even though other threads may also be trying to execute SQL statements at
the same time. This action ensures that internal data structures are not altered by
different threads at the same time. APIs also use the latch used by run-time
services; therefore, APIs have the same restrictions as run-time services routines
within each context.
Contexts may be exchanged between threads in a process, but not exchanged
between processes. One use of multiple contexts is to provide support for
concurrent transactions.
In the default implementation of threaded applications against a DB2 database,
serialization of access to the database is enforced by the database APIs. If one
thread performs a database call, calls made by other threads will be blocked until
the first call completes, even if the subsequent calls access database objects that are
unrelated to the first call. In addition, all threads within a process share a commit
scope. True concurrent access to a database can only be achieved through separate
processes, or by using the APIs that are described in this topic.
DB2 database systems provide APIs that can be used to allocate and manipulate
separate environments (contexts) for the use of database APIs and embedded SQL.
Each context is a separate entity, and any connection or attachment using one
context is independent of all other contexts (and thus all other connections or
attachments within a process). In order for work to be done on a context, it must
first be associated with a thread. A thread must always have a context when
making database API calls or when using embedded SQL.
All DB2 database system applications are multithreaded by default, and are
capable of using multiple contexts. You can use the following DB2 APIs to use
multiple contexts. Specifically, your application can create a context for a thread,
attach to or detach from a separate context for each thread, and pass contexts
between threads. If your application does not call any of these APIs, DB2 will
automatically manage the multiple contexts for your application:
v sqleAttachToCtx - Attach to context
These APIs have no effect (that is, they are no-ops) on platforms that do not
support application threading.
Contexts need not be associated with a given thread for the duration of a
connection or attachment. One thread can attach to a context, connect to a
database, detach from the context, and then a second thread can attach to the
context and continue doing work using the already existing database connection.
Contexts can be passed around among threads in a process, but not among
processes.
Even if the new APIs are used, the following APIs continue to be serialized:
v sqlabndx - Bind
v sqlaprep - Precompile Program
v sqluexpr - Export
v db2Import and sqluimpr - Import
Note:
1. The CLI automatically uses multiple contexts to achieve thread-safe, concurrent
database access on platforms that support multi-threading. While not
recommended, users can explicitly disable this feature if required.
2. By default, AIX does not permit 32-bit applications to attach to more than 11
shared memory segments per process, of which a maximum of 10 can be used
for DB2 database connections.
When this limit is reached, DB2 database systems return SQLCODE -1224 on an
SQL CONNECT. DB2 Connect also has the 10-connection limitation if local
users are running two-phase commit with a TP Monitor (TCP/IP).
The AIX environment variable EXTSHM can be used to increase the maximum
number of shared memory segments to which a process can attach.
To use EXTSHM with DB2 database systems, follow the listed steps:
In client sessions:
export EXTSHM=ON
Suppose the first context successfully executes the SELECT and the
UPDATE statements, while the second context gets the semaphore and
accesses the data structure. The first context now tries to get the
semaphore, but it cannot because the second context is holding the
semaphore. The second context now attempts to read a row from table
TAB1, but it stops on a database lock held by the first context. The
application is now in a state where context 1 cannot finish before context 2
is done and context 2 is waiting for context 1 to finish. The application is
deadlocked, but because the database manager does not know that about
By default, C++ input files with the extension .sqC produce output modified source
files with the extension .C, and input files with the extension .sqx produce output
files with the extension .cxx.
You can use the OUTPUT precompile option to override the name and path of the
output modified source file. If you use the TARGET C or TARGET CPLUSPLUS
precompile option, the input file does not need a particular extension.
However, you can use the OUTPUT precompile option to specify a new name and
path for the output modified source file.
However, if you use the TARGET precompile option with the FORTRAN option,
the input file can have any extension you prefer.
By default, the corresponding precompiler output files have the following
extensions:
.f
.for
However, you can use the OUTPUT precompile option to specify a new name and
path for the output modified source file.
#include <stdio.h>                                          /* 1 */
#include <stdlib.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlutil.h>

int main()
{
  int rc = 0;                                               /* 3 */

  EXEC SQL INCLUDE SQLCA;                                   /* 4 */

  /* 2: host variable declarations (sizes assumed from their use below) */
  EXEC SQL BEGIN DECLARE SECTION;
    short  id;
    char   name[10];
    short  dept;
    double salary;
    char   hostVarStmtDyn[100];
  EXEC SQL END DECLARE SECTION;

  /* 5: connect to the database (database name assumed) */
  EXEC SQL CONNECT TO sample;
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Connect Error: SQLCODE = %ld\n", (long) SQLCODE);
    return 1;
  }

  /* execute an SQL statement (a query) using static SQL; copy the single
     row of result values into host variables */
  EXEC SQL SELECT id, name, dept, salary                    /* 7 */
    INTO :id, :name, :dept, :salary
    FROM staff WHERE id = 310;
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Select Error: SQLCODE = %ld\n", (long) SQLCODE);
  }
  else
  {
    /* print the host variable values to standard output (format is illustrative) */
    printf("\n Executing a static SQL query statement, searching for"
           "\n the id value equal to 310\n");
    printf("\n   ID  Name       DEPT   Salary\n");
    printf(" %4d  %-9s %5d %9.2f\n", id, name, dept, salary);
  }

  strcpy(hostVarStmtDyn, "UPDATE staff "
                         "SET salary = salary + 1000 "
                         "WHERE dept = ?");

  /* execute an SQL statement (an operation) using a host variable
     and DYNAMIC SQL */
  EXEC SQL PREPARE StmtDyn FROM :hostVarStmtDyn;            /* 8 */
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Prepare Error: SQLCODE = %ld\n", (long) SQLCODE);
  }
  else
  {
    EXEC SQL EXECUTE StmtDyn USING :dept;
  }
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Execute Error: SQLCODE = %ld\n", (long) SQLCODE);
  }

  /* read the updated row using STATIC SQL and a CURSOR */
  EXEC SQL DECLARE posCur1 CURSOR FOR                       /* 9 */
    SELECT id, name, dept, salary
    FROM staff WHERE id = 310;
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Declare Error: SQLCODE = %ld\n", (long) SQLCODE);
  }

  EXEC SQL OPEN posCur1;                                    /* 10 */
  EXEC SQL FETCH posCur1 INTO :id, :name, :dept, :salary;   /* 11 */
  if (SQLCODE < 0)                                          /* 6 */
  {
    printf("Fetch Error: SQLCODE = %ld\n", (long) SQLCODE);
  }
  else
  {
    printf(" Executing a dynamic SQL statement, updating the"
           "\n salary value for the id equal to 310\n");
    printf("\n   ID  Name       DEPT   Salary\n");
    printf(" %4d  %-9s %5d %9.2f\n", id, name, dept, salary);
  }
Note  Description
1     Include files: This directive includes a file into your source application.
2     Declaration section: Declaration of host variables that will be used to hold
      values referenced in the SQL statements of the C application.
3     Local variable declaration: This block declares the local variables to be used in
      the application. These are not host variables.
4     Including the SQLCA structure: The SQLCA structure is updated after the
      execution of each SQL statement. This template application uses certain SQLCA
      fields for error handling.
5     Connection to a database: The initial step in working with the database is to
      establish a connection to the database. Here, a connection is made by executing
      the CONNECT SQL statement.
6     Error handling: Checks to see if an error occurred.
7     Executing a query: The execution of this SQL statement assigns data returned
      from a table to host variables. The C code used after the SQL statement
      execution prints the values in the host variables to standard output.
SQLE850A (sqle850a.h)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQLE850B (sqle850b.h)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
037 (EBCDIC US English) binary collation. This file is used by the CREATE
DATABASE API.
SQLE932A (sqle932a.h)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5035 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQLE932B (sqle932b.h)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5026 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQLJACB (sqljacb.h)
This file defines constants, structures, and control blocks for the DB2
Connect interface.
SQLSTATE (sqlstate.h)
This file defines constants for the SQLSTATE field of the SQLCA structure.
SQLSYSTM (sqlsystm.h)
This file contains the platform-specific definitions used by the database
manager APIs and data structures.
SQLUDF (sqludf.h)
This file defines constants and interface structures for writing user-defined
functions (UDFs).
SQLUV (sqluv.h)
This file defines structures, constants, and prototypes for the asynchronous
Read Log API, and APIs used by the table load and unload vendors.
To locate INCLUDE files, the DB2 COBOL precompiler searches the current
directory first, then the directories specified by the DB2INCLUDE environment
variable. Consider the following examples:
v EXEC SQL INCLUDE payroll END-EXEC.
If the file specified in the INCLUDE statement is not enclosed in quotation
marks, as shown previously, the precompiler searches for payroll.sqb, then
payroll.cpy, then payroll.cbl, in each directory in which it looks.
v EXEC SQL INCLUDE pay/payroll.cbl END-EXEC.
If the file name is enclosed in quotation marks, as shown previously, no
extension is added to the name.
If the file name in quotation marks does not contain an absolute path, the
contents of DB2INCLUDE are used to search for the file, prepended to whatever
path is specified in the INCLUDE file name. For example, with DB2 database
systems for AIX, if DB2INCLUDE is set to '/disk2:myfiles/cobol', the
precompiler searches for './pay/payroll.cbl', then '/disk2/pay/payroll.cbl',
and finally './myfiles/cobol/pay/payroll.cbl'. The path where the file is
actually found is displayed in the precompiler messages. On Windows
platforms, substitute back slashes (\) for the forward slashes in the previously
shown example.
Note: The setting of DB2INCLUDE is cached by the DB2 command line processor.
To change the setting of DB2INCLUDE after any CLP commands have been issued,
enter the TERMINATE command, then reconnect to the database and precompile.
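For example (the file and database names are illustrative):

   db2 TERMINATE
   db2 CONNECT TO sample
   db2 PREP pay/payroll.sqb BINDFILE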
The include files that are intended to be used in your applications are described
here:
SQLCA (sqlca.cbl)
This file defines the SQL Communication Area (SQLCA) structure. The
SQLCA contains variables that are used by the database manager to
provide an application with error information about the execution of SQL
statements and API calls.
SQLCA_92 (sqlca_92.cbl)
This file contains a FIPS SQL92 Entry Level compliant version of the SQL
Communications Area (SQLCA) structure. This file should be included in
place of the sqlca.cbl file when writing DB2 applications that conform to
the FIPS SQL92 Entry Level standard. The sqlca_92.cbl file is
automatically included by the DB2 precompiler when the LANGLEVEL
precompiler option is set to SQL92E.
SQLCODES (sqlcodes.cbl)
This file defines constants for the SQLCODE field of the SQLCA structure.
SQLDA (sqlda.cbl)
This file defines the SQL Descriptor Area (SQLDA) structure. The SQLDA
is used to pass data between an application and the database manager.
SQLEAU (sqleau.cbl)
This file contains constant and structure definitions required for the DB2
security audit APIs. If you use these APIs, you need to include this file in
your program. This file also contains constant and keyword value
definitions for fields in the audit trail record. These definitions can be used
by external or vendor audit trail extract programs.
SQLETSD (sqletsd.cbl)
This file defines the Table Space Descriptor structure, SQLETSDESC, which
is passed to the Create Database API, sqlgcrea.
SQLE819A (sqle819a.cbl)
If the code page of the database is 819 (ISO Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQLE819B (sqle819b.cbl)
If the code page of the database is 819 (ISO Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
037 (EBCDIC US English) binary collation. This file is used by the CREATE
DATABASE API.
SQLE850A (sqle850a.cbl)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQLE850B (sqle850b.cbl)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
037 (EBCDIC US English) binary collation. This file is used by the CREATE
DATABASE API.
SQLE932A (sqle932a.cbl)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5035 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQLE932B (sqle932b.cbl)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5026 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQL1252A (sql1252a.cbl)
If the code page of the database is 1252 (Windows Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQL1252B (sql1252b.cbl)
If the code page of the database is 1252 (Windows Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 037 (EBCDIC US English) binary collation. This file is used by the
CREATE DATABASE API.
SQLSTATE (sqlstate.cbl)
This file defines constants for the SQLSTATE field of the SQLCA structure.
SQLUDF (sqludf.cbl)
This file defines constants and interface structures for writing user-defined
functions (UDFs).
SQLUTBCQ (sqlutbcq.cbl)
This file defines the Table Space Container Query data structure,
SQLB-TBSCONTQRY-DATA, which is used with the table space container
query APIs, sqlgstsc, sqlgftcq, and sqlgtcq.
SQLUTBSQ (sqlutbsq.cbl)
This file defines the Table Space Query data structure,
SQLB-TBSQRY-DATA, which is used with the table space query APIs,
sqlgstsq, sqlgftsq, and sqlgtsq.
The $DB2DIR/include32 directory will only exist on AIX, Solaris, HP-PA, and
HP-IPF.
You can use the following FORTRAN include files in your applications.
SQLCA (sqlca_cn.f, sqlca_cs.f)
This file defines the SQL Communication Area (SQLCA) structure. The
SQLCA contains variables that are used by the database manager to
provide an application with error information about the execution of SQL
statements and API calls.
Two SQLCA files are provided for FORTRAN applications. The default,
sqlca_cs.f, defines the SQLCA structure in an IBM SQL compatible
format. The sqlca_cn.f file, precompiled with the SQLCA NONE option,
defines the SQLCA structure for better performance.
SQLCA_92 (sqlca_92.f)
This file contains a FIPS SQL92 Entry Level compliant version of the SQL
Communications Area (SQLCA) structure. This file should be included in
place of either the sqlca_cn.f or the sqlca_cs.f files when writing DB2
applications that conform to the FIPS SQL92 Entry Level standard. The
sqlca_92.f file is automatically included by the DB2 precompiler when the
LANGLEVEL precompiler option is set to SQL92E.
SQLCODES (sqlcodes.f)
This file defines constants for the SQLCODE field of the SQLCA structure.
SQLDA (sqldact.f)
This file defines the SQL Descriptor Area (SQLDA) structure. The SQLDA
is used to pass data between an application and the database manager.
SQLEAU (sqleau.f)
This file contains constant and structure definitions required for the DB2
security audit APIs. If you use these APIs, you need to include this file in
your program. This file also contains constant and keyword value
definitions for fields in the audit trail record. These definitions can be used
by external or vendor audit trail extract programs.
SQLE819A (sqle819a.f)
If the code page of the database is 819 (ISO Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQLE819B (sqle819b.f)
If the code page of the database is 819 (ISO Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
037 (EBCDIC US English) binary collation. This file is used by the CREATE
DATABASE API.
SQLE850A (sqle850a.f)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQLE850B (sqle850b.f)
If the code page of the database is 850 (ASCII Latin-1), this sequence sorts
character strings that are not FOR BIT DATA according to the host CCSID
037 (EBCDIC US English) binary collation. This file is used by the CREATE
DATABASE API.
SQLE932A (sqle932a.f)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5035 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQLE932B (sqle932b.f)
If the code page of the database is 932 (ASCII Japanese), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 5026 (EBCDIC Japanese) binary collation. This file is used by the
CREATE DATABASE API.
SQL1252A (sql1252a.f)
If the code page of the database is 1252 (Windows Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 500 (EBCDIC International) binary collation. This file is used by the
CREATE DATABASE API.
SQL1252B (sql1252b.f)
If the code page of the database is 1252 (Windows Latin-1), this sequence
sorts character strings that are not FOR BIT DATA according to the host
CCSID 037 (EBCDIC US English) binary collation. This file is used by the
CREATE DATABASE API.
SQLSTATE (sqlstate.f)
This file defines constants for the SQLSTATE field of the SQLCA structure.
SQLUDF (sqludf.f)
This file defines constants and interface structures for writing user-defined
functions (UDFs).
Procedure
To declare the SQLCA, code the INCLUDE SQLCA statement in your program:
v For C or C++ applications use:
EXEC SQL INCLUDE SQLCA;
v For Java applications, you do not explicitly use the SQLCA. Instead, use the
SQLException instance methods to get the SQLSTATE and SQLCODE values.
v For COBOL applications use:
EXEC SQL INCLUDE SQLCA END-EXEC.
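For example, after a statement runs, a C application can examine the SQLCODE and SQLSTATE fields of the SQLCA directly (a minimal sketch; the ORG table is the one in the sample database):

   EXEC SQL INCLUDE SQLCA;
   ...
   EXEC SQL DELETE FROM org WHERE deptnumb = 15;
   if (sqlca.sqlcode < 0)
   {
     /* sqlstate is not null-terminated, so print at most 5 characters */
     printf("SQL error %d, SQLSTATE %.5s\n", (int) sqlca.sqlcode, sqlca.sqlstate);
   }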
What to do next
If your application must be compliant with the ISO/ANS SQL92 or FIPS 127-2
standard, do not use the statements previously shown or the INCLUDE SQLCA
statement.
If you want to use a specific user id (herrick) and password (mypassword), use the
following statement:
EXEC SQL CONNECT TO sample USER herrick USING mypassword;
For COBOL applications, if you want to use a specific user id (herrick) and password
(mypassword), use the following statement:
EXEC SQL CONNECT TO sample USER herrick USING mypassword END-EXEC.
For FORTRAN applications, use the following statement:
EXEC SQL CONNECT TO sample USER herrick USING mypassword
For REXX applications, use the following statement:
CALL SQLEXEC 'CONNECT TO sample USER herrick USING mypassword'
Data types that map to SQL data types in embedded SQL applications
To exchange data between an application and database, you must use the correct
data type mappings for the variables used.
When the precompiler finds a host variable declaration, it determines the
appropriate SQL type value. Each host language has its own mapping rules that
must be followed.
Table 2. SQL Data Types Mapped to C and C++ Declarations

SQL Column Type                  C and C++ Data Type
SMALLINT (500 or 501)            short, short int, sqlint16
INTEGER (496 or 497)             int, long, long int, sqlint32 (see note 2)
BIGINT (492 or 493)              long long, long, __int64, sqlint64 (see note 3)
REAL (480 or 481)                float (see note 5)
DOUBLE (480 or 481)              double (see note 6)
DECIMAL(p,s) (484 or 485)        Packed decimal
CHAR(1) (452 or 453)             Single character
CHAR(n) (452 or 453)             Fixed-length character string; use char[n+1] where
                                 n is large enough to hold the data
VARCHAR(n) (448 or 449)          struct tag { short int; char[n] }, 1<=n<=32 672
LONG VARCHAR (456 or 457)        struct tag { short int; char[n] }, 32 673<=n<=32 700
                                 (see note 9)
CLOB(n) (408 or 409)             sql type is clob(n)
CLOB locator variable            sql type is clob_locator
CLOB file reference variable     sql type is clob_file
BLOB(n) (404 or 405)             sql type is blob(n)
BLOB locator variable            sql type is blob_locator
BLOB file reference variable     sql type is blob_file
DATE (384 or 385)                Null-terminated character form (char[11])
TIME (388 or 389)                Null-terminated character form (char[9])
TIMESTAMP(p) (392 or 393)        Null-terminated character form, 19 to 32 bytes
                                 (see note 4)
XML (988 or 989)                 struct { sqluint32 length; char data[n]; },
                                 1<=n<=2 147 483 647; SQLUDF_CLOB (see note 8)
BINARY                           Binary data, 1<=n<=255
VARBINARY                        struct myVarBinField_t { sqluint16 length;
                                 char data[12]; } myVarBinField;
                                 varbinary data, 1<=n<=32 704
The following data types are only available in the DBCS or EUC environment
when precompiled with the WCHARTYPE NOCONVERT option.
Table 3. SQL Data Types Mapped to C and C++ Declarations
SQL Column Type1
sqldbchar
GRAPHIC(1)
(468 or 469)
GRAPHIC(n)
(468 or 469)
1<=n<=127
VARGRAPHIC(n)
(464 or 465)
struct tag {
short int;
sqldbchar[n]
}
1<=n<=16 336
Alternatively use sqldbchar[n+1]
where n is large enough to hold
the data
1<=n<=16 336
LONG VARGRAPHIC8
(472 or 473)
struct tag {
short int;
sqldbchar[n]
}
16 337<=n<=16 350
The following data types are only available in the DBCS or EUC environment
when precompiled with the WCHARTYPE CONVERT option.
Table 4. SQL Data Types Mapped to C and C++ Declarations
SQL Column Type1
wchar_t
GRAPHIC(1)
(468 or 469)
GRAPHIC(n)
(468 or 469)
1<=n<=127
VARGRAPHIC(n)
(464 or 465)
struct tag {
short int;
wchar_t [n]
}
1<=n<=16 336
Alternately, use wchar_t[n+1] where n is large enough to hold the data (a
null-terminated variable-length double-byte character string).
Note: Assigned an SQL type of 400/401.
1<=n<=16 336
LONG VARGRAPHIC8
(472 or 473)
struct tag {
short int;
wchar_t [n]
}
16 337<=n<=16 350
The following data types are only available in the DBCS or EUC environment.
Table 5. SQL Data Types Mapped to C and C++ Declarations
SQL Column Type1
DBCLOB(n)
(412 or 413)
sql type is
dbclob(n)
sql type is
dbclob_locator
sql type is
dbclob_file
Note:
1. The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that will be displayed in the SQLTYPE field of the
SQLDA for these data types.
2. For platform compatibility, use sqlint32. On 64-bit UNIX and Linux operating systems, "long" is a 64 bit integer.
On 64-bit Windows operating systems and 32-bit UNIX and Linux operating systems "long" is a 32 bit integer.
3. For platform compatibility, use sqlint64. The DB2 database system sqlsystm.h header file has a type definition for
sqlint64 as "__int64" on the supported Windows operating systems when using the Microsoft compiler, "long
long" on 32-bit UNIX and Linux operating systems, and "long" on 64 bit UNIX and Linux operating systems.
4.
The character string can be from 19 - 32 bytes in length without a null terminator depending on the number of
fractional seconds specified. The fractional seconds of the TIMESTAMP data type can be optionally specified with
0-12 digits of timestamp precision.
When a timestamp value is assigned to a timestamp variable with a different number of fractional seconds, the
value is either truncated or padded with 0's to match the format of the timestamp variable.
5. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
6. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54 is a synonym for DOUBLE
v DOUBLE PRECISION
7. This is not a column type but a host variable type.
8. The SQL_TYP_XML/SQL_TYP_NXML value is returned by DESCRIBE requests only. It cannot be used directly
by the application to bind application resources to XML values.
9. The LONG VARCHAR and LONG VARGRAPHIC data types are deprecated and might be removed in a future
release. Choose the CLOB or DBCLOB data type instead.
The following items are additional rules for supported C and C++ data types:
v The data type char can be declared as char or unsigned char.
v The database manager processes null-terminated variable-length character string
data type char[n] (data type 460), as VARCHAR(m).
If LANGLEVEL is SAA1, the host variable length m equals the character
string length n in char[n] or the number of bytes preceding the first
null-terminator (\0), whichever is smaller.
If LANGLEVEL is MIA, the host variable length m equals the number of
bytes preceding the first null-terminator (\0).
v The database manager processes null-terminated, variable-length graphic string
data type, wchar_t[n] or sqldbchar[n] (data type 400), as VARGRAPHIC(m).
If LANGLEVEL is SAA1, the host variable length m equals the character
string length n in wchar_t[n] or sqldbchar[n], or the number of characters
preceding the first graphic null-terminator, whichever is smaller.
short
sqlint32
sqlint64
float
double
SMALLINT
(500 or 501)
INTEGER
(496 or 497)
BIGINT
(492 or 493)
REAL
(480 or 481)
DOUBLE
(480 or 481)
Not supported
DECIMAL(p,s)
(484 or 485)
CHAR(n)
(452 or 453)
VARCHAR(n)
(448 or 449) (460 or 461)
struct {
sqluint16 length;
char[n]
}
1<=n<=32 672
LONG VARCHAR2
(456 or 457)
struct {
sqluint16 length;
char[n]
}
32 673<=n<=32 700
CLOB(n)
(408 or 409)
struct {
sqluint32 length;
char
data[n];
}
struct {
sqluint32 length;
char
data[n];
}
char[9]
DATE
(384 or 385)
TIME
(388 or 389)
TIMESTAMP(p)
(392 or 393)
0<=p<=12
XML
(988/989)
Not supported
Note: The following data types are only available in the DBCS or EUC
environment when precompiled with the WCHARTYPE NOCONVERT option.
1<=n<=127
VARGRAPHIC(n)
(400 or 401)
1<=n<=16 336
LONG VARGRAPHIC2
(472 or 473)
struct {
sqluint16 length;
sqldbchar[n]
}
16 337<=n<=16 350
DBCLOB(n)
(412 or 413)
struct {
sqluint32 length;
sqldbchar data[n];
}
SMALLINT
(500 or 501)
Packed decimal
Single-precision floating
point
Double-precision floating
point
Fixed-length character
string
INTEGER
(496 or 497)
BIGINT
(492 or 493)
DECIMAL(p,s)
(484 or 485)
REAL2
(480 or 481)
DOUBLE3
(480 or 481)
CHAR(n)
(452 or 453)
VARCHAR(n)
(448 or 449)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC X(n).
Variable-length character
string
1<=n<=32 672
LONG VARCHAR6
(456 or 457)
01 name.
49 length PIC S9(4) COMP-5.
49 data PIC X(n).
Long variable-length
character string
32 673<=n<=32 700
CLOB(n)
(408 or 409)
Large object
variable-length character
string
Large object
variable-length binary
string
BLOB(n)
(404 or 405)
1<=n<=2 147 483 647
BLOB locator variable4
(960 or 961)
19 to 32 byte character
string
DATE
(384 or 385)
TIME
(388 or 389)
TIMESTAMP(p)
(392 or 393)
0<=p<=12
XML5
(988 or 989)
The following data types are only available in the DBCS environment.
Table 9. SQL Data Types Mapped to COBOL Declarations
SQL Column Type1
GRAPHIC(n)
(468 or 469)
VARGRAPHIC(n)
(464 or 465)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC G(n) DISPLAY-1.
1<=n<=16 336
LONG VARGRAPHIC6
(472 or 473)
01 name.
49 length PIC S9(4) COMP-5.
49 name PIC G(n) DISPLAY-1.
Variable length
double-byte character
string with 2-byte string
length indicator
16 337<=n<=16 350
DBCLOB(n)
(412 or 413)
Note:
1. The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that will be displayed in the SQLTYPE field of the
SQLDA for these data types.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54 is a synonym for DOUBLE.
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
5. The SQL_TYP_XML/SQL_TYP_NXML value is returned by DESCRIBE requests only. It cannot be used directly
by the application to bind application resources to XML values.
6. The LONG VARCHAR and LONG VARGRAPHIC data types are deprecated and might be removed in a future
release. Choose the CLOB or DBCLOB data type instead.
identifier PIC S9(3)V COMP-3
identifier PIC SV9(3) COMP-3
identifier PIC S9V COMP-3
identifier PIC SV9 COMP-3
INTEGER*2
INTEGER*4
REAL*4
REAL*8
Packed decimal
CHARACTER*n
SMALLINT
(500 or 501)
INTEGER
(496 or 497)
REAL2
(480 or 481)
DOUBLE3
(480 or 481)
DECIMAL(p,s)
(484 or 485)
CHAR(n)
(452 or 453)
VARCHAR(n)
(448 or 449)
LONG VARCHAR5
(456 or 457)
CLOB(n)
(408 or 409)
CHARACTER*10
CHARACTER*8
CHARACTER*19 to
CHARACTER*32
SQL_TYP_XML
Note:
1.
The first number under SQL Column Type indicates that an indicator variable is not provided, and the second
number indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values,
or to hold the length of a truncated string. These are the values that will be displayed in the SQLTYPE field of the
SQLDA for these data types.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54 is a synonym for DOUBLE.
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
5. The LONG VARCHAR data type is deprecated, not recommended, and might be removed in a future release.
Choose the CLOB data type instead.
Packed decimal
DECIMAL(p,s)
(484 or 485)
CHAR(n)
(452 or 453)
Equivalent to CHAR(n)
Equivalent to CHAR(n)
VARCHAR(n)
(448 or 449)
LONG VARCHAR5
(456 or 457)
CLOB(n)
(408 or 409)
CLOB locator variable4
(964 or 965)
CLOB file reference
variable4
(920 or 921)
BLOB(n)
(404 or 405)
Equivalent to CHAR(10)
Equivalent to CHAR(8)
Equivalent to CHAR(26)
SQL_TYP_XML
DATE
(384 or 385)
TIME
(388 or 389)
TIMESTAMP
(392 or 393)
XML
(988 or 989)
The following data types are only available in the DBCS environment.
Table 12. SQL Column Types Mapped to REXX Declarations
SQL Column Type1
GRAPHIC(n)
(468 or 469)
Equivalent to GRAPHIC(n)
Equivalent to GRAPHIC(n)
VARGRAPHIC(n)
(464 or 465)
LONG VARGRAPHIC5
(472 or 473)
DBCLOB(n)
(412 or 413)
DBCLOB locator variable4
(968 or 969)
DBCLOB file reference
variable4
(924 or 925)
Note:
1. The first number under Column Type indicates that an indicator variable is not provided, and the second number
indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values, or to
hold the length of a truncated string.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54 is a synonym for DOUBLE.
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
5. The LONG VARCHAR and LONG VARGRAPHIC data types are deprecated, not recommended, and might be
removed in a future release. Use the CLOB or DBCLOB data type instead.
Depending on the language, this example will either fail to compile because
variable x is not declared in function foo2(), or the value of x is not set to 10 in
foo2(). To avoid this problem, you must either declare x as a global variable, or
pass x as a parameter to function foo2() as follows:
foo1(){
.
.
.
EXEC SQL BEGIN DECLARE SECTION;
int x;
EXEC SQL END DECLARE SECTION;
x=10;
foo2(x);
.
.
.
}
foo2(int x){
.
.
.
y=x;
.
.
.
}
For example, to generate the declarations for the STAFF table in the SAMPLE
database in C in the output file staff.h, issue the following command:
db2dclgn -d sample -t staff -l C
Note that the execution of this statement includes conversion between DECIMAL
and DOUBLE data types.
To make the query results more readable on your screen, you could use the
following SELECT statement:
SELECT EMPNO, CHAR(SALARY+BONUS) FROM EMPLOYEE
the base application type containing the serialized string representation of the XML
document. It can also be determined internally, which requires interpretation of the
data. For Unicode encoded documents, a byte order mark (BOM), consisting of a
Unicode character code at the beginning of a data stream is recommended. The
BOM is used as a signature that defines the byte order and Unicode encoding
form.
Existing character and binary types, which include CHAR, VARCHAR, CLOB, and
BLOB, can be used in addition to XML host variables for fetching and inserting
data. However, they are not subject to implicit XML parsing, as XML host
variables are. Instead, an explicit XMLPARSE function with default white space
stripping is injected and applied.
XML and XQuery restrictions on developing embedded SQL applications
To declare XML host variables in embedded SQL applications:
In the declaration section of the application, declare the XML host variables as LOB
data types:
v
SQL TYPE IS XML AS CLOB(n) <hostvar_name>
where <hostvar_name> is a CLOB host variable that contains XML data encoded
in the mixed code page of the application.
v
SQL TYPE IS XML AS DBCLOB(n) <hostvar_name>
where <hostvar_name> is a DBCLOB host variable that contains XML data encoded
in the application graphic code page.
v
SQL TYPE IS XML AS DBCLOB_FILE <hostvar_name>
where <hostvar_name> is a DBCLOB file that contains XML data encoded in the
application graphic code page.
v
SQL TYPE IS XML AS BLOB_FILE <hostvar_name>
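For example, a minimal sketch in C that declares an XML host variable and fetches a document into it (the PRODUCT table and its DESCRIPTION column are assumed to be the ones in the sample database):

   EXEC SQL BEGIN DECLARE SECTION;
     SQL TYPE IS XML AS CLOB(10K) xmlDoc;
   EXEC SQL END DECLARE SECTION;
   ...
   EXEC SQL SELECT description INTO :xmlDoc
            FROM product WHERE pid = '100-100-01';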
The following table provides examples for the supported host languages:
C and C++
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr';
if ( SQLCODE < 0 )
  printf( "Update Error: SQLCODE = %ld\n", (long) SQLCODE );
COBOL
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr' END-EXEC.
IF SQLCODE LESS THAN 0
  DISPLAY 'UPDATE ERROR: SQLCODE = ', SQLCODE.
FORTRAN
EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr'
if ( sqlcode .lt. 0 ) THEN
  write(*,*) 'Update error: sqlcode = ', sqlcode
v The precompiler supports the same scope rules as the C and C++ programming
languages. Therefore, you can use the same name for two different variables
each existing within their own scope. In the following example, both
declarations of the variable called empno are allowed; the second declaration
does not cause an error:
file: main.sqc
...
void scope1()
{
EXEC SQL BEGIN DECLARE SECTION ;
short empno;
EXEC SQL END DECLARE SECTION ;
...
}
void scope2()
{
EXEC SQL BEGIN DECLARE SECTION ;
char empno[15 + 1];
The following example is a sample SQL declare section with host variables
declared for supported SQL data types:
EXEC SQL BEGIN DECLARE SECTION;
  .
  .
  .
  short     age = 26;                       /* SQL type 500 */
  short     year;                           /* SQL type 500 */
  sqlint32  salary;                         /* SQL type 496 */
  sqlint32  deptno;                         /* SQL type 496 */
  float     bonus;                          /* SQL type 480 */
  double    wage;                           /* SQL type 480 */
  char      mi;                             /* SQL type 452 */
  char      name[6];                        /* SQL type 460 */
  struct {
    short len;
    char data[24];
  } address;                                /* SQL type 448 */
  struct {
    short len;
    char data[32695];
  } voice;                                  /* SQL type 456 */
  sql type is clob(1m) chapter;                       /* SQL type 408 */
  sql type is clob_locator chapter_locator;           /* SQL type 964 */
  sql type is clob_file chapter_file_ref;             /* SQL type 920 */
  sql type is blob(1m) video;                         /* SQL type 404 */
  sql type is blob_locator video_locator;             /* SQL type 960 */
  sql type is blob_file video_file_ref;               /* SQL type 916 */
  sql type is dbclob(1m) tokyo_phone_dir;             /* SQL type 412 */
  sql type is dbclob_locator tokyo_phone_dir_lctr;    /* SQL type 968 */
  sql type is dbclob_file tokyo_phone_dir_flref;      /* SQL type 924 */
  sql type is varbinary(12) myVarBinField;            /* SQL type 908 */
  sql type is binary(4) myBinField;                   /* SQL type 912 */
  struct {
    short len;
    sqldbchar data[100];
  } vargraphic1;                            /* SQL type 464 */
  struct {
    short len;
    wchar_t data[100];
  } vargraphic2;                            /* SQL type 464 */
  struct {
    short len;
    sqldbchar data[10000];
  } long_vargraphic1;                       /* SQL type 472 */
  struct {
    short len;
    wchar_t data[10000];
  } long_vargraphic2;                       /* SQL type 472 */
  sqldbchar graphic1[100];                  /* SQL type 468 */
  wchar_t   graphic2[100];                  /* SQL type 468 */
  char      date[11];                       /* SQL type 384 */
  char      time[9];                        /* SQL type 388 */
  char      timestamp[27];                  /* SQL type 392 */
  short     wage_ind;                       /* indicator variable */
  .
  .
  .
EXEC SQL END DECLARE SECTION;
The SQLCODE declaration is assumed during the precompile step. Note that when
using this option, the INCLUDE SQLCA statement must not be specified.
In an application that is made up of multiple source files, the SQLCODE and
SQLSTATE variables can be defined in the first source file as in the previous
example. Subsequent source files should modify the definitions as follows:
The SQLCODE declaration is assumed during the precompile step. Note that when
using this option, the INCLUDE SQLCA statement must not be specified.
In an application that is made up of multiple source files, the SQLCODE and
SQLSTATE variables can be defined in the first source file as in the previous
example. Subsequent source files should modify the definitions as follows:
extern sqlint32 SQLCODE;
extern char SQLSTATE[6];
available after the first fetch, you can repeat the FETCH statement to obtain the
next set of rows. The cumulative sum of the total number of rows fetched is stored
in sqlca.sqlerrd[2].
In the following example, two array host variables are declared, empno and
lastname. Each can hold up to 100 elements. Because there is only one FETCH
statement, this example retrieves 100 rows, or less.
EXEC SQL BEGIN DECLARE SECTION;
  char empno[100][8];
  char lastname[100][15];
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE empcr CURSOR FOR
  SELECT empno, lastname FROM employee;

EXEC SQL OPEN empcr;
EXEC SQL WHENEVER NOT FOUND GOTO end_fetch;

while (1) {
  EXEC SQL FETCH empcr INTO :empno :lastname;   /* bulk fetch: 100 or less rows */
  ...
  ...
}
end_fetch:
EXEC SQL CLOSE empcr;
Starting in DB2 V10.1 Fix Pack 2, DB2 for Linux, UNIX, and Windows
embedded SQL C/C++ supports host variables as a structure array during the fetch.
The array size is determined by the structure array that is defined in the DECLARE SECTION.
Embedded SQL C/C++ does not support another structure array nested within the
main structure array (nested structure arrays); such usage results in an error.
The following scenarios use a table TN1 that has an integer column C1 and a
character column C2; the table is created with a CREATE TABLE statement and
populated with several INSERT statements.
FETCH statement using a structure for the host variables and a structure for the
indicators:
EXEC SQL BEGIN DECLARE SECTION;
struct MyStruct
{
int c1;
char c2[11];
}MyStructVar;
struct MyStructInd
{
short c1_ind;
short c2_ind;
}MyStructVarInd;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL DECLARE cur3 CURSOR FOR SELECT C1, C2 FROM TN1;
EXEC SQL OPEN cur3;
EXEC SQL WHENEVER NOT FOUND DO CONTINUE;
do
{
FETCH statement using a structure array for the host variables and a structure array
for the indicators:
EXEC SQL BEGIN DECLARE SECTION;
  int rows_before, rows_fetched_this_time;
  int nrows;
  struct MyStruct
  {
    int c1;
    char c2[11];
  }MyStructVar[3];
  struct MyStructInd
  {
    short c1_ind;
    short c2_ind;
  }MyStructVarInd[3];
EXEC SQL END DECLARE SECTION;
int i;
...
EXEC SQL DECLARE cur3 CURSOR FOR SELECT C1, C2 FROM TN1;
EXEC SQL OPEN cur3;
rows_before = 0;
EXEC SQL WHENEVER NOT FOUND DO BREAK;
while (1)
{
  EXEC SQL FETCH cur3 INTO :MyStructVar :MyStructVarInd;
  rows_fetched_this_time = sqlca.sqlerrd[2] - rows_before;
  rows_before = sqlca.sqlerrd[2];
  /* print output after calling the FETCH statement */
  for (i = 0; i < rows_fetched_this_time; i++)
  {
    printf("%d(%d) %s(%d)\n",
           MyStructVar[i].c1, MyStructVarInd[i].c1_ind,
           MyStructVar[i].c2, MyStructVarInd[i].c2_ind);
  }
  printf("Total rows_before = %d\n", rows_before);
}
EXEC SQL WHENEVER NOT FOUND CONTINUE;
if (sqlca.sqlcode >= 0)   /* loop ended because no more rows were found, not an error */
{
  rows_fetched_this_time = sqlca.sqlerrd[2] - rows_before;
  for (i = 0; i < rows_fetched_this_time; i++)
  {
    printf("%d %s\n", MyStructVar[i].c1, MyStructVar[i].c2);
  }
}
else
{
  ...
}
printf(" Total # of rows fetched : %d\n", sqlca.sqlerrd[2]);
Note: Support is enabled only when the PRECOMPILE option COMPATIBILITY_MODE is set
to ORA. A single FETCH operation does not support multiple structures or a
combination of structures and host variable arrays.
If the number of elements for an indicator array variable does not match the
number of elements of the corresponding host array variable, an error is returned.
Starting in DB2 V10.1 Fix Pack 2, you can use a structure or a structure array as
an indicator placeholder. The existing indicator table cannot be used as the
indicator for structure array host variables. Structure arrays for indicators are
supported only with COMPATIBILITY_MODE set to ORA.
// declaring structure array of size 3 for indicator
EXEC SQL BEGIN DECLARE SECTION;
...
struct MyStructInd
{
short c1_ind;
short c2_ind;
} MyStructVarInd[3];
EXEC SQL END DECLARE SECTION;
...
// using structure array host variables & indicators structure type
// array while executing FETCH statement
// 'MyStructVar' is structure array for host variables
// 'MyStructVarInd' is structure array for indicators
EXEC SQL FETCH cur INTO :MyStructVar :MyStructVarInd;
Note: The array size of the structure that is used for indicators must be greater
than or equal to the array size of the structure that is used for host variables.
All members of the structure array that is used for indicators must be of data
type short, and the number of members in the host variable structure and in the
corresponding indicator structure must be equal. If any of these conditions is not
met, the PRECOMPILE command returns an error.
Syntax for numeric host variables in C and C++ (syntax diagram): an optional
storage class (auto, extern, static, or register) and optional const or volatile
qualifiers, followed by a numeric type (double; short int; sqlint32, which can be
declared as long int on platforms where long is 4 bytes; or sqlint64, which can be
declared as __int64 or long long int), followed by one or more variable names.
Each variable can be declared as a pointer (*) or reference (&) and can be
initialized with = value. sqlint32 corresponds to INTEGER (SQLTYPE 496) and
sqlint64 to BIGINT (SQLTYPE 492).
Syntax for character host variables in C and C++ (syntax diagram): an optional
storage class (auto, extern, static, or register), optional const or volatile
qualifiers, and an optional unsigned qualifier, followed by char and one or more
variable names. Form 1 (CHAR) declares a single-character variable; form 2 (C
string) declares an array with a length, for null-terminated character data. Each
variable can be declared as a pointer (*) or reference (&) and can be initialized
with = value.
Syntax for VARCHAR structured form host variables in C and C++ (syntax diagram):
an optional storage class (auto, extern, static, or register) and optional const
or volatile qualifiers, followed by struct, an optional tag, and a body consisting
of a short length member (var1) and a character array data member (var2[length]),
followed by one or more variable names. Each variable can be declared as a
pointer (*) or reference (&) and can be initialized with = { value-1, value-2 }.
Note: In form 2, length can be any valid constant expression. Its value after
evaluation determines whether the host variable is VARCHAR (SQLTYPE 448) or
LONG VARCHAR (SQLTYPE 456).
wchar_t and sqldbchar data types for graphic data in C and C++
embedded SQL applications
The size and encoding of DB2 graphic data is constant from one platform to
another for a particular code page. However, the size and internal format of the
ANSI C or C++ wchar_t data type depends on which compiler and operating
system you are using.
The sqldbchar data type, however, is defined by DB2 to be two bytes in size, and
is intended to be a portable way of manipulating DBCS and UCS-2 data in the
same format in which it is stored in the database.
You can define all DB2 C graphic host variable types using either wchar_t or
sqldbchar. You must use wchar_t if you build your application using the
WCHARTYPE CONVERT precompile option.
Note: When specifying the WCHARTYPE CONVERT option on a Windows
operating system, you must note that wchar_t on Windows operating systems is
Unicode. Therefore, if your C or C++ compiler's wchar_t is not Unicode, the
wcstombs() function call can fail with SQLCODE -1421 (SQLSTATE=22504). If this
happens, you can specify the WCHARTYPE NOCONVERT option, and explicitly
call the wcstombs() and mbstowcs() functions from within your program.
If you build your application with the WCHARTYPE NOCONVERT precompile
option, you should use sqldbchar for maximum portability between different DB2
client and server platforms. You can use wchar_t with WCHARTYPE
NOCONVERT, but only on platforms where wchar_t is defined as two bytes in
length.
If you incorrectly use either wchar_t or sqldbchar in host variable declarations, you
will receive an SQLCODE 15 (no SQLSTATE) at precompile time.
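For illustration only (the variable names are not from the original), the two
kinds of declaration look like this; both describe 100-character graphic host
variables, and the choice between them follows the WCHARTYPE rules described
above:

EXEC SQL BEGIN DECLARE SECTION;
  sqldbchar g_portable[100];    /* two bytes per character on every DB2 platform */
  wchar_t   g_converted[100];   /* must be used when precompiling with           */
                                /* WCHARTYPE CONVERT                             */
EXEC SQL END DECLARE SECTION;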
Syntax for graphic host variables in C and C++, VARGRAPHIC structured form
(syntax diagram): an optional storage class (auto, extern, static, or register)
and optional const or volatile qualifiers, followed by struct, an optional tag,
and a body consisting of a short length member (var-1) and a graphic data member
declared as sqldbchar or wchar_t var-2[length], followed by one or more variable
names. Each variable can be declared as a pointer (*) or reference (&) and can be
initialized with = { value-1, value-2 }.
Notes:
1. To determine which of the two graphic types to use, see the description of the
   wchar_t and sqldbchar data types in C and C++.
2. length can be any valid constant expression. Its value after evaluation
   determines whether the host variable is VARGRAPHIC (SQLTYPE 464) or LONG
   VARGRAPHIC (SQLTYPE 472). The value of length must be greater than or equal
   to 1, and not greater than the maximum length of LONG VARGRAPHIC, which is
   16 350.
Syntax for graphic host variables in C and C++, single-graphic and
null-terminated graphic forms (syntax diagram): an optional storage class (auto,
extern, static, or register) and optional const or volatile qualifiers, followed
by sqldbchar or wchar_t, then either a single variable name (GRAPHIC, form 1) or
an array declared with a length (null-terminated graphic string, form 2). Each
variable can be declared as a pointer (*) or reference (&) and can be initialized
with = value.
Note: To determine which of the two graphic types to use, see the description of
the wchar_t and sqldbchar data types in C and C++.
The syntax for declaring large object (LOB) host variables in C or C++ is
(syntax diagram): an optional storage class (auto, extern, static, or register)
and optional const or volatile qualifiers, followed by SQL TYPE IS, optionally
XML AS, then BLOB, CLOB, or DBCLOB with a (length), followed by one or more
variable names. Each variable can be declared as a pointer (*) or reference (&)
and can be initialized with LOB data.
5. A length for the LOB must be specified; that is, the following declaration is not
permitted:
SQL TYPE IS BLOB my_blob;
6. If the LOB is not initialized within the declaration, no initialization will be done
within the precompiler-generated code.
7. If a DBCLOB is initialized, it is the user's responsibility to prefix the string with
an 'L' (indicating a wide-character string).
Note: Wide-character literals, for example, L"Hello", should only be used in a
precompiled program if the WCHARTYPE CONVERT precompile option is
selected.
8. The precompiler generates a structure tag which can be used to cast to the host
variable's type.
BLOB example:
Declaration:
static Sql Type is Blob(2M) my_blob=SQL_BLOB_INIT("mydata");
CLOB example:
Declaration:
volatile sql type is clob(125m) *var1, var2 = {10, "data5data5"};
DBCLOB example:
Declaration:
SQL TYPE IS DBCLOB(30000) my_dbclob1;
Declaration:
SQL TYPE IS DBCLOB(30000) my_dbclob2 = SQL_DBCLOB_INIT(L"mydbdata");
struct my_dbclob2_t {
sqluint32
length;
wchar_t
data[30000];
} my_dbclob2 = SQL_DBCLOB_INIT(L"mydbdata");
The syntax for declaring large object (LOB) locator host variables in C or C++ is
(syntax diagram): an optional storage class (auto, extern, static, or register)
and optional const or volatile qualifiers, followed by SQL TYPE IS, then
BLOB_LOCATOR, CLOB_LOCATOR, or DBCLOB_LOCATOR, followed by one or more variable
names. Each variable can be declared as a pointer (*) or reference (&) and can be
initialized with = init-value.
The syntax for declaring file reference host variables in C or C++ is:
(Syntax diagram) An optional storage class (auto, extern, static, or register)
and optional const or volatile qualifiers, followed by SQL TYPE IS, optionally
XML AS, then BLOB_FILE, CLOB_FILE, or DBCLOB_FILE, followed by one or more
variable names. Each variable can be declared as a pointer (*) or reference (&)
and can be initialized with = init-value.
Note: This structure is equivalent to the sqlfile structure located in the sql.h
header file.
The first declaration is a pointer to a 10-byte character array. This is a valid host
variable. The second is not a valid declaration. The parentheses are not allowed
in a pointer to a character. The third declaration is an array of pointers. This is
not a supported data type.
The host variable declaration:
char *ptr;      /* Correct */
v Only the asterisk can be used as an operator over a host variable name.
v The maximum length of a host variable name is not affected by the number of
asterisks specified, because asterisks are not considered part of the name.
v Whenever using a pointer variable in an SQL statement, you should leave the
optimization level precompile option (OPTLEVEL) at the default setting of 0 (no
optimization). This means that no SQLDA optimization will be done by the
database manager.
Data members are only directly accessible in SQL statements through the implicit
this pointer provided by the C++ compiler in class member functions. You cannot
explicitly qualify an object instance (such as SELECT name INTO :my_obj.staff_name
...) in an SQL statement.
If you directly refer to class data members in SQL statements, the database
manager resolves the reference using the this pointer. For this reason, you should
leave the optimization level precompile option (OPTLEVEL) at the default setting
of 0 (no optimization).
The following example shows how you might directly use class data members
which you have declared as host variables in an SQL statement.
class STAFF
{
.
.
.
public:
.
.
.
short int hire( void )
{
EXEC SQL INSERT INTO staff ( name,id,salary )
VALUES ( :staff_name, :staff_id, :staff_salary );
staff_in_db = (sqlca.sqlcode == 0);
return sqlca.sqlcode;
}
};
In this example, class data members staff_name, staff_id, and staff_salary are
used directly in the INSERT statement. Because they have been declared as host
variables (see the first example in this section), they are implicitly qualified to the
current object with the this pointer. In SQL statements, you can also refer to data
members that are not accessible through the this pointer. You do this by referring
to them indirectly using pointer or reference host variables.
The following example shows a new method, asWellPaidAs, that takes a second
object, otherGuy. This method references its members indirectly through a local
pointer or reference host variable, as you cannot reference its members directly
within the SQL statement.
short int STAFF::asWellPaidAs( STAFF otherGuy )
{
EXEC SQL BEGIN DECLARE SECTION;
short &otherID = otherGuy.staff_id;
double otherSalary;
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT SALARY INTO :otherSalary
FROM STAFF WHERE id = :otherID;
if( sqlca.sqlcode == 0 )
return staff_salary >= otherSalary;
else
return 0;
}
The syntax for declaring binary host variables in C or C++ is (syntax diagram):
SQL TYPE IS BINARY(length) or SQL TYPE IS VARBINARY(length).
Example
Declaring a fixed-length binary host variable, where the length N satisfies
1 <= N <= 255:
   SQL TYPE IS BINARY(4) myBinField;
Declaring a varying-length binary host variable:
   SQL TYPE IS VARBINARY(12) myVarBinField;
On retrieval from the database, the length of the data is set properly in the
corresponding structure.
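As a hedged illustration (the table T1 and its columns ID and COL1 are
assumptions, not from the original), a binary host variable used to retrieve a
VARBINARY(12) column:

EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS VARBINARY(12) myVarBinField;
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT COL1 INTO :myVarBinField
         FROM T1
         WHERE ID = 1;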
You can easily accomplish the same thing through use of local pointer or reference
variables, which are set outside the SQL statement, to point to the required scoped
variable, then used inside the SQL statement to refer to it. The following example
shows the correct method to use:
EXEC SQL BEGIN DECLARE SECTION;
char (& localName)[20] = ::name;
EXEC SQL END DECLARE SECTION;
EXEC SQL
   SELECT name INTO :localName FROM STAFF
   WHERE name = 'Sanders';
In general, although eucJP and eucTW store GRAPHIC data as UCS-2, the
GRAPHIC data in these databases is still non-ASCII eucJP or eucTW data.
Specifically, any space padded to such GRAPHIC data is DBCS space (also known
as ideographic space in UCS-2, U+3000). For a UCS-2 database, however,
GRAPHIC data can contain any UCS-2 character, and space padding is done with
UCS-2 space, U+0020. Keep this difference in mind when you code applications to
retrieve UCS-2 data from a UCS-2 database versus UCS-2 data from eucJP and
eucTW databases.
Binary storage of variable values using the FOR BIT DATA clause
in C and C++ embedded SQL applications
You can declare certain database columns by using the FOR BIT DATA clause.
These columns, which generally contain characters, are used to hold binary
information.
You cannot use the standard C or C++ string type 460 for columns designated FOR
BIT DATA. The database manager truncates this data type when a null character is
encountered. Use either the VARCHAR (SQL type 448) or CLOB (SQL type 408)
structures.
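A minimal sketch (assuming a column defined as CHAR(20) FOR BIT DATA; the
structure name is illustrative) of a VARCHAR-structured host variable that avoids
truncation at the first null byte:

EXEC SQL BEGIN DECLARE SECTION;
  struct {
     short length;
     char  data[20];
  } bit_hv;                      /* VARCHAR structured form (SQL type 448) */
EXEC SQL END DECLARE SECTION;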
3. Any errors the external C preprocessor encounters are reported in a file with a
name corresponding to the original source file, but with a .err extension.
For example, you can use macro expansion in your source code as follows:
#define SIZE 3
EXEC SQL BEGIN DECLARE SECTION;
char a[SIZE+1];
char b[(SIZE+1)*3];
struct
{
short length;
char data[SIZE*6];
} m;
SQL TYPE IS BLOB(SIZE+1) x;
SQL TYPE IS CLOB((SIZE+2)*3) y;
SQL TYPE IS DBCLOB(SIZE*2K) z;
EXEC SQL END DECLARE SECTION;
The previous declarations resolve to the following example after you use the
PREPROCESSOR option:
EXEC SQL BEGIN DECLARE SECTION;
char a[4];
char b[12];
struct
{
short length;
char data[18];
} m;
SQL TYPE IS BLOB(4) x;
SQL TYPE IS CLOB(15) y;
SQL TYPE IS DBCLOB(6144) z;
EXEC SQL END DECLARE SECTION;
   {
      short length;
      char  data[10];
   } name;
   struct
   {
      short  years;
      double salary;
   } info;
} staff_record;
The fields of a host structure can be any of the valid host variable types. Valid
types include all numeric, character, and large object types. Nested host structures
are also supported up to 25 levels. In the example shown previously, the field info
is a sub-structure, whereas the field name is not, as it represents a VARCHAR field.
The same principle applies to LONG VARCHAR, VARGRAPHIC and LONG
VARGRAPHIC. Pointers to host structures are also supported.
There are two ways to reference the host variables grouped in a host structure in
an SQL statement:
v The host structure name can be referenced in an SQL statement.
EXEC SQL SELECT id, name, years, salary
INTO :staff_record
FROM staff
WHERE id = 10;
v Field names can be referenced individually in an SQL statement, fully qualified
with the host structure name (for example, :staff_record.info.years).
References to field names must be fully qualified, even if there are no other host
variables with the same name. Qualified sub-structures can also be referenced. In
the preceding example, :staff_record.info can be used to replace
:staff_record.info.years, :staff_record.info.salary.
Because a reference to a host structure (first example) is equivalent to a
comma-separated list of its fields, there are instances where this type of reference
might lead to an error. For example:
EXEC SQL DELETE FROM :staff_record;
Other uses of host structures, which can cause an SQL0087N error to occur, include
PREPARE, EXECUTE IMMEDIATE, CALL, indicator variables and SQLDA
references. Host structures with exactly one field are permitted in such situations,
as are references to individual fields (second example).
The preceding example declares an indicator table with 10 elements. It can be used
in an SQL statement as follows:
EXEC SQL SELECT id, name, years, salary
INTO :staff_record INDICATOR :ind_tab
FROM staff
WHERE id = 10;
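A minimal sketch of the kind of declaration assumed here (a 10-element indicator
table named ind_tab, matching the usage above and the list that follows):

EXEC SQL BEGIN DECLARE SECTION;
  short ind_tab[10];             /* indicator table with 10 elements */
EXEC SQL END DECLARE SECTION;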
The following lists each host structure field with its corresponding indicator
variable in the table:
staff_record.id
ind_tab[0]
staff_record.name
ind_tab[1]
staff_record.info.years
ind_tab[2]
staff_record.info.salary
ind_tab[3]
Note: An indicator table element, for example ind_tab[1], cannot be referenced
individually in an SQL statement. The keyword INDICATOR is optional. The
number of structure fields and indicators do not have to match; any extra
indicators are unused, as are extra fields that do not have indicators assigned to
them.
A scalar indicator variable can also be used in the place of an indicator table to
provide an indicator for the first field of the host structure. This is equivalent to
having an indicator table with only one element. For example:
short scalar_ind;
EXEC SQL SELECT id, name, years, salary
INTO :staff_record INDICATOR :scalar_ind
FROM staff
WHERE id = 10;
struct tag
{
short i[2];
} test_record;
The array will be expanded into its elements when test_record is referenced in an
SQL statement making :test_record equivalent to :test_record.i[0],
:test_record.i[1].
For output, the handling of the null-terminator depends on whether the number of
characters returned (k) is greater than, equal to, or less than the host variable
length (n), as defined by the LANGLEVEL rules.
For input:
   When the database manager encounters an input host variable of one of
   these SQLTYPE values that does not end with a null-terminator, it
   assumes that character n+1 contains the null-terminator character.
v If the LANGLEVEL option on the PREP command is MIA:
   For output, a corresponding set of rules applies.
For example:
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
 77 dept    pic s9(4) comp-5.
 01 userid  pic x(8).
 01 passwd.
EXEC SQL END DECLARE SECTION END-EXEC.
(Declare section example showing COBOL host variables for CLOB, CLOB-LOCATOR,
CLOB-FILE, BLOB, BLOB-LOCATOR, BLOB-FILE, DBCLOB, DBCLOB-LOCATOR, DBCLOB-FILE,
and VARGRAPHIC data; SQL types 404, 408, 412, 464, 916, 920, 924, 960, 964,
and 968.)
EXEC SQL END DECLARE SECTION END-EXEC.
Syntax for numeric host variables in COBOL (syntax diagram): a level number (01
or 77), the variable name, a PICTURE (PIC) clause with a picture-string, a USAGE
clause of COMP-3 (COMPUTATIONAL-3) or COMP-5 (COMPUTATIONAL-5), and an optional
VALUE clause.
Syntax for floating-point host variables in COBOL (syntax diagram): a level
number (01 or 77), the variable name, a USAGE clause of COMPUTATIONAL-1 (COMP-1)
or COMPUTATIONAL-2 (COMP-2), and an optional VALUE clause.
Syntax for character host variables in COBOL, fixed length (syntax diagram): a
level number (01 or 77), the variable name, a PICTURE (PIC) clause with a
picture-string, and an optional VALUE clause.
Variable length (syntax diagram): a group item of the form
   01 variable-name.
      49 identifier-1 PIC S9(4) USAGE COMP-5 (COMPUTATIONAL-5), optionally with
         a VALUE clause.
      49 identifier-2 PIC picture-string, optionally with a VALUE clause.
Syntax for graphic host variables in COBOL, fixed length (syntax diagram): a
level number (01 or 77), the variable name, a PICTURE (PIC) clause with a
picture-string, USAGE DISPLAY-1, and an optional VALUE clause.
Variable length (syntax diagram): a group item of the form
   01 variable-name.
      49 identifier-1 PIC S9(4) USAGE COMP-5 (COMPUTATIONAL-5), optionally with
         a VALUE clause.
      49 identifier-2 PIC picture-string USAGE DISPLAY-1, optionally with a
         VALUE clause.
The syntax for declaring large object (LOB) host variables in COBOL is (syntax
diagram):
   01 variable-name [USAGE [IS]] SQL TYPE IS BLOB | CLOB | DBCLOB (length [K|M|G]).
CLOB example:
Declaring:
01 MY-CLOB USAGE IS SQL TYPE IS CLOB(125M).
DBCLOB example:
Declaring:
01 MY-DBCLOB USAGE IS SQL TYPE IS DBCLOB(30000).
The syntax for declaring large object (LOB) locator host variables in COBOL is:
   01 variable-name [USAGE [IS]] SQL TYPE IS
         BLOB-LOCATOR | CLOB-LOCATOR | DBCLOB-LOCATOR.
The syntax for declaring file reference host variables in COBOL is:
   01 variable-name [USAGE [IS]] SQL TYPE IS
         BLOB-FILE | CLOB-FILE | DBCLOB-FILE.
01 foo1.
10 a pic s9(4) comp-5.
10 a1 redefines a pic x(2).
10 b pic x(10).
That is, the subordinate item a1, which is declared with the REDEFINES clause, is
not automatically expanded in such situations. If a1 is unambiguous, you can
explicitly refer to a subordinate with a REDEFINES clause in an SQL statement, as
follows:
... INTO :foo1.a1 ...
or
... INTO :a1 ...
Binary storage of variable values using the FOR BIT DATA clause
in COBOL embedded SQL applications
You can declare certain database columns using the FOR BIT DATA clause. These
columns, which generally contain characters, are used to hold binary information.
The CHAR(n), VARCHAR, LONG VARCHAR, and BLOB data types are the
COBOL host variable types that can contain binary data. You must use these data
types when working with columns with the FOR BIT DATA attribute.
Note: The LONG VARCHAR data type is deprecated and might be removed in a
future release.
Group data items in the declare section can have any of the valid host variable
types described previously as subordinate data items. This includes all numeric
and character types, as well as all large object types. You can nest group data items
up to 10 levels. Note that you must declare VARCHAR character types with the
subordinate items at level 49, as in the example shown previously. If they are not
at level 49, the VARCHAR is treated as a group data item with two subordinates,
and is subject to the rules of declaring and using group data items. In the previous
example, staff-info is a group data item, whereas staff-name is a VARCHAR.
The same principle applies to LONG VARCHAR, VARGRAPHIC, and LONG
VARGRAPHIC. You may declare group data items at any level between 02 and 49.
You can use group data items and their subordinates in four ways:
Method 1.
The entire group may be referenced as a single host variable in an SQL statement:
EXEC SQL SELECT id, name, dept, job
INTO :staff-record
FROM staff WHERE id = 10 END-EXEC.
Note: The reference to staff-id is qualified with its group name using the prefix
staff-record., and not staff-id of staff-record as in pure COBOL.
Assuming there are no other host variables with the same names as the
subordinates of staff-record, the preceding statement can also be coded as in
method 3, eliminating the explicit group qualification.
Method 3.
Here, subordinate items are referenced in a typical COBOL fashion, without being
qualified to their particular group item:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-id,
:staff-name,
:staff-dept,
:staff-job
FROM staff WHERE id = 10 END-EXEC.
Method 4.
To resolve the ambiguous reference, you can use partial qualification of the
subordinate item, for example:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-id,
:staff-name,
:staff-info.staff-dept,
:staff-info.staff-job
FROM staff WHERE id = 10 END-EXEC.
Other uses of group items that cause an SQL0087N to occur include PREPARE,
EXECUTE IMMEDIATE, CALL, indicator variables, and SQLDA references. Groups
with only one subordinate are permitted in such situations, as are references to
individual subordinates, as in methods 2, 3, and 4 shown previously.
For example:
01 staff-indicator-table.
05 staff-indicator pic s9(4) comp-5
occurs 7 times.
This indicator table can be used effectively with the first format of group item
reference shown previously:
EXEC SQL SELECT id, name, dept, job
INTO :staff-record :staff-indicator
FROM staff WHERE id = 10 END-EXEC.
The SQLCOD declaration is assumed during the precompile step. The variable named
SQLSTATE can also be SQLSTA. Note that when using this option, the INCLUDE
SQLCA statement should not be specified.
For applications that contain multiple source files, the declarations of SQLCOD and
SQLSTATE can be included in each source file, as shown previously.
Syntax for numeric host variables in FORTRAN (syntax diagram): INTEGER*2,
INTEGER*4, REAL*4, REAL*8, or DOUBLE PRECISION, followed by one or more variable
names, each with an optional / initial-value / clause.
Syntax for character host variables in FORTRAN, fixed length (syntax diagram):
CHARACTER or CHARACTER*n, followed by one or more variable names, each with an
optional / initial-value / clause.
Variable length (syntax diagram): SQL TYPE IS VARCHAR (length), followed by one
or more variable names.
VARCHAR example:
Declaring:
   sql type is varchar(1000) my_varchar
results in the generation of the following equivalenced variables:
   my_varchar(1000+2)
   my_varchar_length
   my_varchar_data(1000)
   equivalence( my_varchar(1),
  +             my_varchar_length )
   equivalence( my_varchar(3),
  +             my_varchar_data )
A 10000-byte declaration (my_lvarchar) generates the corresponding variables:
   my_lvarchar(10000+2)
   my_lvarchar_length
   my_lvarchar_data(10000)
   equivalence( my_lvarchar(1),
  +             my_lvarchar_length )
   equivalence( my_lvarchar(3),
  +             my_lvarchar_data )
However, because blanks can be significant in passwords, you should declare host
variables for passwords as VARCHAR, and have the length field set to reflect the
actual password length:
      EXEC SQL BEGIN DECLARE SECTION
         character*8 dbname, userid
         sql type is varchar(18) passwd
      EXEC SQL END DECLARE SECTION
      character*18 passwd_string
      equivalence( passwd_data, passwd_string )
      dbname = 'sample'
      userid = 'userid'
      passwd_length = 8
      passwd_string = 'password'
      EXEC SQL CONNECT TO :dbname USER :userid USING :passwd
The syntax for declaring large object (LOB) host variables in FORTRAN is (syntax
diagram): SQL TYPE IS BLOB or CLOB ( length ), where length can be qualified with
K, M, or G, followed by the variable name.
BLOB example:
Declaring:
   sql type is blob(2m) my_blob
results in the generation of the following equivalenced variables:
   my_blob(2097152+4)
   my_blob_length
   my_blob_data(2097152)
   equivalence( my_blob(1),
  +             my_blob_length )
   equivalence( my_blob(5),
  +             my_blob_data )
CLOB example:
Declaring:
   sql type is clob(125m) my_clob
results in the generation of the following equivalenced variables:
   my_clob(131072000+4)
   my_clob_length
   my_clob_data(131072000)
   equivalence( my_clob(1),
  +             my_clob_length )
   equivalence( my_clob(5),
  +             my_clob_data )
The syntax for declaring large object (LOB) locator host variables in FORTRAN is
(syntax diagram): SQL TYPE IS BLOB_LOCATOR or CLOB_LOCATOR, followed by the
variable name.
The syntax for declaring file reference host variables in FORTRAN is (syntax
diagram): SQL TYPE IS BLOB_FILE or CLOB_FILE, followed by the variable name.
For example, a file reference variable named my_file generates the following
equivalenced variables:
   character     my_file(267)
   integer*4     my_file_name_length
   integer*4     my_file_data_length
   integer*4     my_file_file_options
   character*255 my_file_name
   equivalence( my_file(1),
  +             my_file_name_length )
   equivalence( my_file(5),
  +             my_file_data_length )
   equivalence( my_file(9),
  +             my_file_file_options )
   equivalence( my_file(13),
  +             my_file_name )
To ensure that a character string is not converted to a numeric data type,
enclose the string with single quotation marks as in the following example:
   VAR = '100'
REXX sets the variable VAR to the 3-byte character string 100. If single
quotation marks are to be included as part of the string, follow this example:
   VAR = "'100'"
When inserting numeric data into a CHARACTER field, the REXX interpreter treats
numeric data as integer data, thus you must concatenate numeric strings
explicitly and surround them with single quotation marks.
0     The API was executed. The REXX variable SQLCA contains the
      completion status of the API. If SQLCA.SQLCODE is not zero,
      SQLMSG contains the text message associated with that value.
-1
-2
-3
-6    The SQLCA REXX variable could not be built. This indicates that
      there was not enough memory available or the REXX variable pool
      was unavailable for some reason.
-7    The SQLMSG REXX variable could not be built. This indicates that
      there was not enough memory available or the REXX variable pool
      was unavailable for some reason.
-8
-9
-10
-11
-12
-13
-14
-15
-16   The REXX variable specified for the error text could not be set.
-17
-18
Note: The values -8 through -18 are returned only by the GET ERROR MESSAGE API.
SQLMSG
If SQLCA.SQLCODE is not 0, this variable contains the text message
associated with the error code.
SQLISL
The isolation level. Possible values are:
RR
Repeatable read.
RS
Read stability.
CS
Cursor stability. This is the default.
UR
Uncommitted read.
NC
No commit. (NC is only supported by some host or System i
servers.)
SQLCA
The SQLCA structure updated after SQL statements are processed and DB2
APIs are called.
SQLRODA
The input/output SQLDA structure for stored procedures invoked using
the CALL statement. It is also the output SQLDA structure for stored
procedures invoked using the Database Application Remote Interface
(DARI) API.
SQLRIDA
The input SQLDA structure for stored procedures invoked using the
Database Application Remote Interface (DARI) API.
SQLRDAT
An SQLCHAR structure for server procedures invoked using the Database
Application Remote Interface (DARI) API.
The syntax for declaring LOB locator host variables in REXX is (syntax diagram):
   DECLARE :variable-name [, :variable-name ...] LANGUAGE TYPE BLOB | CLOB | DBCLOB LOCATOR
Example:
   CALL SQLEXEC 'DECLARE :hv1, :hv2 LANGUAGE TYPE CLOB LOCATOR'
Data represented by LOB locators returned from the engine can be freed in
REXX/SQL using the FREE LOCATOR statement, which has the following format
(syntax diagram):
   FREE LOCATOR :variable-name [, :variable-name ...]
Example:
   CALL SQLEXEC 'FREE LOCATOR :hv1, :hv2'
The syntax for declaring LOB file reference host variables in REXX is (syntax
diagram):
   DECLARE :variable-name [, :variable-name ...] LANGUAGE TYPE BLOB | CLOB | DBCLOB FILE
Example:
   CALL SQLEXEC 'DECLARE :hv3, :hv4 LANGUAGE TYPE CLOB FILE'
File reference variables in REXX contain three fields. For the preceding example
they are:
hv3.FILE_OPTIONS.
Set by the application to indicate how the file will be used.
hv3.DATA_LENGTH.
Set by DB2 to indicate the size of the file.
hv3.NAME.
Set by the application to the name of the LOB file.
For FILE_OPTIONS, the application sets the following keywords:
Keyword (integer value)
Meaning
READ (2)
File is to be used for input. This is a regular file that can be opened, read
and closed. The length of the data in the file (in bytes) is computed (by the
application requester code) upon opening the file.
CREATE (8)
On output, create a new file. If the file already exists, it is an error. The
length (in bytes) of the file is returned in the DATA_LENGTH field of the file
reference variable structure.
OVERWRITE (16)
On output, the existing file is overwritten if it exists, otherwise a new file
is created. The length (in bytes) of the file is returned in the DATA_LENGTH
field of the file reference variable structure.
APPEND (32)
The output is appended to the file if it exists, otherwise a new file is
created. The length (in bytes) of the data that was added to the file (not the
total file length) is returned in the DATA_LENGTH field of the file reference
variable structure.
Note: A file reference host variable is a compound variable in REXX, thus you
must set values for the NAME, NAME_LENGTH and FILE_OPTIONS fields in addition to
declaring them.
You should include this statement at the end of LOB applications. Note that you
can include it anywhere as a precautionary measure to clear declarations which
might have been left by previous applications, such as at the beginning of a REXX
SQL application.
v COBOL (updat.sqb)
The following three examples are from the updat sample. See this sample for a
complete program that shows how to modify table data in COBOL.
The following example shows how to insert table data:
EXEC SQL INSERT INTO staff
   VALUES (999, 'Testing', 99, :job-update, 0, 0, 0)
   END-EXEC.
The following example shows how to update table data where job-update is a
reference to a host variable declared in the declaration section of the source
code:
EXEC SQL UPDATE staff
   SET job=:job-update
   WHERE job='Mgr'
   END-EXEC.
(SQLDA structure figure) The SQLDA header contains the fields sqldaid (CHAR),
sqldabc (INTEGER), sqln (SMALLINT), and sqld (SMALLINT); each SQLVAR entry
contains, among other fields, sqldata (POINTER) and sqlind (POINTER).
Procedure
v Provide the largest SQLDA (that is, the one with the greatest number of
SQLVAR entries) that is needed. The maximum number of columns that can be
returned in a result table is 255. If any of the columns being returned is either a
LOB type or a distinct type, the value in SQLN is doubled, and the number of
SQLVAR entries needed to hold the information is doubled to 510. However, as
most SELECT statements do not even retrieve 255 columns, most of the allocated
space is unused.
v Provide a smaller SQLDA with fewer SQLVAR entries. In this case, if there are
more columns in the result than SQLVAR entries allowed for in the SQLDA, no
descriptions are returned. Instead, the database manager returns the number of
select list items detected in the SELECT statement. The application allocates an
SQLDA with the required number of SQLVAR entries, then uses the DESCRIBE
statement to acquire the column descriptions.
v When any of the columns returned has a LOB or user defined type, provide an
SQLDA with the exact number of SQLVAR entries.
What to do next
For all three methods, the question arises as to how many initial SQLVAR entries
you should allocate. Each SQLVAR element uses up 44 bytes of storage (not
counting storage allocated for the SQLDATA and SQLIND fields). If memory is
plentiful, the first method of providing an SQLDA of maximum size is easier to
implement.
The second method of allocating a smaller SQLDA is only applicable to
programming languages such as C and C++ that support the dynamic allocation of
memory. For languages such as COBOL and FORTRAN that do not support the
dynamic allocation of memory, use the first method.
Procedure
v A fixed-length header, 16 bytes in length, containing fields such as SQLN and
SQLD
v A variable-length array of SQLVAR entries, of which each element is 44 bytes in
length on 32-bit platforms, and 56 bytes in length on 64-bit platforms.
What to do next
The number of SQLVAR entries needed for fulsqlda is specified in the SQLD field
of minsqlda. Assume this value is 20. Therefore, the storage allocation required for
fulsqlda is:
16 + (20 * sizeof(struct sqlvar))
This value represents the size of the header plus 20 times the size of each SQLVAR
entry, giving a total of 896 bytes.
You can use the SQLDASIZE macro to avoid doing your own calculations and to
avoid any version-specific dependencies.
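A minimal C sketch of such an allocation (the function name is illustrative; the
SQLDASIZE macro comes from sqlda.h):

#include <stdlib.h>
#include <sqlda.h>

/* Allocate an SQLDA large enough to describe num_cols columns. */
struct sqlda *alloc_sqlda( int num_cols )
{
    struct sqlda *p = (struct sqlda *) malloc( SQLDASIZE( num_cols ) );
    if ( p != NULL )
    {
        p->sqln = (short) num_cols;  /* number of SQLVAR entries allocated */
    }
    return p;
}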
Procedure
Code your application to perform the following steps:
1. Store the value 20 in the SQLN field of fulsqlda (the assumption in this
example is that the result table contains 20 columns, and none of these columns
are LOB columns).
2. Obtain information about the SELECT statement using the second SQLDA
structure, fulsqlda. Two methods are available:
v Use another PREPARE statement, specifying fulsqlda instead of minsqlda.
v Use the DESCRIBE statement specifying fulsqlda.
What to do next
Using the DESCRIBE statement is preferred because the costs of preparing the
statement a second time are avoided. The DESCRIBE statement reuses information
previously obtained during the prepare operation to fill in the new SQLDA
structure. The following statement can be issued:
EXEC SQL DESCRIBE STMT INTO :fulsqlda
Procedure
Code your application to do the following tasks:
1. Analyze each SQLVAR description to determine how much space is required
for the value of that column.
Note that for LOB values, when the SELECT is described, the data type given
in the SQLVAR is SQL_TYP_xLOB. This data type corresponds to a plain LOB
host variable, that is, the whole LOB will be stored in memory at one time. This
will work for small LOBs (up to a few MB), but you cannot use this data type
for large LOBs (say 1 GB) because the stack is unable to allocate enough
memory. It will be necessary for your application to change its column
definition in the SQLVAR to be either SQL_TYP_xLOB_LOCATOR or
SQL_TYPE_xLOB_FILE. (Note that changing the SQLTYPE field of the SQLVAR
also necessitates changing the SQLLEN field.) After changing the column
definition in the SQLVAR, your application can then allocate the correct amount
of storage for the new type.
2. Allocate storage for the value of that column.
3. Store the address of the allocated storage in the SQLDATA field of the SQLDA
structure.
What to do next
These steps are accomplished by analyzing the description of each column and
replacing the content of each SQLDATA field with the address of a storage area
large enough to hold any values from that column. The length attribute is
determined from the SQLLEN field of each SQLVAR entry for data items that are
not of a LOB type. For items with a type of BLOB, CLOB, or DBCLOB, the length
attribute is determined from the SQLLONGLEN field of the secondary SQLVAR
entry.
In addition, if the specified column allows nulls, the application must replace the
content of the SQLIND field with the address of an indicator variable for the
column.
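A minimal sketch of this step in C (assumptions: the SQLDA has already been
filled in by DESCRIBE, no LOB columns are present, and error handling is
omitted):

#include <stdlib.h>
#include <sqlda.h>

/* Attach a data buffer and, for nullable columns, an indicator variable
   to every SQLVAR entry that DESCRIBE filled in. */
void alloc_column_storage( struct sqlda *da )
{
    int i;
    for ( i = 0; i < da->sqld; i++ )
    {
        struct sqlvar *var = &da->sqlvar[i];

        /* sqllen bytes is enough for fixed-length types; a little extra is
           added so that variable-length types, which carry a 2-byte length
           prefix, also fit. */
        var->sqldata = (char *) malloc( var->sqllen + sizeof(short) );

        /* An odd SQLTYPE value means the column is nullable and needs an
           indicator variable. */
        if ( var->sqltype % 2 == 1 )
            var->sqlind = (short *) malloc( sizeof(short) );
        else
            var->sqlind = NULL;
    }
}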
For a successful FETCH, you could write the application to obtain the data from
the SQLDA and display the column headings. For example:
display_col_titles( sqldaPointer ) ;
After the data is displayed, you should close the cursor and release any
dynamically allocated memory. For example:
EXEC SQL CLOSE pcurs ;
EMB_SQL_CHECK( "CLOSE CURSOR" ) ;
The effect of this macro is to calculate the required storage for an SQLDA with n
SQLVAR elements.
To create an SQLDA structure with COBOL, you can either embed an INCLUDE
SQLDA statement or use the COPY statement. Use the COPY statement when you
want to control the maximum number of SQLVAR entries and hence the amount of
storage that the SQLDA uses. For example, to change the default number of
SQLVAR entries from 1489 to 1, use the following COPY statement:
COPY "sqlda.cbl"
replacing --1489-by --1--.
The FORTRAN language does not directly support self-defining data structures or
dynamic allocation. No SQLDA include file is provided for FORTRAN, because it
is not possible to support the SQLDA as a data structure in FORTRAN. The
precompiler will ignore the INCLUDE SQLDA statement in a FORTRAN program.
However, you can create something similar to a static SQLDA structure in a
FORTRAN program, and use this structure wherever an SQLDA can be used. The
file sqldact.f contains constants that help in declaring an SQLDA structure in
FORTRAN.
Execute calls to SQLGADDR to assign pointer values to the SQLDA elements that
require them.
The following table shows the declaration and use of an SQLDA structure with one
SQLVAR element.
Language
C and C++
Language
COBOL
Language
FORTRAN
include sqldact.f
integer*2 sqlvar1
parameter ( sqlvar1 = sqlda_header_sz + 0*sqlvar_struct_sz )
C     Declare an output SQLDA with room for one SQLVAR element
      character     out_sqlda( sqlda_header_sz + 1*sqlvar_struct_sz )
C     Header
      character*8   out_sqldaid
      integer*4     out_sqldabc
      integer*2     out_sqln
      integer*2     out_sqld
C     First Variable
      integer*2     out_sqltype1
      integer*2     out_sqllen1
      integer*4     out_sqldata1
      integer*4     out_sqlind1
      integer*2     out_sqlnamel1
      character*30  out_sqlnamec1

      equivalence( out_sqlda(sqlda_sqldaid_ofs), out_sqldaid )
      equivalence( out_sqlda(sqlda_sqldabc_ofs), out_sqldabc )
      equivalence( out_sqlda(sqlda_sqln_ofs),    out_sqln )
      equivalence( out_sqlda(sqlda_sqld_ofs),    out_sqld )
      equivalence( out_sqlda(sqlvar1+sqlvar_type_ofs), out_sqltype1 )
      equivalence( out_sqlda(sqlvar1+sqlvar_len_ofs),  out_sqllen1 )
      equivalence( out_sqlda(sqlvar1+sqlvar_data_ofs), out_sqldata1 )
      equivalence( out_sqlda(sqlvar1+sqlvar_ind_ofs),  out_sqlind1 )
      equivalence( out_sqlda(sqlvar1+sqlvar_name_length_ofs),
     +             out_sqlnamel1 )
      equivalence( out_sqlda(sqlvar1+sqlvar_name_data_ofs),
     +             out_sqlnamec1 )
SQL column type     SQLTYPE values   SQLTYPE defined constants
DATE                384/385          SQL_TYP_DATE / SQL_TYP_NDATE
TIME                388/389          SQL_TYP_TIME / SQL_TYP_NTIME
TIMESTAMP           392/393          SQL_TYP_STAMP / SQL_TYP_NSTAMP
n/a (2)             400/401          SQL_TYP_CGSTR / SQL_TYP_NCGSTR
BLOB                404/405          SQL_TYP_BLOB / SQL_TYP_NBLOB
CLOB                408/409          SQL_TYP_CLOB / SQL_TYP_NCLOB
DBCLOB              412/413          SQL_TYP_DBCLOB / SQL_TYP_NDBCLOB
VARCHAR             448/449          SQL_TYP_VARCHAR / SQL_TYP_NVARCHAR
CHAR                452/453          SQL_TYP_CHAR / SQL_TYP_NCHAR
LONG VARCHAR        456/457          SQL_TYP_LONG / SQL_TYP_NLONG
n/a (3)             460/461          SQL_TYP_CSTR / SQL_TYP_NCSTR
VARGRAPHIC          464/465          SQL_TYP_VARGRAPH / SQL_TYP_NVARGRAPH
GRAPHIC             468/469          SQL_TYP_GRAPHIC / SQL_TYP_NGRAPHIC
LONG VARGRAPHIC     472/473          SQL_TYP_LONGRAPH / SQL_TYP_NLONGRAPH
FLOAT               480/481          SQL_TYP_FLOAT / SQL_TYP_NFLOAT
REAL (4)            480/481          SQL_TYP_FLOAT / SQL_TYP_NFLOAT
DECIMAL (5)         484/485          SQL_TYP_DECIMAL / SQL_TYP_DECIMAL
INTEGER             496/497          SQL_TYP_INTEGER / SQL_TYP_NINTEGER
SMALLINT            500/501          SQL_TYP_SMALL / SQL_TYP_NSMALL
n/a                 804/805          SQL_TYP_BLOB_FILE / SQL_TYPE_NBLOB_FILE
n/a                 808/809          SQL_TYP_CLOB_FILE / SQL_TYPE_NCLOB_FILE
n/a                 812/813          SQL_TYP_DBCLOB_FILE / SQL_TYPE_NDBCLOB_FILE
n/a                 960/961          SQL_TYP_BLOB_LOCATOR / SQL_TYP_NBLOB_LOCATOR
n/a                 964/965          SQL_TYP_CLOB_LOCATOR / SQL_TYP_NCLOB_LOCATOR
n/a                 968/969          SQL_TYP_DBCLOB_LOCATOR / SQL_TYP_NDBCLOB_LOCATOR
XML                 988/989          SQL_TYP_XML / SQL_TYP_XML
Note: These defined types can be found in the sql.h include file located in the include sub-directory of the sqllib
directory. (For example, sqllib/include/sql.h for the C programming language.)
1. For the COBOL programming language, the SQLTYPE name does not use underscore (_) but uses a hyphen (-)
instead.
2. This is a null-terminated graphic string.
3. This is a null-terminated character string.
4. The difference between REAL and DOUBLE in the SQLDA is the length value (4 or 8).
5. Precision is in the first byte. Scale is in the second byte.
Procedure
To process a variable-list SELECT statement, code your application to do the
following steps:
1. Declare an SQLDA.
An SQLDA structure must be used to process varying-list SELECT statements.
2. PREPARE the statement using the INTO clause.
The application then determines whether the SQLDA structure declared has
enough SQLVAR elements. If it does not, the application allocates another
SQLDA structure with the required number of SQLVAR elements, and issues an
additional DESCRIBE statement using the new SQLDA.
3. Allocate the SQLVAR elements.
Allocate storage for the host variables and indicators needed for each SQLVAR.
This step involves placing the allocated addresses for the data and indicator
variables in each SQLVAR element.
4. Process the SELECT statement.
A cursor is associated with the prepared statement, opened, and rows are
fetched using the properly allocated SQLDA structure.
To execute this statement, specify a host variable or an SQLDA structure for the
USING clause of the EXECUTE statement. The contents of the host variable is used
to specify the value of EMPNO.
The data type and length of the parameter marker depend on the context of the
parameter marker inside the SQL statement. If the data type of a parameter marker
is not obvious from the context of the statement in which it is used, use a CAST
specification to specify the data type. A parameter marker for which you use a
CAST specification is a typed parameter marker. A typed parameter marker is
treated like a host variable of the data type used in the CAST specification. For
example, the statement SELECT ? FROM SYSCAT.TABLES is invalid because the data
type of the result column is unknown. However, the statement SELECT CAST(? AS
INTEGER) FROM SYSCAT.TABLES is valid because the CAST specification indicates
that the parameter marker represents an INTEGER value; the data type of the
result column is known.
If the SQL statement contains more than one parameter marker, the USING clause
of the EXECUTE statement must specify one of the following types of information:
v A list of host variables, one variable for each parameter marker
v An SQLDA that has one SQLVAR entry for each parameter marker for non-LOB
data types or two SQLVAR entries per parameter marker for LOB data types
The host variable list or SQLVAR entries are matched according to the order of the
parameter markers in the statement, and the data types must be compatible.
Note: Using a parameter marker in a dynamic SQL statement is like using a host
variable in a static SQL statement in that the optimizer does not use distribution
statistics and might not choose the best access plan.
The rules that apply to parameter markers are described in the PREPARE
statement topic.
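As an illustrative sketch in embedded SQL C (the statement text and the value
'Sales' are assumptions; the COBOL sample that follows shows the documented
variant), a single parameter marker supplied through the USING clause of EXECUTE:

#include <string.h>

EXEC SQL BEGIN DECLARE SECTION;
  char stmt_txt[120];
  char parm_var[6];
EXEC SQL END DECLARE SECTION;

strcpy( stmt_txt, "UPDATE staff SET job = ? WHERE job = 'Mgr'" );
EXEC SQL PREPARE upd_stmt FROM :stmt_txt;

strcpy( parm_var, "Sales" );                  /* value for the parameter marker */
EXEC SQL EXECUTE upd_stmt USING :parm_var;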
v COBOL (varinp.sqb)
The following example is from the COBOL sample varinp.sqb, and shows how
to use a parameter marker in search and update conditions:
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
 01 pname     pic x(10).
 01 dept      pic s9(4) comp-5.
 01 st        pic x(127).
 01 parm-var  pic x(5).
EXEC SQL END DECLARE SECTION END-EXEC.
In the previous statement, inout_median, out_sqlcode, and out_buffer are host
variables, and medianind, codeind, and bufferind are null indicator variables.
Note: You can also call stored procedures dynamically by preparing a CALL
statement.
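A hedged sketch of that approach in embedded SQL C (the procedure name
MEDIAN_RESULT_SET and its single INOUT DOUBLE parameter are assumptions used
only for illustration):

#include <string.h>

EXEC SQL BEGIN DECLARE SECTION;
  char   call_txt[80];
  double inout_median;
  short  medianind;
EXEC SQL END DECLARE SECTION;

strcpy( call_txt, "CALL MEDIAN_RESULT_SET(?)" );
EXEC SQL PREPARE call_stmt FROM :call_txt;

inout_median = 0;
medianind    = 0;
EXEC SQL EXECUTE call_stmt USING :inout_median :medianind;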
The syntax of the CALL statement is (syntax diagram):
   CALL procedure-name ( argument [, argument]... )
where each argument is an expression, the keyword DEFAULT, or the keyword NULL,
optionally preceded by parameter-name =>. The statement is terminated according
to the host language (for example, END-EXEC in COBOL).
Parameter description
procedure-name
The name of the procedure, which is described in the catalog, that you
want to call.
argument description
parameter-name
The name of the parameter that the argument is assigned to. When
you assign an argument to a paramater by name, all the arguments
that follow the (parameter) must be assigned by name.
You can only specify a named argument once (implicitly or
explicitly).
Named arguments are not supported on a call to an uncataloged
procedure.
expression or DEFAULT or NULL
Each specification of expression, the DEFAULT keyword, or the
NULL keyword is an argument of the CALL. The nth unnamed
argument of the CALL statement corresponds to the nth parameter
that is defined in the CREATE PROCEDURE statement for the
procedure.
Named arguments correspond to the same named parameter,
regardless of the order in which arguments are specified.
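As an illustration only (update_salary, pct_raise, and comments are hypothetical
names of a cataloged procedure and its parameters, and :empno_hv and :raise_hv
are assumed host variables), a CALL that mixes a positional argument with named
arguments:

EXEC SQL CALL update_salary( :empno_hv,
                             pct_raise => :raise_hv,
                             comments  => DEFAULT );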
You can think of the result of a select-statement as being a table having rows and
columns, much like a table in the database. If only one row is returned, you can
deliver the results directly into host variables specified by the SELECT INTO
statement.
If more than one row is returned, you must use a cursor to fetch them one at a
time. A cursor is a named control structure used by an application program to
point to a specific row within an ordered set of rows.
Procedure
For embedded SQL applications, you can use the following techniques to scroll
through data that has been retrieved:
v Keep a copy of the data that has been fetched in the application memory and
scroll through it by some programming technique.
v Use SQL to retrieve the data again, typically by using a second SELECT
statement.
Procedure
To keep a copy of the data, your application can do the one of the following tasks:
v Save the fetched data in virtual storage.
v Write the data to a temporary file (if the data does not fit in virtual storage).
One effect of this approach is that a user, scrolling backward, always sees exactly
the same data that was fetched, even if the data in the database was changed in
the interim by a transaction.
v Using an isolation level of repeatable read, the data you retrieve from a
transaction can be retrieved again by closing and opening a cursor. Other
applications are prevented from updating the data in your result set. Isolation
levels and locking can affect how users update data.
Procedure
You can retrieve data a second time by using any of the following methods:
v Retrieve data from the beginning
To retrieve the data again from the beginning of the result table, close the
active cursor and reopen it. This action positions the cursor at the beginning
of the result table. However, unless the application holds locks on the table,
others might have changed it, so what had been the first row of the result table
might no longer be the first row.
v Retrieve data from the middle
To retrieve data a second time from somewhere in the middle of the result table,
issue a second SELECT statement and declare a second cursor on the statement.
For example, suppose the first SELECT statement was:
SELECT * FROM DEPARTMENT
   WHERE LOCATION = 'CALIFORNIA'
   ORDER BY DEPTNO
Now, suppose that you want to return to the rows that start with DEPTNO =
M95 and fetch sequentially from that point. Code the following statement:
SELECT * FROM DEPARTMENT
   WHERE LOCATION = 'CALIFORNIA'
   AND DEPTNO >= 'M95'
   ORDER BY DEPTNO
To retrieve the same rows in reverse order, specify that the order is descending,
as in the following statement:
SELECT * FROM DEPARTMENT
   WHERE LOCATION = 'CALIFORNIA'
   ORDER BY DEPTNO DESC
A cursor on the second statement retrieves rows in exactly the opposite order
from a cursor on the first statement. Order of retrieval is guaranteed only if the
first statement specifies a unique ordering sequence.
For retrieving rows in reverse order, it can be useful to have two indexes on the
DEPTNO column, one in ascending order, and the other in descending order.
statement in the example, and an index on DEPTNO for the second. Because rows are
fetched in order by the index key, the second order need not be the same as the
first.
Again, executing two similar SELECT statements can produce a different ordering
of rows, even if no statistics change and no indexes are created or dropped. In the
example, if there are many different values of LOCATION, the database manager can
choose an index on LOCATION for both statements. Yet changing the value of DEPTNO
in the second statement to the following example can cause the database manager
to choose an index on DEPTNO:
SELECT * FROM DEPARTMENT
   WHERE LOCATION = 'CALIFORNIA'
   AND DEPTNO >= 'Z98'
   ORDER BY DEPTNO
Because of the subtle relationships between the form of an SQL statement and the
values in this statement, never assume that two different SQL statements will
return rows in the same order unless the order is uniquely determined by an
ORDER BY clause.
Procedure
To update previously retrieved data, you can do one of two things:
v If you have a second cursor on the data to be updated and the SELECT
statement uses none of the restricted elements, you can use a cursor-controlled
UPDATE statement. Name the second cursor in the WHERE CURRENT OF clause, as
shown in the sketch after this list.
v In other cases, use UPDATE with a WHERE clause that names all the values in
the row or specifies the primary key of the table. You can issue one statement
many times with different values of the variables.
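A generic sketch of the cursor-controlled approach in embedded SQL C (the cursor
c2, the search condition, and the host variables name_hv, job_hv, and job_update
are assumptions for illustration):

EXEC SQL DECLARE c2 CURSOR FOR
   SELECT name, job FROM staff
   WHERE job = 'Clerk'
   FOR UPDATE OF job;

EXEC SQL OPEN c2;
EXEC SQL FETCH c2 INTO :name_hv, :job_hv;

/* ... decide on the new value and place it in :job_update ... */

EXEC SQL UPDATE staff
   SET job = :job_update
   WHERE CURRENT OF c2;         /* positioned update through the cursor */

EXEC SQL CLOSE c2;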
Procedure
To process a cursor:
1. Specify the cursor using a DECLARE CURSOR statement.
2. Perform the query and build the result table using the OPEN statement.
3. Retrieve rows one at a time using the FETCH statement.
What to do next
An application can use several cursors concurrently. Each cursor requires its own
set of DECLARE CURSOR, OPEN, CLOSE, and FETCH statements.
The following example selects from a table using a cursor, opens the cursor,
fetches, updates, or deletes rows from the table, and then closes the cursor.
EXEC SQL DECLARE c1 CURSOR FOR SELECT * FROM staff WHERE id >= 310;
EXEC SQL OPEN c1;
EXEC SQL FETCH c1 INTO :id, :name, :dept, :job:jobInd, :years:yearsInd, :salary,
:comm:commInd;
The sample shows almost all possible cases of table data modification.
v COBOL (openftch.sqb)
The following example is from the sample openftch. This example selects from a
table using a cursor, opens the cursor, and fetches rows from the table.
EXEC SQL DECLARE c1 CURSOR FOR
   SELECT name, dept FROM staff
   WHERE job='Mgr'
   FOR UPDATE OF job END-EXEC.
EXEC SQL OPEN c1 END-EXEC
* call the FETCH and UPDATE/DELETE loop.
perform Fetch-Loop thru End-Fetch-Loop
until SQLCODE not equal 0.
EXEC SQL CLOSE c1 END-EXEC.
= %s\n", appMsg);
= %d\n", line);
= %s\n", file);
= %ld\n",
pSqlca->sqlcode);
strcat( sqlInfo, sqlInfoToken);
/* get error message */
rc = sqlaintp( errorMsg, 1024, 80, pSqlca);
/* return code is the length of the errorMsg string */
if( rc > 0)
{ sprintf( sqlInfoToken, "%s\n", errorMsg);
strcat( sqlInfo, sqlInfoToken);
}
/* get SQLSTATE message */
rc = sqlogstt( sqlstateMsg, 1024, 80, pSqlca->sqlstate);
if (rc == 0)
{ sprintf( sqlInfoToken, "%s\n", sqlstateMsg);
strcat( sqlInfo, sqlInfoToken);
}
if( pSqlca->sqlcode < 0)
{ sprintf( sqlInfoToken, "--- end error report ---\n");
strcat( sqlInfo, sqlInfoToken);
printf("%s", sqlInfo);
return 1;
}
else
{ sprintf( sqlInfoToken, "--- end warning report ---\n");
strcat( sqlInfo, sqlInfoToken);
printf("%s", sqlInfo);
return 0;
} /* endif */
} /* endif */
return 0;
}
C developers can also use an equivalent function, sqlglm(), which has the
signature:
sqlglm(char *message_buffer_ptr, int *buffer_size_ptr, int *msg_size_ptr)
COBOL Example: From CHECKERR.CBL
********************************
* GET ERROR MESSAGE API called *
********************************
call "sqlgintp" using
by value buffer-size
by value line-width
by reference sqlca
by reference error-buffer
returning error-rc.
************************
* GET SQLSTATE MESSAGE *
************************
call "sqlggstt" using
by value buffer-size
by value line-width
by reference sqlstate
by reference state-buffer
returning state-rc.
if error-rc is greater than 0
display error-buffer.
if state-rc is greater than 0
display state-buffer.
if state-rc is less than 0
display "return code from GET SQLSTATE =" state-rc.
The first element of the SQLWARN array, SQLWARN0, contains a blank if all other
elements are blank. SQLWARN0 contains a W if at least one other element contains
a warning character.
Note: If you want to develop applications that access various IBM RDBMS servers
you should:
v Where possible, have your applications check the SQLSTATE rather than the
SQLCODE.
v If your applications will use DB2 Connect, consider using the mapping facility
provided by DB2 Connect to map SQLCODE conversions between unlike
databases.
Starting in DB2 V10.1 Fix Pack 2, if the application is precompiled with
COMPATIBILITY_MODE set to ORA, the following statements extend the syntax of the
existing COMMIT and ROLLBACK statements to specify a connection reset option:
EXEC SQL ROLLBACK RELEASE;
EXEC SQL ROLLBACK WORK RELEASE;
EXEC SQL COMMIT RELEASE;
EXEC SQL COMMIT WORK RELEASE;
(Figure: the embedded SQL build process. Source files that contain SQL statements
are processed by the precompiler (db2 PREP). With the BINDFILE option the
precompiler creates a bind file, and with the PACKAGE option it creates a package
directly. The modified source files it produces are compiled together with source
files that do not contain SQL statements, and the resulting object files are
linked with the required libraries into an executable program. The bind file is
later processed by the binder (db2 BIND) to create a package in the database.)
You must always precompile a source file against a specific database, even if
eventually you do not use the database with the application. In practice, you can
use a test database for development, and after you fully test the application, you
can bind its bind file to one or more production databases. This practice is known
as deferred binding.
Note: Running an embedded application on an older client version than the client
where precompilation occurred is not supported, regardless of where the
application was compiled. For example, it is not supported to precompile an
embedded application on a DB2 V9.5 client and then attempt to run the
application on a DB2 V9.1 client.
If your application uses a code page that is not the same as your database code
page, you need to consider which code page to use when precompiling.
If your application uses user-defined functions (UDFs) or user-defined distinct
types (UDTs), you might need to use the FUNCPATH parameter when you precompile
your application. This parameter specifies the function path that is used to resolve
UDFs and UDTs for applications containing static SQL. If FUNCPATH is not specified,
the default function path is SYSIBM, SYSFUN, USER, where USER refers to the
current user ID.
Before precompiling an application you must connect to a server, either implicitly
or explicitly. Although you precompile application programs at the client
workstation and the precompiler generates modified source and messages on the
client, the precompiler uses the server connection to perform some of the
validation.
The precompiler also creates the information the database manager needs to
process the SQL statements against a database. This information is stored in a
package, in a bind file, or in both, depending on the precompiler options selected.
A typical example of using the precompiler follows. To precompile a C embedded
SQL source file called filename.sqc, you can issue the following command to
create a C source file with the default name filename.c and a bind file with the
default name filename.bnd:
DB2 PREP filename.sqc BINDFILE
objects referenced by the static SQL statements in the source file. For
example, you cannot precompile a SELECT statement unless the table it
references exists in the database.
With the VERSION parameter, the bind file (if the BINDFILE parameter is
used) and the package (either if bound at PREP time or if bound separately)
is designated with a particular version identifier. Many versions of
packages with the same name and creator can exist at once.
Bind File
If you use the BINDFILE parameter, the precompiler creates a bind file (with
extension .bnd) that contains the data required to create a package. This
file can be used later with the BIND command to bind the application to
one or more databases. If you specify BINDFILE and do not specify the
PACKAGE parameter, binding is deferred until you invoke the BIND
command. Note that for the command line processor (CLP), the default for
PREP does not specify the BINDFILE parameter. Thus, if you are using the
CLP and want the binding to be deferred, you need to specify the BINDFILE
parameter.
Specifying SQLERROR CONTINUE creates a package, even if errors occur when
binding SQL statements. Those statements that fail to bind for
authorization or existence reasons can be incrementally bound at execution
time if VALIDATE RUN is also specified. Any attempt to issue them at run
time generates an error.
Message File
If you use the MESSAGES parameter, the precompiler redirects messages to
the indicated file. These messages include warning and error messages that
describe problems encountered during precompilation. If the source file
does not precompile successfully, use the warning and error messages to
determine the problem, correct the source file, and then attempt to
precompile the source file again. If you do not use the MESSAGES parameter,
precompilation messages are written to the standard output.
The DB2 database server has more flexibility when it can consider a list of schemas
during package resolution. The list of schemas is similar to the SQL path that is
provided by the CURRENT PATH special register. The schema list is used for
user-defined functions, procedures, methods, and distinct types.
Note: The SQL path is a list of schema names that DB2 should consider when
trying to determine the schema for an unqualified function, procedure, method, or
distinct type name.
If you need to associate multiple variations of a package (that is, multiple sets of
BIND options for a package) with a single compiled program, consider isolating
the path of schemas that are used for SQL objects from the path of schemas that
are used for packages.
The CURRENT PACKAGE PATH special register allows you to specify a list of
package schemas. Other DB2 family products provide similar capability with
special registers such as CURRENT PATH and CURRENT PACKAGESET, which
are pushed and popped for nested procedures and user-defined functions without
corrupting the runtime environment of the invoking application. The CURRENT
PACKAGE PATH special register provides this capability for package schema
resolution.
Many installations use more than one schema for packages. If you do not specify a
list of package schemas, you must issue the SET CURRENT PACKAGESET
statement (which can contain at most one schema name) each time you require a
package from a different schema. If, however, you issue a SET CURRENT
PACKAGE PATH statement at the beginning of the application to specify a list of
schema names, you do not need to issue a SET CURRENT PACKAGESET
statement each time a package in a different schema is needed.
For example, assume that the following packages exist, and, using the following
list, that you want to invoke the first one that exists on the server:
SCHEMA1.PKG1, SCHEMA2.PKG2, SCHEMA3.PKG3, SCHEMA.PKG, and
SCHEMA5.PKG5. Assuming the current support for a SET CURRENT PACKAGESET
statement in DB2 for Linux, UNIX, and Windows (that is, accepting a single schema
name), a SET CURRENT PACKAGESET statement would have to be issued before trying
to invoke each package to specify its schema. For this example, five SET CURRENT
PACKAGESET statements would need to be issued.
However, using the CURRENT PACKAGE PATH special register, a single SET
statement is sufficient. For example:
SET CURRENT PACKAGE PATH = SCHEMA1, SCHEMA2, SCHEMA3, SCHEMA, SCHEMA5;
Note: In DB2 for Linux, UNIX, and Windows, you can set the CURRENT
PACKAGE PATH special register in the db2cli.ini file, through the
SQLSetConnectAttr API, in the SQLE-CLIENT-INFO structure, or by including
the SET CURRENT PACKAGE PATH statement in embedded SQL programs. Among the
other DB2 family products, only DB2 for z/OS, Version 8 or later, supports the
SET CURRENT PACKAGE PATH statement; if you issue this statement against a
down-level DB2 for Linux, UNIX, and Windows server or against DB2 for i,
-30005 is returned.
You can use multiple schemas to maintain several variations of a package. These
variations can be very useful in helping to control changes made in production
environments. You can also use different variations of a package to keep a backup
version of a package, or a test version of a package (for example, to evaluate the
impact of a new index). A previous version of a package is used in the same way
as a backup application (load module or executable), specifically, to provide the
ability to revert to a previous version.
For example, assume the PROD schema includes the current packages used by the
production applications, and the BACKUP schema stores a backup copy of those
packages. A new version of the application (and thus the packages) is promoted
to production by binding them using the PROD schema. The backup copies of the
packages are created by binding the current version of the applications using the
backup schema (BACKUP). Then, at runtime, you can use the SET CURRENT
PACKAGE PATH statement to specify the order in which the schemas should be
checked for the packages. Assume that a backup copy of the application MYAPPL
has been bound using the BACKUP schema, and the version of the application
currently in production has been bound to the PROD schema creating a package
PROD.MYAPPL. To specify that the variation of the package in the PROD schema
should be used if it is available (otherwise the variation in the BACKUP schema is
used), issue the following SET statement for the special register:
SET CURRENT PACKAGE PATH = PROD, BACKUP;
If you need to revert to the previous version of the package, you can drop the
production version with the DROP PACKAGE statement; the copy of the package
that was bound using the BACKUP schema is then used instead. (To revert the
application itself, that is, the load module or executable, you can use
application path techniques specific to each operating system platform.)
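For example, a statement along the following lines drops the production copy of
the package from this scenario, so that the copy in the BACKUP schema is
resolved on the next invocation:
DROP PACKAGE PROD.MYAPPL;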
Note: This example assumes that the only difference between the versions of the
package are in the BIND options that were used to create the packages (that is,
there are no differences in the executable code).
The application does not use the SET CURRENT PACKAGESET statement to select
the schema it wants. Instead, it allows DB2 to pick up the package by checking for
it in the schemas listed in the CURRENT PACKAGE PATH special register.
Note: The DB2 for z/OS precompile process stores a consistency token in the
DBRM (which can be set using the LEVEL option), and during package resolution
a check is made to ensure that the consistency token in the program matches the
package. Similarly, the DB2 for Linux, UNIX, and Windows bind process stores a
timestamp in the bind file. DB2 for Linux, UNIX, and Windows also supports a
LEVEL option.
Another reason for creating several versions of a package in different schemas
could be to cause different BIND options to be in effect. For example, you can use
different qualifiers for unqualified name references in the package.
Applications are often written with unqualified table names. This supports
multiple tables that have identical table names and structures, but different
qualifiers to distinguish different instances. For example, a test system and a
production system might have the same objects created in each, but they might
have different qualifiers (for example, PROD and TEST). Another example is an
application that distributes data into tables across different DB2 systems, with each
table having a different qualifier (for example, EAST, WEST, NORTH, SOUTH;
COMPANYA, COMPANYB; Y1999, Y2000, Y2001). With DB2 for z/OS, you specify
the table qualifier using the QUALIFIER option of the BIND command. When you
use the QUALIFIER option, users do not have to maintain multiple programs, each
of which specifies the fully qualified names that are required to access unqualified
tables. Instead, the correct package can be accessed at runtime by issuing the SET
CURRENT PACKAGESET statement.
At this point it is possible to run both applications A1 and A2, which will be
executed from packages SCHEMA1.PKG versions VER1 and VER2. If, for example,
the first package is dropped (using the DROP PACKAGE SCHEMA1.PKG
VERSION VER1 SQL statement), an attempt to run the application A1 will fail
with a package not found error.
When a source file is precompiled but a package is not created, a bind file and
modified source file are generated with matching timestamps. To run the
application, the bind file is bound in a separate BIND step to create a package and
the modified source file is compiled and linked. For an application that requires
multiple source modules, the binding process must be done for each bind file.
In this deferred binding scenario, the application and package timestamps match
because the bind file contains the same timestamp as the one that was stored in
the modified source file during precompilation.
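A sketch of this deferred binding sequence for an application with two source
modules (the file names are hypothetical):
db2 prep module1.sqc bindfile
db2 prep module2.sqc bindfile
db2 bind module1.bnd
db2 bind module2.bnd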
v User object modules, generated by the language compiler from the modified
source files and other files not containing SQL statements.
v Host language library APIs, supplied with the language compiler.
v The database manager library containing the database manager APIs for your
operating environment. Refer to the appropriate programming documentation
for your operating platform for the specific name of the database manager
library you need for your database manager APIs.
The following table shows the combination of the DYNAMICRULES value and the
runtime environment that yields each dynamic SQL behavior.
Table 18. How DYNAMICRULES and the Runtime Environment Determine Dynamic SQL Statement Behavior

DYNAMICRULES Value   Standalone Program Environment   Routine Environment
BIND                 Bind behavior                    Bind behavior
RUN                  Run behavior                     Run behavior
DEFINEBIND           Bind behavior                    Define behavior
DEFINERUN            Run behavior                     Define behavior
INVOKEBIND           Bind behavior                    Invoke behavior
INVOKERUN            Run behavior                     Invoke behavior
The following table shows the dynamic SQL attribute values for each type of
dynamic SQL behavior.
Table 19. Definitions of Dynamic SQL Statement Behaviors

Authorization ID
   Bind behavior: The implicit or explicit value of the BIND OWNER command parameter
   Run behavior: ID of the user executing the package
   Define behavior: Routine definer (not the routine's package owner)
   Invoke behavior: Current statement authorization ID when the routine is invoked

Default qualifier for unqualified objects
   Bind behavior: The implicit or explicit value of the BIND QUALIFIER command parameter
   Run behavior: CURRENT SCHEMA special register
   Define behavior: Routine definer
   Invoke behavior: Current statement authorization ID when the routine is invoked

Can execute GRANT, REVOKE, ALTER, CREATE, DROP, COMMENT ON, RENAME,
SET INTEGRITY, and SET EVENT MONITOR STATE
   Bind behavior: No
   Run behavior: Yes
   Define behavior: No
   Invoke behavior: No
One package is created for each separately precompiled source code module. If an
application has five source files, of which three require precompilation, three
packages or bind files are created. By default, each package is given a name that is
the same as the name of the source module from which the .bnd file originated,
truncated to 8 characters. To explicitly specify a different package name, you
must use the PACKAGE USING parameter on the PREP command. The version of a
package is given by the VERSION precompile parameter and defaults to the empty
string. If the name and schema of the newly created package are the same as those
of a package that currently exists in the target database, but the version identifier
differs, a new package is created and the previous package remains. However,
if a package exists that matches the name, schema, and version of the package
being bound, that package is dropped and replaced with the new package
being bound (specifying ACTION ADD on the bind would prevent the replacement,
and an error (SQL0719) would be returned instead).
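For example, the following sketch (with hypothetical file and package names)
assigns an explicit package name and version at precompile time, and then uses
ACTION ADD when binding the resulting bind file to another database so that an
existing package with the same name, schema, and version identifier is not
replaced:
db2 prep myapp.sqc bindfile package using mypkg version v2
db2 connect to otherdb
db2 bind myapp.bnd action add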
In some situations, however, you might want to rebind packages that are valid. For
example, you might want to take advantage of a newly created index, or use
updated statistics after executing the RUNSTATS command.
Packages can be dependent on certain types of database objects such as tables,
views, aliases, indexes, triggers, referential constraints, and table check constraints.
If a package is dependent on a database object (such as a table, view, trigger, and
so on), and that object is dropped, the package is placed into an invalid state. If the
object that is dropped is a UDF, the package is placed into an inoperative state.
When the package is marked inoperative, the next use of a statement in this
package causes an implicit rebind of the package using non-conservative binding
semantics, so that SQL object references can be resolved against the latest
changes in the database schema (the changes that caused the package to become
inoperative).
For static DML in packages, the packages can rebind implicitly, or by explicitly
issuing the REBIND command (or corresponding API), or the BIND command (or
corresponding API). The implicit rebind is performed with conservative binding
semantics if the package is marked invalid, but uses non-conservative binding
semantics when the package is marked inoperative.
You must use the BIND command to rebind a package for a program which was
modified to include more, fewer, or changed SQL statements. You must also use
the BIND command if you need to change any bind options from the values with
which the package was originally bound. The REBIND command provides the
option to resolve with conservative binding semantics (RESOLVE CONSERVATIVE) or
to resolve by considering new routines, data types, or global variables (RESOLVE
ANY, which is the default option). The RESOLVE CONSERVATIVE option can be used
only if the package was not marked inoperative by the database manager
(SQLSTATE 51028). You should use REBIND whenever your situation does not
specifically require the use of BIND, as the performance of REBIND is significantly
better than that of BIND.
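For example, the following REBIND invocation (the package name is hypothetical)
rebinds a package with conservative binding semantics:
db2 rebind package myschema.mypkg resolve conservative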
When multiple versions of the same package name coexist in the catalog, only one
version can be rebound at a time.
In IBM Data Studio Version 3.1 or later, you can use the task assistant for
rebinding packages. Task assistants can guide you through the process of setting
options, reviewing the automatically generated commands to perform the task, and
running these commands. For more details, see Administering databases with task
assistants.
Bind considerations
If your application uses a code page that differs from the database code page, you
must ensure that the code page used by the application is compatible with the
database code page during the bind process.
If your application issues calls to any of the database manager utility APIs, such as
IMPORT or EXPORT, you must bind the supplied utility bind files to the database.
You can use bind options to control certain operations that occur during binding,
as in the following examples:
v The QUERYOPT bind parameter takes advantage of a specific optimization class
when binding.
v The EXPLSNAP bind parameter stores Explain Snapshot information for eligible
SQL statements in the Explain tables.
v The FUNCPATH bind parameter properly resolves user-defined distinct types and
user-defined functions in static SQL.
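For example, a single BIND invocation (the bind file name and schema are
hypothetical) might combine these options as follows:
db2 bind myapp.bnd queryopt 5 explsnap yes funcpath myschema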
If the bind process starts but never returns, other applications connected to the
database might be holding locks that you require. In this case, ensure that no
other applications are connected to the database; if any are, disconnect them and
the bind process will continue.
If your application will access a server using DB2 Connect, you can use the BIND
command parameters available for that server.
Bind files are not compatible with earlier versions of DB2 for Linux, UNIX, and
Windows. In mixed-level environments, DB2 for Linux, UNIX, and Windows can
only use the functions available to the lowest level of the database environment.
For example, if a version 8 client connects to a version 7.2 server, the client will
only be able to use version 7.2 functions. As bind files express the functionality of
the database, they are subject to the mixed-level restriction.
If you need to rebind higher-level bind files on lower-level systems, you can:
v Use a lower level IBM data server client to connect to the higher-level server
and create bind files which can be shipped and bound to the lower-level DB2 for
Linux, UNIX, and Windows environment.
v Use a higher-level IBM data server client in the lower-level production
environment to bind the higher-level bind files that were created in the test
environment. The higher-level client passes only the options that apply to the
lower-level server.
Blocking considerations
When you want to turn blocking off for an embedded SQL application and the
source code is not available, the application must be rebound using the BIND
command with the BLOCKING NO clause.
Existing embedded SQL applications must be rebound using the BIND command
with the BLOCKING ALL or BLOCKING UNAMBIG clause to request blocking (if they are
not already bound in this fashion). Embedded applications retrieve LOB values
from the server a row at a time after a block of rows has been retrieved from
the server.
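For example, to request blocking for an existing application whose bind file is
available (the file name is hypothetical):
db2 bind myapp.bnd blocking all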
only when the application calls the task requiring SQL statements, and only if an
associated package does not already exist.
Another advantage of the deferred binding method is that it lets you create
packages without providing source code to end users. You can ship the associated
bind files with the application.
Because several of the utilities supplied with DB2 Connect are developed using
embedded SQL, they must be bound to an IBM mainframe database server before
they can be used with that system. If you do not use the DB2 Connect utilities and
interfaces, you do not have to bind them to each of your IBM mainframe database
servers. The lists of bind files required by these utilities are contained in the
ddcsxxx.lst list files that are supplied with DB2 Connect (for example,
ddcsmvs.lst for DB2 for z/OS servers).
Binding one of these lists of files to a database will bind each individual utility to
that database.
If a DB2 Connect server product is installed, the DB2 Connect utilities must be
bound to each IBM mainframe database server before they can be used with that
system. Assuming the clients are at the same fix pack level, you need to bind the
utilities only once, regardless of the number of client platforms involved.
For example, if you have 10 Windows clients, and 10 AIX clients connecting to DB2
for z/OS via DB2 Connect Enterprise Edition on a Windows server, perform one of
the following steps:
v Bind ddcsmvs.lst from one of the Windows clients.
v Bind ddcsmvs.lst from one of the AIX clients.
v Bind ddcsmvs.lst from the DB2 Connect server.
This example assumes that:
v All the clients are at the same service level. If they are not, you might, in
addition, need to bind from each client of a particular service level.
v The server is at the same service level as the clients. If it is not, then you need to
bind from the server as well.
In addition to DB2 Connect utilities, any other applications that use embedded
SQL must also be bound to each database that you want them to work with. An
application that is not bound will usually produce an SQL0805N error message
when executed. You might want to create an additional bind list file for all of your
applications that need to be bound.
For each IBM mainframe database server that you are binding to, perform the
following steps:
1. Make sure that you have sufficient authority for your IBM mainframe database
server management system:
System z
The authorizations required are:
v SYSADM or
v SYSCTRL or
v BINDADD and CREATE IN COLLECTION NULLID
Note: The BINDADD and the CREATE IN COLLECTION NULLID
privileges provide sufficient authority only when the packages do not
already exist, for example, when you are creating them for the first time.
If the packages already exist, and you are binding them again, then the
authority required to complete the task(s) depends on who did the
original bind.
A) If you did the original bind and you are doing the bind again, then
having any of the previously listed authorities will allow you to
complete the bind.
B) If your original bind was done by someone else and you are doing
the second bind, then you will require either the SYSADM or the
SYSCTRL authorities to complete the bind. Having just the BINDADD
and the CREATE IN COLLECTION NULLID authorities will not allow
you to complete the bind. It is still possible to create a package if you
do not have either SYSADM or SYSCTRL privileges. In this situation
you would need the BIND privilege on each of the existing packages
that you intend to replace.
VSE or VM
The authorization required is DBA authority. If you want to use the
GRANT option on the bind command (to avoid granting access to each
DB2 Connect package individually), the NULLID user ID must have
the authority to grant authority to other users on the following tables:
v system.syscatalog
v system.syscolumns
v system.sysindexes
v system.systabauth
v system.syskeycols
v system.syssynonyms
v system.syskeys
v system.syscolauth
v system.sysuserauth
To determine these values for DB2 Connect, execute the ddcspkgn utility, for
example:
ddcspkgn @ddcsmvs.lst
Note:
a. Using the bind option sqlerror continue is required; however, this option
is automatically specified for you when you bind applications using the
DB2 tools or the Command Line Processor (CLP). Specifying this option
turns bind errors into warnings, so that binding a file containing errors can
still result in the creation of a package. In turn, this allows one bind file to
be used against multiple servers even when a particular server
implementation might flag the SQL syntax of another to be invalid. For this
reason, binding any of the list files ddcsxxx.lst against any particular IBM
mainframe database server should be expected to produce some warnings.
b. If you are connecting to a DB2 database through DB2 Connect, use the bind
list db2ubind.lst and do not specify sqlerror continue, which is only valid
when connecting to an IBM mainframe database server. Also, to connect to a
DB2 database, it is recommended that you use the DB2 clients provided
with DB2 and not DB2 Connect.
3. Use similar statements to bind each application or list of applications.
4. If you have remote clients from a previous release of DB2, you might need to
bind the utilities on these clients to DB2 Connect.
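For example, binding the DB2 Connect utility list file to a DB2 for z/OS server,
as described in the notes earlier in this procedure, might look like the
following sketch (the database alias, user ID, password, and message file name
are placeholders):
db2 connect to dbalias user userid using password
db2 bind @ddcsmvs.lst blocking all sqlerror continue messages ddcsmvs.msg grant public
db2 connect reset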
Package versioning
If you need to create multiple versions of an application, you can use the VERSION
parameter in the PRECOMPILE command. This option allows multiple versions of the
same package name (that is, the package name and creator name) to coexist.
For example, assume that you have an application called foo1, which is compiled
from foo1.sqc. You would precompile and bind the package foo1 to the database
and deliver the application to the users. The users could then run the application.
To make subsequent changes to the application, you would update foo1.sqc, then
repeat the process of recompiling, binding, and sending the application to the
users. If the VERSION parameter was not specified for either the first or second
precompilation of foo1.sqc, the first package is replaced by the second package.
Any user who attempts to run the old version of the application will receive the
SQLCODE -818, indicating a mismatched timestamp error.
To avoid the mismatched timestamp error and in order to allow both versions of
the application to run at the same time, use package versioning. As an example,
when you build the first version of foo1, precompile it using the VERSION
parameter, as follows:
DB2 PREP FOO1.SQC VERSION V1.1
This first version of the program may now be run. When you build the new
version of foo1, precompile it with the command:
DB2 PREP FOO1.SQC VERSION V1.2
At this point, the new version of the application will also run, even if
instances of the first application are still executing. Because the package version for the
first package is V1.1 and the package version for the second is V1.2, no naming
conflict exists: both packages will exist in the database and both versions of the
application can be used.
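You can confirm that both versions coexist by querying the system catalog, for
example (a sketch that assumes the default package schema):
db2 "select pkgschema, pkgname, pkgversion from syscat.packages where pkgname = 'FOO1'"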
You can use the ACTION parameter of the PRECOMPILE or BIND commands with the
VERSION parameter of the PRECOMPILE command. You use the ACTION parameter to
control the way in which different versions of packages can be added or replaced.
Package privileges do not have granularity at the version level. That is, a GRANT
or a REVOKE of a package privilege applies to all versions of a package that share
the name and creator. So, if package privileges on package foo1 were granted to a
user or a group after version V1.1 was created, when version V1.2 is distributed
the user or group has the same privileges on version V1.2. This behavior is usually
required because typically the same users and groups have the same privileges on
all versions of a package. If you do not want the same package privileges to apply
to all versions of an application, you should not use the PRECOMPILE VERSION
parameter to accomplish package versioning. Instead, you should use different
package names (either by renaming the updated source file, or by using the
PACKAGE USING parameter to explicitly rename the package).
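For example, the following GRANT (the authorization name is hypothetical)
applies to both the V1.1 and V1.2 versions of package foo1:
db2 "grant execute on package foo1 to user dbuser1"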
In this example, db_name is the name of the database, user_name is the name of
the user, and file_name is the name of the application that will be bound. Note
that user_name and schema_name are typically the same value. Then use the SET
CURRENT PACKAGESET statement to specify which package to use, and
therefore, which qualifiers will be used. If COLLECTION is not specified, then the
default qualifier is the authorization identifier that is used when binding the
package.
bldapp
   Application programs
bldrtn
   Routines (stored procedures and user-defined functions)
bldmc
   Multi-connection programs
bldmt
   Multi-threaded programs
bldcli
   CLI programs
Note: By default the bldapp sample scripts for building executables from source
code will build 64-bit executables.
The following table lists the build files by platform and programming language,
and the directories where they are located. In the online documentation, the build
file names are hot-linked to the source files in HTML. The user can also access the
text files in the appropriate samples directories.
Table 21. Build files by language and platform

C (samples/c)
   AIX, HP-UX, Linux, Solaris: bldapp, bldrtn, bldmt, bldmc
   Windows: bldapp.bat, bldrtn.bat, bldmt.bat, bldmc.bat

C++ (samples/cpp)
   AIX, HP-UX, Linux, Solaris: bldapp, bldrtn, bldmt, bldmc
   Windows: bldapp.bat, bldrtn.bat, bldmt.bat, bldmc.bat

IBM COBOL (samples/cobol)
   AIX: bldapp, bldrtn
   HP-UX, Linux, Solaris: n/a
   Windows: bldapp.bat, bldrtn.bat

Micro Focus COBOL (samples/cobol_mf)
   AIX, HP-UX, Linux, Solaris: bldapp, bldrtn
   Windows: bldapp.bat, bldrtn.bat
The build files are used in the documentation for building applications and
routines because they demonstrate very clearly the compile and link options that
DB2 recommends for the supported compilers. There are generally many other
compile and link options available, and users are free to experiment with them. See
your compiler documentation for all the compile and link options provided.
Besides building the sample programs, developers can also build their own
programs with the build files. The sample programs can be used as templates that
can be modified by users to assist in their application development.
Conveniently, the build files are designed to build a source file with any file name
allowed by the compiler. This is unlike the makefiles, where the program names
are hardcoded into the file. The makefiles access the build files for compiling and
linking the programs they make. The build files use the $1 variable on UNIX and
Linux and the %1 variable on Windows operating systems to substitute internally
for the program name. Incremented numbers for these variable names substitute
for other arguments that might be required.
The build files allow for quick and easy experimentation, as each one is suited to a
specific kind of program-building, such as stand-alone applications, routines
(stored procedures and UDFs) or more specialized program types such as
multi-connection or multi-threaded programs. Each type of build file is provided
wherever the specific kind of program it is designed for is supported by the
compiler.
The object and executable files produced by a build file are automatically
overwritten each time a program is built, even if the source file is not modified.
This is not the case when using a makefile. It means that a developer can rebuild an
existing program without having to delete previous object and executable files or
modify the source.
The build files contain a default setting for the sample database. If the user is
accessing another database, they can simply supply another parameter to override
the default. If they are using the other database consistently, they could hardcode
this database name, replacing sample, within the build file itself.
For embedded SQL programs, except when using the IBM COBOL precompiler on
Windows, the build files call another file, embprep, that contains the precompile
and bind steps for embedded SQL programs. These steps might require the
optional parameters for user ID and password, depending on where the embedded
SQL program is being built.
Finally, the build files can be modified by the developer for his or her convenience.
Besides changing the database name in the build file (explained previously) the
developer can easily hardcode other parameters within the file, change compile
and link options, or change the default DB2 instance path. The simple,
straightforward, and specific nature of the build files makes tailoring them to your
needs an easy task.
Error-checking utilities
The DB2 Client provides several utility files. The utility files contain functions that
you can use for error checking and printing out error information. Utility files are
provided for each language in the samples directory.
When used with an application program, the error-checking utility files provide
helpful error information, and make debugging a DB2 program much easier. Most
of the error-checking utilities use the DB2 APIs GET SQLSTATE MESSAGE (sqlogstt)
and GET ERROR MESSAGE (sqlaintp) to obtain pertinent SQLSTATE and SQLCA
information related to problems encountered in program execution. The CLI utility
file, utilcli.c, does not use these DB2 APIs; instead it uses equivalent CLI
statements. With all the error-checking utilities, descriptive error messages are
printed out to allow the developer to quickly understand the problem. Some DB2
programs, such as routines (stored procedures and user-defined functions), do not
need to use the utilities.
Here are the error-checking utility files used by DB2 supported compilers for the
different programming languages:
Table 22. Error-checking utility files by language

C (samples/c)
   Non-embedded SQL source file: utilapi.c
   Non-embedded SQL header file: utilapi.h
   Embedded SQL source file: utilemb.sqc
   Embedded SQL header file: utilemb.h

C++ (samples/cpp)
   Non-embedded SQL source file: utilapi.C
   Non-embedded SQL header file: utilapi.h
   Embedded SQL source file: utilemb.sqC
   Embedded SQL header file: utilemb.h

IBM COBOL (samples/cobol)
   Non-embedded SQL source file: checkerr.cbl
   Non-embedded SQL header file: n/a
   Embedded SQL source file: n/a
   Embedded SQL header file: n/a

Micro Focus COBOL (samples/cobol_mf)
   Non-embedded SQL source file: checkerr.cbl
   Non-embedded SQL header file: n/a
   Embedded SQL source file: n/a
   Embedded SQL header file: n/a
In order to use the utility functions, the utility file must first be compiled, and then
its object file linked in during the creation of the target program's executable file.
Both the makefile and build files in the samples directories do this for the
programs that require the error-checking utilities.
The example demonstrates how the error-checking utilities are used in DB2
programs. The utilemb.h header file defines the EMB_SQL_CHECK macro for the
functions SqlInfoPrint() and TransRollback():
#define EMB_SQL_CHECK(MSG_STR)                          \
  SqlInfoPrint(MSG_STR, &sqlca, __LINE__, __FILE__);    \
  if (sqlca.sqlcode < 0)                                \
  {                                                     \
    TransRollback();                                    \
    return 1;                                           \
  }
SqlInfoPrint() checks the SQLCODE and prints out any available information
related to the specific error encountered. It also points to where the error occurred
in the source code. TransRollback() allows the utility file to safely rollback a
transaction where an error has occurred. It uses the embedded SQL statement EXEC
SQL ROLLBACK. The example demonstrates how the C program dbuse calls the utility
functions by using the macro, supplying the value "Delete with host variables
-- Execute" for the MSG_STR parameter of the SqlInfoPrint() function:
EXEC SQL DELETE FROM org
WHERE deptnumb = :hostVar1 AND
division = :hostVar2;
EMB_SQL_CHECK("Delete with host variables -- Execute");
The EMB_SQL_CHECK macro ensures that if the DELETE statement fails, the transaction
will be safely rolled back, and an appropriate error message printed out.
Developers are encouraged to use and expand upon these error-checking utilities
when creating their own DB2 programs.
AIX C embedded SQL and DB2 administrative API applications compile and
link options:
The compile and link options for building C embedded SQL and DB2
administrative API applications with the IBM XL C compiler for AIX are
available in the bldapp build script.
Compile and link options for bldapp
Compile options:
xlc
$EXTRA_CFLAG
Contains "-q64" for an instance where 64-bit support is enabled; otherwise,
it contains no value.
-I$DB2PATH/include
Specify the location of the DB2 include files. For example:
$HOME/sqllib/include.
-c
Perform compile only; no link. Compile and link are separate steps.
Link Options:
xlc
$EXTRA_CFLAG
Contains "-q64" for an instance where 64-bit support is enabled; otherwise,
it contains no value.
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If not an embedded SQL program, include the DB2 API utility object file
for error checking.
-ldb2
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For example:
$HOME/sqllib/$LIB. If you do not specify the -L option, the compiler
assumes the following path: /usr/lib:/lib.
Refer to your compiler documentation for additional compiler options.
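As a sketch of how these options fit together for an embedded SQL sample
(tbmod.c is the modified source produced by precompiling tbmod.sqc; the bldapp
script and the embprep script normally issue these commands for you):
xlc $EXTRA_CFLAG -I$DB2PATH/include -c tbmod.c
xlc $EXTRA_CFLAG -o tbmod tbmod.o utilemb.o -ldb2 -L$DB2PATH/$LIB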
AIX C++ embedded SQL and DB2 administrative API applications compile and
link options:
The compile and link options for building C++ embedded SQL and DB2
administrative API applications with the IBM XL C/C++ for AIX compiler are
available in the bldapp build script.
Compile and link options for bldapp
Compile options:
xlC
EXTRA_CFLAG
Contains "-q64" for an instance where 64-bit support is enabled; otherwise,
it contains no value.
-I$DB2PATH/include
Specify the location of the DB2 include files. For example:
$HOME/sqllib/include.
-c
Perform compile only; no link. Compile and link are separate steps.
Link options:
xlC
EXTRA_CFLAG
Contains "-q64" for an instance where 64-bit support is enabled; otherwise,
it contains no value.
-o $1
$1.o
utilapi.o
Include the API utility object file for non-embedded SQL programs.
utilemb.o
Include the embedded SQL utility object file for embedded SQL programs.
-ldb2
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For example:
$HOME/sqllib/$LIB. If you do not specify the -L option, the compiler
assumes the following path /usr/lib:/lib.
Refer to your compiler documentation for additional compiler options.
HP-UX C application compile and link options:
The compile and link options for building C embedded SQL and DB2 API
applications with the HP-UX C compiler are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
cc
The C compiler.
$EXTRA_CFLAG
If the HP-UX platform is IA64 and 64-bit support is enabled, this flag
contains the value +DD64; if 32-bit support is enabled, it contains the value
+DD32.
-Ae
+DD64
+DD32
-I$DB2PATH/include
Specifies the location of the DB2 include files.
-c
Perform compile only; no link. Compile and link are separate steps.
Link options:
cc
$EXTRA_CFLAG
If the HP-UX platform is IA64 and 64-bit support is enabled, this flag
contains the value +DD64; if 32-bit support is enabled, it contains the value
+DD32.
+DD64
+DD32
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If a non-embedded SQL program, include the DB2 API utility object file for
error checking.
$EXTRA_LFLAG
Specify the runtime path. If set, for 32-bit it contains the value
-Wl,+b$HOME/sqllib/lib32, and for 64-bit: -Wl,+b$HOME/sqllib/lib64. If
not set, it contains no value.
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For 32-bit:
$HOME/sqllib/lib32; for 64-bit: $HOME/sqllib/lib64.
-ldb2
HP-UX C++ application compile and link options:
The compile and link options for building C++ embedded SQL and DB2 API
applications with the HP-UX C++ compiler are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
aCC
$EXTRA_CFLAG
If the HP-UX platform is IA64 and 64-bit support is enabled, this flag
contains the value +DD64; if 32-bit support is enabled, it contains the value
+DD32.
-ext
+DD64
+DD32
-I$DB2PATH/include
Specifies the location of the DB2 include files. For example:
$HOME/sqllib/include
-c
Perform compile only; no link. Compile and link are separate steps.
Link options:
aCC
$EXTRA_CFLAG
If the HP-UX platform is IA64 and 64-bit support is enabled, this flag
contains the value +DD64; if 32-bit support is enabled, it contains the value
+DD32.
+DD64
+DD32
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If a non-embedded SQL program, include the DB2 API utility object file for
error checking.
$EXTRA_LFLAG
Specify the runtime path. If set, for 32-bit it contains the value
"-Wl,+b$HOME/sqllib/lib32", and for 64-bit: "-Wl,+b$HOME/sqllib/lib64". If
not set, it contains no value.
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For 32-bit:
$HOME/sqllib/lib32; for 64-bit: $HOME/sqllib/lib64.
-ldb2
Linux C application compile and link options:
The compile and link options for building C embedded SQL and DB2 API
applications on Linux are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
$CC
The gcc or xlc_r compiler.
$EXTRA_C_FLAGS
Contains one of the following flags:
v -m31 on Linux for zSeries only, to build a 32-bit library;
v -m32 on Linux for x86, x64 and POWER, to build a 32-bit library;
v -m64 on Linux for zSeries, POWER, x64, to build a 64-bit library; or
v No value on Linux for IA64, to build a 64-bit library.
-I$DB2PATH/include
Specify the location of the DB2 include files.
-c
Perform compile only; no link. This script file has separate compile and
link steps.
Link options:
$CC
The gcc or xlc_r compiler; use the compiler as a front end for the linker.
$EXTRA_C_FLAGS
Contains one of the following flags:
v -m31 on Linux for zSeries only, to build a 32-bit library;
v -m32 on Linux for x86, x64 and POWER, to build a 32-bit library;
v -m64 on Linux for zSeries, POWER, x64, to build a 64-bit library; or
v No value on Linux for IA64, to build a 64-bit library.
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If a non-embedded SQL program, include the DB2 API utility object file for
error checking.
$EXTRA_LFLAG
For 32-bit it contains the value "-Wl,-rpath,$DB2PATH/lib32", and for
64-bit it contains the value "-Wl,-rpath,$DB2PATH/lib64".
-L$DB2PATH/$LIB
Specify the location of the DB2 static and shared libraries at link-time. For
example, for 32-bit: $HOME/sqllib/lib32, and for 64-bit:
$HOME/sqllib/lib64.
-ldb2
Linux C++ application compile and link options:
The compile and link options for building C++ embedded SQL and DB2 API
applications on Linux are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
g++
$EXTRA_C_FLAGS
Contains one of the following flags:
v -m31 on Linux for zSeries only, to build a 32-bit library;
v -m32 on Linux for x86, x64 and POWER, to build a 32-bit library;
v -m64 on Linux for zSeries, POWER, x64, to build a 64-bit library; or
v No value on Linux for IA64, to build a 64-bit library.
-I$DB2PATH/include
Specify the location of the DB2 include files.
-c
Perform compile only; no link. This script file has separate compile and
link steps.
Link options:
g++
$EXTRA_C_FLAGS
Contains one of the following flags:
v -m31 on Linux for zSeries only, to build a 32-bit library;
v -m32 on Linux for x86, x64 and POWER, to build a 32-bit library;
v -m64 on Linux for zSeries, POWER, x64, to build a 64-bit library; or
v No value on Linux for IA64, to build a 64-bit library.
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If a non-embedded SQL program, include the DB2 API utility object file for
error checking.
$EXTRA_LFLAG
For 32-bit it contains the value "-Wl,-rpath,$DB2PATH/lib32", and for
64-bit it contains the value "-Wl,-rpath,$DB2PATH/lib64".
-L$DB2PATH/$LIB
Specify the location of the DB2 static and shared libraries at link-time. For
example, for 32-bit: $HOME/sqllib/lib32, and for 64-bit:
$HOME/sqllib/lib64.
-ldb2
Solaris C application compile and link options:
The compile and link options for building C embedded SQL and DB2 API
applications with the Solaris C compiler are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
cc
The C compiler.
-xarch=$CFLAG_ARCH
This option ensures that the compiler will produce valid executables when
linking with libdb2.so. The value for $CFLAG_ARCH is set as follows:
v "v8plusa" for 32-bit applications on Solaris SPARC
v "v9" for 64-bit applications on Solaris SPARC
v "sse2" for 32-bit applications on Solaris x64
v "amd64" for 64-bit applications on Solaris x64
-I$DB2PATH/include
Specify the location of the DB2 include files. For example:
$HOME/sqllib/include
-c
Perform compile only; no link. This script has separate compile and link
steps.
Link options:
cc
-xarch=$CFLAG_ARCH
This option ensures that the compiler will produce valid executables when
linking with libdb2.so. The value for $CFLAG_ARCH is set to either
"v8plusa" for 32-bit, or "v9" for 64-bit.
-mt
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If not an embedded SQL program, include the DB2 API utility object file
for error checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 static and shared libraries at link-time. For
example, for 32-bit: $HOME/sqllib/lib32, and for 64-bit:
$HOME/sqllib/lib64.
$EXTRA_LFLAG
Specify the location of the DB2 shared libraries at run time. For 32-bit it
contains the value "-R$DB2PATH/lib32", and for 64-bit it contains the
value "-R$DB2PATH/lib64".
-ldb2
Solaris C++ application compile and link options:
The compile and link options for building C++ embedded SQL and DB2 API
applications with the Solaris C++ compiler are available in the bldapp build script.
Compile and link options for bldapp
Compile options:
CC
-xarch=$CFLAG_ARCH
This option ensures that the compiler will produce valid executables when
linking with libdb2.so. The value for $CFLAG_ARCH is set as follows:
v "v8plusa" for 32-bit applications on Solaris SPARC
v "v9" for 64-bit applications on Solaris SPARC
v "sse2" for 32-bit applications on Solaris x64
v "amd64" for 64-bit applications on Solaris x64
-I$DB2PATH/include
Specify the location of the DB2 include files. For example:
$HOME/sqllib/include
-c
Perform compile only; no link. This script has separate compile and link
steps.
Link options:
CC
-xarch=$CFLAG_ARCH
This option ensures that the compiler will produce valid executables when
linking with libdb2.so. The value for $CFLAG_ARCH is set to either
"v8plusa" for 32-bit, or "v9" for 64-bit.
-mt
Note: If POSIX threads are used, DB2 applications also have to link with
-lpthread, whether or not they are threaded.
-o $1
$1.o
utilemb.o
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.o
If a non-embedded SQL program, include the DB2 API utility object file for
error checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 static and shared libraries at link-time. For
example, for 32-bit: $HOME/sqllib/lib32, and for 64-bit:
$HOME/sqllib/lib64.
$EXTRA_LFLAG
Specify the location of the DB2 shared libraries at run time. For 32-bit it
contains the value "-R$DB2PATH/lib32", and for 64-bit it contains the
value "-R$DB2PATH/lib64".
-ldb2
Windows C application compile and link options:
The compile and link options for building C embedded SQL and DB2 API
applications on Windows operating systems with the Microsoft Visual C++
compiler are available in the bldapp.bat batch file.
Compile and link options for bldapp
Compile options:
cl
-Od
-c
Perform compile only; no link. The batch file has separate compile and link
steps.
-W2
-DWIN32
Compiler option necessary for Windows operating systems.
Link options:
link
-out:%1.exe
Specify a filename
%1.obj Include the object file
utilemb.obj
If an embedded SQL program, include the embedded SQL utility object file
for error checking.
utilapi.obj
If not an embedded SQL program, include the DB2 API utility object file
for error checking.
db2api.lib
Link with the DB2 library.
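As a sketch of how these options fit together (assuming the Microsoft C/C++
compiler, cl, which is not named explicitly in the option list above; file names
follow the samples):
cl -Od -c -W2 -DWIN32 tbmod.c
link -out:tbmod.exe tbmod.obj utilemb.obj db2api.lib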
Procedure
v Building and running embedded SQL applications
There are three ways to build the embedded SQL application, tbmod, from the C
source file tbmod.sqc in sqllib\samples\c, or from the C++ source file tbmod.sqx
in sqllib\samples\cpp:
If connecting to the sample database on the same instance, enter:
bldapp tbmod
If accessing another database on the same instance, enter the executable name
and the database name:
tbmod database
If accessing another database on the same instance, enter the executable name
and the database name:
dbthrds database
Example
The following examples show you how to build and run DB2 API and embedded
SQL applications.
To build the DB2 API non-embedded SQL sample program, cli_info, from either
the source file cli_info.c, in sqllib\samples\c, or from the source file
cli_info.cxx, in sqllib\samples\cpp, enter:
bldapp cli_info
The result is an executable file, cli_info.exe. You can run the executable file by
entering the executable name (without the extension) on the command line:
cli_info
For the multi-connection sample program, dbmcon.exe, you require two databases.
If the sample database is not yet created, you can create it by entering db2sampl on
the command line of a DB2 command window. The second database, here called
sample2, can be created with one of the following commands:
If creating the database locally:
db2 create db sample2
If creating the database remotely:
db2 attach to node_name
db2 create db sample2
db2 detach
db2 catalog db sample2 as sample2 at node node_name
Procedure
To ensure that the TCP/IP listener is running:
1. Set the environment variable DB2COMM to TCP/IP as follows:
db2set DB2COMM=TCPIP
2. Update the database manager configuration file with the TCP/IP service name
as specified in the services file:
db2 update dbm cfg using SVCENAME TCPIP_service_name
Each instance has a TCP/IP service name listed in the services file. Ask your
system administrator if you cannot locate it or do not have the file permission
to change the services file.
3. Stop and restart the database manager in order for these changes to take effect:
db2stop
db2start
Results
The dbmcon.exe program is created from five files in either the samples\c or
samples\cpp directories:
dbmcon.sqc or dbmcon.sqx
Main source file for connecting to both databases.
dbmcon1.sqc or dbmcon1.sqx
Source file for creating a package bound to the first database.
dbmcon1.h
Header file for dbmcon1.sqc or dbmcon1.sqx included in the main source
file, dbmcon.sqc or dbmcon.sqx, for accessing the SQL statements for
creating and dropping a table bound to the first database.
dbmcon2.sqc or dbmcon2.sqx
Source file for creating a package bound to the second database.
dbmcon2.h
Header file for dbmcon2.sqc or dbmcon2.sqx included in the main source
file, dbmcon.sqc or dbmcon.sqx, for accessing the SQL statements for
creating and dropping a table bound to the second database.
-qpgmname\(mixed\)
Instructs the compiler to permit CALLs to library entry points with
mixed-case names.
-qlib
-I$DB2PATH/include/cobol_a
Specify the location of the DB2 include files. For example:
$HOME/sqllib/include/cobol_a.
-c
Perform compile only; no link. Compile and link are separate steps.
Link options:
cob2
-o $1
$1.o
checkerr.o
Include the utility object file for error-checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For example:
$HOME/sqllib/lib32.
-ldb2
AIX Micro Focus COBOL application compile and link options:
The compile and link options for building COBOL embedded SQL and DB2 API
applications with the Micro Focus COBOL compiler on AIX are available in the
bldapp build script.
Compile and link options for bldapp
Compile options:
cob
-c
$EXTRA_COBOL_FLAG="-C MFSYNC"
Enables 64-bit support.
-x
Link Options:
cob
-x
-o $1
$1.o
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries. For example:
$HOME/sqllib/lib32.
-ldb2
-ldb2gmf
Link to the DB2 exception-handler library for Micro Focus COBOL.
Refer to your compiler documentation for additional compiler options.
HP-UX Micro Focus COBOL application compile and link options:
The compile and link options for building COBOL embedded SQL and DB2 API
applications with the Micro Focus COBOL compiler for HP-UX are available in the
bldapp build script.
Compile and link options for bldapp
Compile options:
cob
-cx
$EXTRA_COBOL_FLAG
Contains "-C MFSYNC" if the HP-UX platform is IA64 and 64-bit support
is enabled.
Link options:
cob
-x
$1.o
checkerr.o
Include the utility object file for error checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries.
-ldb2
-ldb2gmf
Link to the DB2 exception-handler library for Micro Focus COBOL.
Refer to your compiler documentation for additional compiler options.
Solaris Micro Focus COBOL application compile and link options:
The compile and link options for building COBOL embedded SQL and DB2 API
applications with the Micro Focus COBOL compiler on Solaris are available in the
bldapp build script.
Compile and link options for bldapp
Compile options:
cob
$EXTRA_COBOL_FLAG
For 64-bit support, contains the value "-C MFSYNC"; otherwise it contains
no value.
-cx
Link options:
cob
-x
$1.o
checkerr.o
Include the utility object file for error-checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 static and shared libraries at link-time. For
example: $HOME/sqllib/lib64.
-ldb2
-ldb2gmf
Link with the DB2 exception-handler library for Micro Focus COBOL.
Refer to your compiler documentation for additional compiler options.
Linux Micro Focus COBOL application compile and link options:
These compile and link options are available for building COBOL embedded SQL
and DB2 API applications with the Micro Focus COBOL compiler on Linux, as
demonstrated in the bldapp build script.
Compile and link options for bldapp
Compile options:
cob
-cx
$EXTRA_COBOL_FLAG
For 64-bit support, contains the value "-C MFSYNC"; otherwise it contains
no value.
Link options:
cob
-x
-o $1
$1.o
checkerr.o
Include the utility object file for error checking.
-L$DB2PATH/$LIB
Specify the location of the DB2 runtime shared libraries.
-ldb2
-ldb2gmf
Link to the DB2 exception-handler library for Micro Focus COBOL.
Refer to your compiler documentation for additional compiler options.
Windows IBM COBOL application compile and link options:
The compile and link options for building COBOL embedded SQL and DB2 API
applications on Windows operating systems with the IBM VisualAge COBOL
compiler are available in the bldapp.bat batch file.
Compile and link options for bldapp
Compile options:
cob2
-qpgmname(mixed)
Instructs the compiler to permit CALLs to library entry points with
mixed-case names.
-c
Perform compile only; no link. Compile and link are separate steps.
-qlib
-Ipath Specify the location of the DB2 include files. For example:
-I"%DB2PATH%\include\cobol_a".
%EXTRA_COMPFLAG%
If "set IBMCOB_PRECOMP=true" is uncommented, the IBM COBOL
precompiler is used to precompile the embedded SQL. It is invoked with
one of the following formulations, depending on the input parameters:
-q"SQL(database sample CALL_RESOLUTION DEFERRED)"
precompile using the default sample database, and defer call
resolution.
-q"SQL(database %2 CALL_RESOLUTION DEFERRED)"
precompile using a database specified by the user, and defer call
resolution.
-q"SQL(database %2 user %3 using %4 CALL_RESOLUTION DEFERRED)"
precompile using a database, user ID, and password specified by
the user, and defer call resolution. This is the format for remote
client access.
Link options:
cbllink
Use the linker to link edit.
-l
checkerr.obj
Link with the error-checking utility object file.
db2api.lib
Link with the DB2 API library.
If you are building DB2 sample programs using the script files provided, the
include file path specified in the script files must be changed to point to the
cobol_i directory and not the cobol_a directory.
If you are NOT using the "System z host data type support" feature of the IBM
COBOL Set for AIX compiler, or you are using an earlier version of this
compiler, then the DB2 include files for your applications are in the following
directory:
$HOME/sqllib/include/cobol_a
v If you are using the "System/390 host data type support" feature of the IBM
VisualAge COBOL compiler, the DB2 include files for your applications are in
the following directory:
%DB2PATH%\include\cobol_i
If you are building DB2 sample programs using the batch files provided, the
include file path specified in the batch files must be changed to point to the
cobol_i directory and not the cobol_a directory.
If you are NOT using the "System/390 host data type support" feature of the
IBM VisualAge COBOL compiler, or you are using an earlier version of this
compiler, then the DB2 include files for your applications are in the following
directory:
%DB2PATH%\include\cobol_a
v The DB2 COPY files for Micro Focus COBOL reside in %DB2PATH%\include\
cobol_mf. Set the COBCPY environment variable to include the directory as
follows:
set COBCPY="%DB2PATH%\include\cobol_mf;%COBCPY%"
You must ensure that the previously mentioned environment variables are
permanently set in the System settings. This can be checked by going through
the following steps:
1. Open the Control Panel.
2. Select System.
3. Select the Advanced tab.
4. Click Environment Variables.
5. Check the System variables list for the required environment variables. If
they are not present, add them to the System variables list.
Setting them in either the User settings, at a command prompt, or in a script is
insufficient.
What to do next
You must make calls to all DB2 application programming interfaces using calling
convention 74. The DB2 COBOL precompiler automatically inserts a
CALL-CONVENTION clause in a SPECIAL-NAMES paragraph. If the
SPECIAL-NAMES paragraph does not exist, the DB2 COBOL precompiler creates
it, as follows:
Identification Division.
Program-ID. "static".
special-names.
call-convention 74 is DB2API.
Also, the precompiler automatically places the symbol DB2API, which is used to
identify the calling convention, after the "call" keyword whenever a DB2 API is
called. This occurs, for example, whenever the precompiler generates a DB2 API
runtime call from an embedded SQL statement.
If calls to DB2 APIs are made in an application which is not precompiled, you
should manually create a SPECIAL-NAMES paragraph in the application, similar
to that given previously. If you are calling a DB2 API directly, then you will need
to manually add the DB2API symbol after the "call" keyword.
Configuring the Micro Focus COBOL compiler on Linux:
To run Micro Focus COBOL routines, you must ensure that the Linux runtime
linker and DB2 processes can access the dependent COBOL libraries in the
/usr/lib directory.
About this task
Create symbolic links to /usr/lib for the COBOL shared libraries as root. The
simplest way to create symbolic links to /usr/lib is to link all COBOL library files
from $COBDIR/lib to /usr/lib:
ln -s $COBDIR/lib/libcob* /usr/lib
For example, if the Micro Focus COBOL libraries are installed in
/opt/lib/mfcobol/lib, the individual links would be:
ln -s /opt/lib/mfcobol/lib/libcobrts.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobrts_t.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobrts.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobrts_t.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobcrtn.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobcrtn.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobmisc.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobmisc_t.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobmisc.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobmisc_t.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobscreen.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobscreen.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobtrace.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobtrace_t.so /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobtrace.so.2 /usr/lib
ln -s /opt/lib/mfcobol/lib/libcobtrace_t.so.2 /usr/lib
The COBCPY environment variable specifies the location of the COPY files. The
DB2 COPY files for Micro Focus
COBOL reside in sqllib/include/cobol_mf under the database instance
directory.
To include the directory, enter:
On bash or Korn shell:
export COBCPY=$HOME/sqllib/include/cobol_mf:$COBDIR/cpylib
On C shell:
setenv COBCPY $HOME/sqllib/include/cobol_mf:$COBDIR/cpylib
Also include the DB2 and COBOL library directories in the LD_LIBRARY_PATH
environment variable.
On bash or Korn shell:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/sqllib/lib:$COBDIR/lib
On C shell:
setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:$HOME/sqllib/lib:$COBDIR/lib
Results
Note: You might want to set COBCPY, COBDIR, and LD_LIBRARY_PATH in the
.bashrc or .kshrc file, the .bash_profile or .profile file (depending on the shell
being used), or the .login file.
Configuring the Micro Focus COBOL compiler on AIX:
Before you develop Micro Focus COBOL applications that contain embedded SQL
and DB2 API calls on AIX operating systems, you must configure the Micro Focus
COBOL compiler.
About this task
Follow the listed steps if you develop applications that contain embedded SQL and
DB2 API calls with the Micro Focus COBOL compiler.
Procedure
v When you precompile your application using the PRECOMPILE command, use the
target mfcob option.
v You must include the DB2 COBOL COPY file directory in the Micro Focus
COBOL environment variable COBCPY. The COBCPY environment variable
specifies the location of the COPY files. The DB2 COPY files for Micro Focus
COBOL are in sqllib/include/cobol_mf under the database instance directory.
To include the directory, enter:
On bash or Korn shell:
export COBCPY=$COBCPY:$HOME/sqllib/include/cobol_mf
On C shell:
setenv COBCPY $COBCPY:$HOME/sqllib/include/cobol_mf
Note: You might want to set COBCPY in the .profile or .login file.
Configuring the Micro Focus COBOL compiler on HP-UX:
You must set COBCPY environment variable in your HP-UX instance before you can
compile an embedded SQL application with the Micro Focus COBOL compiler.
Procedure
v When you precompile your application with the PRECOMPILE command, use the
target mfcob option.
v You must include the DB2 COBOL COPY file directory in the Micro Focus
COBOL environment variable COBCPY. The COBCPY environment variable
specifies the location of COPY files. The DB2 COPY files for Micro Focus
COBOL are in sqllib/include/cobol_mf under the database instance directory.
To include the directory,
On bash or Korn shell, enter:
export COBCPY=$COBCPY:$HOME/sqllib/include/cobol_mf
On C shell, enter:
setenv COBCPY ${COBCPY}:${HOME}/sqllib/include/cobol_mf
Note: You might want to set COBCPY in the .profile or .login file.
Configuring the Micro Focus COBOL compiler on Solaris:
You must set COBCPY environment variable in your Solaris instance before you can
compile an embedded SQL application with the Micro Focus COBOL compiler.
Procedure
v When you precompile your application with the db2 prep command, use the
target mfcob option.
v You must include the DB2 COBOL COPY file directory in the Micro Focus
COBOL environment variable COBCPY. The COBCPY environment variable
specifies the location of COPY files. The DB2 COPY files for Micro Focus
COBOL are in sqllib/include/cobol_mf under the database instance directory.
To include the directory, enter:
On bash or Korn shells:
export COBCPY=$COBCPY:$HOME/sqllib/include/cobol_mf
On C shell:
setenv COBCPY $COBCPY:$HOME/sqllib/include/cobol_mf
For an embedded SQL program, bldapp passes the parameters to the precompile
and bind script, embprep. If no database name is supplied, the default sample
database is used. The user ID and password parameters are only needed if the
instance where the program is built is different from the instance where the
database is located.
To build the non-embedded SQL sample program client from the source file
client.cbl, enter:
bldapp client
The result is an executable file client. You can run the executable file against the
sample database by entering:
client
Procedure
v There are three ways to build the embedded SQL application, updat, from the
source file updat.sqb:
1. If connecting to the sample database on the same instance, enter:
bldapp updat
database to which you want to connect; the third parameter, $3, specifies the user
ID for the database, and $4 specifies the password.
For an embedded SQL program, bldapp passes the parameters to the precompile
and bind script, embprep. If no database name is supplied, the default sample
database is used. The user ID and password parameters are only needed if the
instance where the program is built is different from the instance where the
database is located.
To build the non-embedded SQL sample program, client, from the source file
client.cbl, enter:
bldapp client
The result is an executable file client. You can run the executable file against the
sample database by entering:
client
Procedure
v There are three ways to build the embedded SQL application, updat, from the
source file updat.sqb:
1. If connecting to the sample database on the same instance, enter:
bldapp updat
The batch file, bldapp.bat, contains the commands to build a DB2 application
program. It takes up to four parameters, represented inside the batch file by the
variables %1, %2, %3, and %4.
The first parameter, %1, specifies the name of your source file. This is the only
required parameter for programs that do not contain embedded SQL. Building
embedded SQL programs requires a connection to the database so three optional
parameters are also provided: the second parameter, %2, specifies the name of the
database to which you want to connect; the third parameter, %3, specifies the user
ID for the database, and %4 specifies the password.
For an embedded SQL program using the default DB2 precompiler, bldapp.bat
passes the parameters to the precompile and bind file, embprep.bat.
For an embedded SQL program using the IBM COBOL precompiler, bldapp.bat
copies the .sqb source file to a .cbl source file. The compiler performs the
precompile on the .cbl source file with specific precompile options.
For either precompiler, if no database name is supplied, the default sample
database is used. The user ID and password parameters are only needed if the
instance where the program is built is different from the instance where the
database is located.
The following examples show you how to build and run DB2 API and embedded
SQL applications.
To build the non-embedded SQL sample program client from the source file
client.cbl, enter:
bldapp client
The result is an executable file client.exe. You can run the executable file against
the sample database by entering the executable name (without the extension):
client
Procedure
v There are three ways to build the embedded SQL application, updat, from the
source file updat.sqb:
1. If connecting to the sample database on the same instance, enter:
bldapp updat
The result is an executable file client.exe. You can run the executable file against
the sample database by entering the executable name (without the extension):
client
Procedure
v There are three ways to build the embedded SQL application, updat, from the
source file updat.sqb:
1. If connecting to the sample database on the same instance, enter:
bldapp updat
1. If accessing the sample database on the same instance, enter the executable
name (without the extension):
updat
Procedure
To build and run your REXX applications:
v On Windows operating systems, your application file can have any name. After
creation, you can run your application from the operating system command
prompt by invoking the REXX interpreter as follows:
REXX file_name
v On AIX, you can run your application using either of the following two
methods:
At the shell command prompt, type rexx name where name is the name of
your REXX program.
If the first line of your REXX program contains a "magic number" (#!) and
identifies the directory where the REXX/6000 interpreter resides, you can run
your REXX program by typing its name at the shell command prompt. For
example, if the REXX/6000 interpreter file is in the /usr/bin directory,
include the following line as the very first line of your REXX program:
#! /usr/bin/rexx
Then, make the program executable by typing the following command at the
shell command prompt:
chmod +x name
Run your REXX program by typing its file name at the shell command
prompt.
Note: On AIX, you should set the LIBPATH environment variable to include the
directory where the REXX SQL library, db2rexx, is located. For example:
export LIBPATH=/lib:/usr/lib:/$DB2PATH/lib
Procedure
1. Precompile the application by issuing the PRECOMPILE command. For example:
C application: db2 PRECOMPILE myapp.sqc BINDFILE
C++ application: db2 PRECOMPILE myapp.sqx BINDFILE
The BIND command associates the application package with the database and
stores the package in the database.
3. Compile the modified application source and the source files that do not
contain embedded SQL to create an application object file (a .obj file). For
example:
C application: cl -Zi -Od -c -W2 -DWIN32 myapp.c
C++ application: cl -Zi -Od -c -W2 -DWIN32 myapp.cxx
4. Link the application object files with the DB2 and host language libraries to
create an executable program using the link command. For example:
link -debug -out:myapp.exe myapp.obj
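For reference, a minimal embedded SQL C source file of the kind that these steps
process might look like the following sketch. The database alias, table, and host
variable names are illustrative only and assume the DB2 sample database:

   #include <stdio.h>

   EXEC SQL INCLUDE SQLCA;

   int main(void)
   {
      EXEC SQL BEGIN DECLARE SECTION;    /* host variables                  */
      char dbAlias[9] = "sample";        /* illustrative database alias     */
      char lastname[16];
      EXEC SQL END DECLARE SECTION;

      EXEC SQL CONNECT TO :dbAlias;      /* connect before any data access  */
      if (sqlca.sqlcode != 0)
      {
         fprintf(stderr, "connect failed, SQLCODE = %d\n", (int)sqlca.sqlcode);
         return 1;
      }

      /* a simple singleton SELECT into a host variable */
      EXEC SQL SELECT lastname INTO :lastname
               FROM employee WHERE empno = '000010';
      if (sqlca.sqlcode == 0)
         printf("lastname = %s\n", lastname);

      EXEC SQL CONNECT RESET;            /* disconnect                      */
      return 0;
   }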
db2dsdriver.cfg parameter                         Section
connectionLevelLoadBalancing                      <database>
enableWLB                                         <wlb>
maxTransportIdleTime                              <wlb>
maxTransportWaitTime                              <wlb>
maxTransports                                     <wlb>
maxRefreshInterval                                <wlb>
enableAcr                                         <acr>
acrRetryInterval                                  <acr>
maxAcrRetries                                     <acr>
enableAlternateServerListFirstConnect             <acr>
alternateserverlist                               <acr>
The embedded application cannot perform seamless failover. Also, the ability to
resolve the data source name (DSN) with the <dsncollection> section entry in the
db2dsdriver.cfg file is supported only with the IBM Data Server Driver Package.
In DB2 Version 10 Fix Pack 1 and later fix packs, IBM data server clients support
the use of the <dsncollection> section entry in the db2dsdriver.cfg file to resolve
DSN entries.
The following steps outline the process involved with database alias resolution:
1. The embedded SQL application requests to CONNECT to the database alias.
2. The embedded SQL application looks up the catalog database directory to see if
the specified database alias name exists.
v If information is found, the embedded application uses the database name,
host name, and port number information from the catalog. Proceed to step 4.
v If information is not found, the <dsncollection> sections in the
db2dsdriver.cfg file are used to resolve the database alias name to the database
name, host name, and port number information.
3. The application looks for database alias information in the db2dsdriver.cfg file:
v If database alias information is not found, a database connection error is
returned to the embedded SQL application.
v If database alias information is found, the database name, host name, port
number, and data server driver parameters that are specified in the <dsn>
section are used.
4. Using the database name, host name, and port number, the <databases> section
is searched for a matching entry.
5. If a matching entry for the database name, host name, and port number is
found in the <databases> section, the parameters that are specified under the
matching <database> section are applied to the connection.
6. The database connection is attempted with information that is specified in the
catalog and db2dsdriver.cfg file.
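As a minimal sketch of the application side of this process, the program simply
issues a CONNECT to the alias; whether the alias is resolved from the catalog or
from the db2dsdriver.cfg file is transparent to the application code. In the
following fragment the alias name mydsn is hypothetical, and includes and error
handling are abbreviated:

   EXEC SQL INCLUDE SQLCA;

   EXEC SQL BEGIN DECLARE SECTION;
   char dbAlias[9] = "mydsn";          /* hypothetical database alias (DSN)  */
   EXEC SQL END DECLARE SECTION;

   /* step 1: request a connection to the database alias */
   EXEC SQL CONNECT TO :dbAlias;

   if (sqlca.sqlcode != 0)
   {
      /* if the alias cannot be resolved from the catalog or from      */
      /* db2dsdriver.cfg, a database connection error is returned here */
      printf("connect failed, SQLCODE = %d\n", (int)sqlca.sqlcode);
   }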
In DB2 Version 10 Fix Pack 1 and later fix packs, embedded SQL applications can
use the following timeout values in the db2dsdriver.cfg file:
v MemberConnectTimeout
v ReceiveTimeout
v TcpipConnectTimeout
v keepAliveTimeOut
v ConnectionTimeout
Any unrecognized data server keywords are ignored silently by the embedded
SQL application.
When compatibility mode is switched on, the following features are supported:
v C-array host variables for use with FETCH INTO statements
v INDICATOR variable arrays for use with FETCH INTO statements
v New CONNECT statement syntax
v Using double quotation marks to specify file names with the INCLUDE
statement
v Simple type definition for the VARCHAR type
In DB2 V10.1 Fix Pack 2 and later fix packs, when compatibility mode is switched
on, the following additional features are supported:
v Structure type and structure array indicators for an associated structure of
non-array or array host variables
v Suppression of the unspecified indicator variable error when you do not specify
the NULL indicator in an application that needs it, by setting the
UNSAFENULL parameter of the PRECOMPILE command to YES
v EXEC SQL ROLLBACK and EXEC SQL COMMIT statements with the RELEASE
option
Additionally, the following features are supported for embedded SQL C and
embedded SQL C++ applications even if you do not issue the PRECOMPILE
command with the COMPATIBILITY_MODE parameter set to ORA:
v Use of the STATICASDYNAMIC string for the GENERIC parameter of the BIND
command, to provide true dynamic SQL behavior for the package bound in a
session
v Use of a string literal with the PREPARE statement
v Use of the BREAK action with the WHENEVER statement
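As a sketch of two of these features, the following fragment prepares a statement
directly from a string literal and uses the BREAK action of the WHENEVER
statement to leave the fetch loop. The statement, cursor, and host variable names
are illustrative only:

   EXEC SQL BEGIN DECLARE SECTION;
   char lastname[16];
   EXEC SQL END DECLARE SECTION;

   /* PREPARE directly from a string literal (no host variable required) */
   EXEC SQL PREPARE s1 FROM 'SELECT lastname FROM employee';
   EXEC SQL DECLARE c1 CURSOR FOR s1;
   EXEC SQL OPEN c1;

   EXEC SQL WHENEVER SQLERROR DO BREAK;    /* BREAK action: leave the loop  */
   EXEC SQL WHENEVER NOT FOUND DO BREAK;
   while (1)
   {
      EXEC SQL FETCH c1 INTO :lastname;
      printf("%s\n", lastname);
   }
   EXEC SQL WHENEVER SQLERROR CONTINUE;    /* reset error handling          */
   EXEC SQL WHENEVER NOT FOUND CONTINUE;
   EXEC SQL CLOSE c1;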
In the following example, two array host variables are declared, empno and
lastname. Each can hold up to 100 elements. Because there is only one FETCH
statement, this example retrieves at most 100 rows.
EXEC SQL BEGIN DECLARE SECTION;
char empno[100][8];
char lastname[100][15];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE empcr CURSOR FOR
         SELECT empno, lastname FROM employee;
EXEC SQL OPEN empcr;
EXEC SQL WHENEVER NOT FOUND GOTO end_fetch;
while (1) {
   EXEC SQL FETCH empcr INTO :empno :lastname;   /* bulk fetch        */
   ...                                           /* 100 or less rows  */
   ...
}
end_fetch:
EXEC SQL CLOSE empcr;
Starting in DB2 V10.1 Fix Pack 2, embedded SQL C/C++ applications support host
variables as a structure array during a fetch operation. The array size is
determined by the structure array that you define in the DECLARE SECTION
statement.
In the following example, a structure array of host variables is used for a FETCH
statement:
EXEC SQL BEGIN DECLARE SECTION;
struct MyStruct
{
int c1;
char c2[11];
} MyStructVar[3];
EXEC SQL END DECLARE SECTION;
...
// MyStructVar is a structure array for host variables
EXEC SQL FETCH cur INTO :MyStructVar;
Note: Support is enabled only when the PRECOMPILE command is issued with the
COMPATIBILITY_MODE parameter set to ORA. A single FETCH operation does not support
multiple structures or a combination of structures and host variable arrays.
Embedded SQL C/C++ produces a compiler error if a structure array is defined
within another structure array (that is, nested structure arrays are not supported).
A specific array element cannot be specified in the FETCH statement; doing so can
return an unexpected token error.
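Putting the pieces together, the following sketch shows how such a structure-array
fetch might be used with a cursor. The cursor, table, and column names are
illustrative only, and the PRECOMPILE command must be issued with
COMPATIBILITY_MODE set to ORA:

   EXEC SQL BEGIN DECLARE SECTION;
   struct MyStruct
   {
      int  c1;
      char c2[11];
   } MyStructVar[3];                       /* up to 3 rows per FETCH          */
   EXEC SQL END DECLARE SECTION;

   EXEC SQL DECLARE cur CURSOR FOR
            SELECT c1, c2 FROM mytable;    /* illustrative table and columns  */
   EXEC SQL OPEN cur;

   EXEC SQL WHENEVER NOT FOUND GOTO done;
   while (1)
   {
      /* each FETCH fills up to 3 elements of MyStructVar; the array size */
      /* declared in the declare section determines the rows per fetch    */
      EXEC SQL FETCH cur INTO :MyStructVar;
   }
   done:
   EXEC SQL CLOSE cur;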
In the following example, the indicator variable array called bonus_ind is declared.
It can have up to 100 elements, the same number of elements as declared for the
array variable bonus. When the data is being fetched, if the value of bonus is NULL,
the value in bonus_ind will be negative.
EXEC SQL BEGIN DECLARE SECTION;
char   empno[100][8];
char   lastname[100][15];
short  edlevel[100];
double bonus[100];
short  bonus_ind[100];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE empcr CURSOR FOR
         SELECT empno, lastname, edlevel, bonus
         FROM employee
         WHERE workdept = 'D21';
EXEC SQL OPEN empcr;
EXEC SQL WHENEVER NOT FOUND GOTO end_fetch;
while (1) {
   EXEC SQL FETCH empcr INTO :empno :lastname :edlevel,
                             :bonus INDICATOR :bonus_ind;
   ...
   ...
}
end_fetch:
EXEC SQL CLOSE empcr;
If the number of elements for an indicator array variable does not match the
number of elements of the corresponding host array variable, an error is returned.
Starting in DB2 V10.1 Fix Pack 2, an application can check sqlca.sqlerrd[2] to
obtain the cumulative number of rows that were successfully populated in
non-array host variables up to the last FETCH. This enhancement is available
even if COMPATIBILITY_MODE ORA is not set during PRECOMPILE.
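For example, the following sketch fetches one row at a time into scalar (non-array)
host variables and prints the running total from sqlca.sqlerrd[2] after each
FETCH. The cursor name is illustrative, and the EMPLOYEE table is the one from
the sample database:

   EXEC SQL BEGIN DECLARE SECTION;
   char empno[8];
   char lastname[16];
   EXEC SQL END DECLARE SECTION;

   EXEC SQL DECLARE empcr2 CURSOR FOR
            SELECT empno, lastname FROM employee;
   EXEC SQL OPEN empcr2;

   EXEC SQL WHENEVER NOT FOUND GOTO end_fetch2;
   while (1)
   {
      EXEC SQL FETCH empcr2 INTO :empno, :lastname;
      /* sqlca.sqlerrd[2] holds the cumulative number of rows fetched   */
      /* through this cursor so far, not just the count for this FETCH  */
      printf("rows fetched so far: %d\n", (int)sqlca.sqlerrd[2]);
   }
   end_fetch2:
   EXEC SQL CLOSE empcr2;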
// declaring structure array of size 3 for indicator
EXEC SQL BEGIN DECLARE SECTION;
...
struct MyStructInd
{
short c1_ind;
short c2_ind;
} MyStructVarInd[3];
EXEC SQL END DECLARE SECTION;
...
// using structure array host variables & indicators structure type
// array while executing FETCH statement
// 'MyStructVar' is structure array for host variables
// 'MyStructVarInd' is structure array for indicators
EXEC SQL FETCH cur INTO :MyStructVar :MyStructVarInd;
Note: The array size of the structure that you use for indicator variables must be
equal to or greater than the array size of the structure that you use for host
variables. All members in the structure array that you use for indicator variables
must use the short data type. The number of members in the structures that you
use for host variables and corresponding indicators must be equal. The PRECOMPILE
command returns an error if any of the conditions are not satisfied.
[Syntax fragment: ... dbname ] ; with parameter descriptions for username,
password, and dbname.]
Note: Even if you do not set the COMPATIBILITY_MODE parameter to ORA while
precompiling, an application can check the sqlca.sqlerrd[2] field to get the
cumulative number of rows that were successfully populated in non-array host
variables up to the last fetch.
Note: The PRECOMPILE command issues an "SQL statement is not supported" error if
you use the new syntax without setting the COMPATIBILITY_MODE parameter to ORA.
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for
how to improve the DB2 documentation, send an email to [email protected].
The DB2 documentation team reads all of your feedback, but cannot respond to
you directly. Provide specific examples wherever possible so that we can better
understand your concerns. If you are providing feedback on a specific topic or
help file, include the topic title and URL.
Do not use this email address to contact DB2 Customer Support. If you have a DB2
technical issue that the documentation does not resolve, contact your local IBM
service center for assistance.
The form number increases each time a manual is updated. Ensure that you are
reading the most recent version of the manuals, as listed below.
Note: The DB2 Information Center is updated more frequently than either the PDF
or the hard-copy books.
Table 25. DB2 technical information
Name                                                 Form Number    Available in print
Administrative API Reference                         SC27-5506-00   Yes
Administrative Routines and Views                    SC27-5507-00   No
                                                     SC27-5511-00   Yes
                                                     SC27-5512-00   Yes
Command Reference                                    SC27-5508-00   Yes
Database Monitoring Guide and Reference              SC27-4547-00   Yes
                                                     SC27-5529-00   Yes
                                                     SC27-5530-00   Yes
DB2 Workload Management Guide and Reference          SC27-5520-00   Yes
Developing ADO.NET and OLE DB Applications           SC27-4549-00   Yes
Developing Embedded SQL Applications                 SC27-4550-00   Yes
Developing Java Applications                         SC27-5503-00   Yes
                                                     SC27-5504-00   No
Developing RDF Applications for IBM Data Servers     SC27-5505-00   Yes
Developing User-defined Routines (SQL and External)  SC27-5501-00   Yes
                                                     GI13-2084-00   Yes
Globalization Guide                                  SC27-5531-00   Yes
                                                     GC27-5514-00   Yes
                                                     GC27-5515-00   No
Message Reference Volume 1                           SC27-5523-00   No
Message Reference Volume 2                           SC27-5524-00   No
                                                     SC27-5526-00   No
Partitioning and Clustering Guide                    SC27-5532-00   Yes
pureXML Guide                                        SC27-5521-00   Yes
                                                     SC27-5525-00   No
SQL Procedural Languages: Application Enablement
  and Support                                        SC27-5502-00   Yes
                                                     SC27-5527-00   Yes
Troubleshooting and Tuning Database Performance      SC27-4548-00   Yes
Upgrading to DB2 Version 10.5                        SC27-5513-00   Yes
                                                     SC27-5519-00   Yes
XQuery Reference                                     SC27-5522-00   No
                                                     SC27-5517-00   Yes
                                                     SC27-5518-00   Yes
Procedure
To start SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the
first two digits of the SQL state.
For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help
for the 08 class code.
Appendix B. Notices
This information was developed for products and services offered in the U.S.A.
Information about non-IBM products is based on information available at the time
of first publication of this document and is subject to change.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information about the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements,
changes, or both in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to websites not owned by IBM are provided for
convenience only and do not in any manner serve as an endorsement of those
websites. The materials at those websites are not part of the materials for this IBM
product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information that has been exchanged, should contact:
IBM Canada Limited
U59/3600
3600 Steeles Avenue East
Markham, Ontario L3R 9Z7
CANADA
Such information may be available, subject to appropriate terms and conditions,
including, in some cases, payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems, and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements, or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility, or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. The sample
programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
(your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. Copyright IBM Corp. _enter the year or years_. All rights
reserved.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at Copyright and
trademark information at www.ibm.com/legal/copytrade.shtml.
The following terms are trademarks or registered trademarks of other companies
v Linux is a registered trademark of Linus Torvalds in the United States, other
countries, or both.
v Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle, its affiliates, or both.
v UNIX is a registered trademark of The Open Group in the United States and
other countries.
v Intel, Intel logo, Intel Inside, Intel Inside logo, Celeron, Intel SpeedStep, Itanium,
and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
v Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of
others.
Index
Special characters
.NET
  batch files 165
Numerics
32-bit platforms 16
64-bit platforms 16
A
AIX
C applications
compiler and link options 168
C++ applications
compiler and link options 169
IBM COBOL applications
building 191
compiler and link options 182
Micro Focus COBOL applications
compiler and link options 183
application design
COBOL
include files 31
Japanese and traditional Chinese EUC
considerations 101
data passing 129
declaring sufficient SQLVAR entities 121
describing SELECT statement 124
executing statements without variables 13
NULL values 59
package versions with same name 163
parameter markers 131
retrieving data a second time 136
REXX 113
saving user requests 131
scrolling through previously retrieved data 136
SQLDA structure guidelines 125
variable-list SELECT statement processing 130
application development
COBOL example 94
embedded SQL overview 1
exit list routines 143
applications
binding 160
building embedded SQL 16, 198
embedded SQL 16, 198
arrays
host variables 67, 205
asynchronous events 20
authorities
binding 160
B
batch files
  building embedded SQL applications 165
BIGINT data type
  COBOL 46
C
C language
application template 26
applications
building (UNIX) 177
building (Windows) 178
compiler options (AIX) 168
compiler options (HP-UX) 170
compiler options (Linux) 172
compiler options (Solaris) 174
compiler options (Windows) 176
batch files 198
build files 165
development environment 26
error-checking utility files 167
multiconnection applications
building on Windows 180
multithreaded applications
Windows 178
C/C++ language
applications
building (Windows) 178
compiler options (AIX) 169
compiler options (HP-UX) 171
compiler options (Linux) 173
compiler options (Solaris) 175
compiler options (Windows) 176
executing static SQL statements 119
input files 25
multiple thread database access 20
output files 25
build files 165
Chinese (Traditional) EUC considerations 87
class data members 84
comments 119
connecting to databases 37
data types
functions 44
methods 44
overview 39
stored procedures 44
supported 39
declaring graphic host variables 74
disconnecting from databases 143
embedded SQL statements 2
error-checking utility files 167
file reference declarations 83
FOR BIT DATA 88
graphic host variables 74, 77, 78
host structure support 89
host variables
declaring 65
initializing 88
naming 64
purpose 63
include files 29
indicator tables 91
Japanese EUC considerations 87
LOB data declarations 80
LOB locator declarations 82
member operator restrictions 87
multiconnection applications
building (Windows) 180
multithreaded applications
Windows 178
null-terminated strings 92
numeric host variables 71
compilers (continued)
IBM COBOL
AIX 187
Windows 187
Micro Focus COBOL
AIX 190
HP-UX 191
Solaris 191
Windows 188
compiling
embedded SQL applications 153
completion codes
SQL statements 36
configuration files
VisualAge 168
VisualAge C++ (AIX) 180
consistency
tokens 152
contexts
application dependencies between 23
database dependencies between 23
setting between threads 20
setting in multithreaded DB2 applications
details 20
CREATE IN COLLECTION NULLID authority 160
CREATE PROCEDURE statement
embedded SQL applications 133
critical sections
multithreaded embedded SQL applications 23
CURRENT EXPLAIN MODE special register
dynamic SQL statements 157
CURRENT PATH special register
bound dynamic SQL 157
CURRENT QUERY OPTIMIZATION special register
bound dynamic SQL 157
cursors
embedded SQL applications 135, 138
multiple in application 138
names
REXX 6
processing
SQLDA structure 125
summary 138
rows
deleting 139
retrieving 138
updating 139
sample program 139
D
data
deleting
statically executed SQL applications 139
fetched 136
retrieving
second time 136, 137
scrolling through previously retrieved 136
updating
previously retrieved data 138
statically executed SQL applications 139
Data Manipulation Language (DML)
dynamic SQL performance 13
data retrieval
static SQL 135
data structures
user-defined with multiple threads 22
data types
BINARY 94
C
embedded SQL applications 39, 84, 88
C++
embedded SQL applications 39, 84, 88
class data members in C/C++ 84
CLOB 88
COBOL 46
compatibility issues 56
conversion
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
DECIMAL
FORTRAN 49
embedded SQL applications
C/C++ 39, 84, 88
mappings 56
FOR BIT DATA
C/C++ 88
COBOL 101
FORTRAN 49
graphic types 75
host variables 56, 84
mappings
embedded SQL applications 38, 56
pointers in C/C++ 83
VARCHAR
C/C++ 88
databases
accessing
multiple threads 20
contexts 20
DATE data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
DB2 Information Center
versions 214
DB2ARXCS.BND REXX bind file 197
db2bfd command
overview 154
db2dclgn command
declaring host variables 56
DBCLOB data type
COBOL 46
REXX 51
dbclob_file C/C++ data type 39
dbclob_locator C/C++ data type 39
DBCLOB-FILE COBOL data type 46
DBCLOB-LOCATOR COBOL data type 46
ddcs400.lst file 160
ddcsmvs.lst file 160
ddcsvm.lst file 160
ddcsvse.lst file 160
DDL
statements
dynamic SQL performance 13
deadlocks
multithreaded applications 23
DECIMAL data type
conversion
C/C++ 39
COBOL 46
E
embedded SQL applications
access plans 160
authorization 11
C/C++
include files 29
restrictions 17
statements 2
COBOL
include files 31
statements 5
compiling 8, 201
db2dsdriver.cfg file 201
declare section 2
exception handlers
overview 143
EXEC SQL INCLUDE SQLCA statement 22
EXECUTE IMMEDIATE statement
overview 13
EXECUTE statement
overview 13
exit list routines 143
explain snapshots
binding 158
Extended UNIX Code (EUC)
Chinese (Traditional)
C/C++ applications 87
COBOL applications 101
FORTRAN applications 110
Japanese
C/C++ applications 87
COBOL applications 101
FORTRAN applications 110
F
FETCH statement
host variables 120
repeated data access 136
SQLDA structure 124
file reference declarations in REXX 115
files
reference declarations in C/C++ 83
FIPS 127-2 standard
declaring SQLSTATE and SQLCODE as host variables 142
flagger utility for precompiling 147
FLOAT data type
C/C++ conversion 39
COBOL 46
FORTRAN 49
REXX 51
FOR BIT DATA data type 88
FOR UPDATE clause
details 139
FORTRAN language
applications
host variables 104
input files 25
output files 25
Chinese (Traditional) code set 110
comments 119
connecting to databases 37
data types 49
embedding SQL statements 4
file reference declarations 109
host variables
declaring 105
naming 104
referencing 4
include files 34
indicator variables 110
Japanese code set 110
LOB data declarations 108
LOB locator declarations 109
multibyte character sets 110
numeric host variables 106
programming 18
restrictions 104
SQL declare section example 105
SQLCODE variables 105
SQLSTATE variables 105
G
get error message API
error message retrieval 140
predefined REXX variables 111
graphic data
host variables
C/C++ embedded SQL applications 78
COBOL embedded SQL applications 97
VARGRAPHIC 77
GRAPHIC data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
selecting 75
H
help
SQL statements 214
host structure support
C/C++ 89
COBOL 102
host variables
C-array 67
C/C++ applications 63
character data declarations
COBOL 96
FORTRAN 106
class data members 84
COBOL applications 46
declaring
C/C++ 65
COBOL 94
db2dclgn declaration generator 56
embedded SQL application overview 55
FORTRAN 105
variable list statement 130
dynamic SQL 13
embedded SQL applications
C/C++ 80
COBOL 98
FORTRAN 108
overview 53
REXX 114
enabling compatibility features 205
file reference declarations
C/C++ 83
COBOL 100
FORTRAN 109
REXX 115
REXX (clearing) 116
FORTRAN applications 4
graphic data
C/C++ 74, 75
COBOL 97
FORTRAN 110
host language statements 53
initializing in C/C++ 88
LOB data declarations
C/C++ 80
COBOL 98
FORTRAN 108
REXX 114
LOB file reference declarations 116
I
include files
C/C++ embedded SQL applications 29
COBOL embedded SQL applications 31
FORTRAN embedded SQL applications 34
locating in COBOL applications 5
overview 29
INCLUDE SQLCA statement
declaring SQLCA structure 36
INCLUDE SQLDA statement
creating SQLDA structure 125
INCLUDE statement
BIND command
STATICASDYNAMIC option 205
CONNECT statement 205
double quotation marks 205
indicator tables
C/C++ 91
COBOL 104
indicator variables
C 67
compatibility features 205
FORTRAN 110
identifying null SQL values 59
REXX 116
INTEGER data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
INTEGER*2 FORTRAN data type 49
INTEGER*4 FORTRAN data type 49
interrupt handlers
overview 143
isolation levels
repeatable read (RR) 136
J
Japanese Extended UNIX Code (EUC) code page
C/C++ embedded SQL applications 87
COBOL embedded SQL applications 101
FORTRAN embedded SQL applications 110
L
LANGLEVEL precompile option
MIA 39
SAA1 39
SQL92E 67, 95, 105
large objects (LOBs)
C/C++ declarations 80
locators
declarations in C/C++ 82
latches 20
libdb2.so libraries
restrictions 203
link options
C applications 170
linking
details 153
Linux
C
applications 172
C++
applications 173
libraries
libaio.so.2 203
Micro Focus COBOL
applications 185
configuring compilers 189
LOB data type
data declarations in C/C++ 80
long C/C++ data type 39
long int C/C++ data type 39
long long C/C++ data type 39
long long int C/C++ data type 39
LONG VARCHAR data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
LONG VARGRAPHIC data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
N
notices 217
NULL
SQL value
indicator variables 59
null-terminated character form 39
null-terminator 39
NULLID 160
NUMERIC data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
numeric host variables
C/C++ 71
COBOL 95
FORTRAN 106
O
Object REXX for Windows applications
building 197
optimizer
dynamic SQL 13
static SQL 13
M
macro expansion
C/C++ language 88
member operators
C/C++ restriction 87
MIA LANGLEVEL precompile option 39
multi-threaded applications
building
C++ (Windows) 178
files 165
multibyte code pages
Chinese (Traditional) code sets
C/C++ 87
COBOL 101
FORTRAN 110
packages
creating
BIND command and existing bind file 157
embedded SQL applications 149
host database servers 160
inoperative 158
invalid state 158
privileges
overview 163
REXX application support 197
schemas 149
System i database servers 160
time stamp errors 152
versions
privileges 163
same name 163
parameter markers
dynamic SQL
determining statement type 130
example 132
variable input 131
examples 132
typed 131
performance
dynamic SQL 13
FOR UPDATE clause 139
PICTURE (PIC) clause in COBOL types 46
precompilation
accessing host application servers through DB2
Connect 147
accessing multiple servers 147
C/C++ 87
consistency tokens 152
dynamic SQL statements 13
embedded SQL applications 147
flagger utility 147
FORTRAN 18
time stamps 152
PRECOMPILE command
embedded SQL applications
accessing multiple database servers 148
building from command line 198
C/C++ 198
overview 145
PREPARE statement
arbitrary statement processing 130
overview 13
preprocessor functions
SQL precompiler 88
procedures
CALL statement 133
parameters
types 133
Q
qualification operator in C/C++ 87
queryopt precompile/bind option
code page considerations 158
R
REAL SQL data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
REAL*2 FORTRAN SQL data type 49
REAL*4 FORTRAN SQL data type 49
REAL*8 FORTRAN SQL data type 49
REBIND command
rebinding 158
rebinding
process 158
REBIND command 158
REDEFINES clause
COBOL 100
repeatable read (RR)
re-retrieving data 136
result codes 36
RESULT REXX predefined variable 111
return codes
declaring SQLCA 36
REXX language
APIs
SQLDB2 18
SQLDBS 18
SQLEXEC 18
applications
embedded SQL (building) 196
embedded SQL (running) 196
host variables 110
bind files 197
S
SAA1 LANGLEVEL precompile option 39
samples
IBM COBOL 182
SELECT statement
declaring SQLDA 121
describing after allocating SQLDA 124
EXECUTE statement 13
retrieving
data a second time 136
multiple rows 138
updating retrieved data 138
variable-list 130
semaphores 23
serialization
data structures 22
SQL statement execution 20
SET CURRENT PACKAGESET statement 149, 164
short data type
C/C++ 39
short int data type 39
signal handlers
overview 143
SMALLINT data type
C/C++ 39
COBOL 46
FORTRAN 49
sqleDetachFromCtx API
multiple contexts 20
sqleEndCtx API
multiple contexts 20
sqleGetCurrentCtx API
multiple contexts 20
sqleInterruptCtx API
multiple contexts 20
SQLENV include file
C/C++ applications 29
COBOL applications 31
FORTRAN applications 34
sqleSetTypeCtx API
multiple contexts 20
SQLETSD include file 31
SQLException
embedded SQL applications 140
SQLEXEC REXX API
processing SQL statements 6
registering 113
restrictions 18
SQLEXT include file 29
sqlint64 C/C++ data type 39
SQLISL predefined variable 111
SQLJACB include file 29
SQLMON include file
C/C++ applications 29
COBOL applications 31
FORTRAN applications 34
SQLMONCT include file 31
SQLMSG predefined variable 111
SQLRDAT predefined variable 111
SQLRIDA predefined variable 111
SQLRODA predefined variable 111
SQLSTATE
include files
C/C++ applications 29
COBOL applications 31
FORTRAN applications 34
overview 142
SQLSYSTM include file 29
SQLUDF include file
C/C++ applications 29
SQLUTBCQ include file 31
SQLUTBSQ include file 31
SQLUTIL include file
C/C++ applications 29
COBOL applications 31
FORTRAN applications 34
SQLUV include file 29
SQLUVEND include file 29
SQLVAR entities
declaring sufficient number 121, 123
SQLWARN
overview 142
SQLXA include file 29
static SQL
comparison to dynamic SQL 13
host variables 53, 55
retrieving data 135
storage
allocating to hold rows 124
declaring sufficient SQLVAR entities 121
stored procedures
REXX applications 135
success codes 36
symbols
C/C++ language restrictions 88
T
tables
fetching rows 139
names
resolving unqualified 164
resolving unqualified names 164
terms and conditions
publications 214
threads
multiple
embedded SQL applications 20, 23
recommendations 22
UNIX applications 22
TIME data types
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
time stamps
precompiler-generated 152
TIMESTAMP data type
C/C++ 39
COBOL 46
FORTRAN 49
REXX 51
truncation
host variables 59
indicator variables 59
typed parameter markers 131
U
UNIX
C applications
building 177
Micro Focus COBOL applications 192
USAGE clause in COBOL types 46
utilities
binding 160
ddcspkgn 160
utility APIs
include files
C/C++ applications 29
COBOL applications 31
FORTRAN applications 34
V
VARBINARY data type
embedded SQL applications 86
host variables 85
VARCHAR data type
C/C++
details 39
FOR BIT DATA substitute 88
COBOL 46
conversion to C/C++ 39
FORTRAN 49
REXX 51
VARGRAPHIC data type
C/C++ conversion 39
COBOL 46
W
warnings
truncation 59
wchar_t data type
C/C++ embedded SQL applications 75
WCHARTYPE precompiler option
data types available with NOCONVERT and CONVERT
options 39
details 75
Windows
C/C++ applications
building 178
compiler options 176
link options 176
COBOL applications
building 193
compiler options 185
link options 185
Micro Focus COBOL applications
building 195
compiler options 186
link options 186
X
XML
C/C++ applications
executing XQuery expressions 117
COBOL applications 117
declarations
embedded SQL applications 57
XMLQUERY function 19
XQuery expressions 19, 117
XML data retrieval
C applications 62
COBOL applications 62
XML data type
host variables in embedded SQL applications 57
identifying in SQLDA 59
XML encoding
overview 57
XQuery statements
declaring host variables in embedded SQL applications 57
Printed in USA
SC27-4550-00