ECLIPSE Installation Guide
Version 2017.2
Copyright Notice
Copyright © 2017 Schlumberger. All rights reserved.
This work contains the confidential and proprietary trade secrets of Schlumberger and
may not be copied or stored in an information retrieval system, transferred, used,
distributed, translated or retransmitted in any form or by any means, electronic or
mechanical, in whole or in part, without the express written permission of the copyright
owner.
Schlumberger, the Schlumberger logotype, and other words or symbols used to identify
the products and services described herein are either trademarks, trade names or
service marks of Schlumberger and its licensors, or are the property of their respective
owners. These marks may not be copied, imitated or used, in whole or in part, without
the express prior written permission of Schlumberger. In addition, covers, page headers,
custom graphics, icons, and other design elements may be service marks, trademarks,
and/or trade dress of Schlumberger, and may not be copied, imitated, or used, in whole
or in part, without the express prior written permission of Schlumberger. Other company,
product, and service names are the properties of their respective owners.
Security Notice
The software described herein is configured to operate with at least the minimum
specifications set out by Schlumberger. You are advised that such minimum
specifications are merely recommendations and not intended to be limiting to
configurations that may be used to operate the software. Similarly, you are advised that
the software should be operated in a secure environment whether such software is
operated across a network, on a single system and/or on a plurality of systems. It is up
to you to configure and maintain your networks and/or system(s) in a secure manner. If
you have further questions about recommended specifications or security, contact
your local Schlumberger representative.
Table of Contents
1 Introduction
   Simulator computing configurations
   Supported platforms and software
1 Introduction
Using the ECLIPSE or Intersect simulation software, you can create models of your reservoir or field to
match historical data and to evaluate different production and recovery scenarios. You can use the
simulators with Schlumberger programs such as Petrel or the ECLIPSE pre- and post-processor applications
(EPP) to model, assemble, visualize and analyze your data.
The simplest simulator configuration is a standalone installation on a Windows computer, but if your
organization has a computing cluster, you can use it to run your simulation jobs. This chapter provides an
overview of the standalone and cluster installations and lists the supported platforms and software.
In a standalone installation, all of the applications and data required for running simulation jobs are on the
workstation or computer. The licensing server is usually on the network to provide an access point for all
users, but it can also be installed on individual Windows computers.
You can also use network connections to computing clusters to process the simulation jobs. In this case,
you might choose to run smaller simulation jobs on your computer and larger, more complex jobs on the
cluster. These clusters use either Microsoft Windows or Linux operating systems but share many features.
A head node or control node assigns simulation requests (jobs) to the available compute nodes. To do this,
the cluster requires its own software and data, which are kept on local high-speed storage. This includes
scheduling software which controls how and when the simulation requests are processed. The cluster also
has the simulation software installed and uses it to process the submitted requests. The cluster and your
computer need to share the simulation data so that the cluster can run the simulation and return the results
for analysis.
With cluster configurations, there is a great deal of flexibility in where the software components are
installed. For example, the license server could be installed on the head node, or be a separate machine in
the network. There could be a single installation of the simulation software, with the cluster starting up
multiple copies (instances) when needed, or there could be several installations across the cluster. This
allows IT professionals to choose a setup that fits with the available cluster equipment.
Installation on a cluster requires a number of steps to configure it, install the software and to set up
connections from each Windows computer to the cluster. The installation is normally carried out by IT
professionals familiar with network environments and Windows and Linux computing clusters.
Supported platforms and software

Windows PC or workstation

Operating systems:
• Windows 7 Professional 64-bit, Service Pack 1
• Windows 10 Pro and Enterprise, 64-bit
No 32-bit systems are supported.

Required software:
• Microsoft .NET 3.5 and Microsoft Installer 3.0
• Intel MPI 5.1.3. This is used for parallel processing. It is included on the installation DVD and
you install it separately from the simulation software.
• Carnac 1.2b145 to support the EPP program FloGrid. This is installed automatically.

Other requirements:
• Minimum screen resolution of 1024x768
Linux cluster

For simulator job submission, Linux systems must be based on the Intel Xeon x86_64 chips.
Operating systems:
• Red Hat Enterprise Linux Server 6, Update 6 (x86_64), or later
• Red Hat Enterprise Linux Server 7, Update 2 (x86_64), or later
• CentOS Linux Server 7, Update 2 (x86_64), or later

Job schedulers/queueing systems:
• LSF 9.1
• PBS Pro 12.x
• UGE 8.1.3
These are not supplied on the simulator installation DVD; you must purchase and install them separately.
The installation DVD contains an integration kit for each of these schedulers, which provides an
interface between the license server and the scheduler. Each integration kit enables jobs to be
started only when licenses are available and queues jobs when no licenses are available.
Message Passing Interfaces (MPI):
• Intel MPI 5.1.3
• Platform MPI 9.1.2
Both of these are installed automatically during the Linux installation if you select the Tools option. After
installation you select and configure your chosen MPI.
Licensing platforms
Note: For the 2015 or later versions of the simulators you must install or upgrade your license server to use
Schlumberger Licensing 2015 or later (2017). This is available on the installation DVD for Windows-based
licensing and is in the Tools directory for Linux licensing.
2 Install the simulator on a PC
In a typical standalone installation, you use the simulator software with other applications to model,
visualize and characterize data, for example with Petrel which provides full field and reservoir management
tools for processing data from seismic survey through to simulation. ECLIPSE is also supplied with a
number of legacy pre- and post-processor (EPP) applications to help with the construction of simulation
datasets. The simulator and associated applications are accessed through the Simulation Launcher. The
software components are shown in the following figure.
The Simulation Launcher and Petrel, if it is installed, send requests to a management and control program
called ECLRUN which starts the simulator or EPP application. Intel MPI supports parallel processing and
so ECLRUN interacts with it when running simulation jobs which use the multiple processors/parallel
processing available on personal computers or workstations.
• Activate .NET framework 3.5 in the Control Panel if you have not done so already for another
application. Simulation Launcher requires .NET framework 3.5 to function. Windows 10, Windows
Server 2012 R2, and Windows Server 2016 all require that the user manually activates .NET
framework 3.5. The Microsoft support website contains guidance on activating .NET framework 3.5
for these operating systems.
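If you prefer the command line, the feature can also be enabled with the standard Windows DISM tool rather than through the Control Panel; this is generic Windows administration, not part of the simulator installer, and requires an elevated prompt:

```
REM Enable .NET Framework 3.5 (run from an elevated command prompt).
REM Windows may download the feature payload from Windows Update.
DISM /Online /Enable-Feature /FeatureName:NetFx3 /All
```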
If you need any advice on these items, consult your IT department.
The installation DVD contains all of the software that you need: the simulator software, associated
programs and utilities, the Simulation Launcher, Intel MPI and licensing. You use normal Windows-based
installation procedures to install the software and then contact your IT administrator to connect to the
license server. Petrel is available separately.
There are several stages to the installation:
1. Install the simulator software and reboot your PC.
2. Install Intel MPI (message passing interface) to allow parallel processing and use with other
applications.
3. Complete the installation by setting up the license connection, performing some installation checks,
and running some test simulations.
You may need help from your IT department to complete the installation, for example with setting up
your connection to the license server.
Note: To complete the installation, you need a connection to the Schlumberger licensing server.
Note: It is recommended that you test the installation with one of the example datasets before running your
own models.
Alternatively, if you already have an AFI file then you can skip the first two steps and go directly to
running INTERSECT.
If you have an ECLIPSE dataset that you want to simulate in INTERSECT, use the Migrator option in the
same way as above to convert the ECLIPSE dataset to AFI format.
For example, a RUN.BAT batch file to run a set of jobs might contain:

cd \data\dir1
eclrun -v 2012.2 eclipse DATASET1
cd \data\dir2
eclrun -v 2013.1 eclipse DATASET2
cd \data\dir3
eclrun -v 2014.1 e300 DATASET3
cd \data\dir4
eclrun -v 2014.2 e300 DATASET4
eclrun eclipse DATASET5
To start the runs, double-click on the RUN.BAT in Windows Explorer. The example shows how you can
specify particular release versions, or leave out the version number to run the latest version. You can use
similar commands but with other simulators, such as frontsim, ix, and ecl2ix.
2. Type eclrun ecl2ix <basename> where <basename> is the root name of the input dataset,
for example EX2.
The command generates the AFI file but does not run or simulate it.
3. After the migration is complete, run the simulation by typing eclrun ix <basename>.
If the input ECLIPSE dataset is altered, the AFI file has to be regenerated from the dataset first before you
can simulate it using INTERSECT.
An AFI file set consists of <basename>.afi and its associated files <basename>_ECL2IX.gsg,
<basename>_ECL2IX_IX.ixf, <basename>_ECL2IX_FM.ixf, <basename>_fm_edits.ixf and
<basename>_reservoir_edits.ixf.
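As a quick sanity check after migration, a small POSIX shell helper can report any of these files that are missing; the function name and the EX2 basename are illustrative, not part of the simulator tooling:

```shell
# check_afi_set: list any files missing from the AFI set for a basename.
check_afi_set() {
    base="$1"
    for f in "$base.afi" "${base}_ECL2IX.gsg" "${base}_ECL2IX_IX.ixf" \
             "${base}_ECL2IX_FM.ixf" "${base}_fm_edits.ixf" \
             "${base}_reservoir_edits.ixf"
    do
        [ -f "$f" ] || echo "missing: $f"
    done
}

# Example: report missing files for the EX2 dataset in the current directory.
check_afi_set EX2
```

Run it in the directory that holds the dataset; no output means the set is complete.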
3 Install the simulator on Microsoft HPC
This section describes how to set up and configure a Microsoft Windows Server 2012 HPC cluster, to allow
simulation jobs to be run on it. This involves some cluster-side and PC-side configuration. An example
network is shown in the following figure.
The installed software on the personal computer or workstation is the same as for the standalone Windows
installation and you can use Petrel or EPP to run the simulation job on your PC. However, ECLRUN is
configured with extra connection options to access the Windows HPC network to run simulation jobs. The
simulator is typically installed in a file system on Network Attached Storage (NAS) but can be installed on
the cluster head node. The simulator must be accessible from the PC and the cluster. To run a simulation
job, ECLRUN sends job requests to the HPC Client Pack. The simulation dataset is on a shared disk visible
to both the workstation and the Windows HPC cluster. The cluster head node allocates the job to the
compute nodes which process the job in parallel. On completion, the simulation results are sent to the
shared disk and the HPC Client Pack sends ECLRUN information about the job's status.
On the cluster, the Infiniband network provides high throughput communications links between the cluster
nodes. The private network contains data for the operation of the cluster.
The workstation, cluster, license server and shared disks are on a network, typically Ethernet, which they
use for communication and to share data.
Note: It is important that you download the correct firmware, as the wrong firmware can damage your card. The
firmware needs to be installed before the IB drivers. The current generation of IB drivers (2.2.1) are
available from http://www.openfabrics.org for the generic drivers. There may be specialist drivers for
certain modern IB cards that you obtain directly from the Host Channel Adapter (HCA) manufacturers.
You need to check on this with your card manufacturer.
Once the drivers are installed check that you can see the IB HCAs in the device manager. If they are not in
the device manager, the HPC installation will not be able to use them.
• Only install the simulation software. If you are installing ECLIPSE, clear the EPP components option
when prompted for the components to install.
• Do not install Intel MPI as Windows HPC has its own MPI.
Note: For the 2015 or later versions of the simulators you must install or upgrade your license server to use
Schlumberger Licensing 2015 or later (2017). This is available on the installation DVD for Windows-based
licensing and is in the Tools directory for Linux licensing.
For a new installation, you must set up and configure a license server to provide your users with access to
the software (if you are updating an existing installation the license server will already be configured).
Typically, you use Network Attached Storage (NAS) for the license server and choose a location that is
accessible by the cluster and your users' PCs.
The licensing software is on the installation DVD (insert the DVD, select Install Products, then
Schlumberger Licensing and follow the installation instructions).
After installing the software, open the "Schlumberger Licensing User Guide" which is located in the
directory where you installed the licensing software (for example C:\Program Files
(x86)\Schlumberger\Schlumberger Licensing\<version>). Use the guide to configure
the licensing server and to provide your users with access to it.
Note: The use of Universal Naming Convention (UNC) paths is vital to the current implementation of HPC
as the simulator is installed on a file system that must be visible to the head node and the compute nodes.
All of the required DLLs are installed not only in the lib directory, as in previous releases, but also in the bin
directory. This means you do not have to install the simulator software on the compute nodes.
The first line sets the location of the license server for the simulator and the optional License Aware
Activation Filter. The second line identifies the location of the simulator installation which must be shared
and accessible by all users and machines that submit simulations to the cluster. The remaining
configuration parameters provide information for the Message Passing Interface (MPI).
You may have to set other environment variables depending on how you provide access to your cluster:
• If your users are going to submit Multiple Realization (MR) jobs to the cluster, you need to set the MR
scheduling environment variable:
cluscfg setenvs ECL_MR_SCHEDULING=true
You must also add the following to the eclrun.config file on each client machine:
<LSFLicenses>True</LSFLicenses>
• The DefaultHoldDuration parameter specifies the time between license checks. By default, if
no licenses are available, the scheduler waits 900 seconds before checking again. A setting of 90 is
recommended:
cluscfg setparams DefaultHoldDuration=90
• To check the environment variable settings, use:
cluscfg listenvs
The cluscfg setenvs command sets the variable for all users and all compute nodes. If these
variables are not set, ECLRUN returns an error. These settings require HPC Service Pack 3 to work.
• If you want to alter any of the system parameters you can find those that are available using the
command:
mpiexec -help, mpiexec -help2 or mpiexec -help3
Be careful when changing settings as they can cause unforeseen problems.
• Set the license server by IP address rather than name, although both methods should work.
• Wherever simulation data is stored it must be visible to the compute nodes for both read and write
operations. The ECLRUN program checks this before it submits a job.
For HPC the eclrun command takes the form:
eclrun -s localhost -q <cluster name> -u <user on cluster> eclipse <datafile>
You can also use the debug option when you are testing the connection:
eclrun -s localhost -q <cluster name> -u <user on cluster> --debug=both eclipse <datafile>
Note: The server for submitting to HPC is always localhost. This option can only be used on systems
where the HPC client pack has been installed.
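Filled in with concrete values, the command takes the shape shown below; the helper function and the cluster, user and dataset names are illustrative placeholders, not values from this guide:

```shell
# Assemble the HPC submission command line from placeholder values.
hpc_submit_cmd() {
    printf 'eclrun -s localhost -q %s -u %s eclipse %s\n' "$1" "$2" "$3"
}

hpc_submit_cmd HPCCLUSTER jsmith MODEL1
# prints: eclrun -s localhost -q HPCCLUSTER -u jsmith eclipse MODEL1
```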
Configure nodes
To set up the nodes to run the simulation software, you need to install the Microsoft Visual C++
redistributable packages supplied on the DVD in:
3rdparty\PC\resource\vcredist_vs9
3rdparty\PC\resource\vcredist_vs10
3rdparty\PC\resource\vcredist_vs11
If you do not set the CCP_SCHEDULER environment variable, you see the error:
D:\>cluscfg listenvs
No connection could be made because the target machine actively refused it
127.0.0.1:5800
If you do not set the scheduler, ECLRUN and remote job submission do not work.
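ECLRUN locates the cluster through the CCP_SCHEDULER environment variable. One common way to set it persistently on a client PC is the standard Windows setx command; the head node name below is a placeholder, not a value from this guide:

```
REM Point the HPC client tools at the cluster head node (placeholder name).
setx CCP_SCHEDULER headnode01
```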
4 Install the simulator on Linux
This section describes how to set up and configure a Linux cluster to run simulation jobs. This involves
some cluster-side and PC-side configuration. An example network is shown in the following figure.
The installed software on the personal computer or workstation is the same as for the standalone Windows
installation and you can use Petrel or EPP to run the simulation job. However, ECLRUN is configured to
access the Linux network to run simulation jobs. ECLRUN is installed on both the workstation and in the
Linux cluster. It works with various scheduling systems to manage job submission and completion: LSF,
PBS and UGE. ECLRUN and a scheduler are installed on each cluster node. The cluster head node
allocates the job to the compute nodes, which process the simulation job in parallel. On completion, the
simulation results are sent to the shared disk and information about the job status is sent to the workstation.
The simulator is typically installed in a file system on Network Attached Storage (NAS) but can be
installed on the cluster head node for example.
On the cluster, the Infiniband network provides high throughput communications links between the cluster
nodes. The private network contains cluster data.
The workstation, cluster, license server and shared disks use the Ethernet network to communicate and
share data.
For the shared disk configuration to work, the shared disk must be accessible by both the Linux and
Windows file systems. That is, the disk must support both the Windows and Linux file naming
conventions. The user id and permission mapping must work on both Windows and Linux. Where this is
not possible, there are separate data disks with the PC/workstation transferring data to a Windows disk and
the Linux cluster using a Linux disk.
Note: You can also install the drivers from the Red Hat installation disk. Depending on the type of hardware
in your system, you may need to download and install a later driver from the OpenFabrics web site.
InfiniBand drivers
Follow the installation instructions from the InfiniBand supplier. You may find that IP over IB is working
properly but the simulator is not. If you are using Red Hat 5 or later, you can correct this by editing the
file /etc/security/limits.conf and adding the following two lines:
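Assuming the conventional limits.conf syntax and the same locked-memory value as the ulimit call in the LSF startup script shown later in this chapter, the two lines take this form (the value and the use of a wildcard domain are assumptions to adjust for your systems):

```
* soft memlock 102400000000
* hard memlock 102400000000
```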
The value represents the number of kilobytes that may be locked by a process. The limits.conf file itself
contains further documentation.
Note: The steps described above allow any user on the system to lock up to the amount of memory set in
the configuration file.
Infinipath drivers
To obtain drivers and instructions, visit this site: http://www.qlogic.com.
#!/bin/sh
# $Id: startup.svr4,v 1.10 2008/04/08 06:13:09 xltang Exp $
#
# Start and stop LSF daemons, System V / OSF version
# Make sure we're running a shell that understands functions
#
# The following is for the Linux chkconfig utility
# chkconfig: 35 99 01
# description: Load Sharing Facility
#
# The following is for the Linux insserv utility
### BEGIN INIT INFO
# Provides: lsf
# Required-Start: $remote_fs
# Required-Stop: $remote_fs
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: Start LSF daemons
### END INIT INFO
#line added so simulator can run over Infiniband when under LSF control.
ulimit -l 102400000000
check_env () {
if [ x$LSF_ENVDIR = x ]; then
# Using default path of lsf.conf...
LSF_CONF=/lsftop/lsf/conf/lsf.conf
Note: LSF by default saves temporary files in a hidden directory called .lsbatch which is inside the
user's home directory. This can cause problems if the home directories do not have much free space, or
quotas are enabled. This can be avoided by adding the following setting in lsf.conf:
LSB_STDOUT_DIRECT=Y
You must restart LSF for this change to take effect.
Note: We highly recommend that you do not use any LSF extensions for LSF HPC. These can cause
problems if the versions of LSF HPC and Intel MPI are not compatible.
1. In the LSF installation directory (/lsf in this example), edit the file
/lsf/9.0/linux2.6-glibc2.3-x86_64/bin/intelmpi_wrapper
2. Search for the line MPI_TOPDIR="........"
3. Replace the line with the correct location of the Intel MPI. If the default settings have been used, this
line should look like this:
MPI_TOPDIR="/ecl/tools/linux_x86_64/intel/mpi/5.0/"
4. Find all occurrences of "$MPI_TOPDIR/bin" and replace them with "$MPI_TOPDIR/bin64"
5. If you wish to use SSH to start the MPI daemons:
a. Search for the line
MPDBOOT_CMD="$MPI_TOPDIR/bin64/mpdboot"
b. Change it to
MPDBOOT_CMD="$MPI_TOPDIR/bin64/mpdboot -r /usr/bin/ssh"
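Steps 2 to 4 can be scripted; this sed sketch writes the patched wrapper to stdout under the assumptions that the default locations above apply and that every occurrence of $MPI_TOPDIR/bin is followed by a slash (verify the result before replacing the original file):

```shell
# patch_wrapper: point MPI_TOPDIR at the Intel MPI install and switch
# $MPI_TOPDIR/bin/... references to $MPI_TOPDIR/bin64/...; output to stdout.
patch_wrapper() {
    sed -e 's|^MPI_TOPDIR=.*|MPI_TOPDIR="/ecl/tools/linux_x86_64/intel/mpi/5.0/"|' \
        -e 's|\$MPI_TOPDIR/bin/|$MPI_TOPDIR/bin64/|g' "$1"
}

# Example:
# patch_wrapper /lsf/9.0/linux2.6-glibc2.3-x86_64/bin/intelmpi_wrapper > wrapper.new
```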
Note: It is assumed in this chapter that the software is installed in the default location /ecl.
After installing the simulator software, you must install the licensing software if you do not already have a
licensing server in your network. You then configure the MPI settings to support the simulators.
If this is not successful, it means that the DVD has not mounted automatically and you have to mount
it manually. To do this:
a. Unmount the installation disk using the command: umount -k /dev/cdrom
b. Mount the installation disk using the command mount /dev/cdrom /tmp/a. Note that /tmp/a
must be an existing directory.
c. To start the installation, type /media/cdrom/ECLIPSE/UNIX/install/cdinst.csh (if you
mounted the disk manually, replace /media/cdrom with your mount point, for example /tmp/a).
You are prompted to continue the installation.
3. Press enter to select the default Linux installation.
This displays a list of options and prompts you for the installation directory.
1) E300
2) ECLIPSE
3) Expand
4) Extract
5) FrontSim
6) ParallelE300
7) ParallelECLIPSE
8) Rename
9) Tools
4. For the choice, type a and for the installation directory, type /ecl. Then press enter.
Note: Always install ECLIPSE to the same location as any previous versions. There is an internal
directory structure to prevent any old versions being overwritten.
ECLIPSE for different architectures should also be installed into the same location.
6. At the prompt Do you want to install the macros [default n]? type y and press
the return key.
Always answer y (yes) to the prompt unless you are installing an earlier version, in which case, do not
install the macros. The current macros directory, if any, will be backed up with a suffix consisting of
the date and time of install, for example macros.backup.13:24:52.230209.
7. If you want to install the ECLIPSE documentation and data, rerun the cdinst.csh script and select
the option when prompted.
Note: For the 2015 or later versions of the simulators you must install or upgrade your license server to use
Schlumberger Licensing 2015 or later (2017). This is available on the installation DVD for Windows-based
licensing and is in the Tools directory for Linux licensing.
For a new installation, you must set up and configure a license server to provide your users with access to
the software (if you are updating an existing installation the license server will already be configured).
Typically, you use Network Attached Storage (NAS) for the license server and choose a location that is
accessible by the cluster and your users' PCs.
The licensing software is installed along with the Linux tools. You can get the software and the "Schlumberger
Licensing User Guide" from the tools directory /ecl/tools/linux_x86_64/flexlm1112/.
CAUTION: If you support other applications which rely on ssh to work, you may need to re-evaluate these
instructions as they could affect the other applications.
comp001:/home/user>ssh comp002
Last login: Thu Apr 9 10:37:03 2009 from comp001.geoquest
comp002:/home/user>
If the connection is successful, follow the instructions in Update the user configuration information to set
up each user's configuration files. If not, follow the instructions in the flowchart to generate the
authentication keys.
comp002:/home/user>cd .ssh
comp002:/home/user/.ssh>cat config
StrictHostKeyChecking=no
2. Change the file permissions for the config file to 400 if necessary.
3. Change the file permissions for the authorized_keys to 600 if necessary.
4. Set the permissions of the $HOME and $HOME/.ssh directories so that other users do not have write
access to them.
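If the test login fails and you need to create the authentication keys (the flowchart mentioned above is not reproduced here), the following generic OpenSSH recipe is one way to do it; run it as the submitting user, whose home directory should be shared across the cluster nodes:

```shell
# Create a passphrase-less key pair and authorize it for the same account.
# SSHDIR defaults to the user's real ~/.ssh; override it to experiment safely.
SSHDIR="${SSHDIR:-$HOME/.ssh}"
mkdir -p "$SSHDIR"
[ -f "$SSHDIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSHDIR/id_rsa"
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 700 "$SSHDIR"
chmod 600 "$SSHDIR/authorized_keys"
```

After this, repeat the ssh test between nodes and then set the file permissions as described in the steps above.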
or in the local work directory. For the <mpi_version>, see Linux cluster. Add the following tuning
parameters to the file:
The Intel MPI detects and uses the correct interface. Setting an environment variable forces a particular
interconnect to be chosen. The interconnect settings are shown in the following table.
Note: The latest Intel MPI HYDRA launch mechanism is a 32-bit application that starts the simulator
processes. This means 32-bit libraries must be installed for this to work. On Red Hat 6, the required rpm is
called glibc-2.12-1.47.el6.i686.rpm. Follow the on-screen instructions and prompts to
complete the installation.
Example
To set the MPI to use the DAPL device, add the following line to the user's .cshrc file.
setenv MPI_IC_ORDER UDAPL
or
setenv MPI_IC_ORDER udapl:tcp
Using upper case for UDAPL sets the instruction to use that device or exit. Using lower case sets the
instruction to try the device and, if it doesn’t work, try another. Shared memory is always implemented if
possible. The standard ECLIPSE scripts will add a few extra lines into the top of the output files and these
will display the interconnects used.
The following extract from the FILENAME.OUT file uses TCP to communicate between nodes.
host | 0 1
=====|===========
0 : SHM TCP
1 : TCP SHM
Alternatively you can override it by using command-line arguments to the eclrun script. The following
arguments force Platform MPI to use the IBV setting, regardless of the environment variable setting:
eclrun -c plmpi --mpi-args="-IBV" eclipse DATASETNAME
#!/bin/tcsh
#Edit the line below to reflect the location of the license server.
#LM_LICENSE_FILE was used by releases before 2007.2
setenv SLBSLS_LICENSE_FILE <port>@<license server>
ECLRUN passwords
The "ECLRUN User Guide" contains a list of restricted characters which cannot be used in passwords as
ECLRUN cannot pass them to the cluster.
Note: Only ECLRUN is supported. The old scripts starting with the character @ are not able to schedule
licenses.
...
REPORT Keyword migration summary:
+-------------------------+
| Fully migrated | 49 |
| Partially migrated | 5 |
| Not migrated | 6 |
| Ignored | 8 |
| Total | 68 |
+-------------------------+
INFO EX2.afi is created.
ix_tools checked in
INFO Successfully checked in license feature 'ix_tools'
INFO Migration complete. Elapsed time 0hrs:0mins:10secs (10secs)
REPORT Message summary
+---------------------------+
| Message Level | Frequency |
+---------------------------+
| INFO | 16 |
| WARNING | 17 |
| ERROR | 0 |
+---------------------------+
INFO Run finished normally.
A Legacy macros and scripts
Note: We recommend you use ECLRUN to run most of the simulators and other programs. Please refer to
the "ECLRUN User Guide" for more details. The legacy macros are still released but may have limited
capabilities compared to ECLRUN.
For Linux systems, the software can be run using scripts supplied in the macros directory in place of
ECLRUN. The following scripts are available to run the License Manager:
The default location for the following files for use with the License Manager is the ecl/macros
directory:
The following macros are available to run the principal simulator software programs:
The following macros are available to run the utility programs (those marked * are no longer released):
• Batch operations:
@fill Corner point geometry generation
• File manipulation:
@expand Merge INCLUDE files into master file
• Viewing:
@manuals Launches Manuals bookshelf
Note: Not all programs are supplied with the 2013.2 release; some older applications have been retired. The
scripts have been left so they can be used to run older versions.
• The following environment variables can be set to control LSF functionality within the macros:
ECL_LSF set true if LSF is installed
ECL_LSF_BYDEFAULT If this and ECL_LSF are set to TRUE then jobs are submitted using LSF
without the need for the -lsf flag. For compatibility with ECLIPSE
Office the -nolsf flag has been added as a new option to stop nested
bsub commands.
ECL_LSFHPC This switches on the LSF HPC extensions, but it is recommended that you
do not use this extension and remove it from all settings. However, if you
must use LSF HPC make sure that you follow the configuration steps in
"Configuring changes if you require LSF HPC".
ECL_LSF_LICCHECK set if LSF is to control FLEXlm licensing and the LSF-SIS integration kit
has been installed (contact Platform Computing www.platform.com for
details).
ECL_MR_SCHEDULING set this variable to correctly queue multiple realization jobs. Follow the
instructions in the LSF-SIS integration kit.
For parallel ECLIPSE, namely @mpieclipse and @mpie300, the following special options are
available:
-ibmmpi Option for processing parallel jobs using IBM POE MPI
If -procs is not set, the macro will attempt to read the PARALLEL keyword in the dataset to get the number
of processors.
Examples
@eclipse -version 2017.1 -local -lsf TESTCASE
@mpie300 -lsf -lsfqueue normal -version 2017.1 -local TESTCASE
B Installation DVD content
The software is distributed on a disk which contains:
• The simulators for all x86 64-bit architectures.
• Intel MPI Runtime installation.
• Documentation in PDF (Portable Document Format).
• License server and dongle drivers.
• Utility resources which contain:
• Benchmark and datasets (ECLIPSE)
• PC resources
• Scheduler integration kit.