ECLRUN User Guide
Version 2024.2.1
Copyright notice
This work contains the confidential and proprietary trade secrets of SLB and may not be
copied or stored in an information retrieval system, transferred, used, distributed,
translated or retransmitted in any form or by any means, electronic or mechanical, in
whole or in part, without the express written permission of the copyright owner.
SLB, Schlumberger, the SLB logotype, and other words or symbols used to identify the
products and services described herein are either trademarks, trade names or service
marks of SLB and its licensors, or are the property of their respective owners. These
marks may not be copied, imitated or used, in whole or in part, without the express prior
written permission of SLB. In addition, covers, page headers, custom graphics, icons,
and other design elements may be service marks, trademarks, and/or trade dress of
SLB, and may not be copied, imitated, or used, in whole or in part, without the express
prior written permission of SLB. Other company, product, and service names are the
properties of their respective owners.
An asterisk (*) is used throughout this document to designate other marks of SLB.
Security notice
The software described herein is configured to operate with at least the minimum
specifications set out by SLB. You are advised that such minimum specifications are
merely recommendations and not intended to be limiting to configurations that may be
used to operate the software. Similarly, you are advised that the software should be
operated in a secure environment whether such software is operated across a network,
on a single system and/or on a plurality of systems. It is up to you to configure and
maintain your networks and/or system(s) in a secure manner. If you have further
questions regarding recommended specifications or security,
please feel free to contact your local SLB representative.
Table of Contents

1 Introduction
2 Installation instructions
3 Configuration
   Configuration variables
      LsfLicenses - Equivalent to WLMLICENSEAWARE
      Variables in configuration files
   Configuration files
      Linux platforms
      eclrun.config
      SimLaunchConfig.xml
         Example
      User configuration file
      Network configuration file
   Runtime Environment configuration
      Password-less connection
         Using PuTTY
         Using OpenSSH
      File transfer and shared drives
         Example
      Workload manager configuration
         Windows HPC
4 Usage
   General usage and syntax
      Options
         General
      Stand-alone commands
      Specifying parallelism and processor count
   Local execution of a simulator
      Local queuing system
         Set processor affinity
      Remote Linux with no queuing system
         Example
   Submission to a Workload manager
      Example submission commands
      Intersect
         Intersect command line options
         Intersect Migrator
         Intersect Auto-generation of CSV
         Intersect coupled simulations
      HDF converter
      VISAGE
         Running VISAGE simulations
         Single simulation
         Batched simulation
   Workload manager specific usage
      IBM/Platform LSF
      Microsoft Windows HPC
      SchedMD Slurm
      DELFI On-Demand Reservoir Simulation
      Altair PBS Pro
      Torque
   MPI library specific usage
      MPI command line options
      Intel MPI
      OpenMPI
      Microsoft MPI
      IBM/Platform MPI
5 Troubleshooting
   Diagnostics and log files
   Common problems
      Command not found
      Communication or connection errors
      Your simulation runs leave files behind on the server
      ECLRUN fails to fetch results from the server
      Local queue errors
         EclNJobs variable is set to X meaning that only X job(s) can be run (Y requested). Check DATA file and ECLRUN configuration files.*
         Neither the TEMP nor a TMP environmental variable was detected on this machine
         Number of job(s) running equals 5 and is limited to 5. Cannot run additional 4 job(s) at the moment. Next check in 20 second(s)...
         EclNJobs variable is set to 5 meaning that only 5 job(s) can be run (6 requested). Check DATA file and ECLRUN configuration files.
         Local license check skipped: 'lmutil.exe' is not recognized as an internal or external command, operable program or batch file
         Licenses unavailable (requested: eclipse = 1 : networks = 1 : lgr = 1 / missing: ['networks', 'lgr']). Next check in 20 second(s)...
         Unable to checkout licenses.
      Submission to Microsoft HPC
         The Specified directory is not a network path: D:\
         The total path length must not exceed 260 characters
      Other
         Error Unable to find executable for program(s): '['ConverterSummaryData2DataBase']'...'.
         Unable to locate Intel MPI installation.
1 Introduction
ECLRUN is a utility for running the Schlumberger simulators and their pre- and post-processors.
Simulations can be run on local and remote servers and workload management systems. The tool is used
either directly at the command prompt or indirectly by GUI-based applications such as Petrel and
Simulation Launcher.
ECLRUN is an interface between an end user wishing to run a simulation and different types of remote and
local operating and queuing systems. The main functions of the ECLRUN software are:
2 Installation instructions
The Simulation Runtime package provides the following utilities:
ECLRUN supports both Windows and Linux operating systems and is always installed when installing the Simulation Runtime. For Windows operating systems, ECLRUN is packaged and installed via a standard Windows MSI installer, in which the Simulation Launcher can optionally be installed alongside ECLRUN. For Linux operating systems, ECLRUN is packaged as an RPM and can be unpacked and installed using standard commands.
In multi-machine situations, ECLRUN must be installed on both the machine submitting the workflow (the client) and the remote machine that initializes the workflow (the server). The remote server can be either a standalone workstation or a cluster head node, depending on the workflow.
Note: The server must have a version of ECLRUN equal to or newer than the client.
To install ECLRUN and (optionally) the Simulation Launcher, follow these steps:
4. If you do not wish to install the Simulation Launcher, you can disable this component in the Custom Setup section by clicking the grey drive icon next to it and selecting "Entire feature will be unavailable".
5. Proceed through the installation steps until the installation is complete.
The installer automatically appends the installation directory to your PATH environment variable so that eclrun can be executed from the command line without the need to manually locate it every time.
Note: A new command line window must be started and used after installation to ensure the environment
changes have propagated.
The Simulation Runtime package includes plink.exe and pscp.exe. These are part of the PuTTY implementation of the SSH protocol.
For upgrades of the Simulation Runtime, simply run through the same steps as an installation and select
"upgrade" when it appears.
For downgrades of the Simulation Runtime, uninstall the existing version and install the version you desire.
If you wish to remove the Simulation Runtime, access the "Add or Remove Programs" section in the Control Panel, select "Schlumberger Simulation Runtime" and select "Uninstall".
The working directory must be a shared location with write access that the HPC cluster can also access.
For the Simulation Runtime package installation, please follow the same steps outlined in the Windows
Installation - Standard.
Note: In version 2018.1 and later of this software, it is required that all compute nodes have access to the
same Simulation Runtime software. This is typically achieved by installing the Simulator and Simulation
Runtime software in a directory on the head node of the cluster and then sharing this path via SMB.
Windows HPC requires further configuration after installation. Refer to the Windows HPC section of the Configuration chapter.
Linux installation
Note: The Simulation Runtime is only officially supported for Red Hat and CentOS systems.
The Linux installation procedure will differ depending upon which distribution of Linux you are using.
Installations using native RPM commands will be more straightforward.
The RPM can be installed in a specific directory using the --prefix option when unpacking and installing. However, if no prefix is given, the default install location is /opt/ecl/.
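For illustration, a minimal sketch of both forms is shown below. The package file name here is an assumption; use the actual RPM file name shipped with your release.

rpm -ivh simulation_rte-2024.2.rpm                      # install to the default location (/opt/ecl/)
rpm -ivh --prefix /apps/ecl simulation_rte-2024.2.rpm   # install to a custom location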
The prefix is unlikely to persist if you install or upgrade the Simulation Runtime via the yum package
manager.
Note: Administrator privileges may be required to install the Simulation Runtime. This is dependent on the permissions of the folder you wish to install to.
Note: Remember to refresh your shell environment so that the environment changes take effect.
For upgrades of the installation of the Simulation runtime on Linux, simply run the same commands as
before as the previous installation will be overwritten.
To downgrade the Simulation Runtime, uninstall the existing runtime and install the older version.
To remove the Simulation Runtime (using the native RPM method) run rpm -e simulation_rte.
For rpm2cpio uninstalls, manually remove the files associated with the Simulation Runtime package.
Test deployment
The aim of a test deployment is to be able to test a specific version of ECLRUN as a part of the simulator
workflow without interfering with the production installation. The test deployment has to be performed by
system administrators.
contents (that is eclrun) and this can potentially stop certain versions of the simulators from being launched
properly. No production environment can allow such a risk.
ECLRUN has a version matching mechanism to ensure that the latest version of ECLRUN is always the
one on the server. This means that the server installation may be upgraded to a newer version without
upgrading clients (Windows machines running Petrel). Client installations may be upgraded in due course.
We strongly recommend that the latest version of ECLRUN is installed on the server.
Installing new software is always a risk, especially in production environments and therefore has to be
conducted with caution. A test deployment enables you to anticipate any potential issues without causing
disturbance to the production deployment.
Prerequisites
There must be a Linux server that is either a fresh installation (with no simulator installed) or a production
server (with at least one version of the simulator installed and being used by clients). In the case of a fresh
installation, verify that the SSH daemon is installed and enabled.
It is assumed that LSF (the Load Sharing Facility developed by Platform Computing) is already installed on
the Linux cluster and that it is configured to work with the simulators.
• If the test user's shell is csh then edit the .cshrc file as follows:
unsetenv ECLPATH
setenv PATH /testecl/macros\:$PATH
if ( -r /testecl/macros/@eclrunsetup.csh ) then
source /testecl/macros/@eclrunsetup.csh
endif
• If the test user's shell is bash then edit the bashrc file as follows:
unset ECLPATH
export PATH=/testecl/macros\:$PATH
if [ -r /testecl/macros/@eclrunsetup.sh ]; then
. /testecl/macros/@eclrunsetup.sh
fi
• If the test user has a local copy of eclrun in their default home directory (for example /home/testuser) then delete it.
Note: Remember to unset the ECLPATH environmental variable just for the test user as it may be modified
by global config files (/etc/csh.cshrc or /etc/bashrc) for all the users.
Testing
• Log out and then log back in as the test user to open a new session that processes the start-up files (that is, su - testuser). The test user, when associated with the test installation, will not be able to use the production installation.
• Run the eclrun command and verify its build and release date to make sure that the correct test ECLRUN is being used.
• Go to a folder that contains the test dataset.
• Run a simulation locally using the command eclrun SIM TESTCASE.DATA and see if it works
(where SIM is your simulator of choice).
• Run a remote simulation from Windows using the same test user id.
Note: The remote version of ECLRUN must always be the latest one.
The above installation procedure describes how to set up just one user account to test the test deployment.
However, the setting can be made for more users if required for testing.
The test user's start up file (.cshrc or bashrc) must be edited to point to the production installation.
Known limitations
The following known limitations exist at present:
• Spaces and non-portable characters are not supported in the names of referenced files. The use of
characters other than alphanumeric characters, hyphens, and underscores should be avoided when
transmitting data between platforms as there is no guarantee that both platforms are capable of
interpreting these characters.
• ECLRUN does not support passwords containing any of the characters listed below:
◦'
◦"
◦\
◦&
◦|
◦(
◦)
◦<
◦>
◦^
Note: The use of passwords on the command line is retained for compatibility only. The use of a
passphrase-less authentication mechanism is strongly encouraged.
• The ~ notation used in Linux, for example using ~/path to represent /home/user/path, is not
allowed in the INCLUDE or SLAVE paths in the simulator dataset.
• Included files in simulations must not use absolute paths if they are to be executed anywhere other than
the local machine. These absolute paths will never be valid on the remote compute infrastructure and will
result in simulation failure. Paths relative to the principal simulation case file should always be used.
3 Configuration
ECLRUN supports a hierarchical structure of configuration files. A configuration file is not required by
ECLRUN to operate. If a configuration file does not exist then ECLRUN will use default values. However,
if a configuration file exists, the file must be in the correct XML format with valid variable names and
values.
The eclrun.config file is installed automatically during the installation on Linux platforms. The
eclrun.config file is not installed on Windows platforms by default. The default location is
ecl\macros. The eclrun.config file is a well-formed XML file and is formatted consistently across all the supported configuration files in the hierarchy.
The SimLaunchConfig.xml file is created when you first open, and then exit, the Simulation Launcher on Windows platforms. The file can then be accessed either in the %USERPROFILE%\Application Data\Schlumberger\Simulation Launcher directory or through the Simulation Launcher application.
As a result of the hierarchical configuration, a user can easily override the global settings by creating a
configuration file in their home directory.
Configuration variables will generally only affect the behaviour of the instance of ECLRUN running
locally. When submitting to a remote compute infrastructure, ECLRUN will execute again and evaluate the
configuration files local to that instance.
Note: The names of configuration variable tags can have any capitalization but opening and closing tags
must match for the XML configuration to be well-formed and parsable.
Configuration variables
The meaning of all supported configuration file variables is presented below.
Options specified as 'boolean' must use the text strings True and False.
• ALLOWABSPATHSINDATA - accepts boolean values only. Defaults to False. If set to True, simulation
cases with non-portable absolute paths will be transferred to the remote compute infrastructure, even if
the simulation may fail.
• AffinityOnWindows - accepts boolean values only. Defaults to True. If set to False it disables the
processor affinity setting when running serial simulations locally on Windows Vista or higher.
• ChkLicOnWin - accepts boolean values only. Applies to Windows only. If False (default) then license
availability is not checked prior to running a local Windows simulation. If True, ECLRUN will attempt to
determine if all the licenses requested for the simulation case are available. If any license is missing, the
job will be held in the queue to wait until it becomes available.
Note: This option requires that the FlexLM license tool lmutil.exe is present in the current path. This
tool is provided in the Schlumberger Licensing package provided with most simulator installations.
• LsfLicenses - Equivalent to WLMLICENSEAWARE
Note: The name of this variable is retained for compatibility reasons. This option enables license-aware
scheduling in all workload-managed runtime environments, such as LSF, SLURM, Windows HPC, and
PBS.
Note: This option is set on the submitting (client) instance of ECLRUN but will affect the nature of the job
submitted to the remote workload manager.
• LsfLicDuration - accepts integer values. Default 1. When licence aware scheduling is in use on the
LSF workload manager, a duration parameter is provided to the LSF bsub command. This option allows
the value to be modified. Only integer values are accepted. These are passed through to bsub as "minute"
values.
• PrivateSshKey - accepts a path to the user's private SSH key generated by PuTTY (http://
www.PuTTY.org). See the PuTTY documentation for details on how to generate SSH keys.
Note: Passphrase-less authentication using the pageant.exe agent from the PuTTY distribution is also
supported and is preferred. This mechanism is retained for compatibility.
• PUTTYSSHBINARY - Override PuTTY SSH (plink) binary. Must be absolute path to binary file.
Replacement binary must support PuTTY-like arguments.
• PUTTYSCPBINARY - Override PuTTY SCP (pscp) binary. Must be absolute path to binary file.
Replacement binary must support PuTTY-like arguments.
• PUTTYSSHADDARGS - Provide additional command line arguments to PuTTY SSH (plink) command
when invoked by ECLRUN.
• PUTTYSCPADDARGS - Provide additional command line arguments to PuTTY SCP (pscp) command
when invoked by ECLRUN.
• OPENSSHBINARY - Override OpenSSH ssh binary. Must be absolute path to binary file. Replacement
binary must support OpenSSH-like arguments.
• OPENSCPBINARY - Override OpenSSH scp binary. Must be absolute path to binary file. Replacement
binary must support OpenSSH-like arguments.
• OPENSSHADDARGS - Provide additional command line arguments to OpenSSH ssh command when
invoked by ECLRUN.
• OPENSCPADDARGS - Provide additional command line arguments to OpenSSH scp command when
invoked by ECLRUN.
• Queue - default workload manager queue.
• Queuesystem - default queuesystem for jobs.
• Remoteuser - default username on submission server.
• RTEZIPCOMPRESS - Accepts True or False. Defaults to True. Selectively disables compression of results when retrieving them from a remote HPC cluster. Results will still be packaged in a zip file for transfer.
• Server - default submission server for jobs.
• Shell - system shell which should be used by the bsub command when submitting to LSF. The Shell
variable defaults to the remote user's default shell.
• SimHistory - maximum number of rows in the history file. If the number is exceeded then the earliest
finished and failed simulations will be removed from the file to compensate for the excess. ECLRUN
does not throw an error if the number of finished and failed simulations to be removed is not sufficient
(the mechanism does not remove jobs that are: not started, running, or queued). This affects
the length of lists of simulations in the Simulation Launcher, which always reflects the current state of
the file. Defaults to 100 rows.
• SingleSimLicensing - accepts boolean values only. Intersect simulator-specific. Applies to a client
performing job submission only. If False (default), the simulator will use the old licensing scheme. If
True, the simulator will use Intersect Enabler licensing scheme (sim_ul feature), unless overridden by a
command line --use-sim-ul=no option.
• TempRunDir - remote work directory. A temporary directory for each run will be created in this
location if file transfer is performed. This replaces the --directory parameter in earlier versions of
ECLRUN.
• USEOPENSSH - Use OpenSSH binaries on Windows operating systems instead of PuTTY. Defaults to False. Has no effect on Linux operating systems.
Note: Requires a functional version of OpenSSH ssh.exe and scp.exe to be locatable in the user's path. This
option is intended to be used with the OpenSSH implementation provided by Windows 10. Other Windows
implementations of OpenSSH may function but are not explicitly supported.
◦ False - GPU resource requests from the workload manager are disabled. GPU enabled simulations
will be submitted but may fail as the required resources will not be requested.
Note: This option must be set in an ECLRUN configuration file local to the workload manager where it must take effect, e.g. the global eclrun.config on the headnode of the cluster.
Note: GPU resources must be available and configured in the workload manager for this option to function
correctly.
If the same variable is defined in at least two different configuration files, the one which is higher in the
hierarchy will take precedence over the lower priority one.
Configuration files
The table below shows the hierarchical structure of configuration files and their relative priority.
To maintain compatibility with the Simulation Launcher, ECLRUN not only supports its own configuration
file (eclrun.config) but also respects the ECLRUN configuration section of the Simulation Launcher's
XML configuration file.
Note: If unexpected behaviour is observed on Windows platforms, check the Simulation Launcher-
generated configuration for any conflicts with the desired configuration.
A variable defined high in the hierarchy overrides the corresponding variable defined lower in the hierarchy. The mechanism works on a variable-by-variable basis.
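As an illustrative sketch of this precedence (the queue and server names are placeholders), consider a system-wide configuration and a user configuration that both define the Queue variable:

<!-- system-wide ecl/macros/eclrun.config -->
<Configuration>
  <Eclrun>
    <Server>cluster-head</Server>
    <Queue>normal</Queue>
  </Eclrun>
</Configuration>

<!-- user's $HOME/eclrun.config (higher in the hierarchy) -->
<Configuration>
  <Eclrun>
    <Queue>long</Queue>
  </Eclrun>
</Configuration>

With both files in place, ECLRUN uses Queue=long from the user file while Server=cluster-head is still taken from the system file.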
Linux platforms
On Linux platforms, only eclrun.config is evaluated. The system configuration is located under ecl/
macros i.e. co-located with the eclrun binary. Users may also define a private configuration file located
in their home directory, also named eclrun.config.
eclrun.config
This is a well-formed XML file. The document-level element is named <Configuration/> with one
child element named <Eclrun/>. The <Eclrun/> element is a parent of all supported variables.
The Windows installation does not create an eclrun.config file. It can be created by the user or
administrator as required.
ECLRUN always searches for eclrun.config in the same location as the eclrun executable. By default,
it is in the macros/ directory.
<Configuration>
<Eclrun>
<FileMode>both</FileMode>
<WindowDirs>%HOME%</WindowDirs>
<TempRunDir>%HOME%</TempRunDir>
<LsfLicenses>False</LsfLicenses>
<EfPort>8080</EfPort>
<EfPath>enginframe</EfPath>
<EfSsl>enginframe</EfSsl>
</Eclrun>
</Configuration>
This configuration can be used as a starting point for Windows or Linux configuration files.
SimLaunchConfig.xml
This is a well-formed XML file. The document-level element is named <Configuration/> with
potentially many child elements, among which one is named <Eclrun/>. ECLRUN variables are defined
as their sub-elements (as it is done in eclrun.config).
This file is created when the Simulation Launcher is started for the first time. Variables defined here take
precedence over corresponding variables in eclrun.config. If the Simulation Launcher is not going to
be used then the ECLRUN configuration should be controlled by eclrun.config.
There are two more XML tags that are used by ECLRUN:
• NetworkFile
• SimLauncherQueues
The NetworkFile element is used for pointing to a network configuration file. If this element is set and
valid, the network file settings override the local ones accordingly.
Example
The example below shows the way the network_config.xml file can be linked from the
SimLaunchConfig.xml file:
<NetworkFile>\\server\dir\network_config.xml</NetworkFile>
The SimLauncherQueues XML element is used to define a list of queue names for remote servers.
Contrary to the other settings, if different queue lists are defined in different configuration files in the
hierarchy, they merge rather than override themselves. The list of queues is used by ECLRUN in an
interactive mode to provide the user with predefined queue names when submitting to a remote machine
with a queue system. This does not stop the user from choosing a queue from outside the list.
The example below shows a list of two queues defined on two different servers:
<SimLauncherQueues>
<Queue name="Xeon" options="" remotequeues="Xeon"
server="remote_server"/>
<Queue name="Opteron" options="" remotequeues="Opteron"
server="remote_server2"/>
</SimLauncherQueues>
See the Simulation Launcher User Guide for more information about the SimLaunchConfig.xml file.
The custom configuration file has to be placed in the user's home directory. This location is identified by the Linux $HOME environmental variable. The name of the custom configuration file is fixed to eclrun.config.
The network file follows the same XML format as the SimLaunchConfig.xml file and has the same
hierarchical dependencies.
The location of the network file is not fixed (unlike the SimLaunchConfig.xml file) and therefore it must be linked to from one of the fixed configuration files. The only configuration file that can point to the network file is SimLaunchConfig.xml. This file is located in one of the directories described earlier.
The file may define additional queues as well as redefining selected configuration variables.
If the network file is linked from the current user configuration file, it does not override the 'all users'
configuration. If the file is linked from the 'all users' configuration file, it overrides all of the configuration
files.
To link to a network file, use the <NetworkFile/> XML tag. The network file name is not restricted but
it must be a valid file name on Windows and any correct Windows path may be used as a network path
(including UNC paths). The example below shows a fragment of the SimLaunchConfig.xml file that
points to a file called network.xml in the network path '\\server\teams\configuration'.
<NetworkFile>\\server\teams\configuration\network.xml</NetworkFile>
See the Simulation Launcher User Guide for additional information on the network configuration file.
Password-less connection
ECLRUN supports an SSH password-less connection when submitting remotely from Windows to Linux
based workload managers. Submission to Windows HPC does not require a password as the authentication
mechanisms that are integral to Windows are used.
Before the password-less connection can be used it must be properly set up. In particular, it requires that a
pair of private/public SSH keys are generated in PuTTY and then set up on local and remote machines.
When the keys are set up correctly, the user is not prompted for a password when submitting remotely (or
when killing/checking a job).
Note: A password may be provided on the command line for Windows clients connecting via SSH to Linux
compute infrastructure. This option is retained for compatibility only and is strongly discouraged.
Configuration of password-less authentication using SSH keys and Pageant/ssh-agent is both more secure
and more flexible.
Using PuTTY
The procedure for setting up PuTTY keys is:
Note: The private key file can be protected with a password for security.
• The public key must be appended to the $HOME/.ssh/authorized_keys file on the remote Linux
machine for the user.
• Start the Pageant PuTTY client utility. This agent will act as a keystore for PuTTY SSH keys and will
present any private keys to which you have granted access to any PuTTY SSH client that requests them.
• In the system tray, right click the Pageant icon and select 'Add key'. Select the private key file generated above. Enter the key's password if prompted.
Note: Pageant will remain running while you are logged in but must be restarted and given access to your
SSH keys after logging out.
• Pageant will now respond to requests by ECLRUN via plink.exe or pscp.exe to authenticate
without prompting for a password.
Using OpenSSH
• Use the ssh-keygen utility to generate a keypair. This should be protected with a passphrase. The public
component of the keypair must be appended to the $HOME/.ssh/authorized_keys file on the
remote Linux machine for the user.
• The default identity located in $HOME/.ssh will be used for authentication to remote systems
• The utilities ssh-agent and ssh-add can be used to cache OpenSSH identity files protected by a
passphrase to allow non-interactive use with ECLRUN.
Note: For OpenSSH functionality on Windows, the USEOPENSSH configuration option must be set to True.
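The following is a minimal sketch of this setup from a bash shell. The key file name, user name, and server name are placeholders, and ssh-copy-id may not be available on every platform (the public key can instead be appended to authorized_keys manually):

ssh-keygen -t rsa -f ~/.ssh/id_rsa     # generate a keypair protected by a passphrase
ssh-copy-id user@remoteserver          # append the public key to ~/.ssh/authorized_keys on the remote machine
eval $(ssh-agent)                      # start the agent for this session
ssh-add ~/.ssh/id_rsa                  # cache the passphrase-protected identity for non-interactive use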
Configuration based keyfile
Older versions of ECLRUN were limited to using a key file. This method is provided for compatibility but has some limitations:
Configuration is as follows:
• Place the private key file in a default or custom location on the local machine.
• By default, ECLRUN searches for a private key file named PuTTY.ppk that has been copied (or
moved) to the user's profile directory (which is given by the %USERPROFILE% environment variable,
for example, c:\Users\jsmith).
• A custom location for the private key file is any location on the local machine that is pointed to by a
configuration file variable in one of ECLRUN's configuration files. The PPK file can be renamed
according to these settings. The full path to the file (including its name and file extension) has to be
communicated to ECLRUN using the PrivateSshKey configuration file variable. Invoking ECLRUN
with the -p option disables the password-less connection.
File transfer and shared drives
A data storage area containing simulation cases is presented both to Windows clients, via the SMB protocol, and to Linux-based machines, typically via NFS. This means that if a file is created on a shared
drive on Windows it is available in the corresponding path on Linux.
If no shared drive configuration is available, for example when the client machine and compute
infrastructure do not share the same storage device, ECLRUN will transfer the input and output data to and
from the remote compute infrastructure. The correct simulation input case data and results data are
determined by ECLRUN and are packaged and transferred during submission and subsequent check
operations. In this case, simulation input files will be transferred over the network to a uniquely named
temporary directory created by default under the folder pointed to by the TempRunDir configuration file
variable in eclrun.config on the remote submission machine. This defaults to the user home directory.
This file transfer mechanism is the most flexible but is unlikely to be the most efficient when use of a
shared storage device is possible.
Shared drives are controlled by configuration variables in the eclrun.config file local to the
submission server. This is typically the head node of the chosen simulation cluster. The shared drives
mechanism cannot be controlled by any of the local configuration files.
The remote global configuration file can be modified by an administrative user. However, by creating a
remote user configuration file, a user can set where to mount shared drives.
Note: For ECLRUN shared mode configuration to work correctly, all data directories on the cluster should
be statically mounted (not auto-mounted). Auto mounted directories interfere with the working of
ECLRUN's shared directory workflow and can cause data to be transferred unnecessarily.
ECLRUN creates a unique file in a datasets directory when shared mode is configured. ECLRUN then
searches the directories defined in <windowsdirs> for this file. If these directories are auto-mounted, the
search result is negative and ECLRUN falls back (if configured) to transfer mode where data is transferred
to the area configured in the <temprundir> variable of eclrun.config. This not only slows down
submission and increases network traffic, but it can also cause problems with certain Petrel workflows.
In addition, auto-mounting directories can cause Eclipse working directories to be unmounted during a
simulation, causing subsequent Eclipse disk writes to fail and therefore the simulation to fail or abort.
If a shared drive cannot be detected, the remote cluster can request that the local machine fall back to a file
transfer workflow. This may not be desirable and can be overridden in the configuration.
There are three configuration variables involved in making the decision about whether to use the shared
drive mechanism or to transfer files: FileMode, WindowsDirs, and TempRunDir.
• FileMode decides whether shared drives are to be used as the only mechanism of handling files
(SHARE), whether the only mode allowed is file transfer (TRANSFER), or if both are allowed (BOTH). If
FileMode=BOTH, ECLRUN will try to detect shared drives first and, if that fails, it will transfer files.
If FileMode=SHARE and no file share is detected, ECLRUN raises an error message and terminates.
• WindowsDirs defines a comma-separated list of possible Linux mount points (paths) to be checked
when searching for shared drives.
• TempRunDir represents a location to transfer files to when either a shared drive is not detected
(FileMode=BOTH) or when only file transfer is allowed (FileMode=TRANSFER).
This diagram shows the concept of transferring files when a shared drive is not detected.
This diagram shows the concept of shared drives with the Windows to Linux mapping shown below the
data repository.
Example
This example demonstrates how to set up a shared drive and shows the contents of the remote
eclrun.config file:
<Configuration><Eclrun>
<FileMode>BOTH</FileMode>
<WindowsDirs>/linux/shared</WindowsDirs>
<TempRunDir>%HOME%</TempRunDir>
</Eclrun></Configuration>
The Linux path /linux/shared should then be mounted on Windows as a drive letter, as in the example below.
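For example, assuming the same area is exported over SMB as \\fileserver\shared (the server and share names are placeholders), the mapping could be made with:

net use X: \\fileserver\shared

The essential point is that X:\ on Windows and /linux/shared on Linux must refer to the same physical storage.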
When you try to submit a DATA file which is physically located on the X:\ drive (or in any subdirectory), it
will be automatically detected at submission time and the DATA file will not be transferred. You do not need
to specify any additional ECLRUN command line options because the shared drives detecting mechanism
will work transparently. If the DATA file is not on the X:\ drive and therefore the shared drive is not
detected, the file will be transferred to the user's home directory on the remote machine.
Windows HPC
For Windows HPC Cluster head nodes, three cluster-wide environment variables need to be set:
• SLBSLS_LICENSE_FILE
• ECLPATH
• F_UFMTENDIAN.
Note: ECLPATH should be set to a location that represents the simulator and simulation runtime location. It
is convenient to use an SMB share that is accessible to all compute nodes in the cluster.
Note: Windows HPC requires that the simulation case data is shared between clients and the compute
infrastructure. There is no file transfer mechanism currently available for Windows HPC.
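A hedged sketch of setting these cluster-wide variables with the cluscfg setenvs command is shown below. The license server, share path, and endian value are placeholders and must be adapted to your installation (F_UFMTENDIAN is typically set as directed by the simulator installation guide):

cluscfg setenvs SLBSLS_LICENSE_FILE=27000@licserver ECLPATH=\\headnode\ecl F_UFMTENDIAN=big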
A solution to this problem is to interrogate the available licenses in a given environment and only schedule
the simulation for execution when the necessary licenses are available.
ECLRUN can interrogate a given simulation case and determine the required licenses for that simulation. It
then passes this information to the workload manager in use and the information can be used to make
scheduling decisions. ECLRUN only provides licenses as an abstracted numerical resource to the workload
manager. It is required that the workload manager understands those resource requests and makes the
appropriate scheduling decision.
License resource information is provided when the configuration option LsfLicenses is set to True.
This option must be configured at the location where jobs are to be scheduled. In other words, if jobs are to
be run on a Windows workstation, the setting is configured locally. If jobs are to be submitted to a remote
cluster, the option must be configured on the head node of that cluster (as specified by the -s command line option to ECLRUN).
When executing locally, with license scheduling enabled, ECLRUN will act as a workload manager and
prevent the execution of a simulation until licenses are available. ECLRUN interrogates the
SLBSLS_LICENSE_FILE and LM_LICENSE_FILE environment variables to locate licenses. The
SLBSLS_LICENSE_FILE environment variable takes precedence over LM_LICENSE_FILE. If
SLBSLS_LICENSE_FILE does not exist, LM_LICENSE_FILE is expected to point to a valid license.
The license reservation mechanism is currently supported when submitting to LSF, PBS Pro, and Microsoft
HPC. These workload managers must be configured to interpret the ECLRUN-provided license resources
and to interrogate the license service in use.
Information on how to integrate Schlumberger licenses as resources into these workload managers is provided in the Simulation Runtime distribution in the 3rdparty/integration directory. These files
are provided as examples with configuration guidance. Integration will require the compute resource
administrator to adapt them as appropriate to their environment.
ECLRUN supports license-aware scheduling for multiple realization (MR) workflows. MR is a concept of
running multiple simulation cases that represent different realizations of the same model using just one set
of simulation license features.
To enable MR license scheduling in ECLRUN for PBS and LSF, set the ECL_MR_SCHEDULING
configuration option on the submission host. This is typically the head node of the cluster. It is often more
convenient to set this as an environment variable that is set upon login for each user. For example, use a file
in /etc/profile.d/slbvars.csh containing:
setenv ECL_MR_SCHEDULING 1
For Windows HPC clusters, set ECL_MR_SCHEDULING to True using the cluscfg setenvs command (see the Windows HPC installation instructions). When the variable is set and ECLRUN detects a valid MR simulation case, ECLRUN will pass the shared simulation license request to the workload manager's scheduler. See the detailed instructions in the 3rdparty/integration folder on the distribution media for each scheduler. License scheduling for MR jobs will only work on homogeneous clusters where the submission host has the same architecture as the execution hosts.
For Slurm clusters, licence resources must be defined as sacct resources. A functional sacct database
backend must be present. In addition, FlexLM licence resources must be defined with a server and this
server must be passed as part of the licence resource request. In order for ECLRUN to do this, the
configuration option RTESLURMFLEXSERVER must be set to the same value as the server in the sacct
database providing SLB licences.
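As a sketch only (the resource name, licence count, and server name are placeholders and must match your licence file), the licence resource might be registered in the sacct database with:

sacctmgr add resource name=eclipse server=flexserver servertype=flexlm count=10 type=license

and the matching server name supplied to ECLRUN in eclrun.config on the submission host:

<Configuration>
  <Eclrun>
    <RTESLURMFLEXSERVER>flexserver</RTESLURMFLEXSERVER>
  </Eclrun>
</Configuration>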
Simulator-specific notes
• Eclipse: Only some Eclipse license requirements can be inferred from the data deck. Some license
features must be explicitly mentioned in a LICENSES stanza (for example, gaslift). For more
information, consult the Eclipse Keyword reference documentation.
• Eclipse: parallel and multiple_realisation license features are interchangeable after version
2011.1. This means that the parallel license feature may act as multiple_realisation and
vice versa. This equivalence can only be approximated by most license scheduling mechanisms and may
not be 100% reliable in all circumstances.
• Intersect: ECLRUN by default does not use the Intersect Enabler licensing scheme (sim_ul feature). This licensing scheme can be enabled by setting the SingleSimLicensing configuration file variable to True (defaults to False). The scheme can also be enabled or disabled from the command line using the --use-sim-ul option.
• Eclipse E300: Version 2020.2 and later of this simulator can use alternative licences for the thermal and compositional functionality, checking out the alternative sim_thermal and sim_compositional features when requested. ECLRUN will detect that this has been enabled in the LICENSES stanza of the E300 case and modify the requested licences appropriately.
• Intersect: Version 2020.2 and later of this simulator can use alternative licences for the ix_hot and ix_compositional functionality, checking out the alternative sim_thermal and sim_compositional features when requested. ECLRUN must be passed the command line argument --use-sim-ulopt.
• SchedMD Slurm: A minimum major version of 23 is required for Slurm operation. A functional sacct database backend must be present and accessible by all nodes in the cluster. MR licence scheduling, when enabled, uses the wckey capability in sacct for job tracking and correct licence assignment. Users must be able to generate arbitrary wckey values for this to function.
4 Usage
where
• PROGRAM is
◦ the name of a supported simulator, for example: e300, eclipse, frontsim, ix, visage
◦ check to check on the status of the previously submitted simulation
◦ kill to abort a queued or running job
• FILE is the name of the input data file. This is required both when submitting a simulation and when using the check and kill operations. It may include a leading path and a trailing extension.
Each runtime environment will include variations of these options. Some are specific to particular workload
managers and environments.
When a simulation is requested to be executed via ECLRUN, state information about the simulation is associated with the given input file. This allows subsequent calls to ECLRUN, when provided with the location of the data file, to query the status of the simulation or to terminate it without needing to provide the complete command line.
Note: If a simulation is requested to be executed for the same data file before the previous simulation has
completed, the state information will be overwritten and the previous simulation will be ignored. This may
result in simulation data being overwritten. Ensure that the previous simulation has completed by using the
eclrun check FILE command.
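For example, a typical sequence for a single case (the case name is a placeholder):

eclrun eclipse CASE.DATA    # submit an Eclipse simulation of CASE.DATA
eclrun check CASE.DATA      # query the status of that simulation
eclrun kill CASE.DATA       # abort the queued or running job if required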
Options
General
-h, --help
Show the help message and exit.
--debug=DEBUG
Display debugging messages. The choices are:
• file - Increases the ECLRUN logging level to debug for logfiles only.
• both - Increases the ECLRUN logging level to debug for logfiles and the console.
-v VERSION, --version=VERSION
Version of the application to run; default=latest.
ECLRUN will attempt to detect the most recent version of the chosen application when this option
is not specified.
If this option is specified ECLRUN will only attempt to execute the version specified.
--np=NUM_PROCESSORS
Specify the number of processors. Defaults to 1. If specified (greater than or equal to 1) this setting
overrides the default mechanism of determining the number of processors needed.
-c COMM, --comm=COMM
Method of inter-process communication for parallel runs. Normally this is not necessary as the
system will determine the best available. Valid values are:
--exe-args=EXE-ARGS
Pass a string of arguments (EXE-ARGS) directly without any parsing to the executable as command
line parameters. Applies to both local and remote submissions. Use quotation marks to indicate the
beginning and the end of the string. Implemented for all supported programs.
--license-check=OVERRIDE_LICENSE_CONFIG_SETTINGS
This option overrides the configuration file variable ChkLicOnWin, which enables or disables the license check for local Windows simulations. If the license check is disabled, the LICENSES keyword in the DATA file is also ignored. The available options are:
--summary-conversion=yes|no
This option controls the HDF summary conversion step. It accepts yes and no values. The default
is yes, which means that the conversion step is performed. To disable the conversion, type no.
Stand-alone commands
ECLRUN provides a number of options that are intended to stand alone without a simulator. These are
generally used to interrogate aspects of the runtime environment.
--report-versions PROGRAM
Print out the list of all installed versions of the specified PROGRAM. The list of versions contains
entries separated by single spaces, and sorted in reverse alphanumeric order. If the program is not
installed, the report prints out 'none'.
--report-queues [--queuesystem=QSYSTEM] [-s SERVER] [-u user]
Display a space-separated list of workload manager queues in alphabetical order (in addition, use the -s and -u remote submission options). This option may not be meaningful for all workload managers.
--install-path [-v VERSION] PROGRAM
Print out the full installation path to the specified program (for example eclipse, frontsim). The path does not contain the name of the executable itself. To query for a specific version, use the '-v' option.
If the program is not found, the report prints out 'not installed'.
--zip-input PROGRAM FILE
Create a zip file containing the input file as well as all include files. The resulting file's basename
will be the same as the input file's, for example CASE.DATA -> CASE.zip. This works with all
supported simulators on all supported operating systems.
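Representative invocations of these commands are shown below; the version, server, and user values are placeholders and QSYSTEM stands for the queue system name in use:

eclrun --report-versions eclipse
eclrun --report-queues --queuesystem=QSYSTEM -s cluster-head -u jsmith
eclrun --install-path -v 2024.1 eclipse
eclrun --zip-input eclipse CASE.DATA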
For Eclipse simulators (eclipse, e300, frontsim), the parallelism of the job is determined by parsing the
simulation case. To run Eclipse in parallel mode, you must add the PARALLEL keyword to the RUNSPEC
section of the DATA file. Specifying the --np parameter to Eclipse is typically ignored at execution and the
parallelism determined from the deck is used.
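A minimal sketch of the relevant fragment of the RUNSPEC section is shown below; the domain count of 4 is a placeholder, and the full PARALLEL syntax is described in the Eclipse keyword reference:

RUNSPEC
PARALLEL
 4 'DISTRIBUTED' /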
Some simulators may also support the --threads argument. ECLRUN will disregard this option for
simulators that do not support threading.
Note: The total number of processors requested is the product of the --np and --threads arguments.
This can result in very large numbers of processors being requested at runtime.
Note: While not strictly required, it is good practice to run the following command after a simulation
completes:
This will remove any state information left by the previous simulations.
ECLRUN uses the eclrun.mutex file in the Windows TEMP directory as a semaphore for
synchronization purposes (the MUTEX file contains a PID of the instance of ECLRUN, which is using it at
present). If this file exists it means that an instance of ECLRUN is checking the availability of resources
and the other instances of ECLRUN must wait until it is finished. When the check on the availability of
resources is complete, ECLRUN removes this file and another one may start the same process. Only one
instance of ECLRUN may check the availability of resources at any one time.
An application that runs on a multicore or multi-CPU Windows workstation is not permanently associated
with any of the workstation's cores. The operating system may change the core that the application operates
on, and this causes a reallocation of resources every time the change happens. This in turn degrades the application's performance.
Running a serial simulation is more efficient if the simulation process is tied to the same core throughout the entire run. Processor affinity is not set by default, so ECLRUN sets processor affinity for each serial simulation.
The feature for setting processor affinity for local serial simulations on Windows is enabled by default in
ECLRUN. The AffinityOnWindows configuration file variable has to be set to False in order to disable
this capability.
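For example, a minimal eclrun.config fragment disabling the feature would be:

<Configuration>
  <Eclrun>
    <AffinityOnWindows>False</AffinityOnWindows>
  </Eclrun>
</Configuration>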
Affinity is supported for Eclipse 100, Eclipse 300, FrontSim and VISAGE.
To enable the affinity setting for a FrontSim simulation, the THREADFS keyword must be set to 1 in the simulation DATA file, that is, the simulation is run in serial mode. Setting processor affinity is not
supported for versions of FrontSim older than 2011.1. See the FrontSim User Guide for more details on this
keyword.
Affinity is automatically set by the MPI library when running parallel simulations if configured correctly
(see the Eclipse Installation Guide).
Note: Omitting the -q option on a command line containing a -s parameter may unexpectedly result in
this mode of execution. The simulation will execute on the specified server, typically the head node of a
cluster.
Example
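A representative command is shown below (the server name and user name are placeholders); note that -s is given without a -q queue option, so the simulation runs directly on the specified server:

eclrun eclipse CASE.DATA -s linuxserver -u jsmith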
When the simulation is finished, all result files are automatically copied to the user's local working
directory. You do not need to use the check command.
Unlike local simulation execution, workload managers execute the simulation asynchronously at a time
determined by resource availability. ECLRUN will submit a correctly described job to the workload
manager, specifying the appropriate resources and command line to execute. The progress of the simulation
must be determined by running the ECLRUN check operation. When the simulation terminates, this
check operation will return and indicate the state of the job. If necessary, results data will be transferred
during these check operations.
Submission of an Intersect simulation to a PBS cluster (pbscluster) running 32-way parallel as user
clusteruser:
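The exact command is environment specific, but a hedged sketch consistent with the options described in this chapter would be as follows; the queue name is a placeholder, and the workload manager selection may be implied by the server's configuration:

eclrun ix CASE.afi --np=32 -s pbscluster -u clusteruser -q workq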
Intersect
The program names for the Intersect suite of simulators are as follows:
• Intersect - ix
• Intersect Migrator - ecl2ix
• ENS - ens
• Intersect Auto-generation of CSV - ixf2csv
Intersect uses the .afi file extension. The Intersect Migrator uses the .DATA file extension. ECLRUN
will attempt to auto-detect the correct case file if the suffix is omitted.
--use-sim-ul=yes|no
This option makes the simulator use the Intersect Enabler licensing scheme (sim_ul feature) and
overrides the configuration file variable SingleSimLicensing. If neither is set, the value
defaults to no. Consult simulator documentation for details of the Intersect Enabler licensing
scheme. The available options are:
--use-sim-ulopt
This option makes the simulator use the alternative licence features for ix_hot and
ix_compositional, sim_thermal and sim_compositional. This option will also
ensure ECLRUN requests the correct licence resources from the chosen workload manager if
required.
-t NUM_THREADS, --threads=NUM_THREADS
Specify the number of threads to spawn per MPI rank (heavyweight process). --np specifies the number of heavyweight processes, each of which spawns the number of threads given by the --threads option. This option is not available in all simulators. Consult the relevant simulator documentation for information on threading support and usage.
--gpu, --gpus
Request a GPU-enabled simulation. Not all simulation cases are compatible with GPU use, nor are all combinations of process counts and threads. If a simulation is determined to be incompatible with GPU use, submission will be blocked. Consult the relevant simulator documentation for information on GPU support and usage.
--auto-hybrid
Request Intersect to determine an appropriate number of processes and threads for the submitted case. This option should be used in combination with the --np option to indicate the total number of CPU cores the simulation should use. If threading is recommended for the simulation case in question, a combination of processes and threads will be determined and used for submission to the chosen workload manager (see the illustrative commands after the note below).
Note: ECLRUN will use the Intersect case analyser to determine if the requested simulation is compatible
with the use of a GPU. If a simulation case or combination of options are not compatible with GPU enabled
execution, submission of that simulation will be blocked. Certain simulator options may not be compatible
with GPU accelerated execution. Consult the simulator documentation for further details.
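The following sketches illustrate how these options might be combined on a submission command line. The program name ix is taken from the list above; the case name CASE is a placeholder, and the option combinations shown are examples rather than recommendations:
  eclrun --np 8 --threads=4 ix CASE
  eclrun --auto-hybrid --np 16 ix CASE
  eclrun --gpu ix CASE
  eclrun --use-sim-ul=yes ix CASE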
Intersect Migrator
Migrator is a conversion tool. It is used to convert Eclipse DATA files to the Intersect AFI format.
Running Migrator is usually the first step in an Intersect workflow: you first convert an existing Eclipse DATA file to an Intersect AFI file. Once the AFI file has been generated, it can then be edited.
The example below shows how to convert an existing DATA file named CASE.DATA to an Intersect AFI
file (CASE.AFI):
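An illustrative form of this conversion command (a sketch only; consult the ECLRUN command line help for the full option list):
  eclrun ecl2ix CASE.DATA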
Upon successful completion of the conversion, you will see a newly generated CASE.AFI file. In the above example, the input file was specified with the DATA extension, which is not mandatory - if the file extension is omitted, ECLRUN will still search for the CASE.DATA file.
The following example shows how to submit a Migrator task on a remote queue (lsfQueue) on a remote
server (called server):
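For illustration only; the queue and server names are taken from the text above, while the -u option and the user name remoteuser are assumptions:
  eclrun -s server -q lsfQueue -u remoteuser ecl2ix CASE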
ixf2csv
This program name provides access to the Auto-generation of CSV capability of supported Intersect
versions. The input data set required is an Intersect case. The principal file is the Intersect .afi file.
Example execution:
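An illustrative invocation (a sketch only; the case name is a placeholder):
  eclrun ixf2csv case.afi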
Upon successful completion of the process, a file named case_IXF2CSV.PRT containing logging information will be produced.
Note: The ixf2csv "simulator" does not produce any console output.
Coupled Intersect simulations can be launched directly from ECLRUN as a single simulation using a
command line similar to the following:
The configuration file specified is a whitespace separated table of the following form:
FILE NP ENGINE
REMOTE_RESERVOIR.afi 1 fm
COUPLED_RES1.afi 2 ix
COUPLED_RES2.afi 4 ix
Additionally, Intersect can be coupled with PIPESIM using the ENGINE name ixes. This requires at least one installation of PIPESIM in the target runtime environment. It also requires a minimum version of Intersect to be installed. For more information consult the Intersect documentation.
Note: If PIPESIM is to be executed on a Windows system, a directory in ECLPATH must be created that matches the installed version of PIPESIM. For example, if PIPESIM 2021.2 is installed, ensure the directory c:
\ecl\2021.2 exists. This directory may already exist and contain another simulator installation. If the directory must be created, it need not contain any files. The presence of the directory is sufficient to ensure correct operation.
HDF converter
HDF (Hierarchical Data Format) is a database file format. HDF is open source and the software that
operates on it is distributed at no cost.
Petrel uses the HDF format in a type of simulation summary file, with the .h5 file extension. The summary
file is a product of the conversion of the standard simulation summary file to the .h5 file format. This
format enables Petrel to process summary vectors faster.
By default, the conversion step is always initiated by ECLRUN after each simulation by calling a conversion tool named ConvertSummaryData2DataBase.exe (the executable is located in the same folder as the simulator executable, such as c:\ecl\2014.1\bin\pc). The resulting .h5 file is then processed by Petrel.
The conversion step is performed for any supported simulator, simulation workflow (local and remote) and
operating system, regardless of the simulation status. The conversion is conducted even if ECLRUN has
been called outside of Petrel.
To disable the conversion step, add the --summary-conversion=no option to the ECLRUN
command line.
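For example (a sketch only; the simulator name eclipse and the case name CASE are placeholders):
  eclrun --summary-conversion=no eclipse CASE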
VISAGE
The program names for the Visage suite of simulators are as follows:
• Visage - visage
• Visage batch - visage
• Reservoir Geomechanics coupling - rgcoupler
Visage simulations use the .mii file extension. ECLRUN will attempt to auto-detect the correct case file if
the file suffix is omitted.
ECLRUN supports two modes for running VISAGE simulations: the single simulation of an MII file and a
batch simulation of a group of MII files listed in a VBT file. Either mode can be executed locally or
remotely on a cluster.
Note: Reservoir coupled simulations are generated by Petrel Geomechanics and are normally executed with
Petrel as the workflow manager.
Single simulation
The following example shows the execution of an eight-way parallel simulation run of a single MII file:
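An illustrative form (a sketch only; it is assumed that the process count is specified with the --np option, as for the other simulators):
  eclrun --np 8 visage CASE.mii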
Batched simulation
The following example shows the contents of a VBT file set up for batch simulation:
EREC0A.MII
EREC00.MII
EREC01.MII
EREC02.MII
A batch simulation allows for a group of VISAGE simulations to be run together by a single instance of
ECLRUN. When submitted remotely using a file transfer, all of the input and include files for all of the
cases that belong to the batch run will be transferred to the remote cluster. All simulation output files will
also be fetched back when checking the status (with the check command).
The following example demonstrates how a batch simulation can be submitted to a remote LSF queue:
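For illustration only; the server, queue and user names are placeholders, and the -u option is assumed for specifying the remote user. Providing the VBT extension selects batch mode:
  eclrun -s lsfserver -q lsfQueue -u remoteuser visage BATCH.VBT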
Note: The program name for single simulations and batch simulations is visage. Providing the VBT file
extension explicitly enables a batch simulation.
IBM/Platform LSF
To submit to an LSF workload manager, the --queuesystem parameter may be omitted as long as -q is specified:
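For illustration only; the server, queue and user names are placeholders, and the -u option is assumed for specifying the remote user:
  eclrun -s lsfserver -q normal -u remoteuser eclipse CASE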
If running on the headnode of the LSF cluster, the -s parameter may be omitted:
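For example (a sketch only; the queue name is a placeholder):
  eclrun -q normal eclipse CASE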
Microsoft HPC
Note: Simulation data for use in a Windows HPC cluster must reside on a file system that is shared between
the Windows workstation, the cluster head node, and the compute nodes. This is typically achieved by
locating data on a filer supporting the SMB protocol. ECLRUN will locate the data by UNC path and use
this path for the Windows HPC task. Data stored on the local drives of the Windows workstation can be
used but the location must be shared with the HPC cluster nodes. ECLRUN will check the data path is
available via SMB on a UNC path before allowing job submission.
SchedMD Slurm
To submit to a Slurm cluster, the --queuesystem parameter must be set to SLURM:
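For illustration only; the server, queue and user names are placeholders, and the -u option is assumed for specifying the remote user:
  eclrun --queuesystem=SLURM -s slurmhead -q compute -u remoteuser eclipse CASE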
DELFI ODRS
Submission to the ODRS system follows the same command line form as other workload managers:
No username, queue or server information is required, as these are determined by the ODRS system when the user authenticates. A minimum configuration is required in order to determine the location of the ODRS cloud service. The required options should be added to the relevant client configuration files (see Configuration files).
1. ECLRUN connects to the DELFI system to authenticate the user. The user must have a web browser
available. This will be automatically launched and, on first contact with the DELFI system, the user will
be requested to authenticate. On subsequent invocations a cached authentication token will be used. This
is valid for 24 hours after which time the user must prove their identity via the browser once again.
2. ECLRUN connects to the ODRS system. Initial checks take place to query if the requested simulator and
version combination are available.
The status of the simulation is checked and results are retrieved using the eclrun check command as
for any other workload manager.
All data is transferred over the HTTPS protocol. Simple HTTP proxy support is available if required by
setting the HTTPS_PROXY environment variable. For example:
HTTPS_PROXY=http://my-local.proxy.server:8080
The following configuration options are required when connecting to the ODRS system. These will be provided by Schlumberger:
SIMRUNHOST
The location of the DELFI ODRS service.
DELFITOKENSHOST
The location of the DELFI authentication service.
AUTHCLIENT
A client identifier for the authentication service.
The following configuration options can be optionally specified and will affect the operation of ECLRUN
when communicating with ODRS.
DELFIHASHCACHE
Default: False. Set to True to enable caching uploads. This option will use a common session for
simulations and a single input data filestore. Local files have a hash generated based on their
contents and the hashcache filestore is checked for the presence of this value. If a matching file is
found, upload of this input data is skipped. This can reduce the amount of input data required to be
transferred significantly if the simulations being executed share common input files, for example
when running uncertainty workflows.
Note: While this option can provide significant reductions in data transfer, overhead issues can arise when re-using a persistent session for simulation. If any issues are encountered using ODRS while this option is enabled, disable it, attempt to re-submit the simulation and determine whether this has any effect on simulation submission before contacting Schlumberger support.
DELFIFILEUPLOADCHUNKSIZEINMB
Default: 50. DELFI data transfer is broken into chunks. This option determines the size of the
chunks to transfer. Setting this value too low can reduce overall throughput. Setting this value too
high can compromise transfer performance for larger files. The default value should be appropriate
for most users.
DELFIPARALLELFILEUPLOADCOUNT
Default: 8. Determines the number of files to consider uploading concurrently.
DELFIPARALLELFILEDOWNLOADCOUNT
Default: 8. Determines the number of files to consider downloading concurrently.
DELFIFILEDOWNLOADTHREADCOUNT
Default: 16. Determines the number of threads used to service concurrent file transfer chunk
requests.
DELFIRUNTIMELIMIT
Default: 604800 (seven days). Maximum simulation execution time in seconds.
Torque
When submitting to the Torque queuing system, add the --queuesystem=TORQUE command line
option:
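For illustration only; the server, queue and user names are placeholders, and the -u option is assumed for specifying the remote user:
  eclrun --queuesystem=TORQUE -s torquehead -q batch -u remoteuser eclipse CASE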
Note: Torque support is included as an extension of PBS behaviour and is retained for compatibility with
previous releases.
Note: Torque does not support advanced resource scheduling for licenses, etc.
To run parallel simulations on Windows machines, the proper MPI libraries need to be installed. On a Microsoft HPC cluster the Microsoft MPI executables, which are provided by default, should be used.
By default, ECLRUN will determine the correct hosts on which to execute the MPI simulation, including
when being executed under a workload manager.
A user can also provide a list of hosts to run the simulation on by using the -m option. The example below shows how to use the -m option:
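For illustration only; it is assumed that -m accepts the path to a host file, and hosts.txt, the simulator name and the case name are placeholders:
  eclrun -m hosts.txt --np 4 eclipse CASE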
Note: This hostfile must be in a form that can be interpreted by the MPI library in use.
Intel MPI
Intel MPI is the default communication type for parallel jobs on Windows workstations and Linux.
ECLRUN will auto-detect the latest version of Intel MPI on both Linux and Windows. It will also respect
the I_MPI_ROOT environment variable to locate an Intel MPI runtime.
By default, you do not need to register a password with wmpiregister.exe. If simulations are to be
executed between multiple Windows workstations, password registration is required.
wmpiregister.exe can be found in the bin directory under the Intel MPI installation folder.
OpenMPI
OpenMPI is an optional communication type for parallel jobs on Linux machines.
ECLRUN will auto-detect the installation of OpenMPI. Not all simulators are compatible with OpenMPI.
In such cases ECLRUN will default to a functional MPI for that simulator on that platform.
Microsoft MPI
Microsoft MPI is the default communication type for parallel jobs on Windows HPC clusters. It is available
for selection on Windows workstations.
On Windows HPC clusters, no selection of Microsoft MPI is required. The MPI library is tightly coupled to
the workload manager and will execute with the correct process layout.
IBM/Platform MPI
IBM/Platform MPI is an optional communication type for parallel jobs on Linux machines.
5 Troubleshooting
Running simulators can become complex very quickly and there are many opportunities for unintended
behavior. This section aims to provide some potential solutions to any issues you may encounter when
using the Simulation Runtime package. If these steps do not solve your issue, contact your local system
administrator.
CASENAME.RTELOG
This file contains diagnostic logging messages generated by ECLRUN. This file records all
messages and errors generated by ECLRUN during submission and subsequent check operations
and is very useful when diagnosing issues. In cases where a shared storage system is in use (see File
transfer and shared drives), additional logs may be created with a numerical suffix, e.g.
CASENAME.RTELOG.1. This preserves logging information from processes that may be executing
on remote machines without corruption or out of sequence error reporting. On submission of a new
CASENAME.* case, these files are removed and replaced automatically by ECLRUN.
Note: In older versions of ECLRUN, this file had a LOG extension and has been renamed to avoid
confusion with simulation output files.
CASENAME.RTEMSG
This file contains diagnostic logging messages generated by ECLRUN for consumption by Petrel.
This file records all messages and errors generated by ECLRUN during submission and subsequent
check operations. In cases where a shared storage system is in use (see File transfer and shared
drives), additional logs may be created with a numerical suffix, e.g. CASENAME.RTEMSG.1. This
preserves logging information from processes that may be executing on remote machines without
corruption or out of sequence error reporting.
Note: Petrel 2018.2 or later is required to ensure these files are parsed automatically by Petrel during
simulation execution.
Note: In older versions of ECLRUN, this file had an MSG extension and has been renamed to avoid
confusion with, and potential corruption of, simulation output files.
CASENAME.default.session
This file contains state information for the currently executing simulation with this CASENAME.
This state records internal information including the location of the currently in-progress simulation.
The check operation consults this file to determine which compute infrastructure, workload manager or service should be consulted to determine the simulation's status and how to obtain
results. The existence of this file indicates that the simulation CASENAME is currently executing and
an eclrun check CASENAME must be executed to determine if it is complete. Once a
simulation terminates and the final check operation is executed, this state file is automatically
removed.
Note: Re-execution of the same simulation in the same location will unconditionally overwrite this file. The
original simulation, if still running, cannot now be interacted with by ECLRUN and will continue to
completion.
Note: In previous versions of ECLRUN, this file had an ECLRUN extension and has been renamed to avoid compatibility issues with older versions of ECLRUN.
Note: In cases where a shared storage system is in use, additional session files may be generated with names of the form UID.CASENAME.session. These files are required by the ECLRUN processes executing on the user's behalf on the associated compute infrastructure and can be ignored. These files will be removed as the simulation executes and after the final check operation is performed.
CASENAME.default.sessionlock
This file is used to lock the session file when modifications are required. This file is separate from
the actual session file to allow for compatibility of locking mechanisms across multiple platforms.
Once a simulation terminates and the final check operation is executed, this lock file is
automatically removed along with the associated session.
The command line option --debug=both or --debug=file increases the level of debugging in the
RTELOG file to "DEBUG". This is very useful when diagnosing issues with simulator execution. If
possible, execute the simulation with this option and provide Schlumberger support with the associated
RTELOG files.
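For example (a sketch only; the simulator name and case name are placeholders):
  eclrun --debug=both eclipse CASE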
Common problems
• Open a new command prompt window (on Windows, right-click the Start button and choose Command Prompt or Windows PowerShell).
• At the command prompt, type eclrun. You should see the following:
Usage:
eclrun [options] PROGRAM FILE
or
eclrun [options] PROGRAM [DIRECTORY]
where PROGRAM is one of:
office, floviz, flogrid, etc.
DIRECTORY is the working directory
default=current working directory
or
eclrun [options]
• Similarly, type plink and pscp at the command prompt. For each command, you should receive a
usage help message.
• If the above steps do not output the expected content in the command window, the installation may be
incomplete or the PATH environment variable has not been appended correctly. Try running through the
installation again to see if this solves the issue.
If you are submitting to a remote server, check that ECLRUN is installed on the remote server.
If this is the first time that you have tried to submit to a particular server from your PC, try again. First-time connections have to store the remote server's signature in the Windows registry, and this sometimes fails. Often, simply repeating the submission works.
If it is not the first time that you have attempted submission, try the following:
• An error occurred during the submission, simulation, or the fetching of results. In this situation,
ECLRUN does not delete any files so that you can determine the cause of the error, and salvage any
useful results from the simulation. If you know why the error occurred, it is safe to delete the temporary
directory.
• You did not ask ECLRUN to load results after the run had finished. ECLRUN only deletes the temporary files on the server once they have all been downloaded to the local PC after the run has finished. If you look at the results of a simulation halfway through, decide that it is wrong in some way, and then submit another case without killing the first case or fetching its final results, the first case's files will stay on the server. You should always fetch results or abort simulations to ensure that your server does not fill up with 'orphaned' simulations.
• If you are using shared network drives, there is no need to transfer data back. Simply load the results
directly into Petrel.
• If the simulation output files are very large, it may simply be taking a long time to download them from
the server.
• The remote server may have become inaccessible. Manually check if you can still access the server from
your machine and login.
Your machine does not allow more than X jobs to be run at once. This is determined by the EclNJobs
configuration file variable, which defaults to the number of CPU cores available (limited to 4). To solve this
error, do one of the following:
• Change the number of CPUs required by the DATA file by editing the PARALLEL keyword.
• Increase the EclNJobs value in one of the ECLRUN configuration files.
Neither the TEMP nor a TMP environmental variable was detected on this
machine.
ECLRUN cannot create the mutex file used for synchronization because it cannot find the location of the Windows temp directory. To find this directory, ECLRUN checks the TMP and TEMP environment variables, which should point to the temporary directory on Windows. To solve this problem, create a TMP or TEMP environment variable pointing to the temp directory you wish to use. By default, this value on Windows points to %USERPROFILE%\AppData\Local\Temp.
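For example, the variable can be set for the current Command Prompt session as follows (a sketch only; adjust the path as required). To set it permanently, use the System Properties environment variable editor or the setx command:
  set TMP=%USERPROFILE%\AppData\Local\Temp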
ECLRUN cannot run more jobs than specified in ECLNJOBS (this defaults to the actual number of CPU cores available on the client's machine). By specifying -q as 'high', 'medium', or 'low', different waiting times may be imposed: 20, 300, or 900 seconds respectively.
The number of jobs exceeds the limitation specified in ECLRUNJOBS in eclrun.config. You cannot
run a six-way parallel case when only five jobs can be run at the same time. ECLRUN terminates.
ECLRUN failed to execute lmutil.exe, which should be in the current path. ECLRUN will execute the job without waiting for licence resources.
ECLRUN will check again after a period of time has elapsed. The wait period depends on the job priority (60 seconds for high priority, 300 seconds for medium priority, and 900 seconds for low priority).
This error message appears when ECLRUN or a Simulator fails to check out a license. This could be caused
by:
• Not having the appropriate licenses available for your choice of workflow.
• Not being able to access the license server (identified by the environment variable
SLBSLS_LICENSE_FILE).
• Not having a license server defined (check if SLBSLS_LICENSE_FILE is set correctly).
Only network paths with read/write access are allowed when submitting to Microsoft HPC. When
submitting a job to the Windows HPC Cluster, you must use a shared network drive with write access as
your working directory. This is because only shared drives are supported and no file transfer is permitted
otherwise. To solve this issue, do one of the following:
• Share your working directory and ensure that you have 'change' permissions on both your local machine
and remotely.
• Mount a network drive with write access that can be accessed from the Windows HPC Cluster.
Such error messages mean that the total path length of one of ECLRUN's output files exceeds the Windows path length limitation, so the file cannot be created. This may appear when running a simulation for a dataset with a very long filename. ECLRUN needs to create some temporary files whose names may be even longer than the dataset's filename. To solve this issue, do one of the following:
Other
Error Unable to find executable for program(s):
'['ConverterSummaryData2DataBase']'...'.
HDF conversion cannot be performed as the executable needed is missing. Ensure that
ConvertSummaryData2DataBase.exe is present in the bin folder. Alternatively, the HDF
conversion step can be disabled completely by adding '--summary-conversion=no' to the command
line.
If you are using Windows to run a workflow from a directory that is shared, you must have read and write
access to that shared directory. This applies even if the workflow is being run directly on the machine
sharing the directory.
When attempting to run a parallel job, the Intel MPI installation was not found. ECLRUN tries to get the
latest version from the environment variable I_MPI_ROOT or the location (if it exists) under:
C:\Program Files\Intel\MPI-RT\
This problem should be solved by reinstalling Intel MPI, which is packaged with the simulator installers.