Installation and requirements#
Obtaining the required files and dependencies#
- Prodrisk, downloaded from the portal
- pyprodrisk, installed with `pip install pyprodrisk`
- a valid license file
- an extra library for the HPO option, if this is part of your license
- a compatible MPI installation for parallel computation
The file downloaded from the portal should include the Prodrisk binary executables as well as the API files.
General setup#
The binaries should be placed somewhere they can be found, either on the user path or the system path. This also applies to the API files if these are to be used.
The license file should be placed in the folder the environment variable `LTM_LICENSE_PATH` points to, for example `LTM_LICENSE_PATH=/home/my_license` or `LTM_LICENSE_PATH=C:\path\to\my\license`.
If you have a library for the HPO functionality, its location should also be included in the path environment variable.
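If you prefer to do this setup from Python rather than in the shell, the sketch below shows one way to do it. The paths are placeholders and the use of `os.environ` is an assumption; it only takes effect for the current process and any Prodrisk processes launched from it.

```python
import os

# Point Prodrisk to the folder containing the license file (placeholder path).
os.environ["LTM_LICENSE_PATH"] = "/home/my_license"

# Make the Prodrisk binaries, API files and any HPO library findable by
# prepending their folder to the PATH of this process (placeholder path).
os.environ["PATH"] = "/opt/prodrisk/bin" + os.pathsep + os.environ.get("PATH", "")
```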
Command line interface#
In order to use the applications, you need a valid data set to run on. The simplest way to obtain this is to use an existing data set from EOPS in the LTM application suite or from another Prodrisk run. The data set can also be built from scratch, but that requires using the LTM applications and following the accompanying documentation to create an EOPS data set which Prodrisk can run on.
pyprodrisk interface#
You need certain input data to run the model, and this data must be accessible in Python. If you want to get started immediately, all required data can be generated; this is what is done in the basic example.
To jump in and get started with the pyprodrisk interface, check out the example PyProdrisk: Basic example.
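The snippet below is a minimal sketch of how a session is typically created; the `ProdriskSession` constructor arguments and the commented-out `run()` call are assumptions here, so refer to the basic example for the authoritative usage.

```python
from pyprodrisk import ProdriskSession

# Create a session object; keyword arguments such as license_path are
# assumptions in this sketch - see the basic example for the exact interface.
session = ProdriskSession(license_path="", silent=False)

# Build or load the required input data on the session here, then run the model:
# session.run()
```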
MPI#
Parallel processing requires an MPI implementation. The application is available in several versions using different MPI implementations from different vendors. You need to run with a version matching the MPI implementation you have installed on your system.
Example configuration#
By default, the MPI path is set to the location of the default MS-MPI installation. For Linux or non-standard Windows installs, the `mpi_path` attribute on the session object must be set to the path containing `mpirun` from the MPI installation that corresponds to the Prodrisk variant in use. The release notes detail which MPI version each Prodrisk release is built against; you need an MPI runtime of that version or newer installed to avoid issues.
```python
session.mpi_path = "/opt/intel/oneapi/mpi/${impi version >= 2021.12}/bin"
session.prodrisk_variant = "prodrisk_cplex_impi"
```
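For a non-default MS-MPI install on Windows, the equivalent configuration might look like the sketch below; both the path and the variant name are hypothetical and should be replaced with the values from your own installation and the release notes.

```python
# Hypothetical Windows setup with MS-MPI installed in a non-default location.
session.mpi_path = r"D:\tools\Microsoft MPI\Bin"
session.prodrisk_variant = "prodrisk_cplex_msmpi"  # placeholder variant name
```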
In container environments#
When MPI communicates between processes on the same machine, it uses shared memory for performance reasons. In standard Linux environments, processes can use up to all of the host's physical memory for this purpose, but a single machine might be running tens or hundreds of containers, so in container environments this is limited by default.
In Docker, for instance, the default limit is 64 MB. This is insufficient for running any non-trivial Prodrisk workloads, so it needs to be extended. For Docker this is done by passing a parameter to `docker run`:

```bash
docker run -it --shm-size=512m image:tag
```

For k8s-based environments, the Pod spec has to be adapted to include a `/dev/shm` memory volume mount, as shown in the example below. Other cloud solutions should be able to do the same by adapting the example to their needs.
Kubernetes example#
```yaml
spec:
  volumes:
    - name: shm
      emptyDir:
        medium: Memory
        sizeLimit: 1Gi
  containers:
    - image: my.reg/myproj/myimg
      volumeMounts:
        - mountPath: /dev/shm
          name: shm
```
Known issues#
MPI works by launching several independent processes from a coordinating process. Using the most common names for this, `mpiexec` launches `-n` processes through the `smpd` process. Typically only one smpd can run on a system at a time, and it needs to match the mpiexec, which in turn needs to match the MPI implementation in the application being launched (Prodrisk). Some implementations shut down the smpd automatically when the mpiexec process finishes, but not all do. Thus, you can run one mpiexec using one MPI implementation and then try to run another mpiexec afterwards, and get unexpected results (typically a crash) if the smpd from the first run is still alive, because the second mpiexec then tries to communicate with an smpd that speaks a slightly different "language".