The logger is started with mpiexec and subsequently starts the application to monitor. It creates a directory (default: .memlog in the PBS_O_WORKDIR; configurable via the wrkdir option -w), and by default each task creates its own logfile in that directory. At each time step (interval set by the delay option -d) the logger checks the following keys in the status file of each process (/proc/<id>/status):
* VmSize
* VmData
* VmStk
* VmRSS
Each task writes the value of each key to the file .memlog/task<MPI-rank>.log and then waits for the next time step.
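The per-task sampling loop described above can be sketched as follows. This is an illustration only, not the logger's actual implementation: the function names, the space-separated log layout, and the fixed step count are assumptions.

```python
import time

# Keys sampled from /proc/<pid>/status at each time step (from the list above).
KEYS = ("VmSize", "VmData", "VmStk", "VmRSS")

def read_status_keys(status_path, keys=KEYS):
    """Parse a /proc/<pid>/status-style file; return {key: value in kB}."""
    values = {}
    with open(status_path) as f:
        for line in f:
            name, _, rest = line.partition(":")
            if name in keys:
                values[name] = int(rest.split()[0])  # e.g. "VmRSS:    1024 kB"
    return values

def log_task(pid, rank, delay, steps, wrkdir=".memlog"):
    """Append one space-separated sample per time step to <wrkdir>/task<rank>.log."""
    with open(f"{wrkdir}/task{rank}.log", "a") as log:
        for _ in range(steps):
            vals = read_status_keys(f"/proc/{pid}/status")
            log.write(" ".join(str(vals.get(k, 0)) for k in KEYS) + "\n")
            time.sleep(delay)  # wait for the next time step (delay, option -d)
```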

After the run is finished, the analyzer juman is run in the PBS_O_WORKDIR, either from within the same job script or afterwards on the login node, to analyze the consumed resources. It creates graphs showing, at each time step: the value of the chosen key for each task (default: VmSize; configurable via the key option -k; use juman -k help for a list of available keys), the process with the maximum value of that key, and the total sum of the values of that key across all tasks (see option -s). If -i is specified, the graphs are displayed immediately.
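The per-time-step aggregation described above (maximum over tasks and sum across all tasks) can be sketched like this. The data layout and function name are assumptions for illustration; juman's real file format and plotting are not shown.

```python
def aggregate(task_series):
    """Aggregate per-task samples of one key (assumed layout:
    {rank: [value at step 0, value at step 1, ...]}).
    Returns (rank with max value, max value, sum over tasks) per time step."""
    steps = min(len(s) for s in task_series.values())  # truncate to shortest log
    max_rank, max_val, total = [], [], []
    for t in range(steps):
        step_vals = {rank: series[t] for rank, series in task_series.items()}
        best = max(step_vals, key=step_vals.get)  # task holding the maximum
        max_rank.append(best)
        max_val.append(step_vals[best])
        total.append(sum(step_vals.values()))     # total across all tasks
    return max_rank, max_val, total
```

For example, with two tasks logging `{0: [10, 20], 1: [30, 5]}`, task 1 holds the maximum at step 0 and task 0 at step 1, with totals 40 and 25.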