Using the benchmarking scripts
Currently, the scripts are thoroughly tested only on JUQUEEN, which is expected to yield reliable timing results. The other targets each come with a single hard-wired makefile.defs for now.
- check out the benchmark environment:
  svn co https://svn.version.fz-juelich.de/pepc/benchmarks ./pepc.benchmarks
  cd pepc.benchmarks/<TARGET>
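For example, for the Juqueen target from the table below (assuming the target directory carries the same name, which is an assumption here):
  svn co https://svn.version.fz-juelich.de/pepc/benchmarks ./pepc.benchmarks
  cd pepc.benchmarks/Juqueen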
- run the benchmark, either
  - with a single revision (e.g. 3333):
    ./template/runbechmark.sh 3333
  - or with a variable number of revisions (a driver sketch for longer revision lists follows this list):
    ./template/runbechmark.sh 3482 2837 2819 2049 2071 1111 1984
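For long revision lists, a minimal driver sketch (revisions.txt is a hypothetical file holding one revision number per line; it is not part of the scripts):
  # Pass every revision listed in revisions.txt as arguments to the script.
  xargs ./template/runbechmark.sh < revisions.txt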
- After the job has finished, call
  ./template/postprocess.sh
  to extract the timings and update the figure.
- Do not forget to upload the new results to the svn using svn ci. The graph in this wiki page is updated automatically whenever a new runtimes.png is available.
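Putting the post-run steps together, a minimal sketch (the commit message is only an example):
  # Extract timings, refresh the figure, and commit the updated results.
  ./template/postprocess.sh
  svn ci -m "update benchmark results"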
Benchmarking Test Case
The benchmarking scripts use the following parameters:
<TARGET>         | Juqueen        | Juropa         | Judge          |
binary           | pepc-essential | pepc-essential | pepc-essential |
n_particles      | 16,000,000     | 2,000,000      | 500,000        |
n_timesteps      | 25             | 25             | 15             |
n_nodes          | 256            | 32             | 8              |
ranks-per-node   | 1              | 1              | 1              |
n_worker_threads | 56             | 16             | 24             |
If the respective parameter file (here: Juqueen's) is modified, i.e. the test case is changed, all older runs have to be repeated so that the results remain comparable.
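Note that the table implies an identical per-node particle load on all three targets; a quick arithmetic check in the shell:
  # particles per node = n_particles / n_nodes
  echo $((16000000 / 256))   # Juqueen: 62500
  echo $((2000000 / 32))     # Juropa:  62500
  echo $((500000 / 8))       # Judge:   62500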
Results
Juqueen (runtime figure)
Juropa (runtime figure)
Judge (runtime figure)
To make the results more comparable, the following figures plot the same data with the revision number as the x-value. Feel free to remove this.
Juqueen (runtime vs. revision figure)
Juropa (runtime vs. revision figure)
Judge (runtime vs. revision figure)