Changes between Version 1 and Version 2 of trunk/getting_started
- Timestamp: 07/24/10 20:04:37
Legend:
- space-prefixed lines: unmodified
- lines prefixed "-": present only in v1 (removed)
- lines prefixed "+": present only in v2 (added)
- "…": lines elided in the diff view
trunk/getting_started
 == Getting started ==

-To run PEPC, first enter or create a run directory. This can be anywhere, but we will assume it is placed in the PEPC install directory ($PEPC):
-{{{
-#!sh
-mkdir mydemo
-cd mydemo
-}}}
+To run PEPC you first need to create a run directory. This can be anywhere, but we will assume it is placed in the PEPC install directory ($PEPC),
+such as the example 'tutorial'.

-This is where all the main output will appear. A number of subdirectories data/pe0000, data/pe0001, ... data/peNNNN must also exist or be created prior to the run, depending on the number of CPUs requested (P). This can be done with the aid of the script create_pes, which resides in the bin directory. It will prove useful to include this directory in your path, i.e.:
-{{{
-#!sh
-export PATH=$PATH:$PEPC/bin
-}}}
-
-To create 4 data directories from scratch, type:
-{{{
-#!sh
-create_pes 0
-ls data
-}}}
-
-The file PE_list – also kept in the working directory – maintains a list of subdirectories. If you need more, just edit this and repeat the create_pes command. Alternatively, if you want to run with twice the number of CPUs, just do:
-{{{
-#!sh
-create_pes 4
-}}}
-
-Sample run scripts (.sh) and parameter files (.h) can be found in the directory 'tutorial':
+Sample run scripts (.sh) and parameter files (.h) can be found here:

 * billiards.h - Inter-particle forces switched off; reflective boundaries for various geometries
…
 * wire.h - Laser interaction with wire target

-== Example run.h file ==
+
+== Example parameter (.h) file ==

 {{{
…
 }}}

-This parameter file is first copied to run.h by the run script or job. See the User Guide for a more comprehensive list and description of input parameters. More complex examples can be found in the Demos.
+This parameter file is first copied to run.h by the run script or job.
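As an aside, the copy-to-run.h step can be sketched as a tiny shell script. Everything below is an assumption for illustration only (the parameter file name, its stand-in content, the binary name and the CPU count are not taken from this page); the real run scripts are the .sh files shipped in 'tutorial'.

```sh
#!/bin/sh
# Hypothetical sketch of what a PEPC run script does (all names are assumptions).
set -e
PARAM=eqm.h                                        # assumed parameter file chosen from 'tutorial'
printf 'placeholder parameter file\n' > "$PARAM"   # stand-in content so the sketch is self-contained
cp "$PARAM" run.h                                  # the code reads its input from run.h
echo "would launch: mpirun -np 4 ./pepc"           # launch step shown, not executed here
```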
+See the User Guide for a more comprehensive list and description of input parameters. More complex examples can be found in the demos.

 To execute the code on a Linux PC with mpich:
 {{{
…
 }}}

-On the IBM p690, use:
+For JUROPA use:
 {{{
 #!sh
-./eqm.sh
+msub juropa.job
+}}}
+
+For JUGENE use:
+{{{
+#!sh
+llsubmit eqm.bgp
 }}}

 == Output data ==

-The output files will be stored either in the run directory or in the subdirectories data/pe0000 etc. The most important of these are:
+The output files will be stored either in the run directory or in the subdirectories dumps/ fields/ log/ etc. The most important of these are:

 * energy.dat
…
 Particle data is output independently by each CPU to avoid memory and MPI bottlenecks for large runs, and can be found in:
 {{{
-data/peNNNN/parts_dump.TTTTTT
-data/peNNNN/parts_info.TTTTTT
+dumps/parts_pNNNN.TTTTTT
+dumps/info_pNNNN.TTTTTT
 }}}
 Currently the format of the particle dump is a 15-column ASCII file (13 reals, 2 integers) with the following content:
 {{{
 x, y, z, px, py, pz, q, m, Ex, Ey, Ez, pot, owner, label
 }}}
-The number of particles written out together with other data is contained in the associated info file. Each subdirectory peNNNN contains data for CPU NNNN at the checkpoint timestamps TTTTTT, whose frequency is controlled by the input parameter idump. Data for each CPU can be merged for postprocessing with the script bin/merge1_dump, for example:
+The number of particles written out together with other data is contained in the associated info file. Each subdirectory pNNNN contains data for task number NNNN at the checkpoint timestamps TTTTTT, whose frequency is controlled by the input parameter idump.
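Because the dump is plain whitespace-separated ASCII, it can be inspected with ordinary text tools. A minimal sketch: a fabricated two-particle dump file (every value below is invented for illustration) whose charge column q, column 7, is summed with awk.

```sh
# Fabricated two-particle dump file; all values are invented for illustration.
cat > parts_p0000.000100 <<'EOF'
0.0 0.0 0.0  0.0 0.0 0.0  -1.0 1.0     0.0 0.0 0.0  0.0  0 1
1.0 0.0 0.0  0.0 0.0 0.0   1.0 1836.0  0.0 0.0 0.0  0.0  0 2
EOF
# Column 7 is the charge q; sum it over all particles in the file.
awk '{ q += $7 } END { printf "total charge: %g\n", q }' parts_p0000.000100
# prints: total charge: 0
```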
+Data for each task can be merged for postprocessing with the script bin/merge1_dump, for example:
 {{{
 #!sh
…
 }}}
 will generate 2 new files in the subdirectory dumps in the same format as the partial dumps containing the complete particle data at time 000100:
 {{{
-dumps/parts_dump.000100
-dumps/parts_info.000100
+dumps/parts.000100
+dumps/info.000100
 }}}
 These can either be used by a postprocessor or as an initial configuration for a new run.
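bin/merge1_dump is the supported way to merge; conceptually, though, the merge amounts to gathering the per-task particle files for one timestamp into a single file. The following is a rough sketch of that idea only, not of the actual script, using fabricated one-line dumps and the file layout assumed from the paths above.

```sh
# Rough illustration of the merge idea; the real tool is bin/merge1_dump.
mkdir -p dumps
printf '0 0 0 0 0 0 -1 1 0 0 0 0 0 1\n' > dumps/parts_p0000.000100  # fake task-0 data
printf '1 0 0 0 0 0 1 1 0 0 0 0 1 2\n'  > dumps/parts_p0001.000100  # fake task-1 data
cat dumps/parts_p*.000100 > dumps/parts.000100   # combined particle data for t=000100
```

With real data the per-task info files would also have to be combined consistently, which is what makes merge1_dump the safer route.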