Changes between Version 1 and Version 2 of trunk/getting_started


Timestamp: 07/24/10 20:04:37
Author: Paul Gibbon

  • trunk/getting_started

== Getting started ==

- To run PEPC, first enter or create a run directory. This can be anywhere, but we will assume it is placed in the PEPC install directory ($PEPC):
- {{{
- #!sh
- mkdir mydemo
- cd mydemo
- }}}
+ To run PEPC you first need to create a run directory. This can be anywhere, but we will assume it is placed in the PEPC install directory ($PEPC),
+ such as the example 'tutorial'.

- This is where all the main output will appear. A number of subdirectories data/pe0000, data/pe0001, ... data/peNNNN must also exist or be created prior to the run, depending on the number of CPUs requested (P). This can be done with the aid of the script create_pes, which resides in the bin directory. It will prove useful to include this directory in your path, i.e.:
- {{{
- #!sh
- export PATH=$PATH:$PEPC/bin
- }}}
- 
- To create 4 data directories from scratch, type:
- {{{
- #!sh
- create_pes 0
- ls data
- }}}
- 
- The file PE_list – also kept in the working directory – maintains a list of subdirectories. If you need more, just edit this file and repeat the create_pes command. Alternatively, if you want to run with twice the number of CPUs, just do:
- {{{
- #!sh
- create_pes 4
- }}}
- 
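As a rough illustration of what create_pes sets up, the loop below recreates the four-directory layout from the example above by hand. This is a hypothetical stand-in, not the actual $PEPC/bin script, and the one-name-per-line PE_list format is an assumption:

```shell
# Hypothetical stand-in for create_pes: make data/peNNNN subdirectories
# for 4 tasks and record them in PE_list (format assumed, not verified).
mkdir -p data
: > PE_list
for n in 0 1 2 3; do
    dir=$(printf 'data/pe%04d' "$n")
    mkdir -p "$dir"
    echo "$dir" >> PE_list
done
ls data
```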
- Sample run scripts (.sh) and parameter files (.h) can be found in the directory 'tutorial':
+ Sample run scripts (.sh) and parameter files (.h) can be found here:

 * billiards.h - Inter-particle forces switched off; reflective boundaries for various geometries
     
 * wire.h - Laser interaction with wire target

- == Example run.h file ==
+ 
+ == Example parameter (.h) file ==

{{{
     
}}}

- This parameter file is first copied to run.h by the run script or job. See the User Guide for a more comprehensive list and description of input parameters. More complex examples can be found in the Demos.
+ This parameter file is first copied to run.h by the run script or job. See the User Guide for a more comprehensive list and description of input parameters. More complex examples can be found in the demos.

To execute the code on a Linux PC with mpich:
     
}}}

- On the IBM p690, use:
+ For JUROPA use:
{{{
#!sh
- ./eqm.sh
+ msub juropa.job
+ }}}
+ 
+ For JUGENE use:
+ {{{
+ #!sh
+ llsubmit eqm.bgp
}}}

== Output data ==

- The output files will be stored either in the run directory or in the subdirectories data/pe0000 etc. The most important of these are:
+ The output files will be stored either in the run directory or in the subdirectories dumps/, fields/, log/, etc. The most important of these are:

 * energy.dat
     
 Particle data is output independently by each CPU to avoid memory and MPI bottlenecks for large runs, and can be found in:
 {{{
-  data/peNNNN/parts_dump.TTTTTT
-  data/peNNNN/parts_info.TTTTTT
+  dumps/parts_pNNNN.TTTTTT
+  dumps/info_pNNNN.TTTTTT
 }}}
 Currently the format of the particle dump is a 15-column ASCII file (13 reals, 2 integers) with the following content:
     
 x, y, z, px, py, pz, q, m, Ex, Ey, Ez, pot, owner, label
 }}}
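Because the dump is plain ASCII, standard tools can inspect it directly. The snippet below sums the charge column (q, column 7 in the layout above) with awk; the two sample rows are invented for illustration, a real file would be one of the dumps/ files described here:

```shell
# Sum the charge column (q, column 7) of a particle dump.
# The two sample rows below are invented illustration data.
cat > sample_dump.txt <<'EOF'
0.1 0.2 0.3 0.0 0.0 0.0 -1.0 1.0 0.0 0.0 0.0 0.5 0 1
0.4 0.5 0.6 0.0 0.0 0.0 1.0 1836.0 0.0 0.0 0.0 0.5 0 2
EOF
awk '{ q += $7 } END { print "total charge:", q }' sample_dump.txt
```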
-  The number of particles written out together with other data is contained in the associated info file. Each subdirectory peNNNN contains data for CPU NNNN at the checkpoint timestamps TTTTTT, whose frequency is controlled by the input parameter idump. Data for each CPU can be merged for postprocessing with the script bin/merge1_dump, for example:
+  The number of particles written out together with other data is contained in the associated info file. Each file tagged pNNNN contains data for task number NNNN at the checkpoint timestamps TTTTTT, whose frequency is controlled by the input parameter idump. Data for each task can be merged for postprocessing with the script bin/merge1_dump, for example:
 {{{
 #!sh
     
 will create 2 new files in the subdirectory dumps, in the same format as the partial dumps, containing the complete particle data at time 000100:
 {{{
-  dumps/parts_dump.000100
-  dumps/parts_info.000100
+  dumps/parts.000100
+  dumps/info.000100
 }}}
 These can be used either by a postprocessor or as an initial configuration for a new run.
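Conceptually, the merge amounts to concatenating the per-task files in task order. The sketch below mimics that with cat; it is not the real bin/merge1_dump script, it ignores the info files, and the two one-particle dumps are invented example data:

```shell
# Conceptual sketch of merging per-task dumps for timestep 000100.
# NOT the real bin/merge1_dump; info files are ignored, and the
# two one-particle dump files below are invented examples.
mkdir -p dumps
echo "0.1 0.2 0.3 0 0 0 -1 1 0 0 0 0.5 0 1" > dumps/parts_p0000.000100
echo "0.4 0.5 0.6 0 0 0 1 1 0 0 0 0.5 1 2" > dumps/parts_p0001.000100
# Shell globs expand in sorted order, so tasks concatenate in sequence.
cat dumps/parts_p*.000100 > dumps/parts.000100
wc -l dumps/parts.000100
```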