Changes between Version 33 and Version 34 of ParaView


Timestamp: 05/03/17 16:59:56 (7 years ago)
Author: Jens Henrik Goebbert
Comment: --

Legend:

Unmodified lines carry both a v33 and a v34 line number, removed lines carry only a v33 number, and added lines carry only a v34 number.
  • ParaView

    v33 v34  
    55  55  * Start !ParaView and connect to the pvservers (localhost:11111).
    56  56
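The connection step can also be scripted with pvpython. A minimal sketch, assuming a pvserver is already listening on localhost:11111 (host and port taken from the line above, e.g. reachable through an SSH tunnel); everything else is illustrative:

{{{#!python
# Minimal pvpython sketch of the connection step above, assuming a pvserver is
# already listening on localhost:11111 (for example through an SSH tunnel to the
# allocated compute node).
from paraview.simple import *

connection = Connect("localhost", 11111)  # attach this client to the remote pvserver
print(connection)                         # shows which server the client is talking to
}}}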
    57      ==== How to read data to !ParaView in parallel
    58      Just starting !ParaView in parallel __does not__ result necessarily in using the compute resources (cpu and memory) of more than one node or using them on the allocated nodes equaly.
        57  ==== How to use !ParaView in parallel
        58  Just starting !ParaView in parallel __does not__ necessarily mean that you benefit from the compute resources (cpu and memory) of more than one node.\\
        59  Even if !ParaView distributes the data and work over the compute nodes, this might not happen equally.\\
    59  60
    60  61  Two main issues must be considered:
    61      * Parallel Data Management - the data must be distributed across parallel processes to take advantage of resources
    62      * Parallel Work Managemnet - the work must be distributed across parallel processes to take advantage of
        62  * Parallel Data Management - the data must be distributed equally across parallel processes to take advantage of resources
        63  * Parallel Work Management - the work must be distributed equally across parallel processes to take advantage of resources
        64
        65  You have less influence on 'Parallel Work Management'; therefore we only discuss 'Parallel Data Management' for now.
    63  66
    64  67  ===== Parallel Data Management
    65  68  Data must be distributed across parallel processes to take advantage of resources.\\
        69  This distribution can be accomplished by the reader itself or afterwards by the D3 filter.
        70
        71  * fully parallel readers
        72    * Explicit parallel formats use separate files for partitions (.pvti, global.silo)
        73    * Implicit parallel formats - parallel processes figure out what they need (.vti, brick-of-values)
    66  74
    67
    68      * Some !ParaView readers import in parallel
    69        * Explicit parallel formats use separate files for partitions (.pvti, global.silo)
    70        * Implicit parallel formats – parallel processes figure out what they need – (.vti, brick-f-values)
    71      * Some !ParaView readers may seem to import in parallel
    72        * Actually, import serially and then distribute
    73        * Bad bad bad – 1 process needs enough memory for entire dataset plus additional space for partitioning
    74      * Some !ParaView readers do NOT read in parallel
    75        * ... and leave it to you (D3 filter in Paraview - this results in an unstructured grid, which might need more memory)
    76        * See Bad bad bad above
        75  * serial readers + distribute
        76    * the first process needs enough memory for the __entire__ dataset plus additional space for partitioning
    77  77
    78      ====== Test Parallel Data Mangement
    79      * Click Sources->Sphere
    80      * Max out Theta Resolution and Phi Resolution
    81      * Click Filters->Alphabetical->Process Id Scalars
    82      * => Segments are colored by which process handles them
        78  * fully serial readers
        79    * you need to distribute the data manually using the D3 filter (see the sketch below)
        80    * attention: the D3 filter outputs an unstructured grid, which might need more memory
        81
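For the 'fully serial readers' case the redistribution with the D3 filter can also be scripted. A minimal pvpython sketch, assuming an existing server connection and a hypothetical unpartitioned file data.vtu; the file name is a placeholder:

{{{#!python
# Minimal pvpython sketch of manual redistribution with the D3 filter.
# 'data.vtu' is a hypothetical, unpartitioned example file; with a serial
# reader only one rank holds it until D3 spreads the pieces over all ranks.
from paraview.simple import *

reader = OpenDataFile("data.vtu")   # serial reader: one process reads the whole file
redistributed = D3(Input=reader)    # D3 repartitions the data across all pvserver ranks
Show(redistributed)                 # note: the output is an unstructured grid
Render()
}}}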
        82  ===== Test Parallel Data Management
        83  * Load your data
        84  * Add the filter 'Process Id Scalars'
        85  * Segments are colored by which process handles them (see the sketch below)
    83  86
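The test above can also be run from pvpython. A minimal sketch, assuming an active server connection; it uses the built-in Sphere source (as the removed v33 version of this test did) instead of your own data, and it assumes the Process Id Scalars filter writes a point array named 'ProcessId':

{{{#!python
# Minimal pvpython sketch of the parallel-data test: color a data set by the
# rank that owns each piece. Uses the built-in Sphere source so it runs anywhere.
from paraview.simple import *

sphere = Sphere(ThetaResolution=512, PhiResolution=512)  # reasonably large test geometry
pids = ProcessIdScalars(Input=sphere)                    # assumed to add a 'ProcessId' point array
display = Show(pids)
ColorBy(display, ("POINTS", "ProcessId"))                # color segments by owning process
Render()
}}}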
    84  87  ----