Changes between Version 6 and Version 7 of Examples/Ear5Animating


Timestamp: 07/10/18 11:24:59
Author: Herwig Zilken

=== NetCDF and HDF5 on JURECA ===
The typical data format in climate science is NetCDF. Though NetCDF files can be loaded in !ParaView, this approach is very inflexible, as !ParaView's NetCDF reader makes fixed assumptions regarding the naming conventions of the variables.
Fortunately, NetCDF files generated with the newer NetCDF version 4 are based on HDF5 and can be loaded with !ParaView's [http://xdmf.org XDMF] reader.
If the NetCDF files are from the older version 3, a conversion from NetCDF-3 to NetCDF-4 is necessary. So one has to deal with some NetCDF- and HDF5-related tools.

How to find out what modules to load for NetCDF/HDF5 on JURECA:
     
}}}

For JURECA software stage 2018a, the modules can be loaded by:
{{{
#!bash
     
ncdump -k foo.nc
}}}
If ncdump returns netCDF-4 or netCDF-4 classic model, then congratulations, you already have an HDF5 file, as netCDF-4 is the netCDF data model implemented using HDF5 as the storage layer. These files can be read by the HDF5 library version 1.8 or later and, from what we can tell, by pytables. If ncdump returns classic or 64-bit offset, then you are using netCDF-3 and will need to convert to netCDF-4 (and thus HDF5). The good news is that if you have NetCDF installed, you can do the conversion pretty easily:
{{{
#!bash
     
}}}

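The same check and conversion can also be done from Python. A minimal sketch, assuming the netCDF4 and xarray Python packages are available in the loaded environment (the file names are placeholders):
{{{
#!python
from netCDF4 import Dataset
import xarray as xr

# print the underlying data model, e.g. NETCDF4, NETCDF4_CLASSIC or NETCDF3_CLASSIC
with Dataset('foo.nc') as nc:
    print(nc.data_model)

# rewrite a netCDF-3 file as netCDF-4 (HDF5-based)
xr.open_dataset('foo3.nc').to_netcdf('foo4.nc', format='NETCDF4')
}}}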
Graphical HDF5 viewer:\\
HDFView is a handy tool to explore HDF5 files. To open HDFView on JURECA, either click on the HDFView icon on the desktop, or launch HDFView by
{{{
#!bash
     
The files are stored at /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/

The files cover the months June and August 2017 in steps of one hour.
For every file, its date is recorded in its filename, e.g. 2017061516_ml.nc (YYYYMMDDHH).

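If the individual time steps are needed in a script, the timestamp can be recovered from the filename with the Python standard library; a small sketch (the filename is just an example):
{{{
#!python
from datetime import datetime
import os.path

fname = '2017061516_ml.nc'
# the first 10 characters of the basename encode YYYYMMDDHH
stamp = datetime.strptime(os.path.basename(fname)[:10], '%Y%m%d%H')
print(stamp)   # -> 2017-06-15 16:00:00
}}}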
Some interesting variables stored in the 3D *ml-files:
     
- ciwc (1 x 137 x 601 x 1200): Specific cloud ice water content
- clwc (1 x 137 x 601 x 1200): Specific cloud liquid water content
- d (1 x 137 x 601 x 1200): Divergence of wind
- o3 (1 x 137 x 601 x 1200): Ozone mass mixing ratio
- q (1 x 137 x 601 x 1200): Specific humidity
     
- lat (601): latitude (degrees north), ranging from 90 to -90
- lon (1200): longitude (degrees east), ranging from 0 to 359.7
- lev, lev_2 (137): hybrid_sigma_pressure, ranging from 1 to 137 (137 is ground level)

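The variables, their dimensions and shapes can also be inspected from Python, e.g. with the netCDF4 package (a sketch; the filename is one of the hourly files mentioned above):
{{{
#!python
from netCDF4 import Dataset

path = '/data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/2017061516_ml.nc'
with Dataset(path) as nc:
    # list every variable with its dimension names and shape
    for name, var in nc.variables.items():
        print(name, var.dimensions, var.shape)
}}}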
Calculation of coordinates:\\
!ParaView does not understand the original coordinates (lat, lon, lev_2) natively. Therefore, these must be converted into a "structured grid" data structure, in which every node of the 3D grid has its own 3D coordinate. This calculation is done in the "generate_coordinates.py" script.
In this script, the conversion from spherical to Cartesian coordinates is essentially done via:
{{{
#!python
     
  z = height * np.sin(lat*3.14/180)
}}}
As you can see, "height" is just some virtual height and does not correspond to the real extent of the Earth, but this approach is sufficient for visualization.
The generated coordinates are stored in the newly created file "coordinates.h5".

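For reference, the complete conversion follows the standard spherical-to-Cartesian formulas. A minimal, self-contained sketch for one height level (the coordinate arrays here are illustrative and not taken from the actual script):
{{{
#!python
import numpy as np

# illustrative coordinate arrays: lat/lon in degrees, one virtual height level
lat    = np.linspace(90.0, -90.0, 601)[:, np.newaxis]    # (601, 1)
lon    = np.linspace(0.0, 359.7, 1200)[np.newaxis, :]    # (1, 1200)
height = 1.0                                             # virtual height of this level

x = height * np.cos(lat*np.pi/180) * np.cos(lon*np.pi/180)
y = height * np.cos(lat*np.pi/180) * np.sin(lon*np.pi/180)
z = height * np.sin(lat*np.pi/180)
}}}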
ATTENTION: !ParaView can read this "structured grid", but cannot volume-render it, as volume rendering with [https://www.ospray.org/ OspRay] or on the GPU only works for data of type "image data", which is a 3D rectilinear grid. Therefore, the filter "Resample to Image" must be applied to the reader in !ParaView later!

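In a paraview.simple script this corresponds to something like the following sketch (the XDMF file name and the sampling dimensions are placeholders; see the XDMF generation below):
{{{
#!python
from paraview.simple import *

reader = XDMFReader(FileNames=['./era5.xdmf'])       # hypothetical XDMF file, see below
resampled = ResampleToImage(Input=reader)
resampled.SamplingDimensions = [600, 600, 137]       # choose a resolution that fits into memory

view = GetActiveViewOrCreate('RenderView')
display = Show(resampled, view)
display.SetRepresentationType('Volume')              # enable volume rendering on the image data
}}}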
Create XDMF file:\\
The HDF5 files are loaded via an XDMF reader. To enable this, an XDMF file must be created first, containing information about all time steps and all variables. The script "make_xdmf.py" does this. The script essentially scans the directory where the data files are located and collects the names of all files for one month (the definition of the month is fixed in the script).

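A minimal sketch of such a scan (the glob pattern and the hard-coded month are assumptions about how the script selects its files):
{{{
#!python
import glob

data_dir = '/data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/'
month = '201706'                                   # month selection fixed in the script
files = sorted(glob.glob(data_dir + month + '*_ml.nc'))
print(len(files), 'files found')
}}}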
Variables that can later be loaded into !ParaView are noted in the script in the form of a Python list:
{{{
     
== !ParaView ==
=== Loading the necessary modules ===
For !ParaView v5.5.0, the needed modules can be found out with "module spider !ParaView/5.5.0".
Load modules e.g. with:
{{{
     

=== ParaView GUI ===
First load the modules as described above, then start the !ParaView GUI on JURECA via vglrun:
{{{
#!bash
     
The GUI is well suited to prototype the scene. In the GUI one can define the visualization pipeline with its parameters, i.e. the readers and filters, the color tables and the camera positions. However, for various reasons it makes sense to script the visualization with paraview-python:
- In the script all parameters are recorded in text form, so one can look up these parameters later
- You can save the state of your !ParaView session in so-called state files. But unfortunately, loading the saved state files sometimes does not work
- !ParaView has a memory leak, so after a number of render steps you have to quit !ParaView and restart it at the aborted location (otherwise !ParaView would crash because it eats up all memory). This restart procedure can be automated using a script (see the sketch below).

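A minimal sketch of such a restart driver; the frame-range arguments of script.py are an assumption, the real script may organize the restart differently:
{{{
#!python
import subprocess

frames_per_run = 50                 # number of frames to render before restarting pvpython
total_frames = 720                  # e.g. one month in hourly steps

for start in range(0, total_frames, frames_per_run):
    stop = min(start + frames_per_run, total_frames)
    # every run renders one chunk of frames and then exits, releasing the leaked memory
    subprocess.run(['pvpython', './script.py', str(start), str(stop)], check=True)
}}}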
     
How To transfer pipeline parameters from GUI to script:\\
In the !ParaView GUI, start a Python trace by Tools->Start Trace.
Then create the pipeline you want. Most of the corresponding Python commands are displayed in the trace. These commands can be transferred into a script with copy & paste and, if needed, modified there.

How To transfer colormaps from GUI to script:\\
Once you have designed a good colormap in the GUI, you can save it there as a preset. This preset can then be renamed and saved to disk as a *.json file.
Since a different colormap makes sense for each variable, one will end up with more than one colormap file. In this example the naming scheme for colormap files is "stein_''variable''.json", e.g. "stein_vo.json" for the vorticity. This naming scheme is expected in the Python scripts, which among other things load the color tables:
{{{
#!python
attribute = 'vo'
ImportPresets(filename='./stein_' + attribute + '.json')
}}}

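After importing, the preset can be applied to the color transfer function of the corresponding variable, e.g. (a sketch; it assumes the preset name stored inside the json file follows the same "stein_''variable''" scheme):
{{{
#!python
# continuing from the snippet above (attribute = 'vo')
lut = GetColorTransferFunction(attribute)
lut.ApplyPreset('stein_' + attribute, True)   # True rescales the lookup table to the data range
}}}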
How To transfer camera parameters from GUI to script:\\
You can save four (and more) camera positions in the !ParaView GUI. Click on the camera icon ("Adjust Camera"), then "configure", then "Assign current view". The camera positions can be permanently saved in an XML file via "export" and can later be loaded and used in the Python script e.g. with:
{{{
#!python
     
   camera.SetParallelScale(float(root[camIdx-1][1][0][0][6][0].attrib['value']))

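# ET is xml.etree.ElementTree (i.e. the script needs: import xml.etree.ElementTree as ET)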
attribute = 'vo'
tree = ET.parse('camera_' + attribute + '.pvcvbc')
root = tree.getroot()
     
{{{
#!bash
bash -c 'export DISPLAY=:0.0 && pvserver --disable-xdisplay-test' & sleep 2; pvpython ./script.py
}}}

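If script.py is meant to do its rendering through the pvserver started this way, it has to connect to that server first; a minimal sketch of the corresponding lines at the top of the script (an assumption about how script.py is organized):
{{{
#!python
from paraview.simple import *

# attach to the pvserver running on the same node (default port 11111)
Connect('localhost')
}}}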
== Sample session for using the attached scripts ==
All files needed to run this sample session can be found in the attachment at the end of this page! Just download from there.\\

In this sample session, the typical usage of the scripts is demonstrated. All necessary files (scripts, colormaps, camera positions, texture) should be placed into one directory. At the end, 10 images for the two variables "ciwc" and "clwc" should be generated.

=== Prerequisites ===