wiki:Examples/Ear5Animating

Visualisation and Animation of ERA5 Reanalysis Climate Data

Data Formats and Tools

NetCDF and HDF5 on JURECA

The typical data format in climate science is NetCDF. Although NetCDF files can be loaded in ParaView directly, this approach is rather inflexible, as ParaView's NetCDF reader makes fixed assumptions about the naming conventions of the variables. Fortunately, NetCDF files generated with the newer NetCDF version 4 are based on HDF5 and can be loaded with ParaView's XDMF reader. If the NetCDF files are from the older version 3, a conversion from NetCDF-3 to NetCDF-4 is necessary. So one has to deal with some NetCDF- and HDF5-related tools.

How to find out what modules to load for NetCDF/HDF5 on JURECA:

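The module system on JURECA is Lmod, so candidate modules can be listed with module spider (the module names below are assumptions; check the spider output for your software stage):

  module spider netCDF
  module spider HDF5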

For JURECA software stage 2018a, the needed modules are:

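A plausible load sequence, modelled on the module line quoted in the Prerequisites section below (the netCDF and HDF5 module names are assumptions; pick the exact versions reported by module spider):

  module --force purge
  module load Stages/2018a GCC/5.5.0 MVAPICH2/2.3a-GDR
  module load netCDF HDF5   # assumed names; choose the versions listed by "module spider"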

Just as a reminder, here are some examples of how to use h5dump:

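For instance, using one of the data files described below as an example (standard h5dump options):

  h5dump -n 2017061516_ml.nc      # list all groups and datasets in the file
  h5dump -H 2017061516_ml.nc      # print only the header (structure and attributes), no data
  h5dump -d /t 2017061516_ml.nc   # dump a single dataset, here /t (temperature)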

How to find out the version of a netcdf file:

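The file kind can be queried with ncdump's -k option, e.g.:

  ncdump -k 2017061516_ml.nc   # prints e.g. "classic", "64-bit offset", "netCDF-4" or "netCDF-4 classic model"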

If ncdump returns netCDF-4 or netCDF-4 classic model, then congratulations, you already have an HDF5 file, as netCDF-4 is the netCDF data model implemented using HDF5 as the storage layer. These files can be read by the HDF5 library version 1.8 or later and, from what we can tell, by PyTables. If ncdump returns classic or 64-bit offset, then you are using netCDF-3 and will need to convert to netCDF-4 (and thus HDF5). The good news is that if you have NetCDF installed, you can do the conversion pretty easily:

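For example with nccopy, which rewrites the file in the requested format (the file names here are only placeholders):

  nccopy -k netCDF-4 input_v3.nc output_v4.nc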

Graphical HDF5 viewer:
HDFView is a handy tool to explore HDF5 files. To open HDFView on JURECA, either click on the HDFView icon on the desktop, or launch HDFView by

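The exact launcher depends on the JURECA installation; assuming an HDFView module is available in the software stage, something along these lines should work (module and binary names are assumptions):

  module spider HDFView   # assumed module name; shows available versions
  module load HDFView     # load the version reported by spider
  hdfview &               # start the graphical viewer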

Python on JURECA

To use Python and h5py on JURECA, you have to load some modules first. Here are some useful commands:

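For example (this is the same module combination quoted in the Prerequisites section of the sample session below):

  module spider h5py                         # find out which toolchain modules h5py needs
  module --force purge
  module load Stages/2018a GCC/5.5.0 MVAPICH2/2.3a-GDR h5py/2.7.1-Python-2.7.14
  python -c "import h5py; print(h5py.version.info)"   # quick check that h5py can be imported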

Overview of the Data Files

The files are stored at /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/

The files cover the months June and August 2017 in steps of one hour. For every file, its date is recorded in its filename, e.g. 2017061516_ml.nc (YYYYMMDDHH).

Some interesting variables stored in the 3D *_ml.nc files:

  • cc (1 x 137 x 601 x 1200): Fraction of cloud cover
  • ciwc (1 x 137 x 601 x 1200): Specific cloud ice water content
  • clwc (1 x 137 x 601 x 1200): Specific cloud liquid water content
  • d (1 x 137 x 601 x 1200): Divergence of wind
  • o3 (1 x 137 x 601 x 1200): Ozone mass mixing ratio
  • q (1 x 137 x 601 x 1200): Specific humidity
  • t (1 x 137 x 601 x 1200): Temperature
  • u (1 x 137 x 601 x 1200): U component of wind (eastward wind)
  • v (1 x 137 x 601 x 1200): V component of wind (northward wind)
  • w (1 x 137 x 601 x 1200): Vertical velocity
  • vo (1 x 137 x 601 x 1200): Vorticity (relative)

Variables related to coordinates:

  • lat (601): latitude (degrees north), ranging from 90 to -90
  • lon (1200): longitude (degrees east), ranging from 0 to 359.7
  • lev, lev_2 (137): hybrid_sigma_pressure, ranging from 1 to 137 (137 is ground level)

Calculation of coordinates:
ParaView does not understand the original coordinates (lat, lon, lev_2) natively. Therefore, these must be converted into a "structured grid" data structure, in which every node of the 3D grid has its own 3D coordinate. This calculation is done in the "generate_coordinates.py" script. The script also performs the conversion from spherical to Cartesian coordinates, essentially via:

  height = (137.0 - levIn[i])*.5 + 150
  x = height * np.cos(lat*3.14/180)*np.cos(lon*3.14/180*1.002)
  y = height * np.cos(lat*3.14/180)*np.sin(lon*3.14/180*1.002)
  z = height * np.sin(lat*3.14/180)

As you can see, the calculation of "height" yields just a virtual height and does not correspond to the real extent of the earth, but this approach is sufficient for visualization. The generated coordinates are stored in the newly created file "coordinates.h5". ATTENTION: ParaView can read this "structured grid", but cannot volume-render it, as volume rendering (with OSPRay or GPU-based) only works for data of type "image data", which is a 3D rectilinear grid. Therefore, the filter "Resample To Image" must be applied to the reader in ParaView later!

Create XDMF file:
The HDF5 files are loaded via an XDMF reader. To enable this, an XDMF file must be created first, containing information about all time steps and all variables. The script "make_xdmf.py" does this: it scans the directory where the data files are located and collects the names of all files for one month (the month is fixed in the script). The variables that can later be loaded into ParaView are listed in the script as a Python list:

 scalars=["cc", "ciwc", "clwc", "q", "d", "vo", "o3", "w"]

The name of the XDMF output file is structuredgrid_201709.xdmf.

ParaView

Loading the necessary modules

For ParaView 5.5.0 the needed modules can be determined with "module spider ParaView/5.5.0". Load the modules e.g. with:

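For example (a sketch assuming the 2018a stage; load exactly the toolchain modules that module spider ParaView/5.5.0 reports):

  module --force purge
  module load Stages/2018a
  module load ParaView/5.5.0   # plus the GCC/MPI modules listed by "module spider ParaView/5.5.0"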

ParaView GUI

First load the modules as described above, then start ParaView GUI on JURECA via vglrun:

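With the modules loaded, the GUI is started through VirtualGL:

  vglrun paraview &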

The GUI is well suited to prototype the scene. In the GUI one can define the visualization pipeline with its parameters, i.e. the readers and filters, the color tables and the camera positions. However, for various reasons it makes sense to script the visualization with paraview-python:

  • In the script all parameters are recorded in text form, so one can look up these parameters later
  • You can save the state of your ParaView session in so-called state files, but unfortunately loading saved state files sometimes does not work
  • ParaView has a memory leak. Therefore, after a number of render steps you have to quit ParaView and restart it at the point where it stopped (otherwise ParaView would crash because it eats up all memory). This restart procedure can be automated with a script.

From GUI to Script

How to transfer pipeline parameters from GUI to script:
In the ParaView GUI, start a Python trace via Tools -> Start Trace, then create the pipeline you want. (Most of) the corresponding Python commands are displayed in the trace. These commands can be copied into a script and, if needed, modified there.

How to transfer colormaps from GUI to script:
Once you have designed a good colormap in the GUI, you can save it there as a preset. This preset can then be renamed and saved to disk as a *.json file. Since a different colormap makes sense for each variable, one will end up with more than one colormap file. In this example the naming scheme for colormap files is "stein_variable.json", e.g. "stein_vo.json" for the vorticity. This naming scheme is expected by the Python script, which among other things loads the color tables:

attribute = 'vo'
ImportPresets(filename='./stein_' + attribute + '.json')

How to transfer camera parameters from GUI to script:
You can save four (and more) camera positions in the ParaView GUI. Click on the camera icon ("Adjust Camera"), then "configure", then "Assign current view". The camera positions can be permanently saved in an XML file via "export" and can later be loaded and used in the Python script e.g. with:

import xml.etree.ElementTree as ET
from paraview.simple import *
def assignCameraParameters(root, camera, camIdx):
   # copy position, focal point, view-up vector and parallel scale of the
   # camIdx-th saved custom view from the exported camera XML to the given camera
   camera.SetPosition(float(root[camIdx-1][1][0][0][0][0].attrib['value']), float(root[camIdx-1][1][0][0][0][1].attrib['value']), float(root[camIdx-1][1][0][0][0][2].attrib['value']))
   camera.SetFocalPoint(float(root[camIdx-1][1][0][0][1][0].attrib['value']), float(root[camIdx-1][1][0][0][1][1].attrib['value']), float(root[camIdx-1][1][0][0][1][2].attrib['value']))
   camera.SetViewUp(float(root[camIdx-1][1][0][0][2][0].attrib['value']), float(root[camIdx-1][1][0][0][2][1].attrib['value']), float(root[camIdx-1][1][0][0][2][2].attrib['value']))
   camera.SetParallelScale(float(root[camIdx-1][1][0][0][6][0].attrib['value']))

attribute = 'vo'
tree = ET.parse('camera_' + attribute + '.pvcvbc')
root = tree.getroot()
camera = GetActiveCamera()
assignCameraParameters(root, camera, 1)

As you can see above, the script expects the naming convention "camera_variable.pvcvbc", e.g. "camera_vo.pvcvbc" for the vorticity.

How to start a ParaView Python script

Generally, a ParaView Python script is started with "pvpython script.py" (or, on a VNC server such as JURECA, with "vglrun pvpython script.py"). On JURECA, however, the following approach is necessary: first start a pvserver on display :0.0 and let pvpython connect to this server. This can be done in one command line:

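A sketch of this pattern (the attached make_movie_pvserver.sh wraps it; the wait time and the vglrun usage are assumptions, and the Python script is expected to connect to the server, e.g. via Connect("localhost")):

  DISPLAY=:0.0 vglrun pvserver & sleep 10 && pvpython paraview_make_movie_pvserver.py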

Hints for generating a textured sphere

Due to some bugs in ParaView, a nice-looking textured sphere must be defined as follows:

  • Don't set the EndTheta of the sphere to 360. Instead, use a slightly lower value: "sphere.EndTheta = 359.9999"
  • To use a texture, the TextureMaptoSphere filter must be attached to the sphere via "textureMaptoSphere = TextureMaptoSphere(Input=sphere)". To work correctly, the PreventSeam attribute has to be disabled: "textureMaptoSphere.PreventSeam = 0"
  • Here is how to attach a texture image to the sphere (tested with ParaView 5.5):
    textureMaptoSphere = TextureMaptoSphere(Input=sphere)
    textureMaptoSphere.PreventSeam = 0
    textureMaptoSphereDisplay = Show(textureMaptoSphere, renderView)
    textureMaptoSphereDisplay.Representation = 'Surface'
    texProxy = servermanager.CreateProxy("textures", "ImageTexture")
    texProxy.GetProperty("FileName").SetElement(0, "./images_earth/earth_heller_transformed.jpg")
    texProxy.UpdateVTKObjects()
    textureMaptoSphereDisplay.Texture = texProxy
    

Sample session on how to use the attached scripts

All files needed to run this sample session can be found in the attachment at the end of this page! Just download from there.

In this sample session, the typical usage of the scripts is demonstrated. All necessary files (scripts, colormaps, camera positions, texture) should be placed into one directory. At the end, 10 images for the two variables "ciwc" and "clwc" should be generated.

Prerequisites

  • Data is located as described above in /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/
  • On JURECA, the modules for Python and H5PY are loaded, e.g. with "module --force purge && module load Stages/2018a GCC/5.5.0 MVAPICH2/2.3a-GDR h5py/2.7.1-Python-2.7.14"
  • The following scripts are available: generate_coordinates.py, make_xdmf.py, make_movie_pvserver.sh, paraview_make_movie_pvserver.py
  • Color tables for two attributes are available: stein_ciwc.json and stein_clwc.json
  • Camera positions for two attributes are available: camera_ciwc.pvcvbc and camera_clwc.pvcvbc
  • Texture earth_heller_transformed.jpg is available

Step 1: Create coordinates

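With the Python/h5py modules loaded (see Prerequisites), run the script; a sketch assuming it takes no arguments:

  python generate_coordinates.py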

As a result, the file "./coordinates.h5" should be generated.

Step 2: Create XDMF file

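Analogously, a sketch assuming the script takes no arguments:

  python make_xdmf.py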

As a result, the file "./structuredgrid_201709.xdmf" should be created.

Step 3: Render images

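A sketch, assuming the wrapper script needs no arguments (it starts the pvserver and the ParaView Python script as described above):

  ./make_movie_pvserver.sh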

Rendering takes about 10 minutes. As a result, 10 images for the variables "ciwc" and "clwc" should be generated. The images can be viewed with GPicView on JURECA via "/usr/local/jsc/etc/xdg/scripts/launch_gpicview.sh image*.jpg".

Attachments (1)
