wiki:Examples/Ear5Animating

Version 3 (modified by Herwig Zilken, 6 years ago) ( diff )

--

Visualisation and Animation of ERA5 Reanalysis Climate Data

Data Formats and Tools

NetCDF and HDF5 on JURECA

As HDF5 is the preferred data format for ParaView, a conversion from netCDF-3 to netCDF-4 (which is in fact an HDF5 file) may be necessary. So one has to deal with some netCDF and HDF5 tools.

How to find out what modules to load for NetCDF/HDF5:

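On JURECA the module hierarchy is managed with Lmod, so the search looks e.g. like this (a sketch; the exact package names reported depend on the installed stages):

```shell
# Search the module hierarchy for the NetCDF and HDF5 packages
# and show which stage/compiler/MPI combinations provide them.
module spider netCDF
module spider HDF5
```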

Load modules e.g. by:

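For example, following the module stack listed under Prerequisites below (the netCDF and HDF5 module versions here are assumptions — check them with "module spider"):

```shell
module use otherstages
module --force purge
module load Stages/Devel-2017b GCC/7.2.0 ParaStationMPI/5.2.0-1
module load netCDF HDF5    # pick the versions that "module spider" reports for this stage
```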

Just some examples of how to use h5dump:

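For instance (the file and dataset names here are hypothetical):

```shell
h5dump -n coordinates.h5                   # list the contents (groups and datasets)
h5dump -H coordinates.h5                   # print header information only, no data
h5dump -A coordinates.h5                   # print the header plus attribute values
h5dump -d /coordinates/x coordinates.h5    # dump a single dataset
```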

How to find out the version of a netcdf file:

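ncdump reports the format with the "-k" (kind) option, e.g.:

```shell
ncdump -k 2017061516_ml.nc
# prints one of: classic, 64-bit offset, netCDF-4, netCDF-4 classic model
```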

If ncdump returns "netCDF-4" or "netCDF-4 classic model", then congratulations: you already have an HDF5 file, as netCDF-4 is the netCDF data model implemented using HDF5 as the storage layer. These files can be read by the HDF5 library version 1.8 or later and, from what I can tell, by pytables. If ncdump returns "classic" or "64-bit offset", then you are using netCDF-3 and will need to convert to netCDF-4 (and thus HDF5). The good news is that if you have netCDF installed, you can do the conversion quite easily:

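A minimal sketch using nccopy from the netCDF tools (file names hypothetical):

```shell
# nccopy rewrites the file in the requested format;
# "-k 4" selects netCDF-4 (HDF5-based) output.
nccopy -k 4 input_netcdf3.nc output_netcdf4.nc
```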

Graphical HDF5 viewer: HDFView is a very handy tool to investigate HDF5 files. To open HDFView on JURECA, either click on the HDFView icon on the desktop, or launch it from the command line:

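A sketch of the command-line launch (the module name and the launcher command are assumptions — verify with "module spider HDFView"):

```shell
module load HDFView    # module name is an assumption
hdfview
```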

Python on JURECA

To use Python on JURECA, you have to load some modules first. Here are some useful commands:

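For example, mirroring the module list from the Prerequisites section below:

```shell
module spider Python    # find the available Python modules first
module use otherstages
module --force purge
module load Stages/Devel-2017b GCC/7.2.0 ParaStationMPI/5.2.0-1 h5py/2.7.1-Python-2.7.14
python --version        # should report the Python 2.7.14 from the loaded stage
```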

Overview about the Data Files

The files are stored at /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/

The files cover the period of time for the month June and August 2017 in 1 h steps. Example of a filename: 2017061516_ml.nc (YYYYMMDDHH)

Some interesting variables stored in the *_ml.nc files:

  • cc (1 x 137 x 601 x 1200): Fraction of cloud cover
  • ciwc (1 x 137 x 601 x 1200): Specific cloud ice water content
  • clwc (1 x 137 x 601 x 1200): Specific cloud liquid water content
  • d (1 x 137 x 601 x 1200): divergence_of_wind
  • o3 (1 x 137 x 601 x 1200): Ozone mass mixing ratio
  • q (1 x 137 x 601 x 1200): Specific humidity
  • t (1 x 137 x 601 x 1200): Temperature
  • u (1 x 137 x 601 x 1200): U component of wind (eastward wind)
  • v (1 x 137 x 601 x 1200): V component of wind (northward wind)
  • w (1 x 137 x 601 x 1200): Vertical velocity
  • vo (1 x 137 x 601 x 1200): Vorticity (relative)

Variables related to coordinates:

  • lat (601): latitude (degrees north), ranging from 90 to -90
  • lon (1200): longitude (degrees east), ranging from 0 to 359.7
  • lev, lev_2 (137): hybrid_sigma_pressure, ranging from 1 to 137 (137 is ground level!)

Calculation of Coordinates:
ParaView does not natively understand the original coordinates (lat, lon, lev_2). Therefore, these must be converted into a "structured grid" data structure; see the "generate_coordinates.py" script. In this script the conversion to Cartesian coordinates essentially takes place via:

  height = (137.0 - levIn[i])*.5 + 150
  x = height * np.cos(lat*3.14/180)*np.cos(lon*3.14/180*1.002)
  y = height * np.cos(lat*3.14/180)*np.sin(lon*3.14/180*1.002)
  z = height * np.sin(lat*3.14/180)

The generated coordinates are stored in the newly created file "coordinates.h5". ATTENTION: ParaView can read this "structured grid", but cannot volume-render it, as volume rendering only works well for data of type "image data". Therefore, the filter "Resample To Image" must be applied to the reader in ParaView!
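The mapping above can be sketched as a self-contained NumPy snippet (the function name is illustrative; the scale factors 0.5, 150 and 1.002 are taken from the formulas above):

```python
import numpy as np

def to_cartesian(lev, lat_deg, lon_deg):
    """Map (model level, latitude, longitude) to Cartesian coordinates
    as in generate_coordinates.py: ground level 137 maps to radius 150."""
    height = (137.0 - lev) * 0.5 + 150.0
    lat = lat_deg * 3.14 / 180.0
    lon = lon_deg * 3.14 / 180.0 * 1.002   # slight longitude stretch, as in the script
    x = height * np.cos(lat) * np.cos(lon)
    y = height * np.cos(lat) * np.sin(lon)
    z = height * np.sin(lat)
    return x, y, z

# Ground level (137) at lat=0, lon=0 lies on the sphere of radius 150:
x, y, z = to_cartesian(137.0, 0.0, 0.0)
```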

Create XDMF file:
The hdf5 files are loaded via an xdmf reader. To enable this, an xdmf file must be created first. The script "make_xdmf.py" does this. The script essentially scans the directory where the files are located and gets the names of all files for one month (the month is fixed in the script). Variables that can be later loaded into ParaView are noted in the script in a python list:

 scalars=["cc", "ciwc", "clwc", "q", "d", "vo", "o3", "w"]

The name of the xdmf-output file is structuredgrid_201709.xdmf.

ParaView

Loading the necessary modules

The required modules can be found with "module spider ParaView/5.5.0". Load them e.g. with:

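A plausible sketch, reusing the compiler/MPI stack from the Prerequisites section (the exact dependency list is what "module spider" reports):

```shell
module spider ParaView/5.5.0    # shows which modules must be loaded first
module use otherstages
module --force purge
module load Stages/Devel-2017b GCC/7.2.0 ParaStationMPI/5.2.0-1 ParaView/5.5.0
```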

ParaView GUI

First load the modules as described above, then start ParaView GUI

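For example (on a VNC session vglrun routes the OpenGL rendering to the GPU, as with pvpython later on; on a local X display plain "paraview" may suffice):

```shell
vglrun paraview
```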

The GUI is well suited to prototype the scene. In the GUI one can define the visualization pipeline with its parameters, i.e. the readers and filters, the color tables and the camera positions. However, for various reasons it makes sense to script the visualization with paraview-python:

  • In the script all parameters are recorded in text form
  • Loading ParaView GUI states sometimes does not work
  • ParaView has a memory leak, so after a number of render steps you have to quit ParaView and restart it at the aborted location (otherwise ParaView would crash). This can be automated using a script.

From GUI to Script

How To transfer pipeline parameters from GUI to script:
In the ParaView-GUI, start a Python trace by Tools->Start Trace. Then create the pipeline you want. (Most of) the corresponding Python commands are displayed in the trace. These can be transferred into a script with copy & paste.

How To transfer colormaps from GUI to script:
Once you have designed a good colormap in the GUI, you can save it there as a preset. This preset can then be renamed and saved to disk as a *.json file. Since a different colormap makes sense for each variable, one will end up with more than one colormap file. In this example the naming scheme for colormap files is "stein_variable.json", e.g. "stein_vo.json" for the vorticity. This naming scheme is expected in the Python scripts, which among other things load the color tables.

How To transfer camera parameters from GUI to script:
You can save four (and more) camera positions in the ParaView GUI. Click on the camera icon ("Adjust Camera"), then "configure", then "Assign current view". The camera positions can be saved in an XML file via "export" and can later be loaded and used in the Python script e.g. with:

import xml.etree.ElementTree as ET
from paraview.simple import *

def assignCameraParameters(root, camera, camIdx):
   # The nested indices follow the structure of the XML file that
   # ParaView's "Adjust Camera" -> "configure" -> "export" dialog writes.
   cam = root[camIdx-1][1][0][0]
   camera.SetPosition(float(cam[0][0].attrib['value']), float(cam[0][1].attrib['value']), float(cam[0][2].attrib['value']))
   camera.SetFocalPoint(float(cam[1][0].attrib['value']), float(cam[1][1].attrib['value']), float(cam[1][2].attrib['value']))
   camera.SetViewUp(float(cam[2][0].attrib['value']), float(cam[2][1].attrib['value']), float(cam[2][2].attrib['value']))
   camera.SetParallelScale(float(cam[6][0].attrib['value']))

# "attribute" holds the name of the current variable, e.g. "vo"
tree = ET.parse('camera_' + attribute + '.pvcvbc')
root = tree.getroot()
camera = GetActiveCamera()
assignCameraParameters(root, camera, 1)

As you can see above, the script expects the naming convention "camera_variable.pvcvbc", e.g. "camera_vo.pvcvbc" for the vorticity.

How-To start a ParaView script

Generally a ParaView Python script is started with "pvpython script.py" (or, on a VNC-server, by "vglrun pvpython script.py"). On JURECA, however, the following approach is necessary: first start a pvserver on display :0.0 and let pvpython connect to this server. This can be done in one command line:

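A sketch of such a one-liner, roughly what a wrapper like make_movie_pvserver.sh might contain (the wait time is an assumption, and the Python script must call Connect() to reach the server):

```shell
# Start a pvserver on display :0.0, give it time to come up,
# then run the rendering script through pvpython.
DISPLAY=:0.0 vglrun pvserver & sleep 10 && pvpython paraview_make_movie_pvserver.py
```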

Sample session for using the scripts:

In the sample session, the typical usage of the scripts is demonstrated. All necessary files (scripts, colormaps, camera positions, texture) should be copied into the same directory. At the end, 10 images for the two variables "ciwc" and "clwc" should be generated.

Prerequisites

  • Data is located as described above in /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/
  • On JURECA, the modules for Python are loaded, e.g. with "module use otherstages && module --force purge && module load Stages/Devel-2017b GCC/7.2.0 ParaStationMPI/5.2.0-1 h5py/2.7.1-Python-2.7.14"
  • The following scripts are installed: generate_coordinates.py, make_xdmf.py, make_movie_pvserver.sh, paraview_make_movie_pvserver.py
  • Color tables for two attributes are installed: stein_ciwc.json and stein_clwc.json
  • Camera positions for two attributes are installed: camera_ciwc.pvcvbc and camera_clwc.pvcvbc
  • Texture earth_heller_transformed.jpg is installed

Step 1: Create coordinates

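With the Python modules from the Prerequisites loaded, a plausible invocation is:

```shell
python generate_coordinates.py
```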

As a result, the file "./coordinates.h5" should be generated.

Step 2: Create XDMF file

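Analogously, a plausible invocation for the XDMF script is:

```shell
python make_xdmf.py
```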

As a result, the file "./structuredgrid_201709.xdmf" should be created.

Step 3: Render images

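Presumably via the shell wrapper listed under Prerequisites, which starts the pvserver/pvpython combination described above:

```shell
./make_movie_pvserver.sh
```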

Rendering takes about 10 minutes. As a result, 10 images for the variables "ciwc" and "clwc" should be generated. The images can be viewed on JURECA with "/usr/local/jsc/etc/xdg/scripts/launch_gpicview.sh image*.jpg".
