wiki:Examples/Ear5Animating

Version 2 (modified by Herwig Zilken, 6 years ago)

--

Visualisation and Animation of ERA5 Reanalysis Climate Data

Prerequisites

NetCDF/HDF5 (on JURECA)

How to find out what modules to load for NetCDF/HDF5:

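The exact module names and versions change with the JURECA software stages, so the names below are examples, not guaranteed:

```shell
# Query the module system for available builds (names are examples)
module spider netCDF
module spider HDF5
```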

Load modules e.g. by:

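A typical sequence loads a compiler/MPI toolchain first and then the libraries (the exact names and versions are assumptions; check "module spider"):

```shell
# Load a compiler/MPI toolchain, then the I/O libraries
module load Intel ParaStationMPI
module load netCDF HDF5
```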

Just some examples of how to use h5dump:

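For instance, using the example file from this page (the dataset path /t is an assumption based on the variable list below):

```shell
# Show only the header (structure and attributes, no data)
h5dump -H 2017061516_ml.nc
# List the contents (dataset names) of the file
h5dump -n 2017061516_ml.nc
# Dump a single dataset, e.g. the temperature field
h5dump -d /t 2017061516_ml.nc
```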

How to find out the version of a netcdf file:

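The file kind can be queried with ncdump:

```shell
# "-k" prints the kind of netCDF file:
# "classic", "64-bit offset", "netCDF-4", or "netCDF-4 classic model"
ncdump -k 2017061516_ml.nc
```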

If ncdump returns "netCDF-4" or "netCDF-4 classic model", then congratulations, you already have an HDF5 file, as netCDF-4 is the netCDF data model implemented with HDF5 as the storage layer. These files can be read by the HDF5 library version 1.8 or later and, from what I can tell, by PyTables. If ncdump returns "classic" or "64-bit offset", then you are using netCDF-3 and will need to convert to netCDF-4 (and thus HDF5). The good news is that if you have netCDF installed, you can do the conversion pretty easily:

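A common way to do the conversion is nccopy from the netCDF utilities (the file names here are placeholders):

```shell
# Rewrite a netCDF-3 file as netCDF-4/HDF5
nccopy -k netCDF-4 classic.nc netcdf4.nc
```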

Graphical HDF5 viewer: HDFView is a very handy tool to investigate HDF5 files. To open HDFView on JURECA, either click on the HDFView icon on the desktop or launch it from a shell:

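e.g. along these lines (the module name and the launcher script are assumptions):

```shell
# HDFView needs a Java runtime; then start the bundled launcher script
module load Java
hdfview.sh
```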

Python on JURECA

To use Python on JURECA, you have to load some modules first, e.g.

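For example (module names are typical for JURECA but may differ per software stage):

```shell
# Python plus the scientific stack (NumPy/SciPy, h5py for HDF5 access)
module load Intel ParaStationMPI
module load Python SciPy-Stack h5py
```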

Overview about the Data Files

The files are stored at /data/slmet/slmet111/met_data/ecmwf/era5/netcdf4/2017/

The files cover a period of two months, June and August 2017, in 1 h steps. Example of a filename: 2017061516_ml.nc (YYYYMMDDHH)

Some interesting variables stored in the *ml files, each with dimensions 1 x 137 x 601 x 1200:

  • cc: Fraction of cloud cover
  • ciwc: Specific cloud ice water content
  • clwc: Specific cloud liquid water content
  • d: Divergence of wind
  • o3: Ozone mass mixing ratio
  • q: Specific humidity
  • t: Temperature
  • u: U component of wind (eastward wind)
  • v: V component of wind (northward wind)
  • w: Vertical velocity
  • vo: Vorticity (relative)

Variables related to coordinates:
  • lat (601): latitude (degrees north), ranging from 90 to -90
  • lon (1200): longitude (degrees east), ranging from 0 to 359.7
  • lev, lev_2 (137): hybrid_sigma_pressure, ranging from 1 to 137 (137 is ground level!)

Calculation of Coordinates:
ParaView does not understand the original coordinates (lat, lon, lev_2) in this form. Therefore, they must be converted into a "structured grid" data structure; see the script "generate_coordinates.py". There, a conversion to Cartesian coordinates also takes place, essentially via:

  height = (137.0 - levIn[i])*.5 + 150
  x = height * np.cos(lat*3.14/180)*np.cos(lon*3.14/180*1.002)
  y = height * np.cos(lat*3.14/180)*np.sin(lon*3.14/180*1.002)
  z = height * np.sin(lat*3.14/180)

The generated coordinates are stored in the newly created file "coordinates.h5". ATTENTION: ParaView can read this "structured grid", but cannot volume-render it, as volume rendering only works well for image data. Therefore, the filter "Resample to Image" must be applied to the reader in ParaView!
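The conversion above can be sketched as a small, self-contained NumPy function (the function name and the reduced demo grid are chosen here for illustration; the real script is "generate_coordinates.py", whose 1.002 longitude stretch factor is omitted for clarity):

```python
import numpy as np

def spherical_grid(lat_deg, lon_deg, lev):
    """Map (lev, lat, lon) to Cartesian coordinates on spherical shells.
    Level 137 is ground level, so smaller lev values land on higher shells."""
    lat = np.deg2rad(lat_deg)[None, :, None]               # (1, nlat, 1)
    lon = np.deg2rad(lon_deg)[None, None, :]               # (1, 1, nlon)
    height = ((137.0 - lev) * 0.5 + 150.0)[:, None, None]  # (nlev, 1, 1)
    x = height * np.cos(lat) * np.cos(lon)
    y = height * np.cos(lat) * np.sin(lon)
    z = np.broadcast_to(height * np.sin(lat), x.shape)
    return x, y, z

# Reduced demo grid; the real files use lat(601), lon(1200), lev(137)
lat = np.linspace(90.0, -90.0, 5)
lon = np.arange(0.0, 360.0, 90.0)
lev = np.arange(1.0, 138.0)
x, y, z = spherical_grid(lat, lon, lev)
print(x.shape)  # (137, 5, 4)
```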

Create XDMF files:
The HDF5 files are read via an XDMF reader. To enable this, an XDMF file must be created; the script "make_xdmf.py" does this. The script essentially scans the directory where the files are located and collects the names of all files for one month (the month is fixed in the script). Variables that can later be read into ParaView are listed in the script via:

 scalars=["cc", "ciwc", "clwc", "q", "d", "vo", "o3", "w"]

The name of the XDMF output file is structuredgrid_201709.xdmf.
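The structure such a script produces can be sketched roughly as follows (the XML layout is a generic XDMF time-series skeleton and the helper names are invented; the real output of "make_xdmf.py" additionally references the geometry from "coordinates.h5"):

```python
# Hypothetical sketch of the kind of XDMF wrapper make_xdmf.py generates:
# a temporal collection whose grids reference datasets inside the HDF5
# files. Helper names are invented; the real script also references the
# geometry stored in coordinates.h5.
scalars = ["cc", "ciwc", "clwc", "q", "d", "vo", "o3", "w"]
dims = "137 601 1200"

def grid_for_file(fname, time_index):
    """Build one <Grid> element referencing the datasets inside fname."""
    attrs = "\n".join(
        f'      <Attribute Name="{s}" Center="Node">\n'
        f'        <DataItem Format="HDF" Dimensions="{dims}">{fname}:/{s}</DataItem>\n'
        f'      </Attribute>'
        for s in scalars)
    return (f'    <Grid Name="step{time_index}" GridType="Uniform">\n'
            f'      <Time Value="{time_index}"/>\n{attrs}\n    </Grid>')

files = ["2017061516_ml.nc"]  # make_xdmf.py collects a whole month here
body = "\n".join(grid_for_file(f, i) for i, f in enumerate(files))
xdmf = ('<?xml version="1.0"?>\n'
        '<Xdmf Version="3.0">\n  <Domain>\n'
        '  <Grid GridType="Collection" CollectionType="Temporal">\n'
        f'{body}\n'
        '  </Grid>\n  </Domain>\n</Xdmf>')
with open('structuredgrid_201709.xdmf', 'w') as out:
    out.write(xdmf)
```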

ParaView

Loading the necessary modules

The required modules can be found out with "module spider ParaView/5.5.0". Load the modules e.g. with:

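e.g. (the toolchain names are assumptions):

```shell
# Find the dependency chain, then load it (versions are examples)
module spider ParaView/5.5.0
module load Intel ParaStationMPI ParaView/5.5.0
```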

ParaView GUI

First load the modules, then start the ParaView GUI:

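e.g.:

```shell
# Start the GUI (on visualization nodes a VirtualGL wrapper such as
# "vglrun paraview" may be needed; this depends on the node setup)
paraview
```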

The GUI is well suited to prototyping the scene, i.e. to defining the visualization pipeline with its parameters: the filters, the color tables and the camera positions. However, for various reasons it makes sense to script the visualization in ParaView:

  • In the script all parameters are recorded in text form
  • Loading ParaView GUI states sometimes does not work
  • ParaView has a memory leak, so after a few render steps you have to quit ParaView and restart it at the aborted location. This can be automated using a script.

From GUI to Script

How-To transfer pipeline parameters from GUI to script:
In the ParaView-GUI, start a Python trace by Tools->Start Trace. Then create the pipeline you want. The corresponding Python commands are displayed in the trace. These can be transferred into a script with copy & paste.

How-To transfer colormaps from GUI to script:
Once you have designed a good colormap, you can save it as a preset. This preset can then be renamed and saved to disk as a *.json file. Since a different colormap makes sense for each variable, the naming scheme for colormap files in this example is "stein_variable.json", e.g. "stein_vo.json" for the vorticity. This naming scheme is expected by the Python scripts, which, among other things, load the color tables.

How-To transfer camera parameter from GUI to script:
You can save four camera positions in ParaView. Click on the camera icon ("Adjust Camera"), then "configure", then "Assign current view". The camera positions can be saved in an XML file via "export" and can later be read in and used in the Python script e.g. with:

import xml.etree.ElementTree as ET
from paraview.simple import *

def assignCameraParameters(root, camera, camIdx):
    # The nested indices follow the layout of the exported camera XML:
    # root[camIdx-1][1][0][0] holds position (child 0), focal point (1),
    # view up (2) and parallel scale (6) of camera number camIdx.
    props = root[camIdx - 1][1][0][0]
    camera.SetPosition(*(float(props[0][i].attrib['value']) for i in range(3)))
    camera.SetFocalPoint(*(float(props[1][i].attrib['value']) for i in range(3)))
    camera.SetViewUp(*(float(props[2][i].attrib['value']) for i in range(3)))
    camera.SetParallelScale(float(props[6][0].attrib['value']))

# "attribute" is the variable name, e.g. "vo", matching the colormap naming scheme
tree = ET.parse('camera_' + attribute + '.pvcvbc')
root = tree.getroot()
camera = GetActiveCamera()
assignCameraParameters(root, camera, 1)
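The index-based access above is brittle with respect to the XML layout; the parsing pattern itself can be tried out on a synthetic file (the XML layout here is invented for illustration, it is not the real .pvcvbc format):

```python
import xml.etree.ElementTree as ET

# Minimal synthetic example of the pattern used above: reading numeric
# 'value' attributes with ElementTree. Layout invented for illustration.
doc = """
<Settings>
  <Camera>
    <Position>
      <Element value="10.0"/>
      <Element value="0.0"/>
      <Element value="2.5"/>
    </Position>
  </Camera>
</Settings>
"""
root = ET.fromstring(doc)
position = [float(e.attrib['value']) for e in root.find('Camera/Position')]
print(position)  # [10.0, 0.0, 2.5]
```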

Attachments (1)

