= Animated movie of neuroscience brain data with ParaView
[[PageOutline]]

== Background
At FZJ, the institute for [http://www.fz-juelich.de/inm/inm-1/EN/Home/home_node.html structural and functional organisation of the brain (INM-1)] develops a 3-D model of the human brain which considers cortical architecture, connectivity, genetics, and function. The INM-1 research group [http://www.fz-juelich.de/inm/inm-1/EN/Forschung/Fibre%20Architecture/Fibre%20Architecture_node.html Fiber Architecture] develops techniques to reconstruct the three-dimensional nerve fiber architecture in mouse, rat, monkey, and human brains at microscopic resolution. As a key technology, the neuroimaging technique Three-dimensional Polarized Light Imaging (3D-PLI) is used. To determine the spatial orientations of the nerve fibers, a fixed and frozen postmortem brain is cut with a cryotome into histological sections (≤ 70 µm). Each section is then scanned with high-resolution microscopes.

== Data
The dataset used in this visualisation scenario consists of 234 slices with a grid size of 31076x28721 each, resulting in a rectilinear uniform grid of size 31076x28721x234, about 200 GB in total. The data was stored as raw binary unsigned char values, one file per slice.
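The quoted total follows directly from the grid dimensions, since each voxel is a single unsigned char (1 byte):

{{{
#!python
# Total size of the volume: one byte per voxel.
numX, numY, numSlices = 31076, 28721, 234  # grid size from the dataset description
totalBytes = numX * numY * numSlices
print(totalBytes / 1e9)  # roughly 208.9 GB
}}}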

== Conversion to HDF5
Because !ParaView has a very capable XDMF/HDF5 reader, we decided to convert the raw data to HDF5 first.
This was done with a Python script. Before Python can be used on our JURECA cluster, the necessary modules have to be loaded:
{{{
module load GCC/5.4.0
module load ParaStationMPI/5.1.5-1
module load h5py/2.6.0-Python-2.7.12
module load HDF5/1.8.17
}}}

In the Python script, the directory containing the 234 slice files is scanned for filenames. Each file is opened, its raw content is read into a numpy array, and the array is written into the corresponding slice of a dataset in a newly created HDF5 file.

{{{
#!python
import glob
import sys

import h5py  # http://www.h5py.org/
import numpy as np

dataDir = "/homeb/zam/zilken/JURECA/projekte/hdf5_inm_converter/Vervet_Sehrinde_rightHem_direction/data"
hdf5Filename = "/homeb/zam/zilken/JURECA/projekte/hdf5_inm_converter/Vervet_Sehrinde_rightHem_direction/data/Vervet_Sehrinde.h5"

# grid size of one slice
numX = 28721
numY = 31076

# scan the directory for the raw slice files, sorted by filename
files = sorted(glob.glob(dataDir + "/*.raw"))
numSlices = len(files)  # 234 slices for this specific dataset

# create the HDF5 file
fout = h5py.File(hdf5Filename, 'w')
# create a dataset of type unsigned char = uint8 in the HDF5 file
dset = fout.create_dataset("PLI", (numSlices, numX, numY), dtype=np.uint8)

for i, rawFilename in enumerate(files):
    print("processing " + rawFilename)
    sys.stdout.flush()

    # open each raw file and read its content
    fin = open(rawFilename, "rb")
    v = np.fromfile(fin, dtype=np.uint8, count=numX * numY)
    fin.close()

    # store the slice at index i along the first (z) axis of the dataset
    dset[i, :, :] = v.reshape(numX, numY)

print("success")
fout.close()
}}}
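A quick way to check the conversion is to reopen the file read-only and inspect the dataset layout; HDF5 reports shape and dtype without loading the ~200 GB volume into memory. The helper below is a small sketch (the function name is ours, not part of the conversion script):

{{{
#!python
import h5py

def inspectPliVolume(path, dataset="PLI"):
    """Open the converted file read-only and report the dataset layout."""
    with h5py.File(path, "r") as f:
        dset = f[dataset]
        print(path, dset.shape, dset.dtype)
        return dset.shape, dset.dtype

# For the dataset above this should report (234, 28721, 31076) and uint8:
# inspectPliVolume("Vervet_Sehrinde.h5")
}}}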

== Creating XDMF Files

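!ParaView does not open the HDF5 file directly; it needs a small XDMF file describing the grid and pointing at the dataset. The snippet below generates such a description for a uniform grid (`3DCoRectMesh`) as a sketch: the origin and spacing values are placeholders, and the file name and dimension order (slowest axis first, matching the `(numSlices, numX, numY)` dataset layout) are taken from the conversion step above.

{{{
#!python
import textwrap

# Minimal XDMF template for a uniform grid stored as uint8 ("UChar") in HDF5.
# Origin and spacing are placeholders; adjust them to the real voxel size.
XDMF_TEMPLATE = textwrap.dedent("""\
    <?xml version="1.0" ?>
    <Xdmf Version="2.0">
      <Domain>
        <Grid Name="PLI" GridType="Uniform">
          <Topology TopologyType="3DCoRectMesh" Dimensions="{nz} {ny} {nx}"/>
          <Geometry GeometryType="ORIGIN_DXDYDZ">
            <DataItem Dimensions="3" Format="XML">0.0 0.0 0.0</DataItem>
            <DataItem Dimensions="3" Format="XML">1.0 1.0 1.0</DataItem>
          </Geometry>
          <Attribute Name="PLI" AttributeType="Scalar" Center="Node">
            <DataItem Dimensions="{nz} {ny} {nx}" NumberType="UChar"
                      Format="HDF">{h5file}:/PLI</DataItem>
          </Attribute>
        </Grid>
      </Domain>
    </Xdmf>
    """)

def writeXdmf(path, h5file, nz, ny, nx):
    """Write an XDMF file that points ParaView at the HDF5 dataset."""
    with open(path, "w") as f:
        f.write(XDMF_TEMPLATE.format(h5file=h5file, nz=nz, ny=ny, nx=nx))

# For the dataset above:
# writeXdmf("Vervet_Sehrinde.xmf", "Vervet_Sehrinde.h5", 234, 28721, 31076)
}}}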