Changes between Version 11 and Version 12 of Examples/Brain


Timestamp: 04/11/17 14:39:26
Author: Herwig Zilken
= Animated movie of neuroscience brain data using ParaView
[[PageOutline]]

== Background
At FZJ, the institute for [http://www.fz-juelich.de/inm/inm-1/EN/Home/home_node.html structural and functional organisation of the brain (INM-1)] develops a 3-D model of the human brain which considers cortical architecture, connectivity, genetics and function. The INM-1 research group [http://www.fz-juelich.de/inm/inm-1/EN/Forschung/Fibre%20Architecture/Fibre%20Architecture_node.html Fiber Architecture] develops techniques to reconstruct the three-dimensional nerve fiber architecture in mouse, rat, monkey, and human brains at microscopic resolution. As a key technology, the neuroimaging technique Three-dimensional Polarized Light Imaging (3D-PLI) is used. To determine the spatial orientations of the nerve fibers, a fixated and frozen postmortem brain is cut with a cryotome into histological sections (≤ 70 µm). Every slice is then scanned by high-resolution microscopes, resulting in a uniform rectilinear grid of scanned points.

The data used in this visualisation scenario consists of 234 slices of size 31076 x 28721 each, resulting in a rectilinear uniform grid of size 31076 x 28721 x 234 (~200 GB in total). Originally the data was stored as raw binary unsigned char data, one file per slice.

== Conversion to HDF5
Because !ParaView has a reliable XDMF/HDF5 reader, we decided to convert the raw data to HDF5 first.
This was done using a Python script. Before Python can be used on our JURECA cluster, the necessary modules have to be loaded first:
{{{

}}}

The Python script scans the directory containing the 234 slice files for the names of the raw files. Every file is opened and its raw content is loaded into a NumPy array, which is then written into the dataset of an HDF5 file created beforehand.

{{{

numY = 31076

# scan directory for the filenames of the raw files
files = glob.glob(dir + "/*.raw")
numSlices = len(files) # actually 234 slices for this specific dataset

dset = fout.create_dataset("PLI", (numSlices, numX, numY), dtype=np.uint8)

for rawFilename in sorted(files):
   print "processing " + rawFilename

}}}

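The abridged listing above can be sketched end-to-end as follows. This is not the original script: the value of `numX`, the function name `convert` and the use of `np.fromfile` are assumptions; the dataset name "PLI", `numY` and the one-file-per-slice layout are taken from the text.

{{{
#!python
import glob

import numpy as np
import h5py

def convert(rawDir, h5Filename, numX, numY):
    # scan directory for the filenames of the raw files
    files = sorted(glob.glob(rawDir + "/*.raw"))
    with h5py.File(h5Filename, "w") as fout:
        # one uint8 dataset holding all slices
        dset = fout.create_dataset("PLI", (len(files), numX, numY), dtype=np.uint8)
        for i, rawFilename in enumerate(files):
            print("processing " + rawFilename)
            # read one raw slice and store it at slice index i
            data = np.fromfile(rawFilename, dtype=np.uint8)
            dset[i, :, :] = data.reshape(numX, numY)
}}}

For the real dataset this would be called as e.g. `convert(dir, "brain.h5", 28721, 31076)`, with the slice dimensions from the background section.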
== Creating XDMF Files
!ParaView needs proper XDMF files to be able to read the data from the HDF5 file. We generated two XDMF files by hand: one for the full-size dataset, and one for loading a spatially subsampled version via a hyperslab.

The XDMF file for the full-size dataset is quite simple and defines just the uniform rectilinear grid with one attribute named 'PLI'. It is good practice to normalize the spatial extent (size) of the grid by setting the grid spacing accordingly. We decided that the grid should have the extent "1.0" in the direction of its longest axis, which is 31076 pixels.\\
Therefore the grid spacing is set to 1.0/31076=3.21792e-5 for the Y- and Z-axis. The X-axis is the direction in which the brain slices have been cut and has a much lower resolution (by a factor of 40). Therefore its grid spacing is 40 times larger, resulting in 40*3.21792e-5=1.2872e-3.
The origin of the grid was set to its center.
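The spacing arithmetic from the paragraph above, spelled out (a trivial sketch; the variable names are ours):

{{{
#!python
# longest axis: 31076 pixels should span an extent of 1.0
spacingYZ = 1.0 / 31076      # ~3.21792e-5, used for the Y- and Z-axis
spacingX = 40 * spacingYZ    # slice direction is coarser by a factor of 40
print(spacingYZ, spacingX)
}}}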

{{{

}}}

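The listing above is abridged in this view; such an XDMF description boils down to roughly the following sketch. This is not the original file: the grid name, the origin values, the dimension ordering and the filename brain.h5 are assumptions, while the spacing values and the dataset name 'PLI' are from the text.

{{{
#!xml
<?xml version="1.0" ?>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="Brain" GridType="Uniform">
      <!-- dimensions are listed slowest-varying first and must match
           the layout of the HDF5 dataset -->
      <Topology TopologyType="3DCoRectMesh" Dimensions="234 28721 31076"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <!-- origin: grid centered around (0,0,0); assumed values -->
        <DataItem Dimensions="3" Format="XML">-0.1506 -0.4621 -0.5</DataItem>
        <!-- spacing: 40*3.21792e-5 for the slice axis, 3.21792e-5 otherwise -->
        <DataItem Dimensions="3" Format="XML">1.2872e-3 3.21792e-5 3.21792e-5</DataItem>
      </Geometry>
      <Attribute Name="PLI" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="234 28721 31076" NumberType="UChar" Format="HDF">brain.h5:/PLI</DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
}}}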
As the dataset is relatively large (200 GB), we decided to generate a second XDMF file which only reads a subsampled version of the data (every 4th pixel in Y- and Z-direction, 12.5 GB). This is convenient because the loading time is much shorter and the handling of the data is smoother. The subsampling can be done via the "hyperslab" construct in the XDMF file and adds a little more complexity to the description. Please note that the grid spacing has to be adapted to the subsampled grid size, as we want the spatial extent of both the full-size and the subsampled grid to be 1.0.

'''Subsampled Version:'''
{{{

}}}

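The hyperslab construct selects every 4th pixel via a start/stride/count triple. The attribute's DataItem might look roughly like this (a sketch, not the original file; the counts are computed from the full dimensions, brain.h5 is an assumed filename):

{{{
#!xml
<Attribute Name="PLI" AttributeType="Scalar" Center="Node">
  <DataItem ItemType="HyperSlab" Dimensions="234 7181 7769">
    <!-- rows: start / stride / count, one column per axis -->
    <DataItem Dimensions="3 3" Format="XML">
      0 0 0
      1 4 4
      234 7181 7769
    </DataItem>
    <DataItem Dimensions="234 28721 31076" NumberType="UChar" Format="HDF">brain.h5:/PLI</DataItem>
  </DataItem>
</Attribute>
}}}

The grid spacing in the corresponding Geometry section then has to be multiplied by 4 for the Y- and Z-axis so that the spatial extent stays 1.0.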
== Prototyping the Movie
The final movie is generated by controlling the rendering process in !ParaView via a Python script (pvpython). Before this is done, we need to identify good camera positions for camera flights, proper color- and opacity-tables, maybe a volume of interest, and so on.

To find out the necessary parameters, the data can be loaded and examined with the normal !ParaView GUI first. Within the GUI the user can interactively adjust camera positions, color tables, and so on, in a very convenient way. The obtained parameters can later be used in the final Python script.\\
While prototyping the movie, it is also very helpful to open the Python trace window of !ParaView, where most parameter changes (interactively made in the GUI) are shown as Python commands. These commands can, with little changes, be used in the final Python script for the production of the movie.

== Generating the Movie by Python Scripting
In the next sections the resulting Python script is shown step by step.
=== Preliminary Steps
First of all one has to import paraview.simple and open a view of the correct size (e.g. 1920x1080 = full HD or 3840x2160 = 4K). It is mandatory to switch on offscreen rendering, as otherwise the size of the view would be limited by the size of the X window desktop.
{{{
#!python

from paraview.simple import *

Connect() # connect to built-in paraview
# it's also possible to connect to a pvserver, e.g. pv.Connect('localhost')

paraview.simple._DisableFirstRenderCameraReset()

renderView.ViewSize = [1920*2, 1080*2]
}}}
=== Loading the Data and Switching on Volume Rendering
The data is loaded and then visualized with the [http://www.ospray.org/ OSPRay] volume renderer. Proper color and opacity functions and a good camera position are set.
{{{
#!python

}}}

=== Camera Animation: Path-based (Orbit)
To orbit around the data (rotate the data), the camera can be animated in 'Path-based' mode. In this example the !CameraTrack and the !AnimationScene are used in conjunction with two keyframes to render 100 images. Please note that in 'Path-based' mode the locations of ALL camera positions on the track are stored in !PositionPathPoints of the first keyframe!
{{{
#!python

}}}

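Since !PositionPathPoints is just a flat list of x,y,z triples, the orbit positions can be computed in plain Python. A sketch (not from the original script; center, radius and point count are assumed values):

{{{
#!python
import math

def orbit_path_points(center, radius, n):
    # n points on a circle of the given radius around center,
    # returned as a flat list [x0,y0,z0, x1,y1,z1, ...]
    points = []
    for i in range(n):
        angle = 2.0 * math.pi * i / n
        points.extend([center[0] + radius * math.cos(angle),
                       center[1] + radius * math.sin(angle),
                       center[2]])
    return points

path = orbit_path_points([0.0, 0.0, 0.0], 2.5, 20)
# e.g. assigned to the first keyframe as keyFrame.PositionPathPoints = path
}}}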
=== Camera Animation: Interpolate Camera
For whatever reason, !ParaView offers a second option for camera animation, called "interpolate camera".
In this mode, the Position-, !FocalPoint- and !ViewUp-vectors of a given number of keyframes are interpolated.
This way a camera flight consisting of 2, 3 or more keyframes can be animated. Here is an example for 3 keyframes.
     
{{{
#!python
# t: interpolation step, in range [0.0, 1.0]
def interpolate(t, list1, list2):
  if len(list1) != len(list2):

}}}
Resulting movie snippet:
[raw-attachment:opacity.wmv Animation of Opacity]
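The body of `interpolate` is truncated in the listing above; presumably it performs component-wise linear interpolation between two equally long lists (e.g. opacity table values or camera vectors). A self-contained sketch under that assumption:

{{{
#!python
# t: interpolation step, in range [0.0, 1.0]
def interpolate(t, list1, list2):
    if len(list1) != len(list2):
        raise ValueError("lists must have the same length")
    # component-wise linear blend between list1 (t=0) and list2 (t=1)
    return [(1.0 - t) * a + t * b for a, b in zip(list1, list2)]

print(interpolate(0.5, [0.0, 0.0, 0.0], [1.0, 2.0, 4.0]))  # [0.5, 1.0, 2.0]
}}}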

=== Rendering a Subset of the Grid (Animated Volume of Interest)
Not only the data source, but also the output of !ParaView filters can be visualized. In this example we will use the !ExtractSubset filter to visualize a smaller part of the whole grid.
The input of the !ExtractSubset filter has to be connected to the output of the XDMF reader.
Then the script animates the volume of interest in 81 steps, fading away slices from the top and bottom side of the grid. At the end, about 70 slices located in the middle of the grid are visible.

{{{

}}}

Resulting movie snippet:
[raw-attachment:voi.wmv Animation of Volume of Interest]
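The extent animation described above can be sketched in plain Python (assumed names; in the real script the computed range would be assigned to the filter's VOI and a frame rendered in each iteration). With 234 slices and one slice removed from each side per step, 234 - 2*81 = 72 middle slices remain at the end, matching the "about 70 slices" above:

{{{
#!python
numSlices = 234
steps = 81

for step in range(steps + 1):
    first = step                   # first visible slice
    last = numSlices - 1 - step    # last visible slice
    # in the real script, e.g.:
    # extractSubset.VOI = [first, last, 0, numY - 1, 0, numZ - 1]
    # Render(); SaveScreenshot(...)
    voi = (first, last)

print(first, last, last - first + 1)  # final visible range and slice count
}}}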