= Animated movie of neuroscience brain data with ParaView

[[PageOutline]]

== Background

At FZJ, the institute for [http://www.fz-juelich.de/inm/inm-1/EN/Home/home_node.html structural and functional organisation of the brain (INM-1)] develops a 3-D model of the human brain which considers cortical architecture, connectivity, genetics and function. The INM-1 research group [http://www.fz-juelich.de/inm/inm-1/EN/Forschung/Fibre%20Architecture/Fibre%20Architecture_node.html Fiber Architecture] develops techniques to reconstruct the three-dimensional nerve fiber architecture in mouse, rat, monkey, and human brains at microscopic resolution. The key technology is the neuroimaging technique Three-dimensional Polarized Light Imaging (3D-PLI). To determine the spatial orientations of the nerve fibers, a fixated and frozen postmortem brain is cut with a cryotome into histological sections (≤ 70 µm). Every slice is then scanned by high-resolution microscopes.

The dataset used in this visualisation scenario consists of 234 slices with a grid size of 31076x28721 each, resulting in a rectilinear uniform grid of size 31076x28721x234 (~200 GB in total). The data was stored as raw binary unsigned char data, one file per slice.

== Conversion to HDF5

Because !ParaView has a very usable XDMF/HDF5 reader, we decided to convert the raw data to HDF5 first. This was done using a Python script. Before Python can be used on our JURECA cluster, the necessary modules have to be loaded:

{{{
module load GCC/5.4.0
module load ParaStationMPI/5.1.5-1
module load h5py/2.6.0-Python-2.7.12
module load HDF5/1.8.17
}}}

The Python script scans the directory containing the 234 slice files for the names of the raw files. Every file is opened and its raw content is read into a numpy array, which is then written to the right position in the previously created HDF5 file.

{{{
#!python
import sys
import h5py  # http://www.h5py.org/
import numpy as np
import glob

dir = "/homeb/zam/zilken/JURECA/projekte/hdf5_inm_converter/Vervet_Sehrinde_rightHem_direction/data"
hdf5Filename = "/homeb/zam/zilken/JURECA/projekte/hdf5_inm_converter/Vervet_Sehrinde_rightHem_direction/data/Vervet_Sehrinde.h5"

# grid size of one slice
numX = 28721
numY = 31076

# scan directory for filenames
files = glob.glob(dir + "/*.raw")
numSlices = len(files)  # actually 234 slices for this specific dataset

# create hdf5 file
fout = h5py.File(hdf5Filename, 'w')
# create a dataset in the hdf5 file of type unsigned char = uint8
dset = fout.create_dataset("PLI", (numSlices, numX, numY), dtype=np.uint8)

i = 0
for rawFilename in sorted(files):
    print "processing " + rawFilename
    sys.stdout.flush()
    # open each raw file
    fin = open(rawFilename, "rb")
    # and read the content
    v = np.fromfile(fin, dtype=np.uint8, count=numX*numY)
    fin.close()
    v = v.reshape(1, numX, numY)
    # store the data in the hdf5 file at the right place
    dset[i, :, :] = v
    # advance to the next slice
    i = i + 1

print "success"
fout.close()
}}}

== Creating XDMF Files

!ParaView needs proper XDMF files to be able to read the data from an HDF5 file. We generated two XDMF files by hand: one for the full-size dataset, and one that loads a spatially subsampled version via a hyperslab.

The XDMF file for the full-size dataset is quite simple and defines just the uniform rectilinear grid with one attribute. It is good practice to normalize the spatial extent of the grid by setting the grid spacing accordingly. We decided that the grid should have the extent 1.0 along its largest axis. So the grid spacing is 1.0/31076 = 3.21792e-5 for the Y- and Z-axis. The X-axis is the direction in which the slices are cut; therefore its grid spacing is larger by a factor of 40, resulting in 40 * 3.21792e-5 = 1.2872e-3. The origin was chosen such that the center of the grid lies at (0, 0, 0).
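These values can be reproduced with a few lines of Python; a minimal sketch of the normalization, using the grid dimensions and the factor 40 stated above:

{{{
#!python
# grid dimensions: 234 slices of 28721 x 31076 pixels
dims = (234, 28721, 31076)

# normalize: the largest axis gets the extent 1.0
dy = dz = 1.0 / max(dims)      # 3.21792e-5
dx = 40 * dz                   # slice direction is 40x coarser: 1.2872e-3

# choose the origin so that the grid is centered around (0, 0, 0)
origin = (-0.5 * dims[0] * dx,   # -0.151
          -0.5 * dims[1] * dy,   # -0.4621
          -0.5 * dims[2] * dz)   # -0.5
print origin, (dx, dy, dz)
}}}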
'''Fullsize Version:'''

{{{
#!xml
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="Brain" GridType="Uniform">
      <Topology TopologyType="3DCoRectMesh" Dimensions="234 28721 31076"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <DataItem Name="Origin" Dimensions="3" NumberType="Float" Format="XML">
          -0.151 -0.4621 -0.5
        </DataItem>
        <DataItem Name="Spacing" Dimensions="3" NumberType="Float" Format="XML">
          1.2872e-3 3.21792e-5 3.21792e-5
        </DataItem>
      </Geometry>
      <Attribute Name="PLI" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="234 28721 31076" NumberType="UChar" Format="HDF">
          Vervet_Sehrinde.h5:/PLI
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
}}}

As the dataset is relatively large (200 GB), we generated a second XDMF file which reads only a subsampled version of the data (every 4th pixel in Y- and Z-direction, 12.5 GB). This is convenient because the loading time is much shorter and the interaction with the data is more fluid. The subsampling can be done via the "hyperslab" construct in XDMF, which adds a bit more complexity to the description. Please note that the grid spacing has to be adapted to the subsampled grid size accordingly.

'''Subsampled Version:'''

{{{
#!xml
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="Brain" GridType="Uniform">
      <Topology TopologyType="3DCoRectMesh" Dimensions="234 7180 7760"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <DataItem Name="Origin" Dimensions="3" NumberType="Float" Format="XML">
          -0.151 -0.4626 -0.5
        </DataItem>
        <DataItem Name="Spacing" Dimensions="3" NumberType="Float" Format="XML">
          1.28866e-3 1.28866e-4 1.28866e-4
        </DataItem>
      </Geometry>
      <Attribute Name="PLI" AttributeType="Scalar" Center="Node">
        <DataItem ItemType="HyperSlab" Dimensions="234 7180 7760">
          <DataItem Dimensions="3 3" Format="XML">
            0 0 0
            1 4 4
            234 7180 7760
          </DataItem>
          <DataItem Dimensions="234 28721 31076" NumberType="UChar" Format="HDF">
            Vervet_Sehrinde.h5:/PLI
          </DataItem>
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
}}}
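Whether the hyperslab selection (start, stride, count) really yields the expected subsampled grid can be checked in Python before rendering. A small sketch using h5py, reading just the first slice (file and dataset names as above):

{{{
#!python
import h5py

f = h5py.File("Vervet_Sehrinde.h5", "r")
dset = f["/PLI"]   # shape (234, 28721, 31076)

# same Y/Z selection as the hyperslab (start 0, stride 4), applied to slice 0 only
sub = dset[0, 0:28720:4, 0:31040:4]
print sub.shape    # (7180, 7760) -- matches the hyperslab count
f.close()
}}}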
== Prototyping the Movie

The final movie is generated by controlling the rendering process in !ParaView via a Python script (pvpython). Before this is done, we need to find good camera positions for the camera flights, proper color and opacity tables, possibly a volume of interest, and so on. To determine these parameters, the data can first be loaded into the !ParaView GUI, where camera positions, color tables etc. can be adjusted interactively. It is also very helpful to open the Python trace window of !ParaView, where most parameter changes made in the GUI are shown as Python commands. These commands can, with small changes, be used in the final Python script for the movie generation.

== Generating the Movie by Python Scripting

In the next sections the resulting Python script is shown step by step.

=== Preliminary Steps

First of all, one has to import paraview.simple and open a view of the desired size (e.g. 1920x1080 = full HD or 3840x2160 = 4K).

{{{
#!python
#### import the simple module from paraview
from paraview.simple import *

Connect()  # connect to the built-in server of paraview;
           # it is also possible to connect to a pvserver, e.g. Connect('localhost')
paraview.simple._DisableFirstRenderCameraReset()

# create a render view of 4K size
renderView = CreateRenderView('RenderView')
renderView.UseOffscreenRendering = 1
renderView.ViewSize = [1920*2, 1080*2]
}}}

=== Loading the Data

The data is loaded and then visualized with the [http://www.ospray.org/ OSPRay] volume renderer. Proper color and opacity functions are set.

{{{
#!python
vervet = XDMFReader(FileNames=['/homeb/zam/zilken/JURECA/projekte/hdf5_inm_converter/Vervet_Sehrinde_rightHem_direction/data/vervet_hyperslab.xdmf'])
vervet.PointArrayStatus = ['PLI']
vervet.GridStatus = ['Brain']

print "loading vervet data"
vervetDisplay = Show(vervet, renderView)
vervetDisplay.VolumeRenderingMode = 'OSPRay Based'
vervetDisplay.SetRepresentationType('Volume')
vervetDisplay.ColorArrayName = ['POINTS', '']
vervetDisplay.OSPRayScaleArray = 'PLI'
vervetDisplay.OSPRayScaleFunction = 'PiecewiseFunction'
vervetDisplay.SelectOrientationVectors = 'None'
vervetDisplay.ScaleFactor = 0.09998712839442306
vervetDisplay.SelectScaleArray = 'PLI'

# get color transfer function/color map for 'PLI'
pLILUT = GetColorTransferFunction('PLI')
pLILUT.ScalarRangeInitialized = 1.0
pLILUT.RGBPoints = [0.0, 0.33, 0.33, 0.5,
                    27.0, 1.0, 1.0, 0.9,
                    51.0, 0.9, 0.9, 0.0,
                    65.0, 0.9, 0.0, 0.0,
                    254.0, 0.0, 0.0, 0.0]
pLILUT.ColorSpace = 'RGB'

# get opacity transfer function/opacity map for 'PLI'
pLIPWF = GetOpacityTransferFunction('PLI')
pLIPWF.Points = [0.0, 0.0, 0.5, 0.0,
                 254.0, 1.0, 0.5, 0.0]
pLIPWF.ScalarRangeInitialized = 1

camera = GetActiveCamera()
camera.SetPosition(0.0, 0.0, 1.488)
camera.SetFocalPoint(0.0, 0.0, 0.0)
camera.SetViewUp(-1.0, 0.0, 0.0)
Render()
}}}

Resulting Image:\\
[[Image(image_0000.png, 640px)]]

=== Camera Animation: Path-based (Orbit)

To orbit around the data (rotate the data), the camera can be animated in 'Path-based' mode. The !CameraTrack and the !AnimationScene are used in conjunction with two keyframes to render 100 images. Please note that in 'Path-based' mode the locations of ALL camera positions on the track are stored in !PositionPathPoints of the first keyframe.

{{{
#!python
cameraAnimationCue1 = GetCameraTrack(view=renderView)
animationScene = GetAnimationScene()

# create first keyframe
# Note: this one keyframe stores all orbit camera positions in PositionPathPoints
keyFrame5157 = CameraKeyFrame()
keyFrame5157.Position = [0.0, 0.0, 1.488]
keyFrame5157.ViewUp = [-1.0, 0.0, 0.0]
keyFrame5157.ParallelScale = 0.6825949417738006
keyFrame5157.PositionPathPoints = [0.0, 0.0, 1.488,
                                   0.0, 1.1563931906479727, 0.9364287418821582,
                                   0.0, 1.455483629891903, -0.30937259593682587,
                                   0.0, 0.6755378636124458, -1.3258177079922915,
                                   0.0, -0.6052241248967907, -1.3593556409721905,
                                   0.0, -1.437297629518134, -0.3851227391125512,
                                   0.0, -1.2038172876299222, 0.8746244554111999]
keyFrame5157.FocalPathPoints = [0.0, 0.0, 0.0]
keyFrame5157.ClosedPositionPath = 1

# create last keyframe
keyFrame5158 = CameraKeyFrame()
keyFrame5158.KeyTime = 1.0
keyFrame5158.Position = [0.0, 0.0, 1.488]
keyFrame5158.ViewUp = [-1.0, 0.0, 0.0]
keyFrame5158.ParallelScale = 0.6825949417738006

# initialize the animation track
cameraAnimationCue1.Mode = 'Path-based'
cameraAnimationCue1.KeyFrames = [keyFrame5157, keyFrame5158]

numFrames = 100
imageNum = 0
animationScene.NumberOfFrames = numFrames
animationScene.GoToFirst()
for i in range(numFrames):
    print ("orbit state: rendering image %04d" % (imageNum))
    WriteImage("image_%04d.png" % (imageNum))
    imageNum = imageNum + 1
    animationScene.GoToNext()
}}}

Resulting movie snippet: [raw-attachment:orbit.wmv Camera Orbit]
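The seven !PositionPathPoints above were copied from a GUI trace; they lie roughly on a circle of radius 1.488 around the X-axis. Such an orbit path can also be computed instead of copied by hand; a minimal sketch that fills the first keyframe from above programmatically (point count and radius as in the trace):

{{{
#!python
import math

radius = 1.488   # distance of the camera from the focal point
numPoints = 7    # number of path points on the orbit circle

pathPoints = []
for k in range(numPoints):
    phi = 2.0 * math.pi * k / numPoints
    # circle in the Y-Z plane, i.e. an orbit around the X-axis
    pathPoints.extend([0.0, radius * math.sin(phi), radius * math.cos(phi)])

keyFrame5157.PositionPathPoints = pathPoints
}}}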
=== Camera Animation: Interpolate Camera

Besides the path-based mode, !ParaView offers a second option for camera animation, called 'Interpolate Camera'. In this mode, the Position, !FocalPoint and !ViewUp vectors of a given number of keyframes are interpolated. This way a camera flight consisting of two, three or more keyframes can be animated. Here is an example with three keyframes.

{{{
#!python
cameraAnimationCue1 = GetCameraTrack(view=renderView)
animationScene = GetAnimationScene()

# create key frame 1
keyFrame1 = CameraKeyFrame()
keyFrame1.KeyTime = 0.0
keyFrame1.Position = [0.0, 0.0, 1.488]
keyFrame1.FocalPoint = [0.0, 0.0, 0.0]
keyFrame1.ViewUp = [-1.0, 0.0, 0.0]
keyFrame1.ParallelScale = 0.8516115354228021

# create key frame 2
keyFrame2 = CameraKeyFrame()
keyFrame2.KeyTime = 0.14
keyFrame2.Position = [-0.28, -0.495, 1.375]
keyFrame2.FocalPoint = [-0.00016, -0.0003, -4.22e-5]
keyFrame2.ViewUp = [-0.981, 0.0255, -0.193]
keyFrame2.ParallelScale = 0.8516115354228021

# create key frame 3
keyFrame3 = CameraKeyFrame()
keyFrame3.KeyTime = 1.0
keyFrame3.Position = [-0.0237, -0.092, 0.1564]
keyFrame3.FocalPoint = [-0.000116651539708215, -0.000439441275715884, -9.75944360306001e-05]
keyFrame3.ViewUp = [-0.98677511491576, -0.0207684556984332, -0.160759272923493]
keyFrame3.ParallelScale = 0.8516115354228021

cameraAnimationCue1.KeyFrames = [keyFrame1, keyFrame2, keyFrame3]
cameraAnimationCue1.Mode = 'Interpolate Camera'

numFrames = 200
animationScene.NumberOfFrames = numFrames
animationScene.GoToFirst()
imageNum = 0
for i in range(numFrames):
    print ("interpolate camera: rendering image %04d" % (imageNum))
    WriteImage("picture_%04d.png" % (imageNum))
    imageNum = imageNum + 1
    animationScene.GoToNext()
}}}

Resulting movie snippet: [raw-attachment:camera_flight.wmv Camera Flight]
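Both loops above write the movie frames as numbered PNG files, which are encoded into the final movie snippets with an external tool afterwards. Alternatively, !ParaView can render and encode the whole animation in one call. A minimal sketch, assuming the animation scene is set up as above; the `WriteAnimation` call and its parameters apply to the !ParaView versions of this era (newer versions replace it with `SaveAnimation`), and the supported container formats depend on the build:

{{{
#!python
# render all frames of the current animation scene and encode them directly
# into a movie file (e.g. AVI, if supported by this ParaView build)
WriteAnimation("camera_flight.avi", Magnification=1, FrameRate=25.0)
}}}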