Changes between Version 5 and Version 6 of vnc3d/manual


Timestamp: 05/19/16 16:29:45
Author: Herwig Zilken

  • vnc3d/manual (v5 → v6)
  ssh <USERID>@jureca.fz-juelich.de
  }}}
+ sbatch is used to start a job script. In this case the job script should start the VNC server and could look like:
+ {{{
+ #!/bin/bash
  
- salloc is used to request an allocation.\\
- When the job is started, a shell (or other program specified on the command line) is started on the submission host (login node). \\
- From the shell, srun can be used to interactively spawn parallel applications. \\
- The allocation is released when the user exits the shell.
- 
- To allocate a vis node with 512 GByte main memory for one hour, use:
- {{{ #!sh
- # request allocation and spawn VNC server on visualization node
- salloc -N 1 -p vis --gres=mem512,gpu:1 --start-xserver --time=1:00:00
- srun -n 1 --cpu_bind=none --gres=gpu:1 vncserver -fg -profile vis -geometry 1920x1080
- 
- ...
- Desktop 'TurboVNC: <NODE>:<DISPLAY> (profile <PROFILE>)'
-  started on display <NODE>:<DISPLAY> (<NODE>:<DISPLAY>)
- ...
+ #SBATCH --job-name=vnc
+ #SBATCH --nodes=1
+ #SBATCH --ntasks-per-node=24
+ #SBATCH --gres=gpu:2
+ #SBATCH --partition=vis
+ #SBATCH --time=24:00:00
+ #SBATCH --mem=512000
+ vncserver -fg -profile vis -geometry 1920x1080
  }}}
  
- salloc/srun options
- * -N 1            -> Set number of requested nodes.
- * -p vis          -> Limit request to nodes from the visualization partition.
- * --gres=mem512   -> Set the size of main memory per node (mem512 or mem1024).
- * --gres=gpu:2    -> Set the number of requested GPUs in the range of 0-2.
- * --start-xserver -> Start an Xserver for usage with VirtualGL.
- * --time=1:00:00  -> Set the default wallclock time to 1 hour (the maximum is 24 hours).
- * Please check 'salloc --help' and 'srun --help' for more details.
+ Once you have created this batch script, you can submit it with
  
+ {{{ #!sh
+ sbatch --start-xserver ''name_of_jobscript''
+ }}}
+ 
+ The option '--start-xserver' tells sbatch to start an X server on the allocated nodes, which is important if you want to do hardware rendering on a GPU.
+ Once the job is started, you have to find out the name of the node on which the job is running and the number of the VNC display. To do this, look at the output file generated by Slurm, e.g.:
+ 
+ {{{ #!sh
+ cat slurm-2063855.out
+ 
+ Desktop 'TurboVNC: jrc1386:2 (profile vis)' started on display jrc1386:2 (jrc1386:2)
+ 
+ Starting applications specified in /etc/turbovnc/xstartup.turbovnc
+ Log file is /homeb/.../your_user_name/.vnc/jrc1386:2.log
+ }}}
+ 
+ In this case the node is '''jrc1386''' and the display number is '''2'''. This information is needed to set up the ssh tunnel (see step 3).
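The node name and display number can also be pulled out of that "started on display" line with plain shell parameter expansion; a minimal sketch, assuming the output format shown above (the variable names are illustrative, and in practice the line would come from the real Slurm output file):

```shell
# Example line, copied from the Slurm output above.
line="Desktop 'TurboVNC: jrc1386:2 (profile vis)' started on display jrc1386:2 (jrc1386:2)"
# In a real session you would read it from the output file instead, e.g.:
#   line=$(grep 'started on display' slurm-2063855.out)
rest=${line#*started on display }   # drop everything up to the node name -> "jrc1386:2 (jrc1386:2)"
node=${rest%%:*}                    # text before the first colon -> node name
display=${rest#*:}                  # drop node name and colon
display=${display%% *}              # text before the first space -> display number
echo "node=$node display=$display"  # prints: node=jrc1386 display=2
```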
  
  === 3. Tunnel VNC traffic to workstation
  …
  {{{ #!ShellExample
  ssh -N -L <5900+DISPLAY>:<NODE>:<5900+DISPLAY> <USERID>@jureca.fz-juelich.de
- # example: ssh -N -L 5902:jrc1327:5902 jjuser@jureca.fz-juelich.de
+ # example: ssh -N -L 5902:jrc1386:5902 jjuser@jureca.fz-juelich.de
  }}}
  
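The port numbers in the tunnel command follow the standard VNC convention that display N listens on TCP port 5900 + N; a small sketch of that arithmetic, using the example node and display from the Slurm output above:

```shell
# Standard VNC port convention: display N listens on TCP port 5900 + N.
node="jrc1386"   # node name taken from the Slurm output (example value)
display=2        # display number taken from the Slurm output (example value)
port=$((5900 + display))
# Build the tunnel command from the computed port:
echo "ssh -N -L ${port}:${node}:${port} <USERID>@jureca.fz-juelich.de"
# prints: ssh -N -L 5902:jrc1386:5902 <USERID>@jureca.fz-juelich.de
```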