The WestGrid remote visualization capability exists as part of the Parallel (parallel.westgrid.ca) cluster at the University of Calgary. Parallel has 60 12-core nodes with 3 GPUs each. The GPUs are NVIDIA Tesla M2070s, each with about 5.5 GB of memory. These nodes are primarily targeted at GPU computation but a subset of the nodes are available for remote visualization (for more information on using the GPUs for computation, please refer to the WestGrid GPU Computation page).
Using the Parallel GPUs for remote visualization is a two-part process. First, log in to parallel.westgrid.ca, request a visualization node, and start a remote visualization server (TurboVNC) that uses one of the GPUs. Then, start the remote visualization client on your desktop computer and connect it to the visualization server running on the visualization node. Note that in order to successfully use a remote visualization node, you must install the client software on your local computer.
If you are interested in a general discussion about your visualization options within WestGrid, please refer to the Visualization Quickstart guide. If you want to see a summary of all related visualization resources in WestGrid, please refer to the main Visualization page.
Then, log in to Parallel (parallel.westgrid.ca) and run the command

vncpasswd

Enter a password when prompted. This establishes the password that VNC will use when you connect to a visualization node. You can reissue the vncpasswd command if you ever want to change this password (note that the password is truncated to 8 characters).
That completes the one-time setup required on your local computer and Parallel.
Requesting a visualization node
To request a visualization node on which to work, log in to Parallel (parallel.westgrid.ca) with your SSH terminal client, then use qsub to submit an interactive job with the following parameters:
qsub -q interactive -I -l nodes=1:ppn=1:gpus=1,mem=7gb,walltime=1:00:00
You may add additional options or modify the ones above, such as the walltime limit, ppn, or number of GPUs, using the qsub -l argument (as explained on the Running Jobs page). In particular, if you are not sure that your application will fit within the resources requested above (7 GB of RAM, one CPU core and one GPU), you should begin your work by requesting a whole GPU-enabled node. That way, if your application is not well behaved, it will not interfere with other users. To request a whole node, you can use:
qsub -q interactive -I -l nodes=1:ppn=12:gpus=3,mem=23gb,walltime=1:00:00
As with other batch jobs, qsub will return a JOBID number that you can use to monitor the job. However, this qsub will wait for the job to start and will then return a prompt on the assigned compute node. If one of the interactive GPU nodes has sufficient resources, your job should start within about 30 seconds. When "qsub: job xxxxx.parallel-admin ready" appears, continue. If your job does not start right away, you can log in again in another terminal window and check on the job with showq. If the job has not started within a minute or so, you can use showstart JOBID to get an estimate of when it might start.
Starting the vncserver
Once the job has started, you can start your vncserver on the GPU node. This is done via the vncserver command. There is one option to the vncserver command of interest: the desktop size (called geometry by VNC). With no arguments, a default size of 1240x900 is used, which is slightly smaller than a standard 1280x1024 desktop, leaving room for a taskbar and scrollbar if necessary. You can set the geometry to any value, but a size smaller than or equal to your screen's resolution generally works best. If you are using fullscreen mode with your VNC client, setting the geometry to exactly your screen's resolution makes sense. A full example of submitting a job and starting a session with a geometry of 1024x768:
[username@parallel ~]$ qsub -q interactive -I -l nodes=1:ppn=12:gpus=3,mem=23gb,walltime=1:00:00
qsub: waiting for job 75617.parallel-admin to start
qsub: job 75617.parallel-admin ready
[username@cn0553 ~]$ vncserver -geometry 1024x768
New 'X' desktop is cn0553:1
Starting applications specified in /home/username/.vnc/xstartup.
turbovnc Log file is /home/username/.vnc/cn0553:1.log
Look at the output of the vncserver command for the desktop location. It is the line that says "New 'X' desktop is ...". In that line you will see a node name followed by a colon and a digit. These are the two pieces of information that you need to connect your desktop to the visualization node. In the example above, cn0553 is the node that you need to connect to and :1 is the VNC/X Windows display number.
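If you script your sessions, the node name and display number can be extracted from that line with standard shell parameter expansion. This is an illustrative snippet, not part of the WestGrid setup; the sample line is taken from the session above:

```shell
# Sample "New 'X' desktop is ..." line from vncserver's output
line="New 'X' desktop is cn0553:1"

desktop=${line##* }      # last whitespace-separated field, e.g. cn0553:1
node=${desktop%%:*}      # node name before the colon
display=${desktop##*:}   # display number after the colon

echo "node=$node display=$display"
```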
Using ssh to port forward the VNC port
Next, use the display number to compute the port number: simply add 5900 to the display number. In the above case, we need to forward local port 5901 to cn0553 port 5901. To do this, start a new ssh session to parallel.westgrid.ca forwarding the port like this (putting the same port number before and after the node name). Please note that this typically works only from Linux/Mac computers; in other operating systems, you will have to find the equivalent option in your SSH client. An image of the PuTTY (an SSH client for Windows) tunnelling setup is given below.
ssh parallel.westgrid.ca -L 5901:cn0553:5901
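The port arithmetic above can be sketched in shell; this is an illustrative helper, where the node name cn0553 and display number 1 come from the example session above:

```shell
# Local and remote VNC ports are both 5900 + display number
display=1
node="cn0553"
port=$((5900 + display))

# Print the ssh port-forwarding command to run from your desktop
echo "ssh parallel.westgrid.ca -L ${port}:${node}:${port}"
```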
Running the vncviewer on your desktop
Now start the TurboVNC vncviewer program on your local machine. You can use any VNC viewer, but for the best performance with VirtualGL rendering, please use TurboVNC.
In Windows, double-click the TurboVNC Viewer entry in the Start menu.
In Linux start the program /opt/TurboVNC/bin/vncviewer.
If your rendering is choppy, you can add the -medqual option, which reduces the compression quality but makes your display smoother. There is also a -lowqual option if necessary.
In the TurboVNC box that appears, enter localhost:<displaynumber> (e.g. localhost:1) and press Enter.
Now enter the password you established with the vncpasswd command in the setup section above.
You should now have a standard GNOME X Windows desktop inside a VNC window. You can press Ctrl-Alt-Shift-F (all on the left side of the keyboard) to go fullscreen if you are using the Windows client, or add the -fullscreen option to the Linux/Mac client.
Running an OpenGL based visualization application
Inside the VNC window, open a Terminal (Applications->System Tools->Terminal or right click on the desktop) on the compute node.
Now start your OpenGL application by prefixing the application with vglrun.
Here is an example of running glxgears:

vglrun glxgears
Here is an example of running VMD on Parallel (VMD access requires membership in the wg-namd UNIX group):
Running ParaView or VisIt is similar:
Running Avizo can be slightly different, depending on the version. The older Avizo 7 is run by a script, so to run it in VirtualGL mode you have to tell the script to start Avizo with vglrun. This is done as follows:
For the newer Avizo 9 we recommend starting it directly:
module load avizo/9.0.0
It is worth noting that WestGrid has only a single floating user license for Avizo and there is currently no way to reserve this license in your job submission script. Thus, if another user is running Avizo when you try to start it, you will get a licensing error message. Avizo is an expensive piece of software; if you need it but are unable to use it because it is already in use, please notify firstname.lastname@example.org. If there is significant interest, it may be possible to purchase a second license.
Quitting the remote visualization session
When you are done with the desktop, close the VNC session (you can stop the server with vncserver -kill :1, substituting your display number) and exit from the node that qsub provided you with (you can also kill the job or simply wait for it to expire). Your desktop will then terminate along with any programs running inside it. You can then exit the vncviewer application running on your desktop and exit the original ssh connection to Parallel (parallel.westgrid.ca).