====== Using VNC for X11 Applications ======

The X11 graphics standard has been around for a long time. Originally, an X11 program would be started with the ''DISPLAY'' environment variable pointing at the X server that should draw its windows -- often an X server running on the user's own workstation elsewhere on the network.
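In more recent practice the same effect is usually achieved with SSH's X11 forwarding; this is a sketch, with an illustrative host name:
<code bash>
# Classic X11-over-SSH: the -X flag forwards X11 traffic back to the
# local X server, and DISPLAY is set automatically on the remote side
[my-desktop]$ ssh -X «user»@cluster.example.org
$ echo $DISPLAY
localhost:10.0
$ xclock &
</code>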

The X11 protocol can be very slow, especially with the addition of encryption and hops through wireless access points and various ISPs across the Internet.

The VNC remote display protocol is better optimized for transmission across networks: instead of forwarding every drawing command, the server sends compressed updates for just the regions of the screen that have changed.

====== On the Cluster ======

===== Starting a VNC Server =====

In reality, a user only needs a single //VNC server// running on a login node of a cluster; that one server can act as the X11 display for any number of jobs.

If you already have a VNC server running on the cluster, note the host and display number (as ''«host»:«display»'') on which it is listening:
<code bash>
$ hostname
login01
$ vncserver -list

TigerVNC server sessions:

X DISPLAY #     PROCESS ID
:1              5429
</code>
Thus, this VNC server is on display ''login01:1''. A VNC display number «N» corresponds to TCP/IP port (5900 + «N») on its host, so this server is listening on port 5901.
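
That port assignment can be confirmed; a sketch, assuming the ''ss'' utility is present on the login node (output illustrative):
<code bash>
# List listening TCP sockets numerically and look for the VNC port
$ ss -ltn | grep 5901
LISTEN   0   5   0.0.0.0:5901   0.0.0.0:*
</code>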

By default, a new VNC server will detach from your login shell and execute in the background, so it will not be killed when your SSH session is closed.
<code bash>
$ vncserver

New 'login01:4 («user»)' desktop is login01:4

Starting applications specified in /home/«user»/.vnc/xstartup
Log file is /home/«user»/.vnc/login01:4.log
</code>
In this case the new VNC server is on display ''login01:4'', listening on TCP/IP port (5900 + 4) = 5904.

<note important>
If this is the first time you are running ''vncserver'', you will be prompted to create a VNC password. It need not match your cluster password; VNC viewers will present it when connecting to your VNC servers.
</note>
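
The desktop size can also be chosen when the server is started; TigerVNC's ''vncserver'' accepts a ''-geometry'' option (the size below is just an example):
<code bash>
# Start a VNC server with a 1440x900 desktop
$ vncserver -geometry 1440x900
</code>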

===== Configuring X11 Display =====

With a VNC server running on the cluster login node, first ensure the shell has the appropriate value for ''DISPLAY'':
<code bash>
$ echo $DISPLAY

$ export DISPLAY="login01:4"
</code>
You may wish to confirm the display is working:
<code bash>
$ xdpyinfo | head -5
name of display:    login01:4
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11903000
X.Org version: 1.19.3
</code>

There are two ways to submit interactive jobs to Slurm with the VNC server acting as the X11 display.

===== Direct and Unencrypted =====

The private network inside the cluster is much more secure than the Internet at large, so transmission of the X11 protocol without encryption (between compute nodes and login nodes //all within that private network//) is likely satisfactory for most use cases.

<code bash>
$ salloc [...] srun --pty --export=TERM,DISPLAY $SHELL -l
salloc: Granted job allocation 13504701
salloc: Waiting for resource configuration
salloc: Nodes r00n56 are ready for job

[(«group»:«user»)@r00n56 ~]$ echo $DISPLAY
login01:4

[(«group»:«user»)@r00n56 ~]$ xdpyinfo | head -5
name of display:    login01:4
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11903000
X.Org version: 1.19.3
</code>

X11 applications launched in this interactive shell will display to the VNC server running on the login node without encryption of the packets passing across the cluster's private network.
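
For example, starting an X11 client on the compute node (here ''xterm'', purely as an illustration) opens a window on the VNC desktop:
<code bash>
[(«group»:«user»)@r00n56 ~]$ xterm &
</code>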

===== Tunneled and Encrypted =====

The Slurm job scheduler has the ability to tunnel X11 traffic between the compute and login nodes, encrypting it in transit.

<WRAP center round important 60%>
For Slurm releases of version 19 onward, the tunneling is effected using Slurm's own built-in X11 forwarding and requires no additional setup by the user.

For older Slurm releases -- notably, the 17.11.8 release currently used on Caviness -- the X11 tunneling is implemented via SSH tunneling //back to the submission node// and requires:
  - The user MUST have an RSA keypair (''id_rsa'' and ''id_rsa.pub'') in ''~/.ssh'', with the public key present in ''~/.ssh/authorized_keys''
  - The VNC server MUST be running on the same login node from which the ''salloc''/''sbatch'' command is executed
  - The ''DISPLAY'' MUST reference ''localhost'' rather than the login node's hostname
On Caviness all users have an RSA keypair generated in ''~/.ssh'' when their account is created (a sketch for creating one by hand follows this note), so typically only the ''DISPLAY'' needs to be adjusted:
<code bash>
$ export DISPLAY="localhost:4"
</code>
</WRAP>
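
Should a keypair ever be missing, one can be created by hand; this is a minimal sketch, not a Caviness-specific procedure:
<code bash>
# Generate a passphrase-less RSA keypair with the default file names
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize the public key for SSH back into the login node
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
</code>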

With ''DISPLAY'' set appropriately, adding the ''--x11'' flag to the ''salloc'' command enables the tunneling:
<code bash>
$ salloc --x11 [...]
salloc: Granted job allocation 13504764
salloc: Waiting for resource configuration
salloc: Nodes r00n56 are ready for job

[(«group»:«user»)@r00n56 ~]$ echo $DISPLAY
localhost:10.0

[(«group»:«user»)@r00n56 ~]$ xdpyinfo | head -5
name of display:    localhost:10.0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11903000
X.Org version: 1.19.3
</code>

===== Stopping a VNC Server =====

Once all X11 applications have finished executing and there is no immediate ongoing need for the VNC server, it is good computing manners to shut down the VNC server((Recall that the X11 desktop launched within the VNC server represents 50+ unique processes)).
<code bash>
$ vncserver -list

TigerVNC server sessions:

X DISPLAY #     PROCESS ID
:4              6543

$ vncserver -kill :4
Killing Xvnc process ID 6543
</code>
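
If several VNC servers have accumulated, each listed display can be killed in turn; a sketch that parses the ''vncserver -list'' output with ''awk'':
<code bash>
# Kill every TigerVNC display listed for this user on this node
$ for d in $(vncserver -list | awk '/^:/ {print $1}'); do vncserver -kill "$d"; done
</code>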

====== On the Desktop/Laptop ======

To view the VNC server that is running on a cluster login node you must:

  - Have VNC viewer software installed on the desktop/laptop system
  - Tunnel the VNC protocol to the login node

Satisfying these two requirements varies by operating system.

===== Mac OS X =====

Mac OS X includes a built-in VNC viewer which can be accessed using the **Go** > **Connect to Server…** menu item in the Finder. The server URL takes the form:
<code>
vnc://localhost:«port»
</code>
where the «port» is the local TCP/IP port that will be associated with the VNC server on the tunnel.

Given the display on the cluster login node:
<code>
login01:4
</code>
the TCP/IP port on the login node is (5900 + 4) = 5904 as previously discussed. An SSH tunnel forwarding local port 5904 to that port on the login node is created with:
<code bash>
[my-mac]$ ssh -L 5904:localhost:5904 «user»@caviness.hpc.udel.edu
</code>
As long as that SSH connection is open, the desktop/laptop can connect to the VNC server on the login node using the URL:
<code>
vnc://localhost:5904
</code>
If the SSH connection is broken, connectivity to the cluster VNC server can be reestablished by means of the same ''ssh -L'' command.
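
Rather than retyping the ''ssh -L'' command each time, the tunnel can be recorded in ''~/.ssh/config''; a sketch, where the host alias and user name are placeholders:
<code>
Host caviness-vnc
    HostName caviness.hpc.udel.edu
    User «user»
    LocalForward 5904 localhost:5904
</code>
Thereafter, ''ssh caviness-vnc'' reestablishes the tunnel.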

===== Windows =====

On Windows it is a bit more complicated: there is no built-in VNC viewer, so third-party viewer software must be installed, and the SSH tunnel is created with PuTTY.

<note tip>
Download and install the VNC viewer (client) software [[https://www.realvnc.com/en/connect/download/viewer/|RealVNC Viewer]].
</note>

Given the display on the DARWIN cluster login node (''login00''):
<code>
login00:1
</code>
the TCP/IP port on the login node is (5900 + 1) = 5901 as previously discussed.

//(Screenshot: PuTTY Configuration window with the DARWIN login host entered for the session.)//

Now set up the tunnel by clicking on **+** next to **SSH** under **Connection** in the **Category:** pane and selecting **Tunnels**. Enter ''5901'' as the **Source port** and ''localhost:5901'' as the **Destination**, then click **Add**.

//(Screenshot: PuTTY Tunnels panel with source port 5901 and destination localhost:5901.)//

Once added, the PuTTY configuration will look like this:

//(Screenshot: PuTTY Tunnels panel showing the forwarded port ''L5901 localhost:5901''.)//

Now save your session by clicking on **Session** in the **Category:** pane, entering a name under **Saved Sessions**, and clicking **Save**.

//(Screenshot: PuTTY Session panel with the saved session.)//

Now click **Open** to connect to the cluster VNC server on DARWIN login00 with port tunnel 5901. If this SSH connection is broken, connectivity to the cluster VNC server can be reestablished by reconnecting with the PuTTY session saved above.
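
As an alternative to the PuTTY GUI, the same tunnel can be created from a Command Prompt with PuTTY's command-line companion ''plink''; a sketch, with the user name as a placeholder:
<code>
C:\> plink -L 5901:localhost:5901 «user»@darwin.hpc.udel.edu
</code>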

Once the RealVNC viewer client is installed, you need to configure it for the correct TCP/IP port on the login node:

  * Open the RealVNC viewer software.
  * Add a new connection to ''localhost:«port»''

where the «port» is the local TCP/IP port that will be associated with the VNC server on the tunnel. Click **File** then **New Connection** and set the properties for VNC Server: ''localhost:5901''.

//(Screenshot: RealVNC Viewer connection properties with VNC Server set to localhost:5901.)//

Click **OK** to save it, then double-click on the ''localhost:5901'' connection to open a session to the cluster VNC server; enter your VNC password when prompted.