VS Code (and other IDEs) on ScienceCluster

Whether you prefer VS Code, Cursor, PyCharm, or Zed, ScienceCluster offers multiple ways to use your chosen IDE when developing your research workflow for high-performance computing.

Performance-First Development

To ensure your development environment is responsive, we recommend these simple optimizations:

  • Dedicated Compute Nodes: For intensive, interactive work, use a compute node. This gives you exclusive access to the CPU cores, memory, and (optionally) GPU resources you need for high-performance tasks.
  • Smart File Watching: Exclude large folders from your IDE's background file watcher. In VS Code, open Settings, search for exclude, and add file path patterns such as venv, mydata, or .git for folders that may contain many files. Other IDEs differ slightly in their approach, so adapt accordingly.
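For VS Code, these exclusions can also be set directly in settings.json. A minimal sketch; the folder names venv and mydata are illustrative examples, not required names:

```
{
  // Stop the file watcher from scanning large or churn-heavy folders.
  "files.watcherExclude": {
    "**/venv/**": true,
    "**/mydata/**": true,
    "**/.git/**": true
  },
  // Optionally hide the same folders from search results as well.
  "search.exclude": {
    "**/venv/**": true,
    "**/mydata/**": true
  }
}
```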

Fair Usage for a Better Experience

The ScienceCluster is a shared community resource. Login nodes are provided for lightweight file management, quick edits, and job submissions; please run your development, interactive, and production workloads on compute nodes. Certain software and hardware restrictions are in place on login nodes to ensure the ScienceCluster's shared entry point remains responsive for you and your colleagues.

Improper Usage Can Affect Other Users

Because of how HPC systems are designed (e.g., around distributed filesystems), improper usage can affect not only your own workflows but also those of other users. Please consider your actions carefully to ensure optimal performance for yourself and others.
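One common example: operations on many small files put heavy metadata load on a distributed filesystem. A minimal sketch (paths and file counts are illustrative) of bundling small files into a single archive so that transfers and backups touch one large file instead of hundreds of small ones:

```shell
# Illustrative only: create 100 small stand-in files, then bundle them.
SRC="${TMPDIR:-/tmp}/many_small_files_demo"
mkdir -p "$SRC"
for i in $(seq 1 100); do
    echo "data $i" > "$SRC/file_$i.txt"   # stand-ins for real data files
done

# One tar.gz archive means one large write instead of hundreds of small ones.
tar -czf "${SRC}.tar.gz" -C "$SRC" .
ls -lh "${SRC}.tar.gz"
```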

Connection Methods

Science IT supports various connection methods for your use of VS Code and similar IDEs on compute nodes.

ScienceApps Code Server

ScienceApps provides a Code Server app to launch VS Code in the browser.

This is the fastest and easiest way to run VS Code on ScienceCluster. It includes a full filesystem browser with syntax highlighting as well as autocompletion (i.e., Tab to autocomplete) features. For many users this functionality will suffice to begin prototyping and may also prove sufficient for your entire workflow development.

When using this method, please keep in mind the following points:

  • If you open a terminal (e.g., from VS Code, View > Open View... > Terminal), it launches inside the code server's container environment, from which you cannot submit or manage Slurm jobs. Instead, consider connecting to ScienceCluster via a local terminal to submit and manage your jobs directly.
  • Due to the containerization involved in delivering this application, you cannot customize the code server's software environment nor use built-in AI features requiring direct connections/authentication to external clouds.
    • If you need to customize the runtime environment for VS Code, please refer to the next docs section.

ScienceApps Remote Desktop Environments

Another option from ScienceApps is to use the MATE or Xfce Desktop Environments, which offer Linux remote desktops from the browser via VNC. VS Code comes pre-installed with these apps from version 24.04-2025c or newer. It's also possible to install custom software with customized Apptainer containers. See our page on Remote Desktop Environments for more info.

Connecting to a Compute Node (Advanced)

Support Disclaimer

Due to the wide variety of possible client-side configurations and plugins, we cannot provide detailed technical support for this connection method.

The most user-involved method is to connect directly to compute nodes (which ensures the shared login nodes are not swamped with daemons and file watchers). This approach can be used for VS Code as well as other IDEs, such as PyCharm. This method requires that you have configured passwordless (SSH key) authentication.
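If you have not yet set up key-based authentication, the typical sequence is to generate an ed25519 key pair and append the public key to ~/.ssh/authorized_keys on the cluster. A minimal sketch; the key is written to a temporary path here for illustration, whereas in practice you would use ~/.ssh/id_ed25519 and a passphrase with ssh-agent:

```shell
# Illustrative path; in real use this would be ~/.ssh/id_ed25519
KEY="${TMPDIR:-/tmp}/demo_ed25519"
rm -f "$KEY" "${KEY}.pub"

# Generate an ed25519 key pair; -N "" skips the passphrase for this sketch.
ssh-keygen -t ed25519 -f "$KEY" -N "" -q

# The public key is what gets appended to ~/.ssh/authorized_keys on the
# cluster, e.g.: ssh-copy-id -i "$KEY" <shortname>@cluster.s3it.uzh.ch
cat "${KEY}.pub"
```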

One way to do so is as follows:

  1. Add this text to your local computer's SSH config (~/.ssh/config), replacing <shortname> with your username and adjusting the path to your SSH key, as noted in the !! comments below.

    Host sciencecluster
      HostName cluster.s3it.uzh.ch
      # !! replace <shortname> with your username
      User <shortname>
      # !! your local SSH key used to connect to the cluster; edit as necessary
      IdentityFile ~/.ssh/id_ed25519
      ControlMaster auto
      ControlPath ~/.ssh/master-%r@%h:%p
      ControlPersist yes
    
    Host cluster_node
      # !! replace <shortname> with your username
      User <shortname>
      # !! your local SSH key used to connect to the cluster; edit as necessary
      IdentityFile ~/.ssh/id_ed25519
      ProxyCommand ssh sciencecluster "nc \$(squeue --me --name=tunnel --states=R -h -O NodeList,Comment)"
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
      IdentitiesOnly yes
      PreferredAuthentications publickey
    
  2. Create a file called tunnel.sbatch on the cluster filesystem with the following content. This minimal example script requests 4 hours of compute time, 4 CPUs, and 16 GB of total system memory. Update the resources as required by your development workflow. You can also add #SBATCH --gpus=1 to request a single GPU device.

    #!/bin/bash
    #SBATCH --output="tunnel.log"
    #SBATCH --job-name="tunnel"
    #SBATCH --time=4:00:00
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    
    # Find an open port
    PORT=$(python -c 'import socket; s=socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
    scontrol update JobId="$SLURM_JOB_ID" Comment="$PORT"
    echo "Tunnel active on port: $PORT"
    
    # generate the job key pair
    HOST_KEY="${TMPDIR:-/tmp}/tmp_ed25519_${SLURM_JOB_ID}"
    echo "Generating temporary host key: ${HOST_KEY}"
    ssh-keygen -t ed25519 -f "$HOST_KEY" -N "" -q
    chmod 600 "$HOST_KEY"
    
    # Start the sshd server on the available port
    /usr/sbin/sshd -D \
        -p ${PORT} \
        -f /dev/null \
        -h "${HOST_KEY}" \
        -E ${HOME}/tunnel_sshd.log \
        -o "PidFile=/dev/null" \
        -o "StrictModes=no" \
        -o "UsePAM=no" \
        -o "PrintLastLog=no" \
        -o "AllowTcpForwarding=yes" \
        -o "AllowStreamLocalForwarding=yes" \
        -o "GatewayPorts=yes" \
        -o "Subsystem sftp internal-sftp"
    
    Specific GPU types can be requested using #SBATCH --gpus=H100:1 (this requests 1 GPU of type H100). The standard guidelines for modifying Slurm scripts when submitting GPU jobs apply; for details, see the relevant GPU jobs docs section.
  3. Submit the job to the Slurm queue.

    # Submit via:
    sbatch tunnel.sbatch
    
    Another way to request a specific GPU is to specify the number of GPU devices in tunnel.sbatch (e.g., #SBATCH --gpus=1) and then load the desired GPU type module (e.g., l4, a100, h100, h200) before submitting the job. For example,
    # Load the desired GPU type module
    module load l4
    # Then submit the tunnel job
    sbatch tunnel.sbatch
    
  4. Test the tunnel connection by running ssh -v cluster_node from a terminal on your machine. If everything is configured correctly, you will be brought directly to the compute node. You can then enter exit to close this connection; this step is only a test, and the connection does not need to remain active when you connect through a VS Code remote session.

  5. Start a remote connection to host cluster_node from your VS Code or other IDE.

  6. In the integrated terminal of VS Code (or another IDE), your prompt will show <username>@<hostname>, where <hostname> is the hostname of the allocated compute node (cluster_node), indicating that a remote SSH connection has been established. All software modules are available as normal. If you are using Apptainer, for example, load the module via module load apptainer. You can then continue with your usual Apptainer workflow (i.e., running or managing containers).
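As an aside, the port-discovery line used in tunnel.sbatch (step 2) is plain Python and can be verified on any machine before relying on it in the job script:

```shell
# Bind to port 0 so the OS assigns an unused TCP port, print it, release it.
PORT=$(python3 -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "Free port: $PORT"
```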

Info

Some users have reported that opening a folder containing many files can make the remote connection unstable. If you encounter this issue, open a smaller folder containing fewer files.