Web UI Setup#
This tutorial walks you through installing and starting the srunx Web UI to manage SLURM jobs from your browser.
Prerequisites#
srunx installed (see Installation)
An SSH profile configured for your SLURM cluster (see User Guide)
Python 3.12+
Node.js 18+ (for frontend development only; not needed for production use)
Step 1: Install Web Dependencies#
The Web UI is an optional feature. Install it with the web extra:
uv sync --extra web
This installs FastAPI, uvicorn, and other backend dependencies.
Step 2: Configure SSH Connection#
The Web UI connects to your SLURM cluster via SSH. If you already have an srunx SSH profile configured, the Web UI will use it automatically.
Check your current profile:
srunx ssh profile list
If you don’t have a profile, create one:
srunx ssh profile add myserver --hostname dgx.example.com --username researcher
Set it as the current profile:
srunx ssh profile add myserver
# myserver is now the default
# The Web UI auto-detects the current profile
Step 3: Start the Server#
srunx-web
You should see:
INFO: Using current SSH profile: myserver
INFO: Connecting to SLURM server via SSH...
INFO: SSH connection established
INFO: Uvicorn running on http://127.0.0.1:8000
Open http://127.0.0.1:8000 in your browser.
Step 4: Explore the Dashboard#
The Dashboard shows:
Active Jobs — Number of running and pending SLURM jobs
Failed — Jobs that failed
GPU Availability — Available GPUs across partitions
Active Jobs Table — Clickable links to job logs
Resource Gauges — Per-partition GPU utilization
Note
Data is fetched via SSH polling. The dashboard refreshes automatically every 10 seconds.
Step 5: View Jobs#
Navigate to Jobs to see all SLURM queue entries:
Use the search bar to filter by job name
Use the status dropdown to filter by state (RUNNING, PENDING, FAILED, etc.)
Click the log icon to view job output
Click the cancel button to stop a running job
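The name and status filters above amount to a simple predicate over the queue. A minimal sketch of the same idea in Python (the job fields and sample data here are illustrative, not the srunx schema):

```python
def filter_jobs(jobs, name_query="", state=None):
    """Mimic the Jobs page filters: case-insensitive substring match on
    the job name, plus an optional exact match on the SLURM state."""
    return [
        j for j in jobs
        if name_query.lower() in j["name"].lower()
        and (state is None or j["state"] == state)
    ]

queue = [
    {"name": "train-resnet", "state": "RUNNING"},
    {"name": "train-bert", "state": "PENDING"},
    {"name": "eval", "state": "FAILED"},
]

print(filter_jobs(queue, name_query="train", state="RUNNING"))
# -> [{'name': 'train-resnet', 'state': 'RUNNING'}]
```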
Step 6: Upload a Workflow#
Navigate to Workflows and click Upload YAML:
Select a workflow YAML file from your local machine
The file is validated and stored on the server
View the DAG visualization showing job dependencies
Example workflow YAML:
name: ml-pipeline
jobs:
  - name: preprocess
    command: ["python", "preprocess.py"]
    resources:
      nodes: 1
  - name: train
    command: ["python", "train.py"]
    depends_on: [preprocess]
    resources:
      gpus_per_node: 4
  - name: evaluate
    command: ["python", "evaluate.py"]
    depends_on: [train]
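To reason about the pipeline above, it can be mirrored as a plain Python structure. A small sketch, where the dict layout simply follows the YAML example rather than a confirmed srunx schema:

```python
# The example pipeline, mirrored from the YAML above.
workflow = {
    "name": "ml-pipeline",
    "jobs": [
        {"name": "preprocess", "command": ["python", "preprocess.py"],
         "resources": {"nodes": 1}},
        {"name": "train", "command": ["python", "train.py"],
         "depends_on": ["preprocess"], "resources": {"gpus_per_node": 4}},
        {"name": "evaluate", "command": ["python", "evaluate.py"],
         "depends_on": ["train"]},
    ],
}

def immediate_jobs(wf):
    """Jobs with no depends_on entry can be submitted as soon as the run starts."""
    return [j["name"] for j in wf["jobs"] if not j.get("depends_on")]

print(immediate_jobs(workflow))  # -> ['preprocess']
```

Here only `preprocess` has no dependencies, so it is the only job that can start immediately; `train` and `evaluate` wait on their predecessors.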
Step 7: Build a Workflow Visually#
The DAG builder lets you create workflows interactively instead of writing YAML by hand.
Navigate to Workflows and click New Workflow
Enter a workflow name in the toolbar (e.g., ml-pipeline)
Click Add Job to create your first node. It appears on the canvas as job_1
Click the node to open the property panel on the right. Set:
  Name: preprocess
  Command: python preprocess.py
Click Add Job again. In the property panel, set:
  Name: train
  Command: python train.py --epochs 100
  GPUs per Node: 4
  Time Limit: 4:00:00
Drag from the bottom handle of preprocess to the top handle of train to create a dependency edge
(Optional) Click the edge to open the dependency type selector and choose between afterok, after, afterany, or afternotok
Click Save Workflow in the toolbar. The workflow is validated, saved as YAML, and you are redirected to the DAG view
Note
The builder validates your workflow before saving: every job must have a name and command, job names must be unique, and the dependency graph must be acyclic.
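The three checks described in the note can be sketched in a few lines of Python. This is an illustration of the rules, not the actual srunx validator; the cycle check uses Kahn-style iterative resolution over the depends_on edges:

```python
def validate_workflow(jobs):
    """Apply the builder's documented checks: every job needs a name and a
    command, names must be unique, and the dependency graph must be acyclic.
    Returns a list of error strings (empty means valid)."""
    errors = []
    names = [j.get("name") for j in jobs]
    for j in jobs:
        if not j.get("name") or not j.get("command"):
            errors.append(f"job {j!r} is missing a name or command")
    if len(set(names)) != len(names):
        errors.append("job names must be unique")
    # Cycle check: repeatedly resolve jobs whose dependencies are all resolved.
    deps = {j.get("name"): set(j.get("depends_on", [])) for j in jobs}
    resolved = set()
    while True:
        ready = [n for n, d in deps.items() if n not in resolved and d <= resolved]
        if not ready:
            break
        resolved.update(ready)
    if len(resolved) != len(deps):
        errors.append("dependency graph contains a cycle")
    return errors

ok_jobs = [
    {"name": "preprocess", "command": ["python", "preprocess.py"]},
    {"name": "train", "command": ["python", "train.py"], "depends_on": ["preprocess"]},
]
print(validate_workflow(ok_jobs))  # -> []
```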
Step 8: Configure Settings#
The Settings page lets you manage all srunx configuration from the browser.
Click Settings (gear icon) in the sidebar
On the General tab, review default resource allocation (nodes, GPUs, tasks per node, CPUs per task, memory, time limit, partition, nodelist), environment settings (conda, venv, environment variables), and general options (log directory, working directory)
Click Save to persist changes
Switch to the SSH Profiles tab to manage your connection profiles:
Add new profiles with hostname, username, key file, and optional proxy jump
Click Activate to switch the active profile
Expand a profile to add or remove mount points
On the Notifications tab, enter a Slack webhook URL to receive job notifications
The Environment tab shows all active SRUNX_* variables (read-only)
The Project tab lists projects from your mounts; initialize .srunx.json for per-project config overrides
Note
See Settings for detailed recipes for each tab.
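The idea behind per-project .srunx.json overrides is that project settings take precedence over your global defaults. A sketch of that layering, where the key names and the shallow-merge precedence are assumptions for illustration, not srunx's actual config logic:

```python
import json

# Hypothetical global defaults, plus the text of a project's .srunx.json.
defaults = {"partition": "general", "gpus_per_node": 0, "time_limit": "1:00:00"}
project_override_text = '{"partition": "gpu", "gpus_per_node": 4}'

def effective_config(defaults, override_text):
    """Layer per-project overrides on top of the defaults (shallow merge:
    any key present in the project file wins)."""
    return {**defaults, **json.loads(override_text)}

cfg = effective_config(defaults, project_override_text)
# partition and gpus_per_node come from the project, time_limit from defaults
print(cfg)
```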
Step 9: Browse Files with the Explorer#
The file explorer lets you browse project files and submit scripts directly to SLURM.
Click the Explorer button (folder icon) at the top of the sidebar
The explorer panel opens showing your configured mounts as tree roots
Click a mount to expand it and browse the directory tree
Right-click a shell script (.sh, .slurm, .sbatch, .bash) and select Submit as sbatch
Review the script content, edit the job name if needed, and click Submit
Click the Sync button on a mount to push local files to the remote server via rsync
Note
See File Explorer for more details on file browsing and script submission.
Step 10: Set Up Mount Points#
Mount points let the file browser in the DAG builder map between local directories and remote paths on the SLURM cluster.
Add a mount to your SSH profile:
srunx ssh profile mount add myserver ml-project \
  --local ~/projects/ml-project \
  --remote /home/researcher/ml-project
Verify the mount was created:
srunx ssh profile mount list myserver
In the DAG builder, click a job node to open the property panel
Click the folder icon next to the Command, Work Dir, or Log Dir field
The file browser opens showing your configured mounts. Select a mount, browse the project tree, and click Select
The selected local path is translated to the corresponding remote path and inserted into the field
Note
Click Sync Now in the file browser to push local files to the remote server via rsync before running a workflow.
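The local-to-remote translation in step 6 above boils down to rebasing a path from a mount's local root onto its remote root. A minimal sketch of that mapping, with made-up example paths; this illustrates the idea, not srunx's internals:

```python
from pathlib import PurePosixPath

def to_remote_path(local_path, mounts):
    """Translate a local path to its remote counterpart using mount points.

    mounts maps a local root to a remote root, mirroring what
    `srunx ssh profile mount add` configures."""
    for local_root, remote_root in mounts.items():
        try:
            rel = PurePosixPath(local_path).relative_to(local_root)
        except ValueError:  # path is not under this mount's local root
            continue
        return str(PurePosixPath(remote_root) / rel)
    raise ValueError(f"{local_path} is outside every configured mount")

mounts = {"/Users/me/projects/ml-project": "/home/researcher/ml-project"}
print(to_remote_path("/Users/me/projects/ml-project/train.py", mounts))
# -> /home/researcher/ml-project/train.py
```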
Step 11: Run a Workflow#
Once you have a saved workflow, you can execute it directly from the Web UI.
Navigate to Workflows and click a workflow card to open the detail page
Click Run Workflow in the toolbar
The system automatically identifies which mounts need syncing based on each job’s work directory, and pushes local files to the remote cluster via rsync
Jobs are submitted to SLURM in topological order. Dependencies between jobs are translated into SLURM --dependency flags, so the scheduler handles sequencing natively
Watch job statuses update in the DAG view as SLURM processes the pipeline: PENDING, then RUNNING, then COMPLETED (or FAILED). The view polls every 10 seconds
Click a completed (or running) job node to view details including the SLURM job ID. Click View Logs to see stdout and stderr output
Note
If a mount sync fails, the run is aborted before any jobs are submitted. Fix the sync issue (check SSH connectivity and rsync availability) and try again.
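Topological submission plus --dependency translation can be sketched as follows. The placeholder job IDs stand in for what sbatch would return, and the logic is an illustration of the technique, not srunx's real submitter; the `--dependency=afterok:<jobid>` syntax itself is standard SLURM:

```python
def submission_plan(jobs, dep_type="afterok"):
    """Order jobs topologically and render each job's SLURM --dependency flag."""
    deps = {j["name"]: list(j.get("depends_on", [])) for j in jobs}
    order, done, job_ids = [], set(), {}
    next_id = 1000  # stand-in for the job IDs sbatch would return
    while len(done) < len(deps):
        ready = [n for n in deps if n not in done and all(d in done for d in deps[n])]
        if not ready:
            raise ValueError("cycle in depends_on")
        for name in ready:
            job_ids[name] = next_id
            flag = None
            if deps[name]:
                spec = ":".join(str(job_ids[d]) for d in deps[name])
                flag = f"--dependency={dep_type}:{spec}"
            order.append((name, flag))
            done.add(name)
            next_id += 1
    return order

pipeline = [
    {"name": "preprocess"},
    {"name": "train", "depends_on": ["preprocess"]},
    {"name": "evaluate", "depends_on": ["train"]},
]
for name, flag in submission_plan(pipeline):
    print(name, flag)
# preprocess None
# train --dependency=afterok:1000
# evaluate --dependency=afterok:1001
```

Because each job carries only a --dependency flag, SLURM itself enforces the ordering; the Web UI only needs to submit in an order where every dependency already has a job ID.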
Next Steps#
Web UI How-to Guide — Common Web UI tasks
Web UI REST API Reference — REST API reference