This file provides detailed context and requirements for AI code assistants when creating new interactive session workflows. Reference this file along with DeveloperGuide.md when prompted to create a new workflow.
When creating a new interactive session workflow, name the YAML `[deployment]_v4.yaml` (e.g., `general_v4.yaml`, `emed_v4.yaml`).
- DO NOT add k8s support - create only the standard deployment workflow
- Use the session_runner subworkflow (marketplace/session_runner/v1.4) for deployment
- Follow the existing v4 session pattern with preprocessing + session_runner jobs
- Ask the user which deployment target to use (general/emed/hsp/noaa) if not specified
## Install script (`controller-v3.sh`)

- Bash script that runs on the controller node (has internet access)
- Install/download dependencies needed for the service
- Make it idempotent (check if already installed before installing)
- Use the `service_parent_install_dir` variable (default: `${HOME}/pw/software`)
- Set appropriate executable permissions where needed
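Applied to the install script, the idempotency and install-dir requirements above can be sketched as follows. This is a non-authoritative sketch: the service name `myservice`, its `bin/` layout, and the commented-out download URL are hypothetical placeholders for your actual service.

```shell
#!/bin/bash
# Idempotent install sketch: skip work if the binary already exists.
# "myservice" and its bin/ layout are placeholders.
service_parent_install_dir="${service_parent_install_dir:-${HOME}/pw/software}"
install_dir="${service_parent_install_dir}/myservice"

if [ -x "${install_dir}/bin/myservice" ]; then
    echo "myservice already installed in ${install_dir}; nothing to do"
else
    mkdir -p "${install_dir}/bin"
    # Replace with the real download for your service, e.g.:
    # curl -L https://example.com/myservice.tar.gz | tar -xz -C "${install_dir}"
    chmod +x "${install_dir}/bin/myservice" 2>/dev/null || true
fi
```

Because the existence check runs before any work, the script is safe to run repeatedly, which is what the idempotency requirement asks for.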
## Start script (`start-template-v3.sh`)

- Bash script that starts the web service
- MUST use the `service_port` variable provided by session_runner
- Create a `cancel.sh` script with commands to kill the service
- Example structure:

```bash
#!/bin/bash
# Start service on service_port
echo '#!/bin/bash' > cancel.sh
chmod +x cancel.sh
# Start your service
/path/to/service --port=${service_port} &
pid=$!
echo "kill ${pid}" >> cancel.sh
sleep inf
```
## Workflow YAML

- Complete workflow YAML with:
  - `permissions: ['*']` section
  - `sessions.session` with `useTLS: false` and `redirect: true`
  - `preprocessing` job that:
    - Checks out this repo with sparse_checkout for your service directory
    - Creates `inputs.sh` with PW environment variables + form inputs
    - Uses `remoteHost: ${{ inputs.cluster.resource.ip }}`
  - `session_runner` job that:
    - Depends on preprocessing (`needs: [preprocessing]`)
    - Uses `marketplace/session_runner/v1.4`
    - Passes session, resource, cluster (slurm/pbs settings), and service configuration
  - Service config must include:
    - `start_service_script: ${PW_PARENT_JOB_DIR}/[service-name]/start-template-v3.sh`
    - `controller_script: ${PW_PARENT_JOB_DIR}/[service-name]/controller-v3.sh`
    - `inputs_sh: ${PW_PARENT_JOB_DIR}/inputs.sh`
    - `slug: ""` (or an appropriate URL path like `"lab"` or `"vnc.html"`)
    - `rundir: ${PW_PARENT_JOB_DIR}`
  - Input form under `'on'.execute.inputs` with:
    - Standard `cluster` group (resource, scheduler, slurm, pbs settings)
    - Service-specific `service` group for your configuration options
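Putting those pieces together, the overall YAML shape looks roughly like the sketch below. Only the field names listed above come from the requirements; the nesting, the `...` elisions, and the service name `myservice` are assumptions, so take the exact schema from an existing v4 workflow such as `workflow/yamls/jupyterlab-host/general_v4.yaml`.

```yaml
permissions: ['*']

sessions:
  session:
    useTLS: false
    redirect: true

jobs:
  preprocessing:
    # sparse checkout of the service directory, write inputs.sh,
    # run with remoteHost: ${{ inputs.cluster.resource.ip }}
    ...
  session_runner:
    needs: [preprocessing]
    # marketplace/session_runner/v1.4 with session, resource, cluster,
    # and the service configuration:
    #   start_service_script: ${PW_PARENT_JOB_DIR}/myservice/start-template-v3.sh
    #   controller_script: ${PW_PARENT_JOB_DIR}/myservice/controller-v3.sh
    #   inputs_sh: ${PW_PARENT_JOB_DIR}/inputs.sh
    #   slug: ""
    #   rundir: ${PW_PARENT_JOB_DIR}
    ...

'on':
  execute:
    inputs:
      cluster:
        ...   # standard resource/scheduler/slurm/pbs settings
      service:
        ...   # service-specific options
```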
## README.md

- User-facing documentation for the workflow. Structure:
- Title + one-line description of what the service provides
- Features: bullet list of key capabilities (runtime options, GPU support, scheduler support, etc.)
- Use Cases: bullet list of typical scenarios users would launch this for
- Configuration: subsection per major input group (e.g., OS, startup options, compute resources) — describe what each does and any valid values
- Requirements: any software that must be present on the target node (e.g., module, binary, container runtime)
- Getting Started: short numbered steps (select resource → configure → launch → access)
- Keep it factual and concise — no implementation details, no internal paths
## Reference examples

- Look at `webshell/controller-v3.sh` and `webshell/start-template-v3.sh` for the simplest example
- Look at `workflow/yamls/jupyterlab-host/general_v4.yaml` for workflow structure (but don't copy JupyterLab-specific settings)
- Compare deployment variants like `general_v4.yaml` vs `emed_v4.yaml` to understand deployment-specific differences
## Critical requirements

- Service MUST listen on `service_port` (allocated by session_runner)
- Scripts MUST be idempotent (safe to run multiple times)
- DO NOT create k8s variants (no `general_k8s_v4.yaml`)
- Follow the exact directory structure: `[service-name]/` for scripts, `workflow/yamls/[service-name]/` for the YAML
- Ensure all paths in the YAML use the `${PW_PARENT_JOB_DIR}` prefix for scripts and `inputs.sh`
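The path requirement can be spot-checked mechanically. The sketch below greps a workflow YAML for script/inputs fields that lack the `${PW_PARENT_JOB_DIR}` prefix; it writes a small demo file so it is self-contained, but in real use you would set `yaml` to your actual workflow file (the demo content, including the deliberately bad `controller_script` path, is hypothetical).

```shell
#!/bin/bash
# Flag script-path fields that do not use ${PW_PARENT_JOB_DIR}.
# The demo YAML below is a stand-in; point "yaml" at your real file.
yaml=$(mktemp)
cat > "${yaml}" <<'EOF'
start_service_script: ${PW_PARENT_JOB_DIR}/myservice/start-template-v3.sh
controller_script: /tmp/myservice/controller-v3.sh
inputs_sh: ${PW_PARENT_JOB_DIR}/inputs.sh
EOF

# Print any offending lines, or confirm that everything is prefixed.
grep -E '(_script|inputs_sh):' "${yaml}" | grep -v 'PW_PARENT_JOB_DIR' \
  && echo "fix the paths above" \
  || echo "all script paths use PW_PARENT_JOB_DIR"
```

Run against the demo file, this prints the bad `controller_script` line followed by "fix the paths above"; against a compliant YAML it prints the confirmation message instead.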