Merged
2 changes: 1 addition & 1 deletion .github/workflows/SphinxBuild.yml

@@ -12,7 +12,7 @@ jobs:
         pre-build-command: "apt install -y pandoc"
       uses: ammaraskar/sphinx-action@master
     - name: Upload artifacts
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
       with:
         name: html-docs
        path: docs/_build/html/
4 changes: 2 additions & 2 deletions cybergis_compute_client/UI.py

@@ -55,7 +55,7 @@ def __init__(self, compute, defaultJobName="hello_world", defaultDataFolder="./"
             'num_of_node', 'num_of_task', 'time',
             'cpu_per_task', 'memory_per_cpu', 'memory_per_gpu',
             'memory', 'gpus', 'gpus_per_node', 'gpus_per_socket',
-            'gpus_per_task', 'partition']
+            'gpus_per_task', 'partition', 'modules']
         self.slurm_integer_configs = [
             'num_of_node', 'num_of_task', 'time', 'cpu_per_task',
             'memory_per_cpu', 'memory_per_gpu', 'memory', 'gpus',
@@ -65,7 +65,7 @@ def __init__(self, compute, defaultJobName="hello_world", defaultDataFolder="./"
         self.slurm_integer_none_unit_config = [
             'cpu_per_task', 'num_of_node', 'num_of_task', 'gpus',
             'gpus_per_node', 'gpus_per_socket', 'gpus_per_task']
-        self.slurm_string_option_configs = ['partition']
+        self.slurm_string_option_configs = ['partition', 'modules']
         self.globus_filename = None
         self.jupyter_globus = None
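The config lists edited above determine which kind of widget the UI builds for each SLURM parameter: entries in the string-option list get a choice widget, while entries in the integer lists get numeric inputs. A minimal sketch of that dispatch, assuming illustrative names (the helper function and widget labels here are hypothetical, not the library's actual API):

```python
# Illustrative sketch: how the config lists from UI.py might route each
# SLURM parameter to a widget type. The list contents mirror the diff
# above; widget_kind() itself is a hypothetical helper for illustration.
SLURM_STRING_OPTION_CONFIGS = ['partition', 'modules']
SLURM_INTEGER_CONFIGS = [
    'num_of_node', 'num_of_task', 'time', 'cpu_per_task',
    'memory_per_cpu', 'memory_per_gpu', 'memory', 'gpus',
]

def widget_kind(config_name):
    """Pick a widget type for a SLURM config key."""
    if config_name in SLURM_STRING_OPTION_CONFIGS:
        return 'dropdown'    # user picks from a fixed set of strings
    if config_name in SLURM_INTEGER_CONFIGS:
        return 'int_input'   # user enters a bounded integer
    return 'text'            # fallback: free-form text entry

print(widget_kind('modules'))  # prints 'dropdown'
```

With this change, 'modules' lands in the string-option branch alongside 'partition', so it surfaces as a selectable option rather than a numeric field.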
2 changes: 2 additions & 0 deletions docs/model_contribution/develop_model.rst

@@ -125,6 +125,7 @@ Slurm parameters can be added by adding a “slurm_input_rules” to the manifest
 * **gpus_per_socket (integerRule):** The number of GPUs required for the job on each socket included in the job's resource allocation.
 * **gpus_per_task (integerRule):** The number of GPUs required for the job on each task to be spawned in the job's resource allocation.
 * **partition (stringOptionRule):** The partition name on the HPC.
+* **modules (stringOptionRule):** The modules to load on the HPC.
 
 You can specify these SLURM parameters, including a reasonable range, for running your model. The UI will read from this manifest and populate widgets for users to specify SLURM settings. The widgets available for each SLURM parameter are::
 
@@ -143,6 +144,7 @@ You can specify these SLURM parameters including a reasonable range for running
     "gpus_per_socket": integerRule, // number of GPUs per socket, i.e. SBATCH gpus-per-socket
     "gpus_per_task": integerRule, // number of GPUs per task, i.e. SBATCH gpus-per-task
-    "partition": stringOptionRule // partition name on HPC, i.e. SBATCH partition
+    "partition": stringOptionRule, // partition name on HPC, i.e. SBATCH partition
+    "modules": stringOptionRule // modules available on the HPC, i.e. "module load xxx yyy"
   }
 }
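Concretely, a manifest using the new rule might look like the following. This is a hedged sketch: the exact field names of a stringOptionRule ("type", "options", "default_value") and the option values are assumptions for illustration, not confirmed by this diff.

```python
import json

# Hypothetical manifest fragment enabling the new 'modules' option.
# Field names inside each rule ("type", "options", "default_value")
# are illustrative assumptions, as are the module/partition names.
manifest = {
    "slurm_input_rules": {
        "partition": {
            "type": "stringOption",
            "options": ["normal", "gpu"],
            "default_value": "normal",
        },
        "modules": {
            "type": "stringOption",
            "options": ["anaconda3", "gdal"],
            "default_value": "anaconda3",
        },
    }
}
print(json.dumps(manifest, indent=2))
```

Under this scheme, the UI would render a dropdown for "modules" just as it already does for "partition", and the selected value would translate to a `module load` call on the HPC side.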