From e52de626a05a223fface172d841afee2d852d985 Mon Sep 17 00:00:00 2001 From: David Bizzozero <122558988+dbizzoze@users.noreply.github.com> Date: Tue, 21 Oct 2025 14:31:40 -0700 Subject: [PATCH 1/7] Create czarguide.md This page will be a starter-guide on Czar duties --- beginnerguide.md | 84 ------------------------------------------------ czarguide.md | 3 ++ 2 files changed, 3 insertions(+), 84 deletions(-) delete mode 100644 beginnerguide.md create mode 100644 czarguide.md diff --git a/beginnerguide.md b/beginnerguide.md deleted file mode 100644 index 064ddd1..0000000 --- a/beginnerguide.md +++ /dev/null @@ -1,84 +0,0 @@ -# Beginner's Guide - -This guide presents three different methods of accessing S3DF. This is an example of a general step-by-step workflow we hope is suitable for most users. Within this document, you will find guidance on how to: - -- Log in to the S3DF system -- Navigate directories and storage spaces -- Access supported applications -- Prepare and submit a job script - -Follow these instructions to access and use S3DF. - -Let's get started! - - -## Access to S3DF Through SSH - -This example provides a clear, step-by-step workflow for running software on S3DF through SSH. - -### Connect to a Bastion Node - -To start, connect to a bastion node using the following command: - - ssh username@s3dflogin-mfa.slac.stanford.edu - -### Connect to an Interactive Node - -After successfully connecting to a bastion node, log in to an interactive node using SSH. For example: - - ssh iana - -### Set Up a Running Environment - -To set up your running environment, create a bash file containing all necessary commands, and then execute the bash file. 
- -### Configure an [SLURM](batch-compute.md#) Job Script - -Here is an example SLURM job script named run.sbatch: - - - #!/bin/bash - #SBATCH --partition=milano - #SBATCH --account=rfar - #SBATCH --job-name=test - #SBATCH --output=output-%j.txt - #SBATCH --error=error-%j.txt - #SBATCH --nodes=1 - #SBATCH --ntasks-per-node=16 - #SBATCH --time=0-00:10:00 - mpirun /sdf/group/rfar/ace3p/bin/omega3p pillbox.omega3p - - - - Submit Jobs to a Compute Node - -Use the sbatch command to submit your job to a compute node for execution: - - sbatch run.sbatch - - - Check the Status of Running Jobs (Optional) - -To monitor the status of your submitted jobs, run the following command: - - squeue -u username - -- View Data Output - -Once your jobs have completed, you can view the data output directly on the pool node to verify that the results are as expected. - -- Transfer Data (If Necessary) - -If you need to transfer data, connect to a data transfer node to facilitate the movement of your files. Use appropriate file transfer commands (e.g., scp, rsync) to move your data to the desired location. - -## Access to S3DF Through NoMachine - - NoMachine offers a specialized remote desktop solution that enhances the performance of X11 graphics over slow connections, compared to SSH. - - A key feature of NoMachine is its ability to maintain the state of your desktop across multiple sessions, even if your internet connection is unexpectedly lost. - - To access NoMachine, use the login pool at s3dfnx.slac.stanford.edu. - - For additional information about this access method, please refer to the [NoMachine](reference.md#nomachine) documentation. - -## Access to S3DF Through OnDemand - - Users can also access S3DF through [Open OnDemand](interactive-compute.md#using-a-browser-and-onDemand) via any (modern) browser. 
- - This solution is recommended for users who want to run Jupyter notebooks, or don't want to learn SLURM, or don't want to download a terminal or the NoMachine remote desktop on their system. - - After login, you can select which Jupyter image to run and which hardware resources to use (partition name and number of hours/cpu-cores/memory/gpu-cores). - - The partition can be the name of an interactive pool or the name of a SLURM partition. - - You can choose an interactive pool as partition if you want a long-running session requiring sporadic resources; otherwise slect a SLURM partition. - - Note that no GPUs are currently available on the interactive pools. diff --git a/czarguide.md b/czarguide.md new file mode 100644 index 0000000..8be915c --- /dev/null +++ b/czarguide.md @@ -0,0 +1,3 @@ +# Beginner's Guide for Facility Czars + + From 87222a97b476d238a436568334f7fb1fc86dd3dc Mon Sep 17 00:00:00 2001 From: David Bizzozero <122558988+dbizzoze@users.noreply.github.com> Date: Tue, 21 Oct 2025 14:51:00 -0700 Subject: [PATCH 2/7] Update czarguide.md --- czarguide.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/czarguide.md b/czarguide.md index 8be915c..fb38552 100644 --- a/czarguide.md +++ b/czarguide.md @@ -1,3 +1,13 @@ # Beginner's Guide for Facility Czars +This guide is intended for facility czars to get started with managing their facility's repos, computing resources, user accounts, and more. The majority of czar tasks can be performed on [coact](https://coact.slac.stanford.edu) but this guide will cover some additional topics. +List of primary tasks for a facility czar: +* Create/manage repos for computing/storage resources for their facility +* Add/remove users to facility repos for accessing computing resources +* Add/remove users to Posix group for accessing storage resources (e.g. 
/sdf/data or /sdf/group)
+* Inform facility users on how to access computing/storage resources on S3DF
+* Commincate facility-wide issues or requests to S3DF admin team
+
+> [!TIP]
+> Facility czars play the role of an intermediary between S3DF general users and S3DF admin/support staff. Czars are the primary point-of-contact for users with S3DF issues/requests but are not expected to provide full technical support; they can refer users to the [S3DF help Slack channel](slac.slack.com#comp-sdf).

From e34a0354474714d57cbb3fc5c2b4574822a937dd Mon Sep 17 00:00:00 2001
From: dbizzoze
Date: Thu, 30 Oct 2025 15:31:57 -0700
Subject: [PATCH 3/7] Coact section update

---
 czarguide.md | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/czarguide.md b/czarguide.md
index fb38552..5c88f02 100644
--- a/czarguide.md
+++ b/czarguide.md
@@ -11,3 +11,29 @@ List of primary tasks for a facility czar:
 
 > [!TIP]
 > Facility czars play the role of an intermediary between S3DF general users and S3DF admin/support staff. Czars are the primary point-of-contact for users with S3DF issues/requests but are not expected to provide full technical support; they can refer users to the [S3DF help Slack channel](slac.slack.com#comp-sdf).
+
+## Management Systems in Coact
+
+Coact is the primary interface for facility czars to manage their tasks. The most common tasks involve managing repos and their facility's users. To begin, log into your [coact](https://coact.slac.stanford.edu) account. The tabs in the top left (Facilities, Repos, and Requests) each provide different options.
+
+### Coact -> Facilities
+
+In this section, czars can see an overview of their facility's compute and storage resources. Additionally, czars can add existing S3DF users to their facility and delegate additional czars.
+
+### Coact -> Repos
+
+In this section, czars can create repos and allocate computing resources to them.
Additionally, czars can add/remove members to the different repos (useful if different users need access to different computing resources for different projects). + +> [!NOTE] +> **Repos cannot be deleted in Coact!** This is a known limitation and should not be an issue for now as repos can be renamed and allocated to zero users aside from the facility czar. To delete a repo, please contact the S3DF admin team. + +* To create a new repo use the "Request New Repo" button in the top right. +* To add a user to an existing repo, select the person icon next to the repo name to view the users for that repo. Then select the "Manage Users" button to add/remove users. +* To allocate comute resources for a repo, select the "Compute" tab and edit the "Total compute allocation" value for the desired repo/cluster name. + +> [!TIP] +> Allocation values less than 100% will restrict the maximum number of CPUs/GPUs that can be used by that repo/cluster at a time as a fraction of the total owned resources by the facility. + +### Coact -> Requests + +In this section, czars can accept requests by users to join a facility and gain access to its repos. An email notification will be sent to czars when a user requests access to the facility. If an error occurs, it may be due to a new user not having a SLAC Unix account associated with coact. Accepting requests here only adds the user to the facility but does not automatically add them to any repos except "default". 
\ No newline at end of file From d8760e028325f781d19df3b7d3d394fe118e2eab Mon Sep 17 00:00:00 2001 From: dbizzoze Date: Mon, 3 Nov 2025 14:32:06 -0800 Subject: [PATCH 4/7] added how-to guides and coact menu image --- assets/coact_menu_tabs.png | Bin 0 -> 2936 bytes czarguide.md | 41 +++++++++++++++++++++++++++++++++---- 2 files changed, 37 insertions(+), 4 deletions(-) create mode 100644 assets/coact_menu_tabs.png diff --git a/assets/coact_menu_tabs.png b/assets/coact_menu_tabs.png new file mode 100644 index 0000000000000000000000000000000000000000..79d9d0110e914e62389016c6700bc4779a7f694b GIT binary patch literal 2936 zcmb`J`#;m~8^_-YIg~SfQb-FOM3VY2%qeGEjHV)TmXO9!Op(aU94a}NvpK7ovqwzM zmb1dtoT5z2VZ&_S_5BOJKYV|uqRb2mrM>Vb)zfZZ7!1{3a3rB!d4g9*UPj z7yt+#u`n^T54*FL^AU5rS2~VS^Vpdqu05cH+*WMsI&JkkDCx042E6B^*s;UI(JwR< zzv;cX3;R@Kmz~CN^6 zD*%8cKS>k-lnzo40Z&qS=m!CV-X}~R;H}61eB2Sw3`K7v?{Rk2Ge6SWS3Zm?-5zU~ z=9F`syL;Vv06?nKj#O<@^(>cIDq=9fHQxRsa}Eve$Sdk;;!aJfvx8J!SUsZsUZ(Id z0Jv(Xny_F|JI7b(Wf;8igfQ1a3mt8jErXpq7Z<1!X3Y1u)~@Rf%=aEu37HFnk~n*A z3#!jExYtyeE~F-t!c&Yyi();Tk_moek3JWsQIdyDE22d~I$+wUMFF(3{3k7lI}E#G ziKsh`-qINzJLGk=x^ZyI@^&pFBn?{hYwj2mB|Gfvj@i}?sW=5D)VHmsWj1r4SyCc0 zjX@i}n7;MfJLR@|QStpU86tmb5q_f?dfyssTC*m7%t*fiC`DH*--&4ZO_H(9+t?qO z2piUHDMaWswTfsNj09D98HfpxO48{|}RXDFtLW);ASEw+p)$n!8Z zDRM4ZBlk_ncctED=^Lu!OPhgk^w7|t`2l=3L0K>|CwH#V>QMUP_IQ_2{l6(P#=*}0 z$=*580mgCSQxF=KxON$rHI`{Vcg~HJ7h!rW2b#!%-t$b&k&eo0vi?&UwJNW>n?!D9 zaS9>x4fOF5aGeu1Z%eV_CPp_(7W@d5ZV9IxdE{6<5&@S+UTD9 z6~*6oP3BQ)O_SAOROz3an@b>BeD8v2@7?(!O&yjo^AW0g()Tk|0eQt|^qXNUDr6v1 z7T+jX4{^9*;I(YX+LcATt@Ju|hD|Qpv$LqNKAU%#jzIZVgyt|_!mv>EEn7MnoB{Sv zSx#?3-#sv=SAfXfZ@gJP^6dhc-eWD)O_dwoQ^2o;#}I6EBE%Qx_Pd#d4N$T|-HyN& z9{>I?n7z$vSozp`5wUygybkP#0s6U;CYHGv@dV*@Ok=q1N{D}p8LRA#0eXg_rM=}l zHCqr*Mq*<%ZCRGl5zUQ~u1c>&M*L+$WGeYNZ*xUR9|BPWRq=H5#*4}rh=lTV0F9SQ zwK&vOb_x!?=Q8|ls&8^aqwphkbJip{dqy5R1>YKv?5-VQXT6TtjO^>s|EXZJyflpf zGN|hJjIIWzoJTudDGT)6-%ZiX$jUpiT90+bF;;$C>D#p%9m0W4$3U_a_q~aFpHh+I 
zAwQ1fS>GwY-k$C>2}h~vD`-%r^Ose%2uOlu>y5Z1b|1ODY3D}-HB`lJ%_C%qR3X+3 zTD71qZ#^<)K`W%?aMIL}rCV+(#Xr>&-!k>eyA!Nj}g$Rm2$EP=M1XFe~4LFXty=!G%_}n`z>ampgFa zL-06FDq~EJ5HjZ5;IE~ZL3LU3aTx^lY&6%1XYR%#x0e?tG(p>+73ey0U%?n6?1%O= z5g8JbySpS!o6>2Fud-byl$C7kKeO=Z$7AMPoNfk<7$xaMU!UGUHs-5YvUa^;PM6GF zS2LFx_iq(8X!Vxn;+6;ky4q2Cb;M~s|KeJ?=-2ft0=frav`cEks6hc+3ZLp>#EdK$ zzuch}3Iq&6B$_GmO;*X%$OMQlC;0J?f8Xm#NlHqz(+p<(&ZNE)YuCZ)67$?hoO|TN z?btCm>C`%$L(JVzRc|YS%aNbezc$z9M3C?p$n7b`1|Q>*5b9B7b|H3ODFFj+(ne1f zpGvMuSm<_7d!^5Iqw4v~e59SEdYkG#9I5lBjzKP1vA-{7+fkr8*QBHsikcFwcf!3S zq8^rfQ@8cj3gWXfX9Z<;w1hc!pON=!dsWyAyK)xQiA-;+`6-^xj<%y` z(OFGFr2D)r!Ss!$+3cuVb9%YIX~UmvrwOQ@cvr%lxi*fOK0lEv>@`Aj@aB;HpKZZ?1daT$LYqC2l)^5BnTZCB-dXi^h2i5u z?LEPhHWtlwOAdV@g5sZurY#zHW;PRgM=H>c=++RN3KhJxz>D zONsYPgTMR}p`h41`T=%0h?CYLKDQKU!He4c45B;!X&J&a;8n)F1W`L)_7%?-B?%SA z7nR_v+FfMN5{x+U>ICwLPMclPjUwo5W@*26n`<;x)O@D+epu}!IqBHd%$45z(;B?1 zxP*qz((yE~CtOA)@|q(n!SPds)PRZnEA!mS1b>*u-HN2vVQU=IAzNL&`fK-1-}$Qf zNh`HL`?9T@IZbo;p^WwP?qO(n46hn!Kw=b789GB@g51ovywbG*YNYZV1aIV0ifU z&Vj~-BxQZ_qEKrS;d_KWaETuL+TbKy?;(fJQ;mMUg0;rMzX~OgBdCaq(T|}wJ{SL1 zao#BxHI-YPlaT9NoQ6Xhn;FjC*sJ?}d!In>spd)XlVw}Rkd0pFamC-wPbVS8c@%7t zADkilg&*($=rDe&`tz2vZ+E%^8=T$+6y%`ilFk)~V;{4e|COJ(vNAsJOBpYqtc@0H z2qV^~!;#+bP_dMnA|?n`jbZGSZftJPyN^yfyvq&G`JmPvDXTxgJOMDDFjWrHj~d^h zFS8Zu|AJMYnK&m{)SpXSXHttWQB=Qlu2g+l^df3PCFC!dS$(H6TKkTb$pL_)oX}AS wM|5m_wDk>%4|p=3|9}gFfTREa6By}H*IVC@C*+>x0x1AkT!Wfa8M#LP2WdXY<^TWy literal 0 HcmV?d00001 diff --git a/czarguide.md b/czarguide.md index 5c88f02..de0b478 100644 --- a/czarguide.md +++ b/czarguide.md @@ -4,8 +4,9 @@ This guide is intended for facility czars to get started with managing their fac List of primary tasks for a facility czar: * Create/manage repos for computing/storage resources for their facility +* Install and set up facility-specific software (e.g. in `/sdf/group`) * Add/remove users to facility repos for accessing computing resources -* Add/remove users to Posix group for accessing storage resources (e.g. 
/sdf/data or /sdf/group) +* Add/remove users to Posix group for accessing storage resources (e.g. `/sdf/data` or `/sdf/group`) * Inform facility users on how to access computing/storage resources on S3DF * Commincate facility-wide issues or requests to S3DF admin team @@ -15,14 +16,15 @@ List of primary tasks for a facility czar: ## Management Systems in Coact Coact is the primary interface for facility czars to manage their tasks. The most common tasks involve managing repos and their facility's users. To begin, log into your [coact](https://coact.slac.stanford.edu) account. The tabs in the top left: Facilities, Repos, and Requests each provide different options. +![Coact Menu Tabs](assets/coact_menu_tabs.png) ### Coact -> Facilities -In this section, czars can see an overview of their facility's compute and storage resources. Additionally, czars can add existing S3DF users to their facility and delegate additional czars. +In this tab, czars can see an overview of their facility's compute and storage resources. Additionally, czars can add existing S3DF users to their facility and delegate additional czars. ### Coact -> Repos -In this section, czars can create repos and allocate computing resources to them. Additionally, czars can add/remove members to the different repos (useful if different users need access to different computing resources for different projects). +In this tab, czars can create repos and allocate computing resources to them. Additionally, czars can add/remove members to the different repos (useful if different users need access to different computing resources for different projects). > [!NOTE] > **Repos cannot be deleted in Coact!** This is a known limitation and should not be an issue for now as repos can be renamed and allocated to zero users aside from the facility czar. To delete a repo, please contact the S3DF admin team. 
@@ -36,4 +38,35 @@ In this section, czars can create repos and allocate computing resources to them ### Coact -> Requests -In this section, czars can accept requests by users to join a facility and gain access to its repos. An email notification will be sent to czars when a user requests access to the facility. If an error occurs, it may be due to a new user not having a SLAC Unix account associated with coact. Accepting requests here only adds the user to the facility but does not automatically add them to any repos except "default". \ No newline at end of file +In this tab, czars can accept requests by users to join a facility and gain access to its repos. An email notification will be sent to czars when a user requests access to the facility. If an error occurs, it may be due to a new user not having a SLAC Unix account associated with coact. Accepting requests here only adds the user to the facility but does not automatically add them to any repos except "default". + +## Other S3DF Czar Tasks + +In addition to managing users and repos through Coact, some additional common tasks include: +
How to add/remove users from legacy Posix groups
+
+New S3DF users without legacy unix accounts should automatically gain membership in their respective facility group. However, it may be necessary to manually adjust the group settings if issues occur.
+
+Directly changing posix group membership is not possible on S3DF due to elevated privilege requirements (the "usermod" command requires "sudo" access). Instead, members can be added to or removed from a posix group by logging into `centos7.slac.stanford.edu` and using the `ypgroup` command.
+
+For example, to add a user to a posix group on Centos7, use the following command:
+`ypgroup adduser -group <groupname> -user <username>`
+Additional options can be viewed with `ypgroup --help`.
+
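A hedged sketch of how a czar might double-check the result afterwards. `getent` and `id` are standard tools; the `ypgroup` invocation (shown only as a comment) and the `<groupname>`/`<username>` placeholders follow this guide's example and are not otherwise verified:

```shell
# Site-specific step, performed on centos7.slac.stanford.edu (comment only):
#   ypgroup adduser -group <groupname> -user <username>
# Afterwards, membership can be checked from any Linux node with standard tools:
me="$(id -un)"        # current username
grp="$(id -gn)"       # current primary group (stand-in for a facility group)
getent group "$grp"   # prints the group's entry, including its member list
id -Gn "$me"          # prints every group this user belongs to
```

Note that name-service changes may take some time to propagate to all nodes, so a freshly added user may not appear immediately.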
+
How/where to install software (e.g. Conda) for the facility
+
+Software to be used by several users in a facility should be installed in `/sdf/group` or a subdirectory. To install Conda specifically, see the [Conda guide](conda.md). For other software, see the [software guide](software.md).
+
+If the software to be installed is universal enough to be used by many S3DF users across multiple facilities, please send an inquiry to the [S3DF help Slack channel](slac.slack.com#comp-sdf) or the dedicated [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission).
+
+
How to request/purchase additional resources (e.g. compute nodes)
+
+If additional computing or storage resources are needed, please consult the [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission) to find out when the next bulk purchase order will be placed and what the pricing options are.
+
+> [!NOTE]
+> Computing and storage resources have finite support lifetimes (usually around 5 years). Be sure to check with the S3DF admin team for available options when nearing the end-of-life of existing resources.
+
+ From 4abd756022e6656428f15b8c7ea7ca1b94296648 Mon Sep 17 00:00:00 2001 From: David Bizzozero <122558988+dbizzoze@users.noreply.github.com> Date: Mon, 3 Nov 2025 14:38:29 -0800 Subject: [PATCH 5/7] Update czarguide.md --- czarguide.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/czarguide.md b/czarguide.md index de0b478..fa18546 100644 --- a/czarguide.md +++ b/czarguide.md @@ -16,6 +16,7 @@ List of primary tasks for a facility czar: ## Management Systems in Coact Coact is the primary interface for facility czars to manage their tasks. The most common tasks involve managing repos and their facility's users. To begin, log into your [coact](https://coact.slac.stanford.edu) account. The tabs in the top left: Facilities, Repos, and Requests each provide different options. + ![Coact Menu Tabs](assets/coact_menu_tabs.png) ### Coact -> Facilities @@ -43,7 +44,8 @@ In this tab, czars can accept requests by users to join a facility and gain acce ## Other S3DF Czar Tasks In addition to managing users and repos through Coact, some additional common tasks include: -
How to add/remove users from legacy Posix groups + +### How to add/remove users from legacy Posix groups New S3DF users without legacy unix accounts should automatically gain membership into their respective facility group. However, it may be necessary to manually adjust the group settings if issues occur. @@ -53,20 +55,17 @@ For example, to add a user to a posix group on Centos7, use the following comman `ypgroup adduser -group -user ` Additional options can be viewed with `ypgroup --help`. -
-
How/where to install software (e.g. Conda) for the facility +### How/where to install software (e.g. Conda) for the facility Software to be used by several users in a facility should be installed in `/sdf/group` or a subdirectory. To install Conda specifically, see the [Conda guide](conda.md). For other software, see the [software guide](software.md). If software to be installed is universal enough to be used by a lot of S3DF users, across multiple facilities, please send an inquery to [S3DF help Slack channel](slac.slack.com#comp-sdf) or the dedicated [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission). -
-
How to request/purchase additional resources (e.g. compute nodes) +### How to request/purchase additional resources (e.g. compute nodes) If additional computing or storage resources are needed, please consult with the [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission) to check when the next bulk purchase order is to be made along with pricing options. > [!NOTE] > Computing and storage resources have finite support lifetimes (usually around 5 years). Be sure to check with the S3DF admin team for available options when nearing the end-of-life of existing resouces. -
From 052d02db60e68714dae8d6eb6145fab05136feab Mon Sep 17 00:00:00 2001
From: David Bizzozero <122558988+dbizzoze@users.noreply.github.com>
Date: Tue, 4 Nov 2025 13:57:55 -0800
Subject: [PATCH 6/7] Update czarguide.md cleanup

---
 czarguide.md | 57 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 38 insertions(+), 19 deletions(-)

diff --git a/czarguide.md b/czarguide.md
index fa18546..494eb7f 100644
--- a/czarguide.md
+++ b/czarguide.md
@@ -8,7 +8,8 @@ List of primary tasks for a facility czar:
 * Add/remove users to facility repos for accessing computing resources
 * Add/remove users to Posix group for accessing storage resources (e.g. `/sdf/data` or `/sdf/group`)
 * Inform facility users on how to access computing/storage resources on S3DF
-* Commincate facility-wide issues or requests to S3DF admin team
+* Monitor facility repo and resource utilization
+* Communicate facility-wide issues or requests between the S3DF admin team and facility users
 
 > [!TIP]
 > Facility czars play the role of an intermediary between S3DF general users and S3DF admin/support staff. Czars are the primary point-of-contact for users with S3DF issues/requests but are not expected to provide full technical support; they can refer users to the [S3DF help Slack channel](slac.slack.com#comp-sdf).
@@ -19,27 +20,37 @@ Coact is the primary interface for facility czars to manage their tasks. The mos
 
 ![Coact Menu Tabs](assets/coact_menu_tabs.png)
 
-### Coact -> Facilities
+### Coact: Facilities
 
-In this tab, czars can see an overview of their facility's compute and storage resources. Additionally, czars can add existing S3DF users to their facility and delegate additional czars.
+
    + + In this tab, czars can see an overview of their facility's compute and storage resources. Additionally, czars can add existing S3DF users to their facility and delegate additional czars.
-### Coact -> Repos +### Coact: Repos -In this tab, czars can create repos and allocate computing resources to them. Additionally, czars can add/remove members to the different repos (useful if different users need access to different computing resources for different projects). +
    + + In this tab, czars can create repos and allocate computing resources to them. Additionally, czars can add/remove members to the different repos (useful if different users need access to different computing resources for different projects).
> [!NOTE] > **Repos cannot be deleted in Coact!** This is a known limitation and should not be an issue for now as repos can be renamed and allocated to zero users aside from the facility czar. To delete a repo, please contact the S3DF admin team. -* To create a new repo use the "Request New Repo" button in the top right. -* To add a user to an existing repo, select the person icon next to the repo name to view the users for that repo. Then select the "Manage Users" button to add/remove users. -* To allocate comute resources for a repo, select the "Compute" tab and edit the "Total compute allocation" value for the desired repo/cluster name. + * To create a new repo use the "Request New Repo" button in the top right. + * To add a user to an existing repo, select the person icon next to the repo name to view the users for that repo. Then select the "Manage Users" button to add/remove users. + * To allocate compute resources for a repo, select the "Compute" tab and edit the "Total compute allocation" value for the desired repo/cluster name. + +
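The "Total compute allocation" value behaves as a cap on concurrent usage rather than a quota. A small shell sketch illustrates the arithmetic (the CPU count and percentage below are invented for illustration):

```shell
# Invented numbers: a facility owning 128 CPUs on a cluster, with a repo's
# "Total compute allocation" set to 25%, caps that repo at
# 128 * 25 / 100 = 32 concurrently used CPUs.
owned_cpus=128
allocation_pct=25
max_concurrent=$(( owned_cpus * allocation_pct / 100 ))
echo "$max_concurrent"   # prints 32
```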
    + + Additionally, in the "Compute" tab, selecting a particular cluster will show the utilization history along with how many CPU-hours were used by each user in the repo.
> [!TIP] > Allocation values less than 100% will restrict the maximum number of CPUs/GPUs that can be used by that repo/cluster at a time as a fraction of the total owned resources by the facility. -### Coact -> Requests +### Coact: Requests -In this tab, czars can accept requests by users to join a facility and gain access to its repos. An email notification will be sent to czars when a user requests access to the facility. If an error occurs, it may be due to a new user not having a SLAC Unix account associated with coact. Accepting requests here only adds the user to the facility but does not automatically add them to any repos except "default". +
    + + In this tab, czars can accept requests by users to join a facility and gain access to its repos. An email notification will be sent to czars when a user requests access to the facility. If an error occurs, it may be due to a new user not having a SLAC Unix account associated with coact. Accepting requests here only adds the user to the facility but does not automatically add them to any repos except "default".
## Other S3DF Czar Tasks @@ -47,23 +58,31 @@ In addition to managing users and repos through Coact, some additional common ta ### How to add/remove users from legacy Posix groups -New S3DF users without legacy unix accounts should automatically gain membership into their respective facility group. However, it may be necessary to manually adjust the group settings if issues occur. - -Directly changing posix group membership is not possible on S3DF due to elevated privilege requirements (the "usermod" command requires "sudo" access). Instead, to add/remove members from the posix group can be done by logging into `centos7.slac.stanford.edu` and using the `ypgroup` command. +
    + + New S3DF users without legacy unix accounts should automatically gain membership into their respective facility group. However, it may be necessary to manually adjust the group settings if issues occur.
+
    + + Directly changing posix group membership is not possible on S3DF due to elevated privilege requirements (the "usermod" command requires "sudo" access). Instead, to add/remove members from the posix group can be done by logging into `centos7.slac.stanford.edu` and using the `ypgroup` command.
+
    -For example, to add a user to a posix group on Centos7, use the following command: -`ypgroup adduser -group -user ` -Additional options can be viewed with `ypgroup --help`. + For example, to add a user to a posix group on Centos7, use the following command: `ypgroup adduser -group -user `. Additional options can be viewed with `ypgroup --help`.
### How/where to install software (e.g. Conda) for the facility -Software to be used by several users in a facility should be installed in `/sdf/group` or a subdirectory. To install Conda specifically, see the [Conda guide](conda.md). For other software, see the [software guide](software.md). +
    + + Software to be used by several users in a facility should be installed in `/sdf/group` or a subdirectory. To install Conda specifically, see the [Conda guide](conda.md). For other software, see the [software guide](software.md).
-If software to be installed is universal enough to be used by a lot of S3DF users, across multiple facilities, please send an inquery to [S3DF help Slack channel](slac.slack.com#comp-sdf) or the dedicated [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission). +
    + + If software to be installed is universal enough to be used by a lot of S3DF users, across multiple facilities, please send an inquery to [S3DF help Slack channel](slac.slack.com#comp-sdf) or the dedicated [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission).
### How to request/purchase additional resources (e.g. compute nodes) -If additional computing or storage resources are needed, please consult with the [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission) to check when the next bulk purchase order is to be made along with pricing options. +
    + + If additional computing or storage resources are needed, please consult with the [S3DF czars Slack channel](slac.slack.com#s3df-czars) (requires permission) to check when the next bulk purchase order is to be made along with pricing options.
> [!NOTE]
> Computing and storage resources have finite support lifetimes (usually around 5 years). Be sure to check with the S3DF admin team for available options when nearing the end-of-life of existing resources.

From ecc8e1f39ff0a4ad4041bc7e82aaa05a4d8ce5a8 Mon Sep 17 00:00:00 2001
From: David Bizzozero <122558988+dbizzoze@users.noreply.github.com>
Date: Tue, 4 Nov 2025 14:09:14 -0800
Subject: [PATCH 7/7] Add files via upload

This file was mistakenly deleted
---
 beginnerguide.md | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)
 create mode 100644 beginnerguide.md

diff --git a/beginnerguide.md b/beginnerguide.md
new file mode 100644
index 0000000..064ddd1
--- /dev/null
+++ b/beginnerguide.md
@@ -0,0 +1,84 @@
+# Beginner's Guide
+
+This guide presents three different methods of accessing S3DF. This is an example of a general step-by-step workflow we hope is suitable for most users. Within this document, you will find guidance on how to:
+
+- Log in to the S3DF system
+- Navigate directories and storage spaces
+- Access supported applications
+- Prepare and submit a job script
+
+Follow these instructions to access and use S3DF.
+
+Let's get started!
+
+
+## Access to S3DF Through SSH
+
+This example provides a clear, step-by-step workflow for running software on S3DF through SSH.
+
+### Connect to a Bastion Node
+
+To start, connect to a bastion node using the following command:
+
+    ssh username@s3dflogin-mfa.slac.stanford.edu
+
+### Connect to an Interactive Node
+
+After successfully connecting to a bastion node, log in to an interactive node using SSH. For example:
+
+    ssh iana
+
+### Set Up a Running Environment
+
+To set up your running environment, create a bash file containing all necessary commands, and then execute the bash file.
+
+### Configure a [SLURM](batch-compute.md#) Job Script
+
+Here is an example SLURM job script named `run.sbatch`:
+
+
+    #!/bin/bash
+    #SBATCH --partition=milano
+    #SBATCH --account=rfar
+    #SBATCH --job-name=test
+    #SBATCH --output=output-%j.txt
+    #SBATCH --error=error-%j.txt
+    #SBATCH --nodes=1
+    #SBATCH --ntasks-per-node=16
+    #SBATCH --time=0-00:10:00
+    mpirun /sdf/group/rfar/ace3p/bin/omega3p pillbox.omega3p
+
+
+### Submit Jobs to a Compute Node
+
+Use the sbatch command to submit your job to a compute node for execution:
+
+    sbatch run.sbatch
+
+### Check the Status of Running Jobs (Optional)
+
+To monitor the status of your submitted jobs, run the following command:
+
+    squeue -u username
+
+### View Data Output
+
+Once your jobs have completed, you can view the data output directly on the pool node to verify that the results are as expected.
+
+### Transfer Data (If Necessary)
+
+If you need to transfer data, connect to a data transfer node to facilitate the movement of your files. Use appropriate file transfer commands (e.g., scp, rsync) to move your data to the desired location.
+
+## Access to S3DF Through NoMachine
+ - NoMachine offers a specialized remote desktop solution that enhances the performance of X11 graphics over slow connections, compared to SSH.
+ - A key feature of NoMachine is its ability to maintain the state of your desktop across multiple sessions, even if your internet connection is unexpectedly lost.
+ - To access NoMachine, use the login pool at s3dfnx.slac.stanford.edu.
+ - For additional information about this access method, please refer to the [NoMachine](reference.md#nomachine) documentation.
+
+## Access to S3DF Through OnDemand
+ - Users can also access S3DF through [Open OnDemand](interactive-compute.md#using-a-browser-and-onDemand) via any (modern) browser.
- This solution is recommended for users who want to run Jupyter notebooks, don't want to learn SLURM, or don't want to install a terminal or the NoMachine remote desktop on their system.
+ - After login, you can select which Jupyter image to run and which hardware resources to use (partition name and number of hours/cpu-cores/memory/gpu-cores).
+ - The partition can be the name of an interactive pool or the name of a SLURM partition.
+ - You can choose an interactive pool as the partition if you want a long-running session requiring sporadic resources; otherwise select a SLURM partition.
+ - Note that no GPUs are currently available on the interactive pools.
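As a closing aside, the restored guide's SSH workflow can be condensed into one sketch. The partition (`milano`), account (`rfar`), and `omega3p` path are copied from the guide's own example; the submission and monitoring commands are left commented because they only work on an S3DF interactive node:

```shell
# Condensed sketch of the SSH workflow above. Everything except the actual
# submission runs anywhere; submit only from an S3DF interactive node.
cat > run.sbatch <<'EOF'
#!/bin/bash
#SBATCH --partition=milano
#SBATCH --account=rfar
#SBATCH --job-name=test
#SBATCH --output=output-%j.txt
#SBATCH --error=error-%j.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=0-00:10:00
mpirun /sdf/group/rfar/ace3p/bin/omega3p pillbox.omega3p
EOF
grep -c '^#SBATCH' run.sbatch   # prints 8 (one per directive)
# On an S3DF interactive node you would then run:
#   sbatch run.sbatch           # submit the job
#   squeue -u "$USER"           # monitor its status
```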