Main #158
3 changes: 3 additions & 0 deletions README.md
@@ -1,3 +1,6 @@
***Globus Data Transfer outage scheduled for February 4th 2026 09:00-13:00 PST. [More details here](changelog.md)***


Welcome to the SLAC Shared Scientific Data Facility (S3DF) at SLAC National Accelerator Laboratory.

S3DF is a compute, storage, and network architecture designed to support
13 changes: 13 additions & 0 deletions accounts.md
@@ -53,5 +53,18 @@ You can change your password via the SLAC Account self-service password update s

If you have forgotten your password and need to reset it, please contact the [SLAC IT Service Desk](https://it.slac.stanford.edu/support).

## Support for urgent account-related issues

Staff and users needing assistance outside of business hours should call the main IT Service Desk line at (650) 926-4357. Callers will be presented with a menu for after-hours support:

Option 1: For Account Lockouts and Password Resets

Option 2: For all other issues

When a caller selects Option 1, the system is designed to maximize the chance of reaching an on-call technician promptly. The caller can choose to wait on hold or go directly to voicemail. If they wait, the system cycles between the primary and secondary on-call staff members in 15-second intervals to avoid rolling over to personal voicemail. If the caller cannot reach the scheduled agents, they will be asked to leave a detailed voicemail. Total hold time is two minutes if no scheduled agent is able to answer the call.

The service level objective for these urgent off-hours account issues is a response within 30 minutes during non-business hours (5 PM to midnight on weekdays and 8 AM to midnight on weekends).
Between midnight and 8:00 AM, support is provided on a best-effort basis.



Binary file added assets/S3DF_container_lifecycle.png
2 changes: 2 additions & 0 deletions batch-compute.md
@@ -79,6 +79,8 @@ See the table below to determine the specifications for each cluster (slurm part

| Partition name | CPU model | Useable cores per node | Useable memory per node | GPU model | GPUs per node | Local scratch | Number of nodes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| torino | AMD Turin 9555 | 120 | 720 GB | - | - | 6 TB | 52 |
| hopper | AMD Turin 9575F | 224 (hyperthreaded) | 1344 GB | NVIDIA H200 | 4 | 21 TB | 3 |
| roma | AMD Rome 7702 | 120 | 480 GB | - | - | 300 GB | 131 |
| milano | AMD Milan 7713 | 120 | 480 GB | - | - | 6 TB | 270 |
| ampere | AMD Rome 7542 | 112 (hyperthreaded) | 952 GB | Tesla A100 (40GB) | 4 | 14 TB | 42 |
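
For concreteness, here is a minimal sketch of a Slurm batch script targeting one of the partitions listed above; the account name, resource sizes, and commands are placeholders rather than values from the S3DF documentation, so adapt them to your facility's allocation.

```bash
#!/bin/bash
#SBATCH --partition=milano       # any partition name from the table above
#SBATCH --account=<facility>     # placeholder: your facility's Slurm account
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G                # keep within the usable memory listed for the node
#SBATCH --time=01:00:00
# For GPU partitions such as ampere or hopper, a GPU request (e.g. "#SBATCH --gpus=1")
# would also be needed; the exact GPU flags depend on the site's Slurm configuration.

srun hostname
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.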
4 changes: 4 additions & 0 deletions changelog.md
@@ -6,6 +6,10 @@

### Upcoming

|When |Duration | What |
| --- | --- | --- |
| February 4th 2026 | 9:00-13:00 PST (planned) | Shut down the Globus node "sdfdtn004" for a network card upgrade. |

### Past

|When |Duration | What |
250 changes: 166 additions & 84 deletions conda.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion data-and-storage.md
@@ -43,7 +43,7 @@ can be found at\
`/sdf/{home, sw, group}/.snapshots/<GMT_time>/<dir>`
GMT_time indicates the time the snapshot directory was created. Choose a time that corresponds to the file versions you want and simply copy back the files.

- Files/objects under `/sdf/data` will be backed up or archived according to a data retention policy defined by the facility. Facilities will be responsible for covering the media costs and overhead required by their policy. Similar to the /sdf/home area, you can also check in /sdf/data/\<facility\>/.snapshots to see if snapshots are enabled for self-service restores.
- Files/objects under `/sdf/data` will be backed up or archived according to a data retention policy defined by the facility. Facilities are responsible for covering the media costs and overhead required by their policy. Similar to the /sdf/home area (but with a slightly different path structure), you can also look in /sdf/data/\<facility\>/.snapshots to see whether snapshots are enabled for self-service restores (a short restore example follows this list).

- The scratch spaces under `/sdf/scratch` and all directories named "nobackup" (located *anywhere* in any /sdf path) will not be backed up or archived. Please use as many "nobackup" subdirectory locations as required for any files that do not need backup. That can save significant tape and processing resources.

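As a minimal sketch of the self-service restore described above (assuming snapshots are enabled for your area), the commands below list the available snapshot timestamps and copy a file back; the timestamp, directory, and file names are placeholders, so check the `.snapshots` listing for what actually exists.

```bash
# List available snapshot timestamps (GMT) for the home area
ls /sdf/home/.snapshots/

# Copy a file back from the snapshot closest to the version you want.
# Replace <GMT_time>, <your_dir>, and the file name with real values from the listing.
cp /sdf/home/.snapshots/<GMT_time>/<your_dir>/myfile.txt ~/myfile.txt

# The same pattern applies under /sdf/data if snapshots are enabled for your facility:
ls /sdf/data/<facility>/.snapshots/
```
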
10 changes: 5 additions & 5 deletions interactive-compute.md
@@ -13,12 +13,12 @@ The currently available pools are shown in the table below (The facility can be
|Pool name | Facility | Resources |
| --- | --- | --- |
|iana | For all S3DF users | 4 servers, 40 HT cores and 384 GB per server |
|rubin-devl | Rubin | 4 servers, 128 cores and 512 GB per server |
|psana | LCLS | 4 servers, 40 HT cores and 384 GB per server |
|fermi-devl | Fermi | 1 server, 64 HT cores and 512 GB per server |
|rubin-devl | Rubin | 11 servers, 128 cores and 512 GB per server |
|psana | LCLS | 7 servers, 40 HT cores and 384 GB per server |
|fermi-devl | Fermi | 2 servers, 64 HT cores and 512 GB per server |
|faders | FADERS | 1 server, 128 HT cores and 512 GB per server |
|ldmx | LDMX | 1 server, 128 HT cores and 512 GB per server |
|ad | AD | 3 servers, 128 HT cores and 512 GB per server |
|ad | AD | 2 servers, 128 HT cores and 512 GB per server |
|epptheory | EPPTheory | 2 servers, 128 HT cores and 512 GB per server |
|cdms | SuperCDMS | (points to iana) |
|suncat | SUNCAT | (points to iana) |
@@ -67,4 +67,4 @@ Users are welcome to submit a github pull-request to have their Jupyter environm

### Other Custom Ondemand Applications

If you wish to deploy your own custom Open Ondemand applications/services to the SLAC Ondemand Service, please [contact us](contact-us.md).
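
As a rough sketch of how the interactive pools listed earlier are typically reached, the commands below assume the usual S3DF pattern of logging in through an S3DF login node and then hopping to a pool by name; the login hostname and pool name are assumptions rather than values from this page, so check your facility's SSH instructions.

```bash
# Assumed pattern: first log in to an S3DF login node...
ssh <username>@s3dflogin.slac.stanford.edu

# ...then connect to one of the pools from the table, e.g. rubin-devl or iana:
ssh rubin-devl
```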