Forcing data size #2
Hi Tijn, thanks for reporting back on your experience and for using this discussion forum. You win the prize for the first post!
Hi guys,
I'm just getting started downloading the Antarctica forcing data, using the Globus CLI. However, I'm a bit worried by the absolutely massive volume of data.
I'm just looking at the CESM2-WACCM historical data for now. The ocean forcing is single-precision, compressed NetCDF, resulting in about 9 GB of data per variable for the entire 165-year period; quite manageable. The atmospheric forcing data, however, has not been compressed, so each yearly file (at 2 km resolution) is about 440 MB. With 165 years of data and 14 variables, that adds up to just over 1 TB. And that's just for the historical period; the four different SSP scenarios, which cover even more years, would be another 1.8 TB each, adding up to just over 8 TB for the entire thing.
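For reference, the back-of-envelope arithmetic behind those totals (a quick sketch in Python, using only the rounded figures above):

```python
# Rough volume estimate for the uncompressed atmospheric forcing,
# using only the figures quoted above.
gb_per_yearly_file = 0.44   # ~440 MB per variable per year, uncompressed
n_variables = 14
historical_years = 165

historical_tb = gb_per_yearly_file * n_variables * historical_years / 1000
total_tb = historical_tb + 4 * 1.8   # plus four SSP scenarios at ~1.8 TB each

print(f"historical: ~{historical_tb:.1f} TB, total: ~{total_tb:.1f} TB")
# historical: ~1.0 TB, total: ~8.2 TB
```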
I ran a quick test, and turning on NetCDF deflate+shuffle for the atmospheric data reduces the file size by about 75% (lossless compression), which would bring the total volume down to about 2 TB. That would take me about 60 hours to download, but it might just fit on my HPC disk. 8 TB would take ten days of non-stop downloading, never mind the fact that I don't have the disk space to store that kind of data.
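For reference, applying lossless deflate+shuffle to an existing file can be done along these lines (a sketch with xarray and the netCDF4 backend; the file names are placeholders and deflate level 4 is just one reasonable choice; nccopy or NCO can do the same job):

```python
# Sketch: rewrite one yearly atmospheric forcing file with lossless
# deflate + shuffle enabled on every data variable.
# File names and the deflate level are placeholders, not the actual ones.
import xarray as xr

ds = xr.open_dataset("atm_forcing_1995.nc")  # hypothetical input file
encoding = {
    var: {"zlib": True, "complevel": 4, "shuffle": True}
    for var in ds.data_vars
}
ds.to_netcdf("atm_forcing_1995_deflated.nc", encoding=encoding)
```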
Would it be possible to apply this compression to the files on the server?