Currently, the only way to compress datasets with Silo is for the data producer to use one of Silo's pre-installed filters. There is no way for a data producer to use an arbitrary custom compression filter.
There is a way for a caller to specify arbitrary HDF5 file-access properties, but only at file create/open time (e.g. in DBCreate/DBOpen). To affect compression on an individual-object basis, we'd have to add it somehow as a DBoptlist option. That is doable, but it requires touching all the places where each object's optlist options are processed.
The idea would be that a caller could use HDF5 directly to create a dataset-creation property list with their own custom filter in it, and then pass the resulting hid_t dcpl id to Silo via this new optlist option. That hid_t would then be fed through to the dataset-creation operations that occur during the subsequent Silo DBPutXXX() calls for which the optlist option is in effect.