HDF5
BLISS allows configuring how data are saved in its HDF5 files.
Nexus writer
HDF5 supports data compression and chunking to optimize storage space and I/O speed. Both can be configured for the HDF5 files produced by the Nexus writer, individually for each BLISS session:
DEMO [1]: SCAN_SAVING.writer_object.chunk_size = 1 # MB
DEMO [2]: SCAN_SAVING.writer_object.compression_limit = 1 # MB
DEMO [3]: SCAN_SAVING.writer_object.compression_scheme = "lz4-bitshuffle"
DEMO [4]: SCAN_SAVING.writer_object.chunk_split = 4
chunk_size
: the maximal chunk size in MB. Smaller datasets are not chunked, unless they require compression. Default: 1 MB

compression_limit
: datasets larger than this limit will be compressed. Default: 1 MB

compression_scheme
: used in case the dataset size is larger than compression_limit. Default: "gzip-byteshuffle"

chunk_split
: in case the dataset size is larger than chunk_size, the inner dataset dimensions are split in this many parts. Default: 4.
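To illustrate what these settings correspond to at the HDF5 level, here is a minimal standalone h5py sketch (not part of BLISS, and not the Nexus writer's actual chunk computation): the dataset shape, file name and chunk layout are made-up values chosen only to mirror the chunk_size/chunk_split logic described above.

    import numpy as np
    import h5py

    # Hypothetical detector stack: 100 frames of 2048 x 2048 uint16 pixels,
    # i.e. ~800 MB in total, well above a 1 MB compression limit.
    shape = (100, 2048, 2048)

    with h5py.File("illustration.h5", "w") as f:
        # Splitting each inner dimension in 4 parts (cf. chunk_split = 4)
        # gives 512 x 512 chunks of one frame each, i.e. 0.5 MB per chunk,
        # which stays below a 1 MB chunk_size.
        chunks = (1, shape[1] // 4, shape[2] // 4)
        f.create_dataset(
            "data",
            shape=shape,
            dtype=np.uint16,
            chunks=chunks,
            compression="gzip",  # the "gzip-byteshuffle" default scheme
            shuffle=True,        # byte-shuffle filter applied before gzip
        )
        # An "lz4-bitshuffle" scheme would additionally require the
        # hdf5plugin package to be installed.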
LIMA
The HDF5 files produced by LIMA are pre-allocated in chunks. This can be configured for each BLISS session individually:
# fixed number of images per file
DEMO [1]: lima_simulator.saving.mode = lima_simulator.saving.mode.ONE_FILE_PER_N_FRAMES
DEMO [2]: lima_simulator.saving.frames_per_file = 100
# fixed file size
DEMO [3]: lima_simulator.saving.mode = lima_simulator.saving.mode.SPECIFY_MAX_FILE_SIZE
DEMO [4]: lima_simulator.saving.max_file_size_in_MB = 500
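As a rough back-of-the-envelope check (plain arithmetic, not a LIMA API), the number of frames that fit in a file of a given size can be estimated from the uncompressed frame size; the frame shape below is a hypothetical example, and real files differ slightly because of HDF5 metadata and any compression.

    # Rough estimate only: real LIMA files also contain HDF5 metadata,
    # and compression changes the effective frame size.
    frame_shape = (2048, 2048)   # hypothetical detector resolution
    bytes_per_pixel = 2          # e.g. uint16 pixels
    frame_size_mb = frame_shape[0] * frame_shape[1] * bytes_per_pixel / 1024**2

    max_file_size_in_MB = 500
    frames_per_file = int(max_file_size_in_MB // frame_size_mb)
    print(frames_per_file)       # -> 62 frames of 8 MB each fit in a 500 MB file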