HDF5

The HDF5 files that BLISS produces to hold all scan data can be configured.

Nexus writer

HDF5 supports data compression and chunking to optimize storage space and I/O speed. Both can be configured for the HDF5 files produced by the Nexus writer, individually for each BLISS session (a sketch of the corresponding HDF5 options follows the parameter list below):

DEMO [1]: SCAN_SAVING.writer_object.chunk_size = 1  # MB
DEMO [2]: SCAN_SAVING.writer_object.compression_limit = 1  # MB
DEMO [3]: SCAN_SAVING.writer_object.compression_scheme = "lz4-bitshuffle"
DEMO [4]: SCAN_SAVING.writer_object.chunk_split = 4
  • chunk_size: the maximum chunk size in MB. Smaller datasets are not chunked unless they require compression. Default: 1 MB
  • compression_limit: datasets larger than this limit are compressed. Default: 1 MB
  • compression_scheme: the compression filter applied when the dataset size exceeds compression_limit. Default: "gzip-byteshuffle"
  • chunk_split: when the dataset size exceeds chunk_size, the inner dataset dimensions are split into this many parts. Default: 4
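To make the effect of these parameters concrete, here is a minimal sketch in plain h5py. The threshold checks and the chunk-shape rule are assumptions chosen to mirror the parameter names above, not the Nexus writer's actual implementation; h5py's built-in gzip filter with byte-shuffle stands in for schemes such as "lz4-bitshuffle", which needs the external hdf5plugin package.

import numpy as np
import h5py

# Hypothetical thresholds mirroring the writer parameters above.
CHUNK_SIZE_MB = 1
COMPRESSION_LIMIT_MB = 1
CHUNK_SPLIT = 4

data = np.zeros((100, 256, 256), dtype="float32")  # ~26 MB stack of frames
size_mb = data.nbytes / 1024**2

# Chunk only above the chunk-size threshold: one frame per chunk, with the
# inner (image) dimensions split into CHUNK_SPLIT parts.
chunks = None
if size_mb > CHUNK_SIZE_MB:
    chunks = (1,) + tuple(max(1, n // CHUNK_SPLIT) for n in data.shape[1:])

# Compress only above the compression limit (gzip + byte-shuffle here).
compression = "gzip" if size_mb > COMPRESSION_LIMIT_MB else None

with h5py.File("demo_scan.h5", "w") as f:
    f.create_dataset(
        "entry/data",
        data=data,
        chunks=chunks,
        compression=compression,
        shuffle=compression is not None,
    )

Smaller chunks make partial reads cheaper but add metadata overhead, which is why the chunk size is capped rather than storing each dataset as a single chunk.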

LIMA

The HDF5 files produced by LIMA are pre-allocated in chunks. This can be configured for each BLISS session individually (a worked size estimate follows the snippets below):

# fixed number of images per file
DEMO [1]: lima_simulator.saving.mode = lima_simulator.saving.mode.ONE_FILE_PER_N_FRAMES
DEMO [2]: lima_simulator.saving.frames_per_file = 100

# fixed file size
DEMO [3]: lima_simulator.saving.mode = lima_simulator.saving.mode.SPECIFY_MAX_FILE_SIZE
DEMO [4]: lima_simulator.saving.max_file_size_in_MB = 500
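The two modes are alternatives: either fix the number of frames per file, or fix the file size and let LIMA derive the frame count. The back-of-the-envelope arithmetic below, using an assumed 2048 x 2048, 16-bit detector, shows how the two settings relate; compression and HDF5 overhead are ignored.

# Hypothetical detector geometry; adjust to the actual Lima camera.
width, height = 2048, 2048      # pixels
bytes_per_pixel = 2             # 16-bit images

# ONE_FILE_PER_N_FRAMES: uncompressed file size for 100 frames per file.
frames_per_file = 100
file_size_mb = width * height * bytes_per_pixel * frames_per_file / 1024**2
print(f"~{file_size_mb:.0f} MB per file")          # ~800 MB

# SPECIFY_MAX_FILE_SIZE: frames that fit into a 500 MB file.
max_file_size_mb = 500
frames = int(max_file_size_mb * 1024**2 // (width * height * bytes_per_pixel))
print(f"~{frames} frames per file")                # ~62 frames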