TODO

Important things that are not implemented yet!

  • nwb_linkml.adapters.classes.ClassAdapter.handle_dtype() does not yet handle compound dtypes, leaving them as AnyType instead. This is acceptable for a first draft since compound dtypes are rarely used within NWB, but they typically represent table-like data, so we will eventually need to handle them by generating a slot for each member dtype.
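One possible shape for that mapping, as a hedged sketch: assume a compound dtype arrives as a list of {name, dtype, doc} mappings (as in the NWB schema language) and produce one slot per member. The dicts and the FLAT_TO_LINKML table below are illustrative stand-ins, not the real nwb_linkml or LinkML classes.

```python
# Sketch: expand a compound dtype into one slot per member field.
# Assumes the compound dtype is a list of {"name", "dtype", "doc"} mappings;
# the returned dicts stand in for linkml_runtime SlotDefinition objects.

# Partial, illustrative flat-dtype -> LinkML range table (an assumption).
FLAT_TO_LINKML = {
    "text": "string",
    "int32": "integer",
    "float64": "float",
}

def compound_dtype_to_slots(dtype: list[dict]) -> list[dict]:
    slots = []
    for field in dtype:
        slots.append({
            "name": field["name"],
            "description": field.get("doc", ""),
            "range": FLAT_TO_LINKML.get(field["dtype"], "AnyType"),
            "multivalued": True,  # each member is a column of table-like data
        })
    return slots

slots = compound_dtype_to_slots([
    {"name": "start_time", "dtype": "float64", "doc": "start of interval"},
    {"name": "tags", "dtype": "text", "doc": "user tags"},
])
```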

Docs TODOs

Todo

Implement reading while skipping arrays. They are fast to read with the ArrayProxy class and dask, but there are times when we might want to leave them out of the read entirely. This might be better implemented as a filter on model_dump, but we need to investigate further how best to support reading just metadata, or even a specific field value, or whether we should leave that to other implementations entirely (e.g. once we have SQL export, rather than rigging up a whole query system ourselves).
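As a rough illustration of what such a filter could look like, independent of pydantic itself: here a nested dict stands in for model_dump() output, and any list or tuple stands in for array data (ArrayProxy / dask arrays in the real code).

```python
# Sketch: strip array-valued fields from a dumped model, keeping metadata.
# A nested dict stands in for the output of pydantic's model_dump();
# "array" here means any list/tuple (a stand-in for ArrayProxy/dask arrays).

def drop_arrays(dumped: dict) -> dict:
    out = {}
    for key, val in dumped.items():
        if isinstance(val, (list, tuple)):
            continue  # skip array data entirely
        if isinstance(val, dict):
            out[key] = drop_arrays(val)  # recurse into sub-models
        else:
            out[key] = val
    return out

dumped = {
    "name": "ts",
    "rate": 30.0,
    "data": [0.1, 0.2, 0.3],
    "timestamps": {"unit": "seconds", "values": [0, 1, 2]},
}
meta = drop_arrays(dumped)
# meta == {"name": "ts", "rate": 30.0, "timestamps": {"unit": "seconds"}}
```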

Todo

Implement HDF5 writing.

We need to create inverse mappings that can take pydantic models to HDF5 groups and datasets. If more metadata about the generation process needs to be preserved (e.g. explicitly notating that something is an attribute, dataset, or group), we can make use of the LinkML_Meta model. If the model being edited was loaded from an HDF5 file (rather than freshly created), then hdf5_path should be populated, making the mapping straightforward, but we probably want to generalize that to deterministically derive hdf5_path from an object's position in the NWBFile object. I think that might require us to explicitly annotate when something is supposed to be a reference vs. the original in the model representation, or else it's ambiguous.
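A minimal sketch of deriving hdf5_path deterministically from position in the object tree, with plain dicts standing in for pydantic models (the real implementation would walk model fields rather than dict keys):

```python
# Sketch: derive a deterministic hdf5_path for every sub-model from its
# position in a nested model tree. Dicts stand in for pydantic models.

def assign_paths(node: dict, prefix: str = "") -> dict[str, dict]:
    """Map hdf5_path -> node for every sub-model in the tree."""
    paths = {prefix or "/": node}
    for key, val in node.items():
        if isinstance(val, dict):
            paths.update(assign_paths(val, f"{prefix}/{key}"))
    return paths

nwbfile = {
    "acquisition": {"ts": {"rate": 30.0}},
    "processing": {"behavior": {}},
}
paths = assign_paths(nwbfile)
# "/acquisition/ts" is now deterministically addressable
```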

Otherwise, it should be a matter of detecting changes from the file if it already exists, and then writing them.
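Change detection could then be as simple as comparing flattened path-to-value views of the on-disk file and the edited model. This sketch assumes both sides have already been flattened to hdf5_path keys (e.g. via something like the path derivation above); reading attributes and datasets from h5py is elided.

```python
# Sketch: find which hdf5 paths changed between the on-disk state and the
# edited model, so only those need re-writing. Flat dicts of
# hdf5_path -> value stand in for values read via h5py.

def changed_paths(on_disk: dict, edited: dict) -> set[str]:
    changed = set()
    for path in on_disk.keys() | edited.keys():
        if on_disk.get(path) != edited.get(path):
            changed.add(path)
    return changed

on_disk = {"/acquisition/ts/rate": 30.0, "/session_description": "old"}
edited = {"/acquisition/ts/rate": 30.0, "/session_description": "new"}
# only /session_description needs to be written back
```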

Todo

Test find_references()!

Todo

Document Pydantic model generation.

Todo

Document provider usage.
