trx.trx_file_memmap#

Attributes#

Classes#

TrxFile

Core class of the TrxFile

Functions#

_append_last_offsets(→ numpy.ndarray)

Appends the last element of offsets from header information

_generate_filename_from_data(→ str)

Determines the data type from array data and generates the appropriate filename

_split_ext_with_dimensionality(→ Tuple[str, int, str])

Takes a filename and splits it into its components

_compute_lengths(→ numpy.ndarray)

Compute lengths from offsets

_is_dtype_valid(→ bool)

Verifies that filename extension is a valid datatype

_dichotomic_search(→ int)

Find where the data of a contiguous array actually ends

_create_memmap(→ numpy.ndarray)

Wrapper to support empty array as memmaps

load(→ Type[TrxFile])

Load a TrxFile (compressed or not)

load_from_zip(→ Type[TrxFile])

Load a TrxFile from a single zipfile. Note: does not work with compressed zipfiles

load_from_directory(→ Type[TrxFile])

Load a TrxFile from a folder containing memmaps

concatenate(→ TrxFile)

Concatenate multiple TrxFiles together; supports preallocation

save(→ None)

Save a TrxFile (compressed or not)

zip_from_folder(→ None)

Utility function to zip on-disk memmaps

Module Contents#

trx.trx_file_memmap.dipy_available = True[source]#
trx.trx_file_memmap._append_last_offsets(nib_offsets: numpy.ndarray, nb_vertices: int) numpy.ndarray[source]#

Appends the last element of offsets from header information

Keyword arguments:
nib_offsets – np.ndarray

Array of offsets with the last element being the start of the last streamline (nibabel convention)

nb_vertices – int

Total number of vertices in the streamlines

Returns:

Offsets – np.ndarray (VTK convention)
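
A minimal sketch of the two offset conventions this helper bridges (the array values are illustrative, not taken from a real file):

```python
import numpy as np

# nibabel convention: one offset per streamline, the last element being the
# start of the last streamline
nib_offsets = np.array([0, 10, 25], dtype=np.uint32)
nb_vertices = 40  # total vertex count, taken from the TRX header

# VTK convention: the total number of vertices is appended, so streamline
# lengths can be recovered by differencing consecutive offsets
vtk_offsets = np.append(nib_offsets, nb_vertices)
print(vtk_offsets)  # [ 0 10 25 40]
```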

trx.trx_file_memmap._generate_filename_from_data(arr: numpy.ndarray, filename: str) str[source]#

Determines the data type from array data and generates the appropriate filename

Keyword arguments:

arr – a NumPy array (1-2D, otherwise ValueError raised)

filename – the original filename

Returns:

An updated filename
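
A hedged usage sketch; the arrays and filenames are hypothetical, and the printed results are indicative of the TRX on-disk naming convention (basename, optional column count, dtype extension):

```python
import numpy as np
from trx.trx_file_memmap import _generate_filename_from_data

# 2D array: the number of columns is encoded in the filename along with the dtype
colors = np.zeros((100, 3), dtype=np.float32)
print(_generate_filename_from_data(colors, 'color_x.npy'))
# expected to resemble 'color_x.3.float32'

# 1D array: only the dtype extension is appended
ids = np.zeros(100, dtype=np.uint32)
print(_generate_filename_from_data(ids, 'ids.npy'))
# expected to resemble 'ids.uint32'
```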

trx.trx_file_memmap._split_ext_with_dimensionality(filename: str) Tuple[str, int, str][source]#

Takes a filename and splits it into its components

Keyword arguments:

filename – Input filename

Returns:

tuple of strings (basename, dimension, extension)
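
A brief sketch of the expected behaviour on TRX-style filenames (the outputs shown are indicative, not authoritative):

```python
from trx.trx_file_memmap import _split_ext_with_dimensionality

# Filename with an explicit dimensionality between the basename and the dtype
print(_split_ext_with_dimensionality('colors.3.float32'))
# roughly ('colors', 3, '.float32')

# Filename without a dimensionality defaults to 1
print(_split_ext_with_dimensionality('offsets.uint32'))
# roughly ('offsets', 1, '.uint32')
```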

trx.trx_file_memmap._compute_lengths(offsets: numpy.ndarray) numpy.ndarray[source]#

Compute lengths from offsets

Keyword arguments:

offsets – An np.ndarray of offsets

Returns:

lengths – An np.ndarray of lengths
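
Conceptually, with VTK-style offsets (sentinel appended by _append_last_offsets), the lengths are the differences between consecutive offsets; a minimal equivalent sketch:

```python
import numpy as np

offsets = np.array([0, 10, 25, 40], dtype=np.uint32)  # VTK convention
lengths = np.ediff1d(offsets)                          # per-streamline vertex counts
print(lengths)  # [10 15 15]
```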

trx.trx_file_memmap._is_dtype_valid(ext: str) bool[source]#

Verifies that filename extension is a valid datatype

Keyword arguments:

ext – filename extension

Returns:

a boolean indicating whether the provided datatype is valid

trx.trx_file_memmap._dichotomic_search(x: numpy.ndarray, l_bound: int | None = None, r_bound: int | None = None) int[source]#

Find where the data of a contiguous array actually ends

Keyword arguments:

x – np.ndarray of values

l_bound – lower bound index for the search

r_bound – upper bound index for the search

Returns:

index at which array value is 0 (if possible), otherwise returns -1
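
This helper is used on preallocated, zero-padded memmaps to locate where the real data ends. A naive, non-dichotomic sketch of the boundary described above, on illustrative data (the real helper performs a binary search):

```python
import numpy as np

# Only the first 6 entries hold real data; the rest is preallocation padding
x = np.array([3, 1, 4, 1, 5, 9, 0, 0, 0, 0], dtype=np.uint32)

# First index at which the values become 0 (i.e. where the real data ends),
# or -1 if no such boundary exists
zeros = np.flatnonzero(x == 0)
boundary = int(zeros[0]) if zeros.size else -1
print(boundary)  # 6
```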

trx.trx_file_memmap._create_memmap(filename: str, mode: str = 'r', shape: Tuple = (1,), dtype: numpy.dtype = np.float32, offset: int = 0, order: str = 'C') numpy.ndarray[source]#

Wrapper to support empty array as memmaps

Keyword arguments:

filename – filename where the empty memmap should be created

mode – file open mode (see np.memmap for options)

shape – shape of the memmapped np.ndarray

dtype – datatype of the memmapped np.ndarray

offset – offset of the data within the file

order – data representation on disk (C or Fortran)

Returns:

A memory-mapped np.ndarray, or a zero-filled NumPy array if the array has a shape of 0 in the first dimension
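
A hedged usage sketch; the filenames are hypothetical and 'w+' mode creates the backing file on disk:

```python
import numpy as np
from trx.trx_file_memmap import _create_memmap

# Create a writable memmap backed by a (hypothetical) on-disk file
pos = _create_memmap('positions.3.float32', mode='w+',
                     shape=(100, 3), dtype=np.float32)
pos[:] = 0.0  # behaves like a regular np.ndarray view over the file

# A zero-length first dimension returns a plain zero-filled array instead of
# failing inside np.memmap
empty = _create_memmap('dps_empty.float32', mode='w+',
                       shape=(0,), dtype=np.float32)
print(empty.shape)  # (0,)
```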

trx.trx_file_memmap.load(input_obj: str, check_dpg: bool = True) Type[TrxFile][source]#

Load a TrxFile (compressed or not)

Keyword arguments:

input_obj – A directory name or filepath to the trx data

check_dpg – Boolean denoting if group metadata should be checked

Returns:

TrxFile object representing the read data
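
A minimal, hedged usage sketch (the filename is hypothetical; the header keys shown follow the TRX specification):

```python
import trx.trx_file_memmap as tmm

# load() accepts either a .trx zip container or a directory of on-disk memmaps
trx = tmm.load('bundle.trx', check_dpg=True)
print(len(trx))                    # number of streamlines
print(trx.header['NB_VERTICES'])   # header keys follow the TRX specification
trx.close()                        # clean up the temporary on-disk memmaps
```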

trx.trx_file_memmap.load_from_zip(filename: str) Type[TrxFile][source]#

Load a TrxFile from a single zipfile. Note: does not work with compressed zipfiles

Keyword arguments:

filename – path of the zipped TrxFile

Returns:

TrxFile representing the read data

trx.trx_file_memmap.load_from_directory(directory: str) Type[TrxFile][source]#

Load a TrxFile from a folder containing memmaps

Keyword arguments:

directory – path of the folder containing the on-disk memmaps

Returns:

TrxFile representing the read data

trx.trx_file_memmap.concatenate(trx_list: List[TrxFile], delete_dpv: bool = False, delete_dps: bool = False, delete_groups: bool = False, check_space_attributes: bool = True, preallocation: bool = False) TrxFile[source]#

Concatenate multiple TrxFiles together; supports preallocation

Keyword arguments:

trx_list – A list containing the TrxFiles to concatenate

delete_dpv – Delete dpv keys that do not exist in all the provided TrxFiles

delete_dps – Delete dps keys that do not exist in all the provided TrxFiles

delete_groups – Delete all the groups that currently exist in the TrxFiles

check_space_attributes – Verify that dimensions and size of data are similar between all the TrxFiles

preallocation – A preallocated TrxFile has already been generated and is the first element in trx_list (Note: delete_groups must be set to True as well)

Returns:

TrxFile representing the concatenated data
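
A sketch of a simple concatenation (input files hypothetical); with the default preallocation=False the result is a newly allocated TrxFile:

```python
import trx.trx_file_memmap as tmm

trx_a = tmm.load('bundle_a.trx')  # hypothetical inputs
trx_b = tmm.load('bundle_b.trx')

# check_space_attributes verifies that both files share dimensions and affine
merged = tmm.concatenate([trx_a, trx_b], check_space_attributes=True)
print(len(merged))  # total number of streamlines across both inputs
```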

trx.trx_file_memmap.save(trx: TrxFile, filename: str, compression_standard: Any = zipfile.ZIP_STORED) None[source]#

Save a TrxFile (compressed or not)

Keyword arguments:

trx – The TrxFile to save

filename – The path to save the TrxFile to

compression_standard – The compression standard to use, as defined by the zipfile library
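
A hedged sketch with hypothetical paths; the default ZIP_STORED keeps the archive uncompressed, which is what load_from_zip expects:

```python
import trx.trx_file_memmap as tmm

trx = tmm.load('bundle.trx')      # hypothetical input
tmm.save(trx, 'bundle_copy.trx')  # default: zipfile.ZIP_STORED (uncompressed)
trx.close()
```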

trx.trx_file_memmap.zip_from_folder(directory: str, filename: str, compression_standard: Any = zipfile.ZIP_STORED) None[source]#

Utility function to zip on-disk memmaps

Keyword arguments:

directory – The path to the on-disk memmaps

filename – The path where the zip file should be created

compression_standard – The compression standard to use, as defined by the zipfile library
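
A hedged sketch; 'bundle_dir' is a hypothetical folder of on-disk memmaps (for example one produced by a preallocated TrxFile):

```python
import trx.trx_file_memmap as tmm

# Bundle the loose memmaps into a single .trx (stored, uncompressed) archive
tmm.zip_from_folder('bundle_dir', 'bundle.trx')
```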

class trx.trx_file_memmap.TrxFile(nb_vertices: int | None = None, nb_streamlines: int | None = None, init_as: Type[TrxFile] | None = None, reference: str | dict | Type[nibabel.nifti1.Nifti1Image] | Type[nibabel.streamlines.trk.TrkFile] | Type[nibabel.nifti1.Nifti1Header] | None = None)[source]#

Core class of the TrxFile

header: dict[source]#
streamlines: Type[nibabel.streamlines.array_sequence.ArraySequence][source]#
groups: dict[source]#
data_per_streamline: dict[source]#
data_per_vertex: dict[source]#
data_per_group: dict[source]#
__str__() str[source]#

Generate the string for printing

__len__() int[source]#

Define the length of the object

__getitem__(key) Any[source]#

Slice all data in a consistent way
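
A hedged indexing sketch (file hypothetical), assuming slice and index-array keys are routed through the same selection logic as select():

```python
import numpy as np
import trx.trx_file_memmap as tmm

trx = tmm.load('bundle.trx')          # hypothetical input
first_ten = trx[:10]                  # slice of streamlines
picked = trx[np.array([0, 5, 42])]    # index-array selection
```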

__deepcopy__() Type[TrxFile][source]#
deepcopy() Type[TrxFile][source]#

Create a deepcopy of the TrxFile

Returns:

A deepcopied TrxFile of the current TrxFile

_get_real_len() Tuple[int, int][source]#

Get the real size of data (ignoring zeros of preallocation)

Returns:

A tuple representing the index of the last streamline and the total length of all the streamlines

_copy_fixed_arrays_from(trx: Type[TrxFile], strs_start: int = 0, pts_start: int = 0, nb_strs_to_copy: int | None = None) Tuple[int, int][source]#

Fill a TrxFile using another and start indexes (preallocation)

Keyword arguments:

trx – TrxFile to copy data from

strs_start – The start index of the streamline

pts_start – The start index of the point

nb_strs_to_copy – The number of streamlines to copy. If not set, will copy all

Returns:

A tuple representing the end of the copied streamlines and the end of the copied points

static _initialize_empty_trx(nb_streamlines: int, nb_vertices: int, init_as: Type[TrxFile] | None = None) Type[TrxFile][source]#

Create on-disk memmaps of a certain size (preallocation)

Keyword arguments:

nb_streamlines – The number of streamlines that the empty TrxFile will be initialized with

nb_vertices – The number of vertices that the empty TrxFile will be initialized with

init_as – A TrxFile to initialize the empty TrxFile with

Returns:

An empty TrxFile preallocated with a certain size
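
Preallocation is typically reached through the constructor, which sizes the on-disk memmaps up front; a hedged sketch with hypothetical sizes:

```python
import trx.trx_file_memmap as tmm

# Reserve room for 1000 streamlines / 100000 vertices; the memmaps are created
# on disk and stay zero-filled until real data is copied in (e.g. via append())
empty = tmm.TrxFile(nb_streamlines=1000, nb_vertices=100000)
empty.close()
```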

_create_trx_from_pointer(dict_pointer_size: dict, root_zip: str | None = None, root: str | None = None) Type[TrxFile][source]#

After reading the structure of a zip/folder, create a TrxFile

Keyword arguments:
header – A TrxFile header dictionary which will be used for the new TrxFile

dict_pointer_size – A dictionary containing the filenames of all the files within the TrxFile disk file/folder

root_zip – The path of the ZipFile pointer

root – The dirname of the ZipFile pointer

Returns:

A TrxFile constructed from the provided pointer

resize(nb_streamlines: int | None = None, nb_vertices: int | None = None, delete_dpg: bool = False) None[source]#

Remove the unused portion of preallocated memmaps

Keyword arguments:

nb_streamlines – The number of streamlines to keep

nb_vertices – The number of vertices to keep

delete_dpg – Remove data_per_group when resizing
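
A hedged sketch of trimming a preallocated file after it has been partially filled (sizes hypothetical):

```python
import trx.trx_file_memmap as tmm

trx = tmm.TrxFile(nb_streamlines=1000, nb_vertices=100000)  # preallocated
# ... append data into the preallocated space ...
trx.resize()  # shrink the memmaps to the real amount of data
trx.close()
```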

get_dtype_dict()[source]#

Get the dtype dictionary for the TrxFile

Returns:

A dictionary containing the dtype for each data element

append(obj, extra_buffer: int = 0) None[source]#
_append_trx(trx: Type[TrxFile], extra_buffer: int = 0) None[source]#

Append a TrxFile to another (support buffer)

Keyword arguments:

trx – The TrxFile to append to the current TrxFile

extra_buffer – The additional buffer space required to append data
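
A hedged sketch (inputs hypothetical); extra_buffer is expressed in streamlines and avoids reallocating the memmaps on every append:

```python
import trx.trx_file_memmap as tmm

base = tmm.load('bundle_a.trx')   # hypothetical inputs
other = tmm.load('bundle_b.trx')

base.append(other, extra_buffer=10000)  # keep spare room for later appends
base.resize()                           # trim the unused buffer when done
```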

get_group(key: str, keep_group: bool = True, copy_safe: bool = False) Type[TrxFile][source]#

Get a particular group from the TrxFile

Keyword arguments:

key – The group name to select

keep_group – Make sure group exists in returned TrxFile

copy_safe – Perform a deepcopy

Returns:

A TrxFile exclusively containing data from said group
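
A hedged sketch; the filename and the group name are hypothetical and the group must exist in the loaded file:

```python
import trx.trx_file_memmap as tmm

trx = tmm.load('whole_brain.trx')              # hypothetical input
cst = trx.get_group('CST_L', keep_group=True)  # hypothetical group name
print(len(cst))  # streamlines belonging to that group only
```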

select(indices: numpy.ndarray, keep_group: bool = True, copy_safe: bool = False) Type[TrxFile][source]#

Get a subset of items, always pointing to the same memmaps

Keyword arguments:

indices – The list of indices of elements to return

keep_group – Ensure group is returned in output TrxFile

copy_safe – Perform a deep-copy

Returns:

A TrxFile containing data originating from the selected indices
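
A hedged sketch (input hypothetical). Without copy_safe the result still points to the same memmaps, so use copy_safe=True for an independent copy:

```python
import numpy as np
import trx.trx_file_memmap as tmm

trx = tmm.load('bundle.trx')                              # hypothetical input
subset = trx.select(np.arange(100))                       # first 100 streamlines, shared memmaps
safe = trx.select(np.array([3, 7, 11]), copy_safe=True)   # deep-copied subset
```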

static from_lazy_tractogram(obj: [nibabel.streamlines.tractogram.LazyTractogram], reference, extra_buffer: int = 0, chunk_size: int = 10000, dtype_dict: dict = {'positions': np.float32, 'offsets': np.uint32, 'dpv': {}, 'dps': {}}) Type[TrxFile][source]#

Generate a TrxFile from a nibabel LazyTractogram by converting it in chunks (supports buffering)

Keyword arguments:

obj – The LazyTractogram to convert

reference – The spatial reference used to build the header (e.g. a NIfTI filename, image or header)

extra_buffer – The buffer space between reallocations. This number should be a number of streamlines. Use 0 for no buffer.

chunk_size – The number of streamlines to save at a time.

dtype_dict – The desired datatypes for positions, offsets, dpv and dps data
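
A hedged conversion sketch with hypothetical filenames; nibabel's lazy loading keeps memory use flat while the TrxFile is written chunk by chunk, and the reference forms accepted are assumed to match those of the TrxFile constructor:

```python
import nibabel as nib
from trx.trx_file_memmap import TrxFile

lazy = nib.streamlines.load('large_bundle.trk', lazy_load=True)
trx = TrxFile.from_lazy_tractogram(lazy.tractogram, reference='t1.nii.gz',
                                    chunk_size=10000)
```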

static from_sft(sft, dtype_dict={})[source]#

Generate a valid TrxFile from a StatefulTractogram

static from_tractogram(tractogram, reference, dtype_dict={'positions': np.float32, 'offsets': np.uint32, 'dpv': {}, 'dps': {}})[source]#

Generate a valid TrxFile from a Nibabel Tractogram

to_tractogram(resize=False)[source]#

Convert a TrxFile to a nibabel Tractogram (in RAM)

to_memory(resize: bool = False) Type[TrxFile][source]#

Convert a TrxFile to a RAM representation

Keyword arguments:

resize – Resize TrxFile when converting to RAM representation

Returns:

A non-memory-mapped TrxFile
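
A hedged sketch (input hypothetical):

```python
import trx.trx_file_memmap as tmm

trx = tmm.load('bundle.trx')          # memory-mapped, backed by temporary files
trx_ram = trx.to_memory(resize=True)  # fully in RAM, no memmaps
trx.close()
```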

to_sft(resize=False)[source]#

Convert a TrxFile to a valid StatefulTractogram (in RAM)
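
A hedged round-trip sketch together with from_sft (documented above); it requires dipy (see the dipy_available flag) and uses hypothetical filenames:

```python
from dipy.io.streamline import load_tractogram
import trx.trx_file_memmap as tmm

sft = load_tractogram('bundle.trk', 'same')  # StatefulTractogram via dipy
trx = tmm.TrxFile.from_sft(sft)              # memmap-backed TrxFile
sft_back = trx.to_sft()                      # back to a StatefulTractogram in RAM
```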

close() None[source]#

Clean up the on-disk temporary folder and initialize an empty TrxFile