tUbeNet Foundation Model Weights


tUbeNet: a foundation model for 3D vessel segmentation

tUbeNet is a 3D CNN for semantic segmentation of vasculature from 3D grayscale medical images. tUbeNet was trained on varied data spanning different modalities, scales and pathologies to create a generalisable foundation model. The code and instructions for use can be found on GitHub: https://github.com/natalie11/tUbeNet

The model weights presented here may be used as a foundation upon which to fine-tune tUbeNet to new data. Details of the training parameters and data are given briefly below. For more information, please see our preprint: https://doi.org/10.1101/2023.07.24.550334

Training and Validation Data

A training library of paired images and manual labels was compiled from three-dimensional image data acquired both in-house and externally. Images from four modalities were chosen to represent a range of sources of contrast, imaging resolutions and tissue types:

1. Optical High Resolution Episcopic Microscopy (HREM) data from a murine liver, stained with eosin B (4080 x 3072 x 416 voxels)
2. X-ray microCT images of a microvascular cast, taken from a subcutaneous mouse model of colorectal cancer (1000 x 1000 x 682 voxels)
3. Raster-Scanning Optical Mesoscopy data from a subcutaneous tumour model (provided by Emma Brown, Bohndiek Group, University of Cambridge) (191 x 221 x 400 voxels)
4. Optical Coherence Tomography Angiography (OCT-A) of a human retina (provided by Dr Ranjan Rajendram, Moorfields Eye Hospital) (500 x 500 x 64 voxels)

The model was evaluated using subvolumes of datasets 1 and 2 that were held out of training, as well as an unseen dataset:

5. T1-weighted Balanced Turbo Field Echo Magnetic Resonance Imaging (MRI) data from a machine-perfused porcine liver (400 x 400 x 15 voxels, resliced to 400 x 400 x 90)

Training and test data will be made available subject to permissions from the data owners.

Hyperparameters used for training

Loss: sum of voxel-wise DICE score and binary cross-entropy
Optimizer: Adam
Learning rate: 0.001
Dropout: 30%
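For illustration, the loss and optimizer settings above can be reproduced in Keras roughly as sketched below. This is a minimal sketch under stated assumptions, not the repository's exact code: the combined loss is assumed to take the common (1 − DICE) + binary cross-entropy form, and build_tubenet and the weights filename are hypothetical placeholders for the model constructor and weights file obtained from the GitHub repository and figshare record.

```python
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    # Voxel-wise DICE score computed over the flattened volume
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def dice_bce_loss(y_true, y_pred):
    # Assumed form of the combined loss: (1 - DICE) plus binary cross-entropy
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return (1.0 - dice_coefficient(y_true, y_pred)) + bce

# Hypothetical fine-tuning setup: load the published foundation weights into
# the tUbeNet architecture (built from the GitHub repository) and recompile
# with the hyperparameters listed above.
model = build_tubenet()                        # placeholder for the repo's model builder
model.load_weights("tubenet_weights.h5")       # assumed filename for the downloaded weights
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=dice_bce_loss,
    metrics=[dice_coefficient],
)
```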

Identifier
DOI https://doi.org/10.5522/04/25498603.v1
Related Identifier https://ndownloader.figshare.com/files/45353281
Metadata Access https://api.figshare.com/v2/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:figshare.com:article/25498603
Provenance
Creator Holroyd, Natalie ORCID logo; Li, Zhongwang; Brown, Emmeline; Walsh, Claire; Shipley, Rebecca; Walker-Samuel, Simon
Publisher University College London (UCL)
Contributor Figshare
Publication Year 2024
Rights https://creativecommons.org/licenses/by-nc/4.0/
OpenAccess true
Contact researchdatarepository(at)ucl.ac.uk
Representation
Language English
Resource Type Model; Other
Discipline Other