Microtomographic investigation of a large corpus of cichlids

This manuscript (permalink) was automatically generated from habi/EAWAG-manuscript@4ca5a5d on March 24, 2023.

Authors

Abstract

A large collection of cichlids from Lake Victoria in Africa spanning a length range of 6 to 18 cm was nondestructively imaged using micro-computed tomography. We describe our method to efficiently obtain three-dimensional tomographic datasets of the oral and pharyngeal jaws and the whole skull of these fishes for accurately describing their morphology. The tomographic data we acquired (9.5 TB of projection images) yielded 1.4 TB of three-dimensional image stacks used for extracting the relevant features of interest. Herein we present our method and an outlook on analyzing the acquired data: a morphological description of the oral and pharyngeal jaws of the fishes, a three-dimensional geometric morphometrics analysis of landmark features on the fish skulls, and a robust method to automatically extract the otoliths of the fishes from the tomographic data.

Introduction

History

Cichlid fish in African lakes are powerful model systems for research on speciation and adaptive evolutionary radiation [1,2]. The functional decoupling of their oral and pharyngeal jaws is hypothesized to be a factor making cichlids unusually versatile in their feeding and allowing them to adapt to a wide range of environmental factors [3]. The hypothesis is that the fusion of the lower pharyngeal jaws makes that jaw system a powerful food processing tool, in turn releasing the oral jaws from functional constraint. The oral jaws no longer need to process prey and can therefore specialize on prey capture.

Even though the evolutionary diversification of cichlid fish radiations in Lake Victoria in Africa is well researched, it remains a complex system in need of further study [2,4]. We aim to better understand the functional anatomy of the skulls and jaws of these fish in order to test the functional decoupling hypothesis and other hypotheses about what may facilitate exceptionally high rates of morphological evolution.

The collection of cichlid fish available to us is extremely valuable, hence a nondestructive imaging method is paramount for studying these samples. Since micro-computed tomography can be regarded as a nondestructive method for biological samples, it is the ideal imaging method for investigating the oral and pharyngeal jaws as well as the skull features of the fish presented in this study [5].

Micro-computed tomography

X-ray microtomography is a valuable tool for gaining insight into the inner structure of highly diverse samples, particularly specimens studied in the biomedical sciences. Microtomographic imaging has been employed as a method of choice to nondestructively assess the morphology of different fish species, large and small [5]. For a brief overview of the analyses that are possible with X-ray microtomographic imaging in relation to fish biology and morphology, see prior work by the authors of this manuscript [5,6,7] as well as other studies [8].

Depending on the structures of interest, biomedical samples are often tomographically scanned only after being stained with a contrast agent, most often one containing heavy metals. Since the structures of interest for the two studies we touch upon in this manuscript (cichlid teeth and skulls) display sufficient contrast to the surrounding tissue, we did not stain our samples prior to the tomographic imaging presented here.

Materials, Methods and Results

Sample procurement and preparation

The fish were kept in 75% ethanol for long-term storage at the Swiss Federal Institute of Aquatic Science and Technology (Eawag). They were delivered to the Institute of Anatomy for micro-CT imaging sorted into batches of approximately equal length.

Micro-computed tomographic imaging

All samples were scanned on two of the three available high-resolution micro-CT machines of the Institute of Anatomy of the University of Bern in Switzerland, a SkyScan 1272 and a SkyScan 2214 (both Bruker microCT, Kontich, Belgium).

The fish were sorted into ‘bins’ based on their physical size. We used a custom-made sample holder to scan each of the fish in our machines. It was 3D-printed on a Form 2 desktop stereolithography printer (Formlabs, Somerville, Massachusetts, USA); the file for printing the holder is available online [9] as part of a library of sample holders for tomographic scanning of biomedical samples [10]. The original OpenSCAD [11] file [12] is parametrized to effortlessly generate files for 3D-printing sample holders accommodating the varying width, height and length classes of the fish.

In total, we acquired 362 tomographic scans of 129 different specimens. All the scanning parameters are collected in a table in the Supplementary Materials; a generalized rundown is given below.

Since the fish varied greatly in length (between 6 and 18 cm), the voxel sizes of the acquired datasets also vary greatly: we acquired datasets with (isotropic) voxel sizes ranging from 3.5 to 50 μm.

Depending on the size of the specimen, we set the X-ray source voltage to 50–80 kV and, depending on the voltage, the source current to between 107 and 200 μA. The X-ray spectrum was either filtered by an aluminum filter of varying thickness (0.25, 0.5 or 1 mm for increasing specimen size) before digitization to projection images or recorded unfiltered (for the smaller specimens). In total we recorded 9.5 TB of projection images (TIFF and *.iif files, where the *.iif files belong to the so-called alignment scans).

All the recorded projection images were subsequently reconstructed into virtual 3D stacks of axial PNG images spanning the regions of interest of each fish. All the specimens were scanned with their mouths facing downward in the sample holder, rotating along their long axis. We thus manually aligned each of the reconstructed datasets so that the lateral axis of the fish was horizontal in relation to the x and y directions of the reconstructed slices. We reconstructed the projection images with NRecon (Version 1.7.4.6, Bruker microCT, Kontich, Belgium) with ring artifact and beam hardening correction values varying per fish (again, all relevant values are listed in the Supplementary Materials). In total, this resulted in 1.4 TB of reconstruction images (nearly one million *rec*.png files); on average, each of the 362 scans comprises about 2700 reconstruction images.

While the work was ongoing, a subset of the data was always kept on the production system for processing (see Preparation for analysis below). A small bash script [13] was used to generate redundant (archival) copies of the raw projection images and to copy all the files to a shared network drive on the ‘Research Storage’ infrastructure of the University of Bern, enabling all authorized persons to collaborate on the data at the same time.
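The referenced script [13] is a small bash wrapper around rsync; purely as an illustration of the idea, a minimal Python sketch (all paths are placeholders, and the system rsync is simply called via subprocess) could look like this:

```python
import subprocess

# Mirror the raw projection images to the archival network share.
# Both paths are placeholders; the real archival script [13] is a bash script.
SOURCE = "/data/production/EAWAG/"
ARCHIVE = "/mnt/research-storage/EAWAG/"

subprocess.run(
    ["rsync", "--archive", "--verbose", "--human-readable", SOURCE, ARCHIVE],
    check=True,  # raise if rsync reports an error, so failed copies do not go unnoticed
)
```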

Data analysis

We wrote a set of Jupyter [14] notebooks with Python code to work with the images and wrangle the acquired data. The notebooks were written at the start of the project so that new scans could be processed as soon as they were reconstructed. Re-running the notebooks added newly scanned and reconstructed data to the analysis, facilitating efficient quality checks of the scans and batched processing of the data.

All analysis notebooks for this work are available online [15].

Preparation for analysis

The main Jupyter notebook for this manuscript dealt with reading all log and image files and preparing images for quality checking and further analysis.

First, all log files of all the data present in the processed folder were read into a pandas [16,17] dataframe. This already enabled us to extract the specimen name and scan type, since we performed multiple scans per specimen, i.e. a low-resolution scan with a large field of view for the whole head and one or two high-resolution scans focusing on the region of the oral and pharyngeal jaws. From the log files we extracted the relevant values for double-checking the necessary parameters of each scan. All relevant values for each scan were also stored in the dataframe and written out to the aforementioned table in the Supplementary Materials at the end of each run of the notebook.
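A minimal sketch of this log harvesting step, assuming Bruker-style key=value log files below a processed folder (the folder layout and the derived Specimen/Scan columns are illustrative, not the exact code of the notebook [15]):

```python
import pathlib
import pandas as pd

def parse_log(logfile):
    """Read one scanner .log file into a flat dict of key/value pairs.
    The files are INI-like text; section headers are skipped and everything
    after the first '=' is kept as the value."""
    values = {"LogFile": str(logfile)}
    for line in pathlib.Path(logfile).read_text(errors="ignore").splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Collect every log file below the 'processed' folder into one dataframe.
logs = sorted(pathlib.Path("processed").rglob("*.log"))
df = pd.DataFrame(parse_log(f) for f in logs)
df["Specimen"] = [f.parents[1].name for f in logs]   # assumes <specimen>/<scan>/<file>.log
df["Scan"] = [f.parent.name for f in logs]
df.to_csv("ScanningDetails.csv", index=False)
```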

After performing ‘sanity checks’ of the data and reconstruction parameters, we used Dask [18] to efficiently access the tomographic data on disk (in the end amounting to a total of nearly a million single images). On average, each of the tomographic datasets contains around 2700 slices, so the total size of the acquired data exceeds the RAM size available on an average high-end workstation.
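As an illustration of such lazy, out-of-core access (using the dask-image package as one possible reader on top of Dask; the actual notebooks may load the slices differently, and the file pattern is a placeholder):

```python
from dask_image.imread import imread   # lazy, chunked image reader (one possible choice)

# Map the reconstructed PNG slices of one scan into a Dask array without
# reading them all into memory at once.
reconstructions = imread("processed/104016/head/*_rec*.png")
print(reconstructions.shape, reconstructions.dtype)   # e.g. (2700, 2024, 2024) uint8

# Data are only read from disk when a result is actually requested, so stacks
# far larger than the available RAM can be processed chunk by chunk.
central_slice = reconstructions[reconstructions.shape[0] // 2].compute()
```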

First, we extracted the central view along each of the three anatomical directions of the datasets (i.e. the ‘anteroposterior’, ‘lateral’ and ‘dorsoventral’ views) and either saved those to disk or loaded them from disk if they had already been generated in prior runs of the notebook. The notebook then also generated the maximum intensity projection (MIP) for each of the anatomical planes and either saved them to disk or loaded them from prior runs.
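A sketch of how such previews can be generated from the lazily loaded stack (the mapping of array axes to anatomical directions and the file names are assumptions of this sketch):

```python
import pathlib
import imageio.v3 as iio
from dask_image.imread import imread

# Central slices and maximum intensity projections (MIPs) along the three
# anatomical directions of one reconstructed dataset.
rec = imread("processed/104016/head/*_rec*.png")          # shape (z, y, x)
outdir = pathlib.Path("processed/104016/head/previews")
outdir.mkdir(parents=True, exist_ok=True)

for axis, direction in enumerate(("dorsoventral", "anteroposterior", "lateral")):
    slicer = [slice(None)] * 3
    slicer[axis] = rec.shape[axis] // 2
    central = rec[tuple(slicer)].compute()                 # middle slice of this direction
    mip = rec.max(axis=axis).compute()                     # brightest voxel along this axis
    iio.imwrite(outdir / f"Central_{direction}.png", central)
    iio.imwrite(outdir / f"MIP_{direction}.png", mip)
```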

At the end of the notebook we performed a final sanity check on the MIP images. In this check we examined the mapping of the gray values of the raw projection images to gray values in the reconstructions, i.e. we checked that no overexposed pixels were present in the MIP images. This is an efficient way to double-check the gray value mapping, since the MIP images have already been generated in prior steps of the notebook and contain the highest gray values present in all the reconstructed images of each scan.
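A minimal version of such a check on one of the previously generated MIP images (the file name is assumed):

```python
import numpy as np
import imageio.v3 as iio

# The reconstructions are 8-bit PNGs, so a MIP pixel at the top of the gray
# value range means at least one reconstructed voxel of that scan was mapped
# to the maximum and is potentially overexposed.
mip = iio.imread("processed/104016/head/previews/MIP_lateral.png")
top = np.iinfo(mip.dtype).max
overexposed = np.count_nonzero(mip == top)
if overexposed:
    print(f"{overexposed} MIP pixels ({overexposed / mip.size:.2%}) reach the gray value maximum")
```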

Image processing

Extraction of oral and pharyngeal jaws, visualization of tomographic data

To extract the oral jaw (OJ) and pharyngeal jaw (PJ) of the fish, we used 3D Slicer (Version 4.11.20210226) [19] extended with the SlicerMorph tools [20], which aim to help biologists work with 3D specimen data. The reconstructed image stacks were loaded via ImageStacks; depending on their size, we reduced the image resolution (i.e. downscaled the images) for this first step. The three-dimensional volume was rendered via VTK GPU Ray Casting. A custom-made volume property was used as an input to view the scans. Using toggles in the volume rendering, we defined regions of interest (ROIs) for both the OJs and PJs in each specimen. These ROIs were then extracted in their native resolution from the original dataset for further processing. Using the gray value thresholding function in 3D Slicer’s Segment Editor, the teeth in both the oral and pharyngeal jaws were extracted. We used the Scissor and Island tools of the Segment Editor to isolate single regions.

Processed regions of interest were exported as Nrrd [21] files. The three-dimensional visualizations of all regions of interest for each specimen were compiled into overview images (see Figure 1 for an example from the compilation document). In total we compiled overviews of 125 specimens with full head morphology, oral jaw and lower pharyngeal jaw profiles.

Figure 1: Overview of data from sample 104016, Enterochromis I cinctus (St. E). Panel A: Three-dimensional visualization of the head scan. Panel B: Three-dimensional visualization of the oral jaw scan. Panel C: Photograph of the specimen. Panels D and E: Three-dimensional visualization of the pharyngeal jaw, dorsoventral and lateral view, respectively.

Principal components analysis of skull landmarks

Ongoing studies use 3D geometric morphometrics to statistically compare the morphological shape of these scanned cichlids. We used a homologous landmark scheme across one half of the skull for a higher density of shape information [7,22]; landmarks were placed on each specimen using 3D Slicer. To examine differences in shape across the species sampled, we performed a Generalized Procrustes Superimposition on the landmark data to remove the effects of location, size, and rotation from the analysis, using the geomorph package in R (Version 4.2.1) with RStudio (Version 2022.07.2+576) [23,24,25,26,27]. This process brings all specimens to a common origin, scales the landmarks to a unit centroid size, and rotates specimens to reduce distances between landmarks. A principal components analysis was then performed in geomorph on the superimposed landmark data to visualize the major axes of shape change across sampled species. We then used phylogenetic information to identify instances of repeated evolution of trophic adaptations in these cichlids.
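The superimposition and ordination themselves were done with geomorph in R; purely as an illustration of the two steps (Procrustes superimposition, then PCA), a bare-bones NumPy sketch on placeholder landmark data could look like this (it is a simplified sketch, not the authors' pipeline):

```python
import numpy as np

def generalized_procrustes(shapes, iterations=10):
    """Simplified generalized Procrustes superimposition for an array of
    landmark configurations with shape (specimens, landmarks, 3):
    remove location, scale to unit centroid size, then iteratively rotate
    each configuration onto the current mean shape."""
    aligned = shapes - shapes.mean(axis=1, keepdims=True)            # remove location
    aligned /= np.linalg.norm(aligned, axis=(1, 2), keepdims=True)   # unit centroid size
    mean = aligned[0]
    for _ in range(iterations):
        for i, configuration in enumerate(aligned):
            u, _, vt = np.linalg.svd(configuration.T @ mean)         # optimal rotation
            aligned[i] = configuration @ u @ vt
        mean = aligned.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return aligned

# Placeholder data: 25 specimens with 30 three-dimensional landmarks each.
landmarks = np.random.default_rng(0).normal(size=(25, 30, 3))
flat = generalized_procrustes(landmarks).reshape(len(landmarks), -1)
flat -= flat.mean(axis=0)                                            # center for the PCA

# Principal components of shape variation via singular value decomposition.
_, singular_values, components = np.linalg.svd(flat, full_matrices=False)
scores = flat @ components.T                                         # PC scores per specimen
explained = singular_values**2 / np.sum(singular_values**2)          # variance ratios
```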

Automatic extraction of otoliths

Otoliths are structures composed mostly of calcium carbonate, located in the heads of fishes. Due to their composition, they are easily distinguished in the X-ray images we acquired. We devised an image processing method to automatically and robustly detect the location of the otoliths in the heads of the cichlids we scanned and to extract them from the original data. The whole method is implemented in its own Jupyter notebook (part of the aforementioned analysis repository [15]).

Since we took great care to scan the fish parallel to their anteroposterior direction and reconstructed the tomographic datasets parallel to the lateral and dorsoventral directions of the fish, we could use this ‘preparation’ for automatically extracting the otoliths from the tomographic datasets of the whole fish heads. By extracting both the peaks and the peak widths of the gray values along both the horizontal and vertical directions of the MIPs (generated above), we robustly detected the position of the otoliths in the datasets. The robust detection is supported by suppressing a small, configurable part of each region, i.e. the front and back, top and bottom, or the flanks.
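A sketch of this peak-based localization along one axis of one MIP (the margins, peak parameters and file name are illustrative; the notebook [15] tunes them per anatomical direction):

```python
import numpy as np
import imageio.v3 as iio
from scipy.signal import find_peaks

# Sum the MIP gray values along one image axis to get a brightness profile,
# suppress the configurable margins, and keep the most prominent, widest peak.
mip = iio.imread("processed/104016/head/previews/MIP_dorsoventral.png").astype(float)
profile = mip.sum(axis=0)                       # gray value profile along the image width
margin = len(profile) // 10                     # suppress the outermost tenth on each side
profile[:margin] = 0
profile[-margin:] = 0

peaks, properties = find_peaks(profile, prominence=profile.max() / 10, width=5)
otolith_column = peaks[np.argmax(properties["prominences"] * properties["widths"])]
print(f"Otolith centered around column {otolith_column} of this MIP")
```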

Figure 2: Visualization of automatic otolith extraction. The top row shows the detected location of the otolith center in each of the three anatomical directions. The bottom row shows a MIP of the extracted otolith region in each of the three anatomical directions.

Figure 2 visualizes the process. The colored horizontal and vertical bars in each of the directional MIPs denote the detected peak locations along the two respective axes. The white bars show the mean of the two detected positions, which was used for extracting the otoliths from the original datasets. Making use of the Dask library facilitated efficient access to all the data on disk and writing out small, cropped copies of the datasets around the otolith positions.
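A sketch of writing out such a cropped copy around a detected center (the coordinates, crop size and paths are placeholders; in the notebook they come from the detection step above):

```python
import pathlib
import imageio.v3 as iio
from dask_image.imread import imread

# Crop a fixed-size block around the detected otolith center and save it
# slice by slice; only the requested slices are ever read from disk.
rec = imread("processed/104016/head/*_rec*.png")
center, half = (1300, 980, 1050), 200           # (z, y, x) voxel coordinates, placeholder
crop = rec[center[0] - half:center[0] + half,
           center[1] - half:center[1] + half,
           center[2] - half:center[2] + half]

outdir = pathlib.Path("processed/104016/head/otolith_crop")
outdir.mkdir(parents=True, exist_ok=True)
for i in range(crop.shape[0]):
    iio.imwrite(outdir / f"otolith_{i:04d}.png", crop[i].compute())
```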

By detecting the largest components in the cropped copies of the datasets, we can easily extract and visualize the otoliths in 3D, as shown in Figure 3. The extracted otoliths are thus prepared for further analysis and visualization. The simple three-dimensional visualization is embedded in the aforementioned Jupyter notebook through a visualization library [28] and is also shown in the Supplementary Materials.
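One way to isolate the otoliths inside such a cropped block (the threshold choice and file pattern are assumptions of this sketch; the comment at the end only points to the library used for display [28]):

```python
import numpy as np
from skimage import filters, measure
from dask_image.imread import imread

# Load the cropped block, threshold it and keep the two largest bright
# connected components, i.e. the otolith pair.
crop = imread("processed/104016/head/otolith_crop/*.png").compute()
binary = crop > filters.threshold_otsu(crop)

labels = measure.label(binary)
sizes = np.bincount(labels.ravel())
sizes[0] = 0                                     # ignore the background label
otoliths = np.isin(labels, np.argsort(sizes)[-2:])

# The resulting boolean volume can then be rendered in the notebook, e.g. with
# k3d.voxels(otoliths.astype(np.uint8)) from the K3D-jupyter library [28].
```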

Figure 3: Static three-dimensional view of extracted otoliths of specimen 104016. The specimen was scanned with an isotropic voxel size of 13 μm. The extracted otolith has a size of approximately 250 x 350 x 150 pixels. The axes are labeled in mm steps. A dynamic view of the visualization is available in the Supplementary Materials.

The notebook for extracting the otoliths can be run in the browser without installing any additional software (via Binder [29]). To do this, the reader starts the notebook by clicking a single button in the README file of the project repository [15]. This starts a computing environment in the cloud, downloads the tomographic data we acquired of one specimen, and performs both the otolith extraction and the visualization in the browser.

Discussion

We acquired high-resolution tomographic datasets of a large collection of cichlids. The datasets were imaged over a wide range of voxel sizes (3.5–50 μm), permitting both the analysis of the finest details we wanted to resolve (i.e. the structure of the teeth) and the coverage of the whole region of interest of the fish (i.e. the whole head for principal components analysis).

Imaging and preparation for analysis

The whole study presented here spanned a long time frame. It was thus paramount to run the imaging process and the preparation of the tomographic datasets in a batched mode. The Jupyter notebooks written to prepare the datasets for quality control and analysis facilitated a short turnaround time for feedback on single scans as well as such batched analysis.

Otolith extraction

The method to extract the otoliths from the tomographic datasets works robustly for all of the different fish sizes and shapes. The extraction is robust because it is based on a combination of distinct features of the gray value curves along the different anatomical directions. The parameters of the otolith extraction method have been extensively tuned, and the method runs in a fully automatic way. This allows a highly reproducible and unbiased extraction of the otoliths from the tomographic datasets. This was even the case for one fish which was scanned with a hook still in its mouth; its otoliths were nonetheless extracted automatically.

Data on such automatically extracted otoliths, such as volume and geometric properties like eccentricity and moments of inertia, are biologically interesting, as the otoliths grow with the age of the fish. Such measurements could help estimate the age of wild fishes via a calibration based on the otolith measurements of fish of known age. It is worth noting that age estimation for tropical fishes is not as simple as for fishes from temperate regions, where one can distinguish summer and winter layers within the otolith.
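As a sketch of how such measurements could be derived from the extracted binary otolith volumes (the voxel size follows the example of sample 104016; the file name and the elongation measure are illustrative assumptions):

```python
import numpy as np
from skimage import measure

# Basic measurements of the extracted otoliths from their binary voxel mask.
voxelsize = 0.013                                   # mm per voxel (sample 104016)
otoliths = np.load("104016_otoliths.npy")           # placeholder file with the binary volume

for region in measure.regionprops(measure.label(otoliths)):
    volume = region.area * voxelsize**3             # 'area' is the voxel count in 3D
    eigenvalues = np.asarray(region.inertia_tensor_eigvals)
    elongation = np.sqrt(eigenvalues.max() / eigenvalues.min())  # simple eccentricity-like ratio
    print(f"Volume {volume:.3f} mm^3, centroid (voxels) {region.centroid}, "
          f"elongation {elongation:.2f}")
```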

Outlook

The acquired tomographic datasets are the basis for multiple additional analyses of fish morphology.

The presented method offers insights and algorithms for performing tomographic scans and for previewing and analyzing micro-computed tomographic datasets of a large collection of fish. The workflow relies only on free and open-source software and can thus be used and verified independently by any interested reader. All the Jupyter notebooks described herein are also freely available online [15].

Author Contributions

Contributor Roles Taxonomy, as defined by the National Information Standards Organization.

Author Contributions
David Haberthür Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing – original draft, Writing – review & editing
Mikki Law Data curation, Investigation, Methodology, Project administration, Visualization, Writing – review & editing
Kassandra Ford Investigation, Methodology, Visualization, Writing – review & editing
Marcel Häsler Investigation, Project administration, Resources, Writing – review & editing
Ole Seehausen Conceptualization, Investigation, Resources, Supervision, Writing – review & editing
Ruslan Hlushchuk Conceptualization, Resources, Supervision, Writing – review & editing

Acknowledgments

We thank Salome Mwaiko for taking care of the fish collection at Eawag and Mark Charran for helping to find suitable specimens to represent each species. We are grateful to the Microscopy Imaging Center of the University of Bern for the infrastructural support. We thank the manubot project [30] for helping us write this manuscript collaboratively.

Supplementary Materials

Parameters of tomographic scans of all the fishes

The CSV file ScanningDetails.csv gives a tabular overview of all the (relevant) parameters of all the scans we performed. This file was generated with the data processing notebook and contains the data which is read from all the log files of all the scans we performed. A copy of each log file is available in a folder in the data processing repository.

Three-dimensional view of one of the extracted otoliths

The three-dimensional view of sample 104016 was generated in the otolith extraction notebook and saved as a self-contained HTML file with K3D-jupyter. A copy of this HTML file can be viewed and interacted with through the GitHub HTML preview.

References

1.
Adaptive evolution and explosive speciation: the cichlid fish model
Thomas D Kocher
Nature Reviews Genetics (2004-04-01) https://doi.org/dkgf93
2.
African cichlid fish: a model system in adaptive radiation research
Ole Seehausen
Proceedings of the Royal Society B: Biological Sciences (2006-05-09) https://doi.org/frrhbq
3.
Evolutionary Strategies and Morphological Innovations: Cichlid Pharyngeal Jaws
Karel F Liem
Systematic Zoology (1973-12) https://doi.org/b969fp
4.
Process and pattern in cichlid radiations – inferences for understanding unusually high rates of evolutionary diversification
Ole Seehausen
New Phytologist (2015-05-13) https://doi.org/f7gczg
5.
A New Era of Morphological Investigations: Reviewing methods for comparative anatomical studies
KL Ford, JS Albert, AP Summers, BP Hedrick, ER Schachner, AS Jones, K Evans, P Chakrabarty
Integrative Organismal Biology (2023-03-17) https://doi.org/grx5rb
6.
Adaptation mechanism of the adult zebrafish respiratory organ to endurance training
Matthias Messerli, Dea Aaldijk, David Haberthür, Helena Röss, Carolina García-Poyatos, Marcos Sande-Melón, Oleksiy-Zakhar Khoma, Fluri AM Wieland, Sarya Fark, Valentin Djonov
PLOS ONE (2020-02-05) https://doi.org/grf6fj
7.
Convergence is Only Skin Deep: Craniofacial Evolution in Electric Fishes from South America and Africa (Apteronotidae and Mormyridae)
Kassandra L Ford, Rose Peterson, Maxwell Bernt, James S Albert
Integrative Organismal Biology (2022) https://doi.org/gq3gtc
DOI: 10.1093/iob/obac022 · PMID: 35976714 · PMCID: PMC9375771
8.
9.
10.
TomoGraphics/Hol3Drs: A release
David Haberthür
Zenodo (2019-03-08) https://doi.org/gg9fxh
11.
12.
13.
Jupyter notebooks for image processing and data analysis of EAWAG Cichlids project
David Haberthür
(2023-03-03) https://github.com/habi/EAWAG/blob/e5d6be6b416b7c66eaf72a563b0ce5648569f54d/rsync-fishes.sh
14.
Jupyter Notebooks – a publishing format for reproducible computational workflows
Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, … Jupyter development team
IOS Press (2016) https://eprints.soton.ac.uk/403913/
15.
habi/EAWAG: Jupyter notebooks for preparing tomographic scans of a collection of cichlids for analysis
David Haberthür
Zenodo (2022-07-05) https://doi.org/gqgdtp
16.
pandas-dev/pandas: Pandas
The Pandas Development Team
Zenodo (2022-09-19) https://doi.org/gqw4db
17.
Data Structures for Statistical Computing in Python
Wes McKinney
Proceedings of the Python in Science Conference (2010) https://doi.org/ggr6q3
18.
Dask: Library for dynamic task scheduling
Dask Development Team
(2016) https://dask.org
19.
3D Slicer as an image computing platform for the Quantitative Imaging Network
Andriy Fedorov, Reinhard Beichel, Jayashree Kalpathy-Cramer, Julien Finet, Jean-Christophe Fillion-Robin, Sonia Pujol, Christian Bauer, Dominique Jennings, Fiona Fennessy, Milan Sonka, … Ron Kikinis
Magnetic Resonance Imaging (2012-11) https://doi.org/gfghgd
20.
SlicerMorph: An open and extensible platform to retrieve, visualize and analyse 3D morphology
Sara Rolfe, Steve Pieper, Arthur Porto, Kelly Diamond, Julie Winchester, Shan Shan, Henry Kirveslahti, Doug Boyer, Adam Summers, AMurat Maga
Methods in Ecology and Evolution (2021-07-22) https://doi.org/gqtgv8
21.
22.
Mosaic Evolution of Craniofacial Morphologies in Ghost Electric Fishes (Gymnotiformes: Apteronotidae)
Kassandra L Ford, Maxwell J Bernt, Adam P Summers, James S Albert
Ichthyology & Herpetology (2022-05-27) https://doi.org/gq3gtd
23.
RRPP: An R package for fitting linear models to high-dimensional data using residual randomization
Michael L Collyer, Dean C Adams
Methods in Ecology and Evolution (2018-05-29) https://doi.org/gdwp9q
24.
Geomorph: Software for geometric morphometric analyses. R package version 4.0.4
DC Adams, ML Collyer, A Kaliontzopoulou, EK Baken
(2022) https://cran.r-project.org/package=geomorph
25.
RRPP: Linear model evaluation with randomized residuals in a permutation procedure, r package version 0.6.2.
ML Collyer, DC Adams
(2021) https://cran.r-project.org/package=RRPP
26.
R: A language and environment for statistical computing
R Core Team
R Foundation for Statistical Computing (2018) https://www.R-project.org/
27.
RStudio: Integrated development environment for r
RStudio Team
RStudio, PBC. (2022) https://www.rstudio.com/
28.
K3D Jupyter
K3D-tools
(2023-03-16) https://github.com/K3D-tools/K3D-jupyter
29.
Binder 2.0 - Reproducible, interactive, sharable environments for science at scale
Project Jupyter, Matthias Bussonnier, Jessica Forde, Jeremy Freeman, Brian Granger, Tim Head, Chris Holdgraf, Kyle Kelley, Gladys Nalvarte, Andrew Osheroff, … Carol Willing
Proceedings of the Python in Science Conference (2018) https://doi.org/gfwcm6
30.
Open collaborative writing with Manubot
Daniel S Himmelstein, Vincent Rubinetti, David R Slochower, Dongbo Hu, Venkat S Malladi, Casey S Greene, Anthony Gitter
PLOS Computational Biology (2019-06-24) https://doi.org/c7np