Browsing by Subject "Segmentation"
Item Open Access Adaptive Filtering for Breast Computed Tomography: An Improvement on Current Segmentation Methods for Creating Virtual Breast Phantoms (2015) Erickson, David. Computerized breast phantoms have become popular low-cost alternatives to collecting clinical data when combined with highly realistic simulation tools. Image segmentation of three-dimensional breast computed tomography (bCT) data is one method for creating such phantoms, but it requires multiple image processing steps to accurately classify the tissues within the breast. One key step in our segmentation routine is the use of a bilateral filter to smooth homogeneous regions, preserve edges and thin structures, and reduce the sensitivity of the voxel classification to noise corruption. In previous work, the well-known bilateral filtering process was applied to the entire bCT volume with the primary goal of reducing noise throughout the volume. To improve on this method, knowledge of the varying bCT noise in each slice was used to adaptively increase or decrease the filtering effect as a function of distance to the chest wall. Not only does this adaptive bilateral filter yield thinner structures in the segmentation result, it also adapts on a case-by-case basis, allowing for easy implementation with future virtual phantom generations.
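As a rough illustration of the distance-dependent filtering idea, the sketch below applies a bilateral filter slice by slice and scales the range sigma with each slice's estimated noise; the function name, the noise estimate, and the specific scaling rule are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

def adaptive_bilateral_filter(bct_volume, sigma_color_base=0.05, sigma_spatial=2.0):
    """Filter a bCT volume slice by slice, scaling the range (color) sigma
    with each slice's estimated noise so that noisier slices are smoothed
    more aggressively. Hypothetical helper for illustration only.

    bct_volume : 3-D array ordered (slice, row, col), intensities in [0, 1].
    """
    filtered = np.empty_like(bct_volume)
    for k, slc in enumerate(bct_volume):
        # crude per-slice noise estimate (robust MAD-based standard deviation)
        noise = 1.4826 * np.median(np.abs(slc - np.median(slc)))
        sigma_color = sigma_color_base * (1.0 + noise / (bct_volume.std() + 1e-8))
        filtered[k] = denoise_bilateral(slc, sigma_color=sigma_color,
                                        sigma_spatial=sigma_spatial)
    return filtered
```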
Item Open Access Automatic Volumetric Analysis of the Left Ventricle in 3D Apical Echocardiographs (2015) Wald, Andrew James. Apically-acquired 3D echocardiographs (echoes) are becoming a standard data component in the clinical evaluation of left ventricular (LV) function. Ejection fraction (EF) is one of the key quantitative biomarkers derived from echoes and used by echocardiographers to study a patient's heart function. In present clinical practice, EF is either grossly estimated by experienced observers, approximated using orthogonal 2D slices and Simpson's method, determined by manual segmentation of the LV lumen, or measured using semi-automatic proprietary software such as Philips QLab-3DQ. Each of these methods requires particular skill by the operator, and may be time-intensive, subject to variability, or both.
To address this, I have developed a novel, fully automatic method for LV segmentation in 3D echoes that offers EF calculation on clinical datasets at the push of a button. The solution is built on a pipeline that utilizes a number of image processing and feature detection methods specifically adapted to the 3D ultrasound modality. It is designed to be reasonably robust at handling dropout and missing features typical in clinical echocardiography. It is hypothesized that this method can obviate the need for sonographer input, yet provide results statistically indistinguishable from those of experienced sonographers using QLab-3DQ, the current gold standard employed at Duke University Hospital.
A pre-clinical validation set, which was also used for iterative algorithm development, consisted of 70 cases previously seen at Duke. Of these, manual segmentations of 7 clinical cases were compared to the algorithm. The final algorithm predicts EF within ± 0.02 ratio units for 5 of them, and within ± 0.09 units for the remaining 2 cases, within common clinical tolerance. Another 13 of the cases, often used for sonographer training and rated as having good image quality, were analyzed using QLab-3DQ, of which 11 showed concordance (± 0.10) with the algorithm. The remaining 50 cases, retrospectively recruited at Duke and representative of everyday image quality, showed 62% concordance (± 0.10) of QLab-3DQ with the algorithm. The fraction of concordant cases is highly dependent on image quality, and concordance improves greatly upon disqualification of poor-quality images. Visual comparison of the QLab-3DQ segmentation to my algorithm, overlaid on top of the original echoes, also suggests that my method may be preferable or of high utility even in cases of EF discordance. This paper describes the algorithm and offers justifications for the adopted methods. The paper also discusses the design of a retrospective clinical trial now underway at Duke with 60 additional unseen cases intended only for independent validation.
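For reference, the EF values being compared above follow directly from the segmented lumen volumes at end-diastole and end-systole; a minimal sketch (function and argument names are illustrative) is:

```python
import numpy as np

def ejection_fraction(ed_mask, es_mask, voxel_volume_ml):
    """Compute ejection fraction from end-diastolic and end-systolic
    LV lumen segmentations given as boolean voxel masks."""
    edv = ed_mask.sum() * voxel_volume_ml  # end-diastolic volume (mL)
    esv = es_mask.sum() * voxel_volume_ml  # end-systolic volume (mL)
    return (edv - esv) / edv               # EF as a ratio, e.g. 0.55
```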
Item Open Access Computational Analysis of Clinical Brain Sub-cortical Structures from Ultrahigh-Field MRI (2015) Kim, Jinyoung. Volumetric segmentation of brain sub-cortical structures within the basal ganglia and thalamus from Magnetic Resonance Imaging (MRI) is necessary for non-invasive diagnosis and neurosurgery planning. This is a challenging problem due in part to limited boundary information between structures, similar intensity profiles across the different structures, and low contrast data. With recent advances in ultrahigh-field MR technology, direct identification and clear visualization of such brain sub-cortical structures are facilitated. This dissertation first presents a semi-automatic segmentation system exploiting the visual benefits of ultrahigh-field MRI. The proposed approach utilizes the complementary edge information in the multiple structural MRI modalities. It combines two optimally selected modalities from susceptibility-weighted, T2-weighted, and diffusion MRI, and introduces a tailored new edge indicator function. In addition, prior shape and configuration knowledge of the sub-cortical structures is employed to guide the evolution of geometric active surfaces. Neighboring structures are segmented iteratively, constraining over-segmentation at their borders with a non-overlapping penalty. Experiments with data acquired on a 7 Tesla (T) MRI scanner demonstrate the feasibility and power of the approach for the segmentation of basal ganglia components critical for neurosurgery applications such as Deep Brain Stimulation (DBS) surgery.
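The dissertation's tailored edge indicator is not reproduced here, but a classical edge indicator for geometric active surfaces, together with one simple way of combining edge information from two MR contrasts, might look like the following sketch (the min-combination and all names are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_indicator(image, sigma=1.0, k=1.0):
    """Classical edge indicator g = 1 / (1 + (|grad I| / k)^2): close to 1 in
    homogeneous regions and close to 0 at strong edges, so an evolving
    surface slows down at structure boundaries."""
    grad = gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
    return 1.0 / (1.0 + (grad / k) ** 2)

def multimodal_edge_indicator(img_a, img_b, sigma=1.0, k=1.0):
    """Combine two MR contrasts by taking the pointwise minimum, so an edge
    visible in either modality can stop the surface (illustrative choice)."""
    return np.minimum(edge_indicator(img_a, sigma, k),
                      edge_indicator(img_b, sigma, k))
```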
DBS surgery on brain sub-cortical regions within the basal ganglia and thalamus is an effective treatment to alleviate symptoms of neuro-degenerative diseases. In particular, DBS of the subthalamic nucleus (STN) has shown important clinical efficacy for Parkinson's disease (PD). While accurate localization of the STN and its substructures is critical for precise DBS electrode placement, direct visualization of the STN in current standard clinical MR imaging (e.g., 1.5-3T) is still elusive. Therefore, to locate the target, DBS surgeons today often rely on consensus coordinates, lengthy and risky micro-electrode recording (MER), and the patient's behavioral feedback. Ultrahigh-field MR imaging now allows direct visualization of brain sub-cortical structures, but such high field strengths are not available in routine clinical practice. This dissertation therefore also introduces a non-invasive, automatic localization method for the STN, one of the critical targets for DBS surgery, in a standard clinical scenario (1.5T MRI). The spatial dependency between the STN and potential predictor structures is first learned from 7T MR training data using regression models in a bagging framework. Then, given such predictors automatically detected on the clinical patient data, the complete region of the STN is predicted as a probability map using the high-quality information learned from 7T. Furthermore, a robust framework is proposed to properly weight different training subsets, estimating their influence on the prediction accuracy. STN prediction on clinical 1.5T MR datasets from 15 PD patients is performed within the proposed approach. Experimental results demonstrate that the developed framework enables accurate prediction of the STN, closely matching the 7T ground truth.
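A heavily simplified sketch of the bagged-regression idea, predicting an STN centroid from features of predictor structures visible at 1.5T, is shown below; the feature choice, estimators, and function names are assumptions for illustration and do not reproduce the dissertation's method.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

def fit_stn_predictor(X_train, Y_train, n_estimators=50):
    """Fit a bagged regression model mapping predictor-structure features
    (e.g., flattened centroid coordinates, shape (n_subjects, n_features))
    to STN centroid coordinates from 7T ground truth (shape (n_subjects, 3))."""
    model = MultiOutputRegressor(
        BaggingRegressor(LinearRegression(), n_estimators=n_estimators))
    return model.fit(X_train, Y_train)

# Given predictors automatically detected on a new clinical 1.5T scan:
# stn_center = fit_stn_predictor(X_train, Y_train).predict(x_new.reshape(1, -1))
```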
Item Open Access Deep Learning Segmentation in Pancreatic Ductal Adenocarcinoma Imaging (2024) Zhang, Haoran. Purpose: Accurately quantifying the extent of vessel coverage in the pancreas is essential for determining the feasibility of surgery. The aim of this study is to train a segmentation model specialized for Pancreatic Ductal Adenocarcinoma (PDAC) imaging, focusing on delineating the pancreas, tumor, arteries (celiac artery, superior mesenteric artery, common hepatic artery), and veins (portal vein, superior mesenteric vein) using an improved Attention Unet CNN approach. Methods: Data from 100 PDAC patients treated at the Ruijin Hospital between 2020 and 2022 were utilized. Using Synapse 3D software, masks of the tumor, arteries (including the celiac artery, superior mesenteric artery, and hepatic artery), and veins (including the superior mesenteric vein and portal vein) were generated semi-automatically and reviewed by radiologists. Standard image processing techniques, including adjustment of the window level to 60 and width to 350 and histogram equalization, were subsequently applied. Two types of CNN-based Attention Unet segmentation models were developed: (1) a Unified Unet Model that segments all four components simultaneously, and (2) four Individual Unet Models that segment the pancreas, tumor, veins, and arteries separately. The train-validation-test data assignment was set to 7:2:1. Segmentation efficacy was assessed using the Dice similarity coefficient, with the Adam optimizer utilized for optimization. Results: The individual segmentation models achieve notable performance: pancreas (Accuracy: 0.84, IoU: 0.81, Dice: 0.76), tumor (Accuracy: 0.78, IoU: 0.77, Dice: 0.68), vein (Accuracy: 0.88, IoU: 0.86, Dice: 0.80), and artery (Accuracy: 0.91, IoU: 0.93, Dice: 0.93). However, the unified model demonstrates inferior performance, with accuracy, IoU, and Dice coefficient scores of 0.61, 0.50, and 0.45, respectively. Conclusion: Accurate segmentation models have been developed for the pancreas, pancreatic tumors, arteries, and veins in PDAC patients. This will enable efficient quantification of vessel coverage in the pancreas, thereby enhancing the decision-making process regarding the feasibility of surgery for PDAC patients. The findings also demonstrate that the Individual Models outperform the Unified Model in segmentation accuracy, highlighting the importance of tailored segmentation strategies for different anatomical structures in PDAC imaging.
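For context, the preprocessing window reported in the study (level 60, width 350) and the Dice similarity coefficient used to score the models can be written compactly as follows; this is a generic sketch, not the study's code.

```python
import numpy as np

def apply_window(ct_hu, level=60, width=350):
    """Clip a CT image (in HU) to the reported window and rescale to [0, 1]
    before feeding it to the network."""
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two boolean segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```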
Item Open Access Family Plans: Market Segmentation with Nonlinear Pricing (2014) Zhou, Bo. In the telecommunications market, firms often give consumers the option of purchasing an individual plan or a family plan. An individual plan gives a certain allowance of usage (e.g., minutes, data) for a single consumer, whereas a family plan allows multiple consumers to share a specific level of usage. The theoretical challenge is to understand how the firm stands to benefit from allowing family plans. In this paper, we use a game-theoretic framework to explore the role of family plans. An obvious way that family plans can be profitable is if they draw in very low-valuation consumers whom the firm would choose not to serve in the absence of a family plan. Interestingly, we find that even when a family plan does not draw any new consumers into the market, a firm can still benefit from offering it. This finding occurs primarily because of the strategic impact of the family plan on the firm's entire product line. By allowing high- and low-valuation consumers to share a joint allowance in the family plan, the firm is able to raise the price and extract more surplus from individual high-valuation consumers by reducing the cannibalization problem. Furthermore, a family obtains a higher allowance compared to the purchase of several individual plans and therefore contributes more profits to the firm. We also observe different types of quantity discounts in the firm's product line. Finally, we identify conditions under which the firm offers a pay-as-you-go plan.
Item Open Access Graph Theory and Dynamic Programming Framework for Automated Segmentation of Ophthalmic Imaging Biomarkers (2014) Chiu, Stephanie JaYi. Accurate quantification of anatomical and pathological structures in the eye is crucial for the study and diagnosis of potentially blinding diseases. Earlier and faster detection of ophthalmic imaging biomarkers also leads to optimal treatment and improved vision recovery. While modern optical imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO) have facilitated in vivo visualization of the eye at the cellular scale, the massive influx of data generated by these systems is often too large to be fully analyzed by ophthalmic experts without extensive time or resources. Furthermore, manual evaluation of images is inherently subjective and prone to human error.
This dissertation describes the development and validation of a framework called graph theory and dynamic programming (GTDP) to automatically detect and quantify ophthalmic imaging biomarkers. The GTDP framework was validated as an accurate technique for segmenting retinal layers on OCT images. The framework was then extended through the development of the quasi-polar transform to segment closed-contour structures including photoreceptors on AO scanning laser ophthalmoscopy images and retinal pigment epithelial cells on confocal microscopy images.
The GTDP framework was next applied in a clinical setting with pathologic images that are often lower in quality. Algorithms were developed to delineate morphological structures on OCT indicative of diseases such as age-related macular degeneration (AMD) and diabetic macular edema (DME). The AMD algorithm was shown to be robust to poor image quality and was capable of segmenting both drusen and geographic atrophy. To account for the complex manifestations of DME, a novel kernel regression-based classification framework was developed to identify retinal layers and fluid-filled regions as a guide for GTDP segmentation.
The development of fast and accurate segmentation algorithms based on the GTDP framework has significantly reduced the time and resources necessary to conduct large-scale, multi-center clinical trials. This is one step closer towards the long-term goal of improving vision outcomes for ocular disease patients through personalized therapy.
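At its core, the GTDP idea casts a layer boundary as a minimum-cost path across the image. A much-simplified dynamic-programming sketch of that idea is shown below; it omits the framework's automatic endpoint initialization, published edge weights, and quasi-polar transform, and the cost definition here is an illustrative assumption.

```python
import numpy as np

def segment_layer_boundary(bscan):
    """Trace one retinal layer boundary across an OCT B-scan as the
    minimum-cost left-to-right path through a cost image derived from
    the vertical intensity gradient (strong edges get low cost)."""
    grad = np.gradient(bscan.astype(float), axis=0)
    cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-8)

    n_rows, n_cols = cost.shape
    acc = np.full_like(cost, np.inf)
    acc[:, 0] = cost[:, 0]
    back = np.zeros((n_rows, n_cols), dtype=int)

    for c in range(1, n_cols):
        for r in range(n_rows):
            r_lo, r_hi = max(0, r - 1), min(n_rows, r + 2)  # allow moves of +/- 1 row
            prev = acc[r_lo:r_hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = r_lo + k

    # backtrack from the cheapest endpoint in the last column
    boundary = np.zeros(n_cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary  # row index of the boundary in each column
```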
Item Open Access High-resolution, anthropomorphic, computational breast phantom: fusion of rule-based structures with patient-based anatomy (2017) Chen, Xinyuan. While patient-based breast phantoms are realistic, they are limited by low resolution due to the image acquisition and segmentation process. The purpose of this study is to restore the high-frequency components of patient-based phantoms by adding power law noise (PLN) and breast structures generated from mathematical models. First, 3D radially symmetric PLN with β=3 was added at the boundary between adipose and glandular tissue to connect broken tissue and create a high-frequency contour of the glandular tissue. Next, selected high-frequency features from the FDA rule-based computational phantom (Cooper's ligaments, ductal network, and blood vessels) were fused into the phantom. The effects of enhancement in this study were demonstrated with 2D mammography projections and digital breast tomosynthesis (DBT) reconstruction volumes. The addition of PLN and rule-based models leads to a decrease in β; the new β is 2.76, which is similar to what is typically found for reconstructed DBT volumes. The new combined breast phantoms retain the realism from segmentation and gain higher resolution after restoration.
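One common way to synthesize radially symmetric power-law noise with a chosen β is to shape white noise in Fourier space; the sketch below illustrates that general approach and is not the study's specific implementation.

```python
import numpy as np

def power_law_noise_3d(shape, beta=3.0, seed=None):
    """Generate 3-D radially symmetric power-law noise whose power spectrum
    falls off as 1/f**beta, by shaping white Gaussian noise in Fourier space."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    spectrum = np.fft.fftn(white)

    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    radial = np.sqrt(sum(f ** 2 for f in freqs))
    radial[(0,) * len(shape)] = np.inf           # avoid division by zero at DC

    shaped = spectrum / radial ** (beta / 2.0)   # power ~ 1/f**beta
    noise = np.fft.ifftn(shaped).real
    return (noise - noise.mean()) / noise.std()  # zero mean, unit variance
```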
Item Open Access Imaging at the Limits: Segmentation Error Bounds and High Resolution Retinal Imaging Systems (2018) DuBose, Theodore B. The human retina is essential to quality of life and therefore a topic of intense clinical and research interest. The combination of this interest with modern biophotonics has yielded a number of technological and medical developments now in various stages of adoption.
Optical coherence tomography (OCT) is a noninvasive optical imaging technique that utilizes coherent light to produce 3-D images with resolutions as fine as a micrometer. Since its invention in 1990, it has become part of the standard of care in ophthalmology, shedding new light on the progression of diseases, therapeutic efficacy, childhood development, and real-time surgery in the retina. OCT has also found applications in microscopy, cardiology, pulmonology, and many other fields.
OCT has become valuable for the standard of care primarily due to its ability to visualize the structural and functional layers of the retina. The thicknesses and volumes of certain layers can be used as diagnostic criteria, and thus there is a high demand for OCT image assessment. In response, many researchers have developed software algorithms to automatically identify and mark, or segment, each layer.
Scanning light ophthalmoscopy or scanning laser ophthalmoscopy (SLO) is similar to OCT but uses confocal gating to produce high-contrast high-speed en face images of the retina. Although SLO has not become as prevalent as OCT in the clinic, it is frequently combined with adaptive optics (AO) to produce extremely high-resolution images of rod and cone photoreceptors, ganglion cells, and moving blood cells in the living retina.
AO is a technique to eliminate image blurring due to monochromatic aberrations in optical systems. By using a spatial light modulator, such as a deformable mirror or liquid crystal array, the wavefront of a beam sent into the eye can be engineered to compensate for the eye's aberrations. AOSLO was initially developed in 2002 and has continued to be a field of research growth and interest. However, the majority of AOSLO systems require a dedicated room and staff, hindering their clinical adoption.
The objective of the work presented herein was to explore the limits of the above imaging modalities. First, we explored the limits of OCT segmentation and demonstrated that the field of automated segmentation is far from its accuracy limit. Second, we explored the limits on SLO portability and developed both the world's smallest SLO probe and the first handheld AOSLO probe. Finally, we explored the limits of SLO resolution, developing the first super-resolution human retinal imaging system through the use of optical reassignment (OR) SLO.
Item Embargo Neural Network Approaches for Cortical Circuit Dissection and Calcium Imaging Data Analysis (2023) Baker, Casey Michelle. The brain encodes diverse cognitive functions through the coordinated activity of interacting neural circuits. Neural ensembles are groups of coactive neurons in these circuits that respond to similar stimuli. Neural ensembles are found throughout the brain and have been associated with many cognitive processes including memory, motor control, and perception. However, a key goal of systems neuroscience is to establish a functional link between neural activity and behavior, and these previous studies established only a correlation between ensembles and behavior. Demonstrating a functional link between ensembles and behavior requires precise manipulation of ensemble activity. Manipulating ensemble activity allows neuroscientists to determine the patterns of neural activity that are necessary and sufficient to drive behavior. Additionally, recording and analyzing the activity of hundreds to thousands of neurons simultaneously allows neuroscientists to elucidate the patterns of neural activity underlying behavior. In this dissertation, we developed novel computational tools to help scientists selectively activate ensembles and analyze large-scale neural activity with single-cell resolution. One method to precisely activate cortical ensembles while limiting off-target effects is to stimulate pattern completion neurons. Pattern completion neurons are subsets of neurons in an ensemble that, when activated, can trigger the activation of the rest of the ensemble. However, scientists currently lack methods to reliably identify pattern completion neurons. The first project in this dissertation used computational modeling to identify characteristics of pattern completion neurons in cortical ensembles. We developed a realistic spiking model of layer 2/3 of the mouse visual cortex. We then identified ensembles in the network and quantified the pattern completion capability of different neuron pairs in an ensemble. We analyzed the relationship between structural and dynamic parameters and pattern completion capability. We found that multiple graph theory parameters, and degree in particular, could predict the pattern completion capability of a neuron pair. Additionally, we found that neurons that fired earlier in an ensemble recall event were more likely to have pattern completion properties than neurons that fired later. Lastly, we showed that we can measure this temporal latency in vivo with modern calcium indicators. The later projects in this dissertation used deep learning to improve calcium imaging analysis. First, we developed a semi-supervised pipeline for neuron segmentation to reduce the burden of manual labeling. We compensated for the low number of ground truth labels in two ways. First, we augmented the training data with pseudolabels generated with ensemble learning. Next, we used domain-specific knowledge to predict optimal hyperparameters from the limited ground truth labels. Our pipeline achieved state-of-the-art accuracy when trained on only 25% of the number of manual labels used by supervised methods. Lastly, we developed a spatiotemporal deep learning pipeline to predict the underlying electrical activity from calcium imaging videos. Calcium imaging provides only an indirect measurement of spiking neural activity, and various spike inference pipelines have attempted to accurately recover spike timing and rate.
Our pipeline improved the detection of single-spike events and improved spike rate prediction throughout the video. This improved performance will help scientists reconstruct neural circuits and study single-cell responses to stimuli. Overall, the tools developed in this dissertation will help systems neuroscience researchers establish a causal link between neural activity and behavior and will help determine the precise patterns of neural activity underlying these behaviors.
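As a simple illustration of the degree-based criterion from the first project, within-ensemble degree can be computed directly from a connectivity matrix; the sketch below is illustrative and not the dissertation's code.

```python
import numpy as np

def rank_by_degree(adjacency, ensemble_ids):
    """Rank neurons in an ensemble by their degree within the ensemble's
    subnetwork; under the degree criterion suggested by the modeling results,
    higher-degree members are candidate pattern-completion neurons."""
    sub = adjacency[np.ix_(ensemble_ids, ensemble_ids)]
    degree = (sub > 0).sum(axis=1)          # number of within-ensemble connections
    order = np.argsort(degree)[::-1]
    return [(int(ensemble_ids[i]), int(degree[i])) for i in order]
```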
Item Open Access Patterning Mechanisms Underlying Notochord and Spine Segmentation in Zebrafish (2021) Wopat, Susan. The defining characteristic of the subphylum Vertebrata is the vertebral column, which is comprised of alternating vertebral bodies and intervertebral discs. In spite of being a highly conserved structure, the morphogenetic events that culminate in building the vertebral column remain poorly understood. In particular, patterning mechanisms underlying how segmentation of the spine is precisely established have not been examined at post-embryonic stages. For several years, vertebral column patterning was thought to hinge upon proper segmentation of the embryo, while the notochord served as a transient scaffold for the vertebral bodies and intervertebral discs. Using genetic, live-imaging, and quantitative approaches, this work illustrates that the notochord sheath in zebrafish provides a template for osteoblast recruitment and vertebral bone formation in the developing spine. Furthermore, we show that notochord segmentation is influenced by the adjacent muscle segments and connective tissue, which may provide mechanical patterning cues. Insights from this work will better inform how adolescent idiopathic scoliosis and congenital scoliosis arise.
Item Open Access The Importance of Planning Ahead: A Three-Dimensional Analysis of the Novel Trans-Facet Corridor for Posterior Lumbar Interbody Fusion Using Segmentation Technology. (World neurosurgery, 2024-05) Tabarestani, Troy Q; Drossopoulos, Peter N; Huang, Chuan-Ching; Bartlett, Alyssa M; Paturu, Mounica R; Shaffrey, Christopher I; Chi, John H; Ray, Wilson Z; Goodwin, C Rory; Amrhein, Timothy J; Abd-El-Barr, Muhammad M.
Background: The rise of minimally invasive lumbar fusions and advanced imaging technologies has facilitated the introduction of novel surgical techniques, with the trans-facet approach being one of the newest additions. We aimed to quantify any pathology-driven anatomic changes to the trans-facet corridor, which could thereby alter the ideal laterality of approach to the disc space.
Methods: In this retrospective cohort study, we measured the areas and maximum permissible cannula diameters of the trans-facet corridor using commercially available software (BrainLab, Munich, Germany). Exiting and traversing nerve roots, thecal sacs, and lumbar vertebrae were manually segmented on T2-SPACE magnetic resonance imaging. Spondylolisthesis, disc protrusions, and disc space heights were recorded.
Results: A total of 118 trans-facet corridors were segmented bilaterally in 16 patients (65.6 ± 12.1 years, 43.8% female, body mass index 29.2 ± 5.1 kg/m²). The mean areas at L1-L2, L2-L3, L3-L4, and L4-L5 were 89.4 ± 24.9 mm², 124 ± 39.4 mm², 123 ± 26.6 mm², and 159 ± 42.7 mm², respectively. The mean permissible cannula diameters at the same levels were 7.85 ± 1.43 mm, 8.98 ± 1.72 mm, 8.93 ± 1.26 mm, and 10.2 ± 1.94 mm, respectively. Both parameters increased caudally. Higher degrees of spondylolisthesis were associated with larger areas and maximum cannula diameters on regression analysis (P < 0.001).
Conclusions: Our results illustrate that pathology, such as spondylolisthesis, can increase the area of the trans-facet corridor. By understanding this effect, surgeons can better decide on the optimal approach to the disc while taking into consideration a patient's unique anatomy.
Item Open Access Truth-based Radiomics for Prediction of Lung Cancer Prognosis (2020) Hoye, Jocelyn. The purpose of this dissertation was to improve CT-based radiomics characterization by assessing and accounting for its systematic and stochastic variability due to variations in the imaging method. The anatomically informed methodologies developed in this dissertation enable radiomics studies to retrospectively correct for the effects of CT imaging protocols and prospectively inform CT protocol choices. This project was conducted in three parts: 1) assessing the bias and variability of morphologic radiomics features across a wide range of CT imaging protocols and segmentation algorithms, 2) assessing the applicability, sensitivity, and usefulness of applying bias correction factors retrospectively to patient data acquired with heterogeneous CT imaging protocols, and 3) developing analytical techniques to reduce the variability of radiomics features by prospectively optimizing the CT imaging protocols.
In part 1 (chapters 2-4), the measurability of the bias and variability of morphologic radiomics features was assessed. In chapter 2, a theoretical framework was developed to guide the process of analyzing and utilizing quantitative features, including radiomics, derived from CT images. The framework outlined the key qualities necessary for successful quantification, including biological and clinical relevance, objectivity, robustness, and implementability. In chapter 3, a method was developed to use anatomically informed lung lesion models to assess the bias and variability of morphology radiomics features as a function of CT imaging protocol and segmentation algorithm. The results showed that the bias and variability of radiomics features depend on a complicated interplay of anatomical, imaging protocol, and segmentation effects. In chapter 4, the bias and variability of radiomics due to segmentation algorithms were explored in depth for three segmentation algorithms across a range of image noise magnitudes. The segmentation algorithms were assessed by comparing their performance to an ideal radiomics estimator for a range of image quality characteristics. The results showed that the optimal segmentation algorithm was a function of the specific noise magnitude and the radiomics features of interest.
In part 2 (chapter 5), an analysis was carried out using a Non-Small Cell Lung Cancer patient dataset to assess the applicability, sensitivity, and usefulness of correcting radiomics features for imaging protocol effects. The applicability was assessed by calculating bias correction factors from one set of anatomically informed lesion models and applying the correction factors to another set of anatomically informed lesion models. The sensitivity was assessed by applying idealized bias correction factors to the patient dataset with increasing bias correction magnitudes to determine the sensitivity of predictive models to the magnitude of the bias correction factors. Finally, the usefulness was assessed by applying the anatomically informed, protocol-specific bias correction factors to the patient dataset and quantifying the change in the performance of the predictive model. The results showed that the bias correction factors are applicable when they are derived from and applied to lesion models with similar anatomical characteristics. The feature-specific sensitivity of prediction to bias correction factors was found to be as low as 1-5% and was typically in the range of 20-50%. The bias correction factors were applied to a patient population and were found to result in a small but statistically significant improvement in performance.
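A minimal sketch of how protocol-specific correction factors might be applied retrospectively to a feature table is shown below; the multiplicative form and all names are assumptions for illustration, not the dissertation's definition of the correction.

```python
import numpy as np

def correct_radiomics_features(feature_table, protocol_ids, correction_factors):
    """Apply multiplicative, protocol-specific bias correction factors to a
    table of radiomics features measured under heterogeneous CT protocols.

    feature_table      : array (n_lesions, n_features) of measured feature values
    protocol_ids       : length-n_lesions sequence naming each lesion's protocol
    correction_factors : dict mapping protocol id -> array (n_features,) of
                         factors derived from anatomically informed lesion models
    """
    corrected = np.array(feature_table, dtype=float, copy=True)
    for i, pid in enumerate(protocol_ids):
        corrected[i] *= correction_factors[pid]
    return corrected
```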
In part 3 (chapter 6), a method was developed and implemented to assess the minimum detectable difference of morphologic radiomics features as a function of protocol and anatomical characteristics. The analysis was carried out to allow for evaluating and informing the recommendations of the Quantitative Imaging Biomarkers Alliance (QIBA) for lung nodule volumetry. The results showed that the minimum detectable difference for QIBA-compliant protocols had a lower median value than the minimum detectable difference among all possible CT protocols. The techniques developed in this analysis can be used to prospectively optimize CT imaging protocols for improved quantitative characterization of radiomics features.
In conclusion, this dissertation developed methods to assess and account for the variability of radiomics features across CT imaging protocols and segmentation algorithms using anatomically informed lesion models.
Item Embargo Uncovering the patterning mechanisms governing notochord segmentation and spine evolution (2022) Peskin, Brianna Claire. Vertebrates are distinguished by the presence of a segmented spine that supports the body axis and facilitates movement. The establishment of alternating domains of vertebral centra and intervertebral discs is a complex biological phenomenon. Recent studies in teleost fish demonstrate that the epithelial sheath of the notochord segments to provide positional information for the development of vertebral bone. The studies performed for this dissertation uncover specific components of the gene regulatory network guiding notochord segmentation. Genetic manipulations and live confocal imaging of transgenic zebrafish demonstrate that BMP activity triggers sheath cell differentiation and regulates the lateral expansion of notochord segments. Moreover, the importance of notochord segmentation during the development and evolution of the spine is highlighted by a unique extracellular matrix mutant in which notochord patterning is lost. Without a segmented notochord framework, sclerotomal osteoblasts alter their migratory trajectories and solely rely on paraxial mesoderm patterning to form centra structures. The resulting mode of spine morphogenesis shares commonalities with basal gnathostome species, suggesting that notochord signals prompted specific morphological transitions during spine evolution.