ESTRO 2026 - Abstract Book PART II

S1556

Physics - Autosegmentation


Results: The combination of spectral data and a specialized network architecture yielded substantial performance gains (Figure 1). Tri-bin data achieved the highest mean DSC (0.906), outperforming dual-bin (0.893) and single-bin (0.885). An ablation study confirmed that SpecCTSegNet's attention mechanisms and multi-scale features were critical, robustly outperforming a standard U-Net (DSC 0.894 vs. 0.906 with tri-bin data). For radiotherapy, tumor precision improved markedly: HD95 decreased from 0.67 mm (single-bin) to 0.22 mm (tri-bin), a 67% reduction. Notably, the tri-bin framework outperformed both manual expert segmentations (DSC 0.906 vs. 0.813) [1] and state-of-the-art methods (AIMOS [2]: 0.889) on contrast-enhanced micro-CT, with excellent performance for critical organs: lung (97.3% DSC), heart (97.2%), intestine (94.9%) and liver (92.2%).

Conclusion: We developed a physics-informed attention network that substantially improves automated OAR and tumor segmentation in preclinical CBCT, especially with tri-bin spectral data. SpecCTSegNet holds promise as a robust tool for high-fidelity delineation in treatment planning and response assessment, potentially eliminating the need for contrast agents, which can complicate dose calculations. Future work will validate this framework on experimental PC-CBCT data from our system under development.

References:
[1] Rosenhain S et al. (2018). A preclinical micro-computed tomography database including 3D whole body organ segmentations. Sci Data 5:180294.
[2] Schoppe O et al. (2020). Deep learning-enabled multi-organ segmentation in whole-body mouse scans. Nat Commun 11:5626.

Keywords: photon-counting CBCT, deep learning, segmentation

Digital Poster 1393
GTVN auto-segmentation for head and neck cancer: iterative modelling, oncologist evaluation, and pathways to bias assessment
Victoria Butterworth 1,2, Michael Woodward 3,2, Thomas Young 1,4, Anil Mistry 3,2, Christopher Thomas 3,2, Sarah Misson 3,2, Delali Adjogatse 1,4, Mary Lei 1,4, Philip Touska 5, Dijana Vilic 3,2, Andrew King 2, Teresa Guerrero Urbano 1,4
1 Department of Radiotherapy, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom. 2 School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom. 3 Department of Medical Physics, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom. 4 School of Cancer and Pharmaceutical Sciences, King's College London, London, United Kingdom. 5 Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom

Purpose/Objective: Commercial auto-segmentation tools are widely adopted for organs-at-risk in head and neck radiotherapy, yet a notable gap remains for gross tumour volumes (GTV)1. With increasing demands from adaptive workflows and ongoing workforce pressures, accurate and efficient auto-segmentation tools are vital to sustain high-quality patient care and throughput. We therefore developed, iteratively refined and evaluated a nodal GTV (GTVN) model trained on a federated institutional data lake2.

Material/Methods: Patients were drawn from the retrospective RT-HaND dataset3. Eligible cases had oropharynx, hypopharynx, nasopharynx or larynx primaries, were treated with definitive RT/chemoRT or cetuximab+RT, and had contrast-enhanced planning CT with radiologist peer-reviewed GTVN contours. Three model stages were developed using nnU-Net v2 (3D_fullres, 1000 epochs, 5-fold cross-validation):
Stage 1: node-positive patients only (n=135 train; n=32 test).
Stage 2: node-positive + node-negative patients (n=270 train; same test set).
Stage 3: post-processing to remove volumes <10 mm equivalent diameter, with metrics recalculated on equivalently processed clinical contours.
Primary endpoints were volumetric Dice Similarity Coefficient (vDSC), Surface Dice Coefficient (SDC), and per-node sensitivity and precision. Paired two-tailed Wilcoxon signed-rank tests were used for comparisons. Two clinical oncologists qualitatively scored Stage 3 outputs using a 5-point Likert scale4.
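Both abstracts above report volumetric Dice, and the GTVN Stage 3 step additionally removes predicted volumes below a 10 mm equivalent diameter. A minimal sketch of how such a metric and filter might be implemented, assuming NumPy/SciPy and taking the sphere-equivalent diameter of a component of volume V as (6V/π)^(1/3); function names and the exact filtering convention are illustrative, not the authors' code:

```python
import numpy as np
from scipy import ndimage


def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice Similarity Coefficient (vDSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def remove_small_nodes(mask: np.ndarray, voxel_volume_mm3: float,
                       min_equiv_diameter_mm: float = 10.0) -> np.ndarray:
    """Drop connected components whose sphere-equivalent diameter
    d = (6V/pi)**(1/3) falls below the threshold (cf. Stage 3 post-processing)."""
    labels, n_components = ndimage.label(mask)
    kept = np.zeros_like(mask, dtype=bool)
    for i in range(1, n_components + 1):
        component = labels == i
        volume_mm3 = component.sum() * voxel_volume_mm3
        d_eq = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)
        if d_eq >= min_equiv_diameter_mm:
            kept |= component
    return kept
```

As described in the Methods, the same filter would be applied to both predicted and clinical contours before recomputing vDSC, so that small-node removal does not penalise one side only.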
