Description
Accurate histologic subtype classification of non-small cell lung cancer (NSCLC) is critical for treatment selection and prognosis assessment. This research introduces RAFENet, a reconstruction-assisted deep learning network designed to improve subtype classification performance on computed tomography (CT) images.
Whereas conventional reconstruction-assisted approaches rely on a shared encoder and a pixel-wise reconstruction loss, RAFENet incorporates a task-aware encoding module built from cascaded cross-level non-local blocks. This design enables the model to learn multi-level, task-specific tumor representations while suppressing the irrelevant background information commonly present in CT scans.
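As a rough illustration of how a cross-level non-local block can fuse features from two encoder levels, the sketch below lets a deeper, more task-specific feature map attend to a shallower one. The module name, channel sizes, attention formulation, and fusion scheme are assumptions made for illustration, not details taken from the paper.

```python
# Hedged sketch: a cross-level non-local block in which high-level (task-specific)
# features query low-level features from an earlier encoder stage.
# Channel sizes and the residual fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLevelNonLocalBlock(nn.Module):
    def __init__(self, high_ch: int, low_ch: int, inter_ch: int = 64):
        super().__init__()
        self.query = nn.Conv2d(high_ch, inter_ch, kernel_size=1)  # from high-level features
        self.key = nn.Conv2d(low_ch, inter_ch, kernel_size=1)     # from low-level features
        self.value = nn.Conv2d(low_ch, inter_ch, kernel_size=1)
        self.out = nn.Conv2d(inter_ch, high_ch, kernel_size=1)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        b, _, h, w = high.shape
        # Match the low-level map's spatial resolution to the high-level map.
        low = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
        q = self.query(high).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(low).flatten(2)                      # (B, C', HW)
        v = self.value(low).flatten(2).transpose(1, 2)    # (B, HW, C')
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        # Residual connection keeps the original task features intact.
        return high + self.out(fused)


# Usage: fuse a deep 16x16 feature map with a shallower 32x32 map.
deep = torch.randn(2, 256, 16, 16)
shallow = torch.randn(2, 64, 32, 32)
block = CrossLevelNonLocalBlock(high_ch=256, low_ch=64)
print(block(deep, shallow).shape)  # torch.Size([2, 256, 16, 16])
```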
To further enhance semantic understanding, the study introduces a semantic consistency loss that combines a feature consistency loss with a prediction consistency loss. This dual constraint enforces semantic invariance between the original and reconstructed images, yielding more discriminative feature learning. Extensive experiments on the public TCIA-NSCLC dataset and an in-house clinical dataset demonstrate that RAFENet consistently outperforms state-of-the-art deep learning and reconstruction-based methods in accuracy, sensitivity, specificity, and AUC.
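A minimal sketch of such a dual-constraint consistency loss is shown below, assuming an MSE term for feature consistency and a KL-divergence term for prediction consistency; the specific distance measures and weights used in RAFENet may differ, and this term would be added to the usual classification and reconstruction objectives.

```python
# Hedged sketch of a semantic consistency loss: a feature consistency term pulls
# embeddings of the original and reconstructed images together, and a prediction
# consistency term aligns their class distributions. The choice of MSE and KL
# divergence, and the weighting, are illustrative assumptions.
import torch
import torch.nn.functional as F


def semantic_consistency_loss(
    feat_orig: torch.Tensor,    # features of the original CT image, shape (B, D)
    feat_recon: torch.Tensor,   # features of the reconstructed image, shape (B, D)
    logits_orig: torch.Tensor,  # classifier logits for the original image, shape (B, K)
    logits_recon: torch.Tensor, # classifier logits for the reconstructed image, shape (B, K)
    lam_feat: float = 1.0,
    lam_pred: float = 1.0,
) -> torch.Tensor:
    # Feature consistency: mean squared distance between the two embeddings.
    l_feat = F.mse_loss(feat_recon, feat_orig)
    # Prediction consistency: KL divergence of the reconstructed image's
    # prediction from the original image's (detached) prediction.
    l_pred = F.kl_div(
        F.log_softmax(logits_recon, dim=1),
        F.softmax(logits_orig.detach(), dim=1),
        reduction="batchmean",
    )
    return lam_feat * l_feat + lam_pred * l_pred
```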
This paper is highly relevant for researchers and professionals in medical imaging, radiomics, cancer diagnostics, and artificial intelligence in healthcare. It provides a robust methodological reference for applying reconstruction-assisted deep learning to challenging medical image classification tasks.
