Take-Away Points
■ Major Focus: Artificial intelligence segmentation networks generally perform well only on the specific datasets and structures for which they were developed, limiting widespread implementation.
■ Key Result: The self-configuring network design of the algorithm nnU-Net provides a new benchmark for segmenting organs, tumors, and cells without manual configuration.
■ Impact: nnU-Net is an out-of-the-box tool that clinics and laboratories can train in just days, offering a path to broader adoption of automated image segmentation in cancer imaging.
Automated image segmentation (autosegmentation) uses computational algorithms to define three-dimensional volumes of anatomic features of interest in imaging studies, including tumors and nearby organs. Clinical translation of autosegmentation promises to improve speed and reduce interobserver variability for planning radiation therapy, identifying tumors, quantifying response to therapy, and constructing downstream applications such as radiomics. While autosegmentation typically performs well with specific training sets or limited types of structures, algorithms commonly fail to generalize well to other datasets or tasks, preventing widespread adoption.
To overcome this challenge, Isensee et al developed nnU-Net, an autosegmentation framework that eliminates the manual steps of preprocessing, network architecture engineering, training configuration, and postprocessing. Instead, nnU-Net uses a set of readily accessible rules derived from the underlying data to guide the construction of the neural network and the associated data manipulation. nnU-Net does not create a new network design (hence its clever name: “no new net”). Rather, the true discovery lies in the set of systematic rules that build and train models fully automatically. The authors showcase the power of nnU-Net by entering their framework in 23 unique medical image segmentation challenges spanning a variety of modalities, including CT, MRI, and even electron microscopy. Quite impressively, nnU-Net achieved top-level performance ranks on all challenges without any task-specific modifications.
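The self-configuration idea can be sketched as a two-step recipe: summarize the dataset into a compact "fingerprint," then map that fingerprint through fixed rules to a pipeline configuration. The sketch below is purely illustrative; the fingerprint fields, thresholds, and configuration choices are hypothetical stand-ins, not the published nnU-Net rules.

```python
# Illustrative sketch of data-derived configuration rules.
# All thresholds and choices here are hypothetical, NOT nnU-Net's actual rules.
from dataclasses import dataclass

@dataclass
class Fingerprint:
    median_spacing_mm: tuple  # voxel spacing (z, y, x) across the dataset
    median_shape_vox: tuple   # median image size in voxels
    n_modalities: int         # e.g., 2 for paired CT + PET

def configure(fp: Fingerprint) -> dict:
    # Rule 1: resample all images to the dataset's median voxel spacing.
    target_spacing = fp.median_spacing_mm
    # Rule 2: choose a patch no larger than the median image,
    # capped by a (hypothetical) GPU memory budget of 128 voxels per axis.
    patch_size = tuple(min(s, 128) for s in fp.median_shape_vox)
    # Rule 3: strongly anisotropic data (thick slices) disfavors a 3D network.
    use_3d = max(target_spacing) / min(target_spacing) < 3
    return {
        "target_spacing": target_spacing,
        "patch_size": patch_size,
        "use_3d_unet": use_3d,
        "in_channels": fp.n_modalities,
    }

# Example: anisotropic two-modality dataset (3 mm slices, 512x512 in-plane)
fp = Fingerprint((3.0, 1.0, 1.0), (80, 512, 512), n_modalities=2)
cfg = configure(fp)
```

The point of the sketch is that every choice is a deterministic function of measured data properties, so no human needs to hand-tune the pipeline per dataset.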
The authors released their PyTorch code publicly (https://github.com/MIC-DKFZ/nnUNet) and created a Linux-based command-line tool to run nnU-Net. I personally tested nnU-Net for segmenting the gross tumor volume for oropharyngeal head and neck cancers as defined by the 2020 MICCAI HECKTOR challenge (https://www.aicrowd.com/challenges/miccai-2020-hecktor). I downloaded the code and trained the out-of-the-box nnU-Net model with three-dimensional full-resolution CT and PET images from the HECKTOR challenge data, consisting of 201 patients with oropharyngeal tumors. I trained the nnU-Net model with fivefold cross-validation on five graphics processing units (NVIDIA V100, 16 GB each) in parallel over 3 days. Without any manual modifications, the algorithm produced a 74.7% Dice score, a measure of similarity between two samples, on the test set, placing third overall on the postchallenge leaderboard.
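The Dice score mentioned above compares a predicted segmentation A against a reference B as 2|A∩B| / (|A|+|B|), ranging from 0 (no overlap) to 1 (perfect agreement). A minimal computation on binary masks can be sketched as follows; the masks here are toy arrays, not HECKTOR data:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D "segmentations": 4-voxel prediction vs 6-voxel reference,
# overlapping on 4 voxels -> Dice = 2*4 / (4 + 6) = 0.8
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_score(a, b))  # 0.8
```

In practice the same formula is applied voxelwise to three-dimensional tumor masks, so a 74.7% score means the predicted and expert volumes overlap on roughly three-quarters of their combined extent.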
nnU-Net provides a new approach and benchmark for autosegmentation models across several domains of medical image segmentation. By removing manual steps in data processing and network engineering, nnU-Net paves the way ahead for widespread adoption of automated medical image segmentation.
Highlighted Article
Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2020; Published December 7, 2020. doi: 10.1038/s41592-020-01008-z.