Author manuscript; available in PMC: 2024 May 15.
Published in final edited form as: Domain Adapt Represent Transf (2023). 2023 Oct 14;14293:94–104. doi: 10.1007/978-3-031-45857-6_10

Fig. 2.

Our SSL strategy gradually decomposes and perceives the anatomy in a coarse-to-fine manner. At each training stage, our Anatomy Decomposer (AD) decomposes the anatomy into a hierarchy of parts at granularity level n ∈ {0, 1, …}, so that progressively finer-grained anatomical structures are presented to the model as input. Given an image I, we pass it to the AD to obtain a random anchor x. We augment x to generate two views (positive samples) and pass them to two encoders to extract their features. To avoid semantic collision in the training objective, our Purposive Pruner removes from the memory bank anatomical structures from other images that are semantically similar to the anchor x. The contrastive loss is then computed from the positive samples' features and the pruned memory bank. The figure shows pretraining at n = 4.
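The pipeline in the caption can be sketched in code. The following is a minimal, hypothetical NumPy illustration, not the authors' implementation: `decompose_anchor` stands in for the Anatomy Decomposer (here, n recursive halvings along alternating axes), `purposive_prune` stands in for the Purposive Pruner (here, a cosine-similarity threshold, a parameter we assume for illustration), and the loss is a standard InfoNCE-style contrastive objective. The encoders and memory-bank update are omitted.

```python
import numpy as np

def decompose_anchor(image, n, rng):
    """Anatomy Decomposer (hypothetical sketch): split the image n times
    along alternating axes, keeping a random half each time, and return
    the resulting patch as the anchor x at granularity level n."""
    patch = image
    for depth in range(n):
        axis = depth % 2
        half = patch.shape[axis] // 2
        sl = [slice(None)] * patch.ndim
        sl[axis] = slice(0, half) if rng.random() < 0.5 else slice(half, None)
        patch = patch[tuple(sl)]
    return patch

def purposive_prune(anchor_feat, memory_bank, threshold=0.8):
    """Purposive Pruner (hypothetical sketch): drop memory-bank features
    whose cosine similarity to the anchor exceeds a threshold, so that
    semantically similar structures are not used as negatives."""
    sims = memory_bank @ anchor_feat / (
        np.linalg.norm(memory_bank, axis=1) * np.linalg.norm(anchor_feat) + 1e-8)
    return memory_bank[sims < threshold]

def contrastive_loss(q, k_pos, negatives, tau=0.2):
    """InfoNCE loss over the positive pair and the pruned negatives."""
    q = q / (np.linalg.norm(q) + 1e-8)
    k_pos = k_pos / (np.linalg.norm(k_pos) + 1e-8)
    negs = negatives / (np.linalg.norm(negatives, axis=1, keepdims=True) + 1e-8)
    logits = np.concatenate([[q @ k_pos], negs @ q]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

For example, with a 64×64 image and n = 4 (as in the figure), the anchor is a 16×16 patch; the loss is then computed between the anchor's two augmented views against the pruned bank.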