Sensors. 2026 Jan 22;26(2):738. doi: 10.3390/s26020738
| Abbreviation | Full Name | Brief Description |
|---|---|---|
| CHARMS | CNN-Transformer Hybrid with Attention Regularization for MRI Super-Resolution | The proposed lightweight model for MRI super-resolution, combining CNN and Transformer elements with attention regularization. |
| RRAF | Reverse Residual Attention Fusion | Backbone module for hierarchical local feature extraction, integrating residual learning and attention. |
| RLFE | Residual Local Feature Extraction | Units within RRAF blocks, consisting of convolutions, ReLU activations, and ESA for feature encoding (sketched after this table). |
| ESA | Enhanced Spatial Attention | Spatial attention operator that highlights high-frequency regions using dilated convolutions (sketched after this table). |
| PCA | Pixel–Channel Attention | Mechanism merging pixel- and channel-level attention for fine-grained feature recalibration. |
| MDDTA | Multi-Depthwise Dilated Transformer Attention | Transformer block for efficient long-range dependency modeling with linear complexity (sketched after this table). |
| GDDFN | Gated Depthwise Dilated Feed-Forward Network | Feed-forward component in the Transformer module, enhancing nonlinearity via gated convolutions (sketched after this table). |
| THFIR | High-Frequency Information Refinement | Refinement module applied after upsampling to restore high-frequency details and suppress artifacts. |
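
To make the RLFE and ESA rows concrete, here is a minimal PyTorch sketch of an RLFE unit with its ESA gate, assuming a conventional squeeze/dilate/expand spatial-attention pattern. All layer widths, dilation rates, and the exact wiring are illustrative assumptions, not the paper's reference implementation.

```python
# Hypothetical sketch of an ESA-style spatial gate and the RLFE unit that
# wraps it. Channel counts and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn


class ESA(nn.Module):
    """Enhanced Spatial Attention: gate features with a spatial mask that
    emphasizes high-frequency regions, built from dilated convolutions."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.squeeze = nn.Conv2d(channels, hidden, kernel_size=1)
        # Dilated convolutions enlarge the receptive field cheaply.
        self.context = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )
        self.expand = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.expand(self.context(self.squeeze(x))))
        return x * mask  # spatial recalibration


class RLFE(nn.Module):
    """Residual Local Feature Extraction: conv + ReLU stack followed by
    ESA, with an identity shortcut (residual learning)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.esa = ESA(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.esa(self.body(x))
```

As a quick check, `RLFE(64)(torch.randn(1, 64, 48, 48))` preserves the input shape, which is what lets several such units be stacked inside an RRAF block.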
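The MDDTA row describes long-range dependency modeling at linear complexity. A common way to achieve this is transposed (channel-wise) self-attention, where the attention map is C x C rather than HW x HW (cf. Restormer's MDTA). The sketch below assumes that pattern, with depthwise dilated convolutions on the Q/K/V projections inferred from the module's name; none of these details are confirmed by the table.

```python
# Hypothetical sketch of an MDDTA-style block: channel-wise (transposed)
# attention, so cost grows linearly with pixel count. The depthwise dilated
# Q/K/V mixing is an assumption inferred from the module's name.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MDDTA(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        # Depthwise dilated conv mixes local context into Q, K, V.
        self.dwconv = nn.Conv2d(channels * 3, channels * 3, 3,
                                padding=2, dilation=2, groups=channels * 3)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.dwconv(self.qkv(x)).chunk(3, dim=1)

        def split(t):  # (b, c, h, w) -> (b, heads, c/heads, pixels)
            return t.reshape(b, self.heads, c // self.heads, h * w)

        q, k, v = split(q), split(k), split(v)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        # Attention over channels: a (c x c) map, linear in pixel count.
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        out = attn.softmax(dim=-1) @ v
        return self.proj(out.reshape(b, c, h, w))
```

Because the attention map is formed over channels, the cost scales linearly with the number of pixels, which is what makes this family of blocks attractive for lightweight super-resolution of large MRI slices.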
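Similarly, the GDDFN row points to a gated feed-forward design. The sketch below assumes the gated-depthwise pattern (cf. Restormer's GDFN) with a dilated depthwise convolution; the expansion factor is an arbitrary illustrative choice.

```python
# Hypothetical sketch of a GDDFN-style feed-forward network: one expansion
# split into a gate branch and a value branch, combined element-wise.
# Expansion factor and dilation rate are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GDDFN(nn.Module):
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.expand = nn.Conv2d(channels, hidden * 2, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, 3,
                                padding=2, dilation=2, groups=hidden * 2)
        self.project = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate, value = self.dwconv(self.expand(x)).chunk(2, dim=1)
        # Element-wise gating adds nonlinearity beyond a plain MLP.
        return self.project(F.gelu(gate) * value)
```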