Editorial

Sensors (Basel, Switzerland). 2022 May 10;22(10):3612. doi: 10.3390/s22103612

Image Sensing and Processing with Convolutional Neural Networks

Sonya Coleman 1,*, Dermot Kerr 1, Yunzhou Zhang 2
PMCID: PMC9146735  PMID: 35632021

Convolutional neural networks (CNNs) are a class of deep neural networks that leverage spatial information and are therefore well suited to classifying images across a range of applications. Their architecture is inspired by our understanding of processing within the visual cortex. CNNs provide an effective means of representing and processing image information and form a link between general feed-forward neural networks and adaptive filters. Two-dimensional CNNs are built from one or more layers of two-dimensional filters, optionally followed by non-linear activation functions and/or down-sampling, and they possess the key properties of translation invariance and spatially local connections (receptive fields). As a result, deep learning with CNNs has quickly become the state of the art for challenging computer vision applications.
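
The following PyTorch sketch illustrates this basic structure: banks of spatially local, shared filters followed by non-linear activations and down-sampling. It is a minimal, illustrative example rather than a model drawn from any of the contributed papers; the layer sizes and class count are arbitrary assumptions.

```python
# Minimal 2D CNN sketch: stacked convolutional filters with non-linear
# activations and down-sampling, as described above. Illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Each Conv2d applies a bank of spatially local, shared filters,
        # giving translation-equivariant feature maps (receptive fields).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # down-sampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.mean(dim=(2, 3))                    # global average pooling
        return self.classifier(x)

model = TinyCNN()
logits = model(torch.randn(1, 3, 64, 64))         # one 64x64 RGB image
print(logits.shape)                               # torch.Size([1, 10])
```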

Image quality is critical for many applications, and CNNs have a key role to play both in directly dealing with low-quality images and in image enhancement. Tchendjou et al. [1] presented a new objective method incorporating a CNN for the estimation of visual perceptual quality without the need for a reference image or prior assumptions about image quality. Wang et al. [2] used a generative adversarial network (GAN) to model the geometric disturbance caused by attitude jitter and to detect it, revealing the considerable potential of GAN-based methods for analysing attitude jitter in remote sensing images. Han et al. [3] proposed a deep supervised residual dense network for underwater degraded images, which uses residual dense blocks to enhance features along with an encoder and decoder to reduce the differences between them. Xiao et al. [4] treated blur detection as an image segmentation problem in which a multi-scale dilated convolutional neural network (MSDU-net) extracts features with dilated convolutions and a U-shaped architecture fuses the different-scale features to support segmentation. Yang et al. [5] proposed a novel deeply recursive low- and high-frequency fusing network for single-image super-resolution (SISR) that adopts a parallel-branch structure with a focus on reducing computational and memory requirements.
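
As an illustration of the multi-scale dilated convolution idea underlying MSDU-net [4], the sketch below fuses parallel branches with increasing dilation rates. It is a hedged approximation of the general technique, not the authors' published architecture; the channel counts and dilation rates are assumptions.

```python
# Rough sketch of multi-scale feature extraction with dilated convolutions.
# Parallel branches with increasing dilation enlarge the receptive field
# without down-sampling, capturing structure (e.g., blur) at several scales.
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)                    # assumed dilation rates
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)                   # fused multi-scale features

block = MultiScaleDilatedBlock()
y = block(torch.randn(1, 3, 128, 128))
print(y.shape)                                    # torch.Size([1, 16, 128, 128])
```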

CNNs can also play a leading role in environmental applications. For example, pollution in the form of litter in the natural environment is one of the great challenges of our time. Córdova et al. [6] developed an automated litter detection system that can help assess waste occurrences in the environment; their comparative study of state-of-the-art CNN architectures highlights the role CNNs can play in supporting this task. Similarly, Wei et al. [7] developed models for predicting wind speed and wave height near port coasts during typhoon periods, combining gated recurrent unit (GRU) neural networks with CNNs to formulate the typhoon-induced wind and wave height prediction models. Wu et al. [8] targeted the detection of specific crop types from crowdsourced road-view photos and clearly demonstrated the accuracy of a deep CNN for this task. Xu et al. [9] presented an accurate and robust approach to road damage detection, which is essential for public transportation safety, and Chou et al. [10] developed a smart dredging construction-site system that automates audit work at the control point managing trucks in river dredging areas.
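
The CNN-GRU combination used in [7] can be sketched roughly as follows: a small CNN encodes each spatial input frame and a GRU models the temporal sequence of encoded features. This is an assumption-laden illustration of the general hybrid pattern, not the published model; the input shapes, layer sizes, and single scalar forecast are placeholders.

```python
# Hedged sketch of a CNN + GRU hybrid for sequence forecasting.
import torch
import torch.nn as nn

class CNNGRUForecaster(nn.Module):
    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        # CNN encodes each spatial frame (e.g., a remote sensing snapshot)
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # -> (B*T, 8, 1, 1)
        )
        # GRU models the temporal evolution of the encoded frames
        self.gru = nn.GRU(input_size=8, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # e.g., wind speed or wave height

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])              # forecast from last time step

model = CNNGRUForecaster()
pred = model(torch.randn(2, 6, 1, 32, 32))         # 2 sequences of 6 frames
print(pred.shape)                                  # torch.Size([2, 1])
```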

Healthcare is an important application area in which AI, and CNNs in particular, can have an impact. 5G-IoT plays a crucial role in e-health applications, and to this end, Anand et al. [11] proposed a new CNN-based deep learning model to detect malware attacks. Barros et al. [12] presented a hybrid model that classifies lung ultrasound videos captured by convex transducers to diagnose COVID-19, achieving an average accuracy of 93% and sensitivity of 97%. The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular screening tool for cognitive function; Park et al. [13] presented a mobile phone application, mCDT, together with a novel automatic, qualitative scoring method based on deep learning that can help differentiate dementia. Alsamadony et al. [14] applied deep CNNs to improve the quality of rock CT images while simultaneously reducing exposure times by more than 60%; the approach is applicable to any computed tomography technology. Ankita et al. [15] combined convolutional layers with long short-term memory (LSTM) for human activity recognition (HAR), achieving an accuracy of 97.89%, with applications in assisted living and healthcare.
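
A minimal sketch of the convolution-plus-LSTM pattern for HAR described in [15] is given below: 1D convolutions extract local temporal features from inertial sensor windows and an LSTM captures longer-range structure. The sensor channel count, window length, and number of activity classes are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of convolutional layers combined with an LSTM for HAR.
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    def __init__(self, sensor_channels: int = 6, num_classes: int = 6):
        super().__init__()
        # 1D convolutions extract local temporal features from each window
        self.conv = nn.Sequential(
            nn.Conv1d(sensor_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM captures longer-range temporal structure
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sensor_channels, time)
        feats = self.conv(x).permute(0, 2, 1)     # -> (batch, time/2, 32)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])        # activity class logits

model = ConvLSTMHAR()
logits = model(torch.randn(4, 6, 128))            # 4 windows of 128 samples
print(logits.shape)                               # torch.Size([4, 6])
```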

Robotics is another important application area for CNNs; to help robots grasp specific objects in multi-object scenes, the powerful feature extraction capabilities of CNNs have been exploited. In contrast to anchor-based grasp detection algorithms, Li et al. [16] developed a keypoint-based scheme, demonstrating that a robot can grasp the target in single-object and multi-object scenes with overall success rates of 94% and 87%, respectively.
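
The anchor-free, keypoint-based idea can be sketched as a fully convolutional network that predicts per-pixel heatmaps whose peaks mark candidate grasp keypoints, rather than regressing anchor boxes. The sketch below is loosely inspired by [16]; the backbone, number of keypoints, and input size are assumptions and do not reproduce the authors' scheme.

```python
# Hedged sketch of keypoint-heatmap prediction for anchor-free grasp detection.
import torch
import torch.nn as nn

class GraspKeypointNet(nn.Module):
    def __init__(self, num_keypoints: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # One heatmap per keypoint; peaks mark candidate grasp points
        self.heatmap_head = nn.Conv2d(32, num_keypoints, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.heatmap_head(self.backbone(image)))

net = GraspKeypointNet()
heatmaps = net(torch.randn(1, 3, 224, 224))
print(heatmaps.shape)                             # torch.Size([1, 2, 224, 224])
```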

This Special Issue provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of recent CNN developments, applications of CNNs to computer vision tasks, and associated developments in CNN architectures, processing components, connective structures, and learning mechanisms, as well as ways of dealing with CNN constraints with respect to data preparation and training.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Tchendjou G.T., Simeu E. Visual Perceptual Quality Assessment Based on Blind Machine Learning Techniques. Sensors. 2021;22:175. doi: 10.3390/s22010175.
2. Wang Z., Zhang Z., Dong L., Xu G. Jitter Detection and Image Restoration Based on Generative Adversarial Networks in Satellite Images. Sensors. 2021;21:4693. doi: 10.3390/s21144693.
3. Han Y., Huang L., Hong Z., Cao S., Zhang Y., Wang J. Deep Supervised Residual Dense Network for Underwater Image Enhancement. Sensors. 2021;21:3289. doi: 10.3390/s21093289.
4. Xiao X., Yang F., Sadovnik A. MSDU-Net: A Multi-Scale Dilated U-Net for Blur Detection. Sensors. 2021;21:1873. doi: 10.3390/s21051873.
5. Yang C., Lu G. Deeply Recursive Low- and High-Frequency Fusing Networks for Single Image Super-Resolution. Sensors. 2020;20:7268. doi: 10.3390/s20247268.
6. Córdova M., Pinto P., Hellevik C.C., Alaliyat S.A.-A., Hameed I.A., Pedrini P., Torres R. Litter Detection with Deep Learning: A Comparative Study. Sensors. 2022;22:548. doi: 10.3390/s22020548.
7. Wei C.-C., Chang H.-C. Forecasting of Typhoon-Induced Wind-Wave by Using Convolutional Deep Learning on Fused Data of Remote Sensing and Ground Measurements. Sensors. 2021;21:5234. doi: 10.3390/s21155234.
8. Wu F., Wu B., Zhang M., Zeng H., Tian F. Identification of Crop Type in Crowdsourced Road View Photos with Deep Convolutional Neural Network. Sensors. 2021;21:1165. doi: 10.3390/s21041165.
9. Xu H., Chen B., Qin J. A CNN-Based Length-Aware Cascade Road Damage Detection Approach. Sensors. 2021;21:689. doi: 10.3390/s21030689.
10. Chou J.-S., Liu C.-H. Automated Sensing System for Real-Time Recognition of Trucks in River Dredging Areas Using Computer Vision and Convolutional Deep Learning. Sensors. 2021;21:555. doi: 10.3390/s21020555.
11. Anand A., Rani S., Anand D., Aljahdali H.M., Kerr D. An Efficient CNN-Based Deep Learning Model to Detect Malware Attacks (CNN-DMA) in 5G-IoT Healthcare Applications. Sensors. 2021;21:6346. doi: 10.3390/s21196346.
12. Barros B., Lacerda P., Albuquerque C., Conci A. Pulmonary COVID-19: Learning Spatiotemporal Features Combining CNN and LSTM Networks for Lung Ultrasound Video Classification. Sensors. 2021;21:5486. doi: 10.3390/s21165486.
13. Park I., Lee U. Automatic, Qualitative Scoring of the Clock Drawing Test (CDT) Based on U-Net, CNN and Mobile Sensor Data. Sensors. 2021;21:5239. doi: 10.3390/s21155239.
14. Alsamadony K.L., Yildirim E.U., Glatz G., Waheed U.B., Hanafy S.M. Deep Learning Driven Noise Reduction for Reduced Flux Computed Tomography. Sensors. 2021;21:1921. doi: 10.3390/s21051921.
15. Ankita, Rani S., Babbar H., Coleman S., Singh A., Aljahdali H.M. An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones. Sensors. 2021;21:3845. doi: 10.3390/s21113845.
16. Li T., Wang F., Ru C., Jiang Y., Li J. Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes. Sensors. 2021;21:2132. doi: 10.3390/s21062132.
