The rapid growth of computer vision and machine learning technologies has fueled intelligent sensing systems that make sense of visual sensory data in order to handle complex and difficult real-world sense-making challenges. Recent developments in machine learning algorithms allow us to interpret visual sensory data more effectively, and significant research attention is now being paid to problems in this field, such as visual surveillance and smart cities.
This Special Issue offers a selection of twelve high-quality research articles that tackle the major difficulties in computer vision and machine learning for intelligent sensing systems from both theoretical and practical standpoints. It includes five papers on intelligent sensing techniques [1,2,3,4,5], five foundational investigations into sense-making methods [6,7,8,9,10], and two on particular applications of intelligent sensing systems, in autonomous driving [11] and augmented reality [12].
Intelligent sensing techniques
Kokhanovskiy et al. [1] demonstrated the application of deep neural networks to processing the reflectance spectrum of a fiber-optic temperature sensor.
Shiba et al. [2] proposed event-collapse metrics for contrast maximization frameworks, derived from first principles of space–time deformation based on differential geometry and physics, which provided state-of-the-art results on several event-based computer vision tasks.
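The core idea of contrast maximization is that events warped by the correct motion align along sharp edges, so an image of warped events has high contrast (e.g., variance). The following minimal NumPy sketch illustrates that generic objective on synthetic events; it is not the authors' collapse metrics, and the event layout, velocities, and image size are purely illustrative assumptions.

```python
import numpy as np

def contrast(events, v, shape=(32, 32)):
    """Warp events (x, y, t) by candidate velocity v and return the
    variance (contrast) of the resulting image of warped events."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.clip(np.round(x - v[0] * t).astype(int), 0, shape[1] - 1)
    yw = np.clip(np.round(y - v[1] * t).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (yw, xw), 1.0)  # accumulate event counts per pixel
    return img.var()

# Synthetic events from a vertical edge moving with velocity (5, 0) px/s
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 500)
y0 = rng.uniform(5, 27, 500)
events = np.stack([10 + 5 * t, y0, t], axis=1)

# Warping with the true velocity focuses the events into one sharp
# column, so its contrast exceeds that of the zero-motion hypothesis.
assert contrast(events, (5, 0)) > contrast(events, (0, 0))
```

In a full method, this scalar objective would be maximized over the motion parameters; the event-collapse problem arises because some warps can trivially concentrate all events, which is what the metrics in [2] are designed to penalize.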
Chen et al. [3] presented a system that integrates mobile edge computing and simultaneous wireless information and power transfer (SWIPT) technologies to improve the service supply capability of wireless sensor network (WSN)-assisted Internet of Things (IoT) applications.
Niu et al. [4] proposed a new fusion network to mitigate the two most common types of sensor noise, i.e., depth noise and pose noise.
Hashmani et al. [5] presented a novel extension of SLIC (simple linear iterative clustering) superpixels based on a hybrid distance measure that retains content-aware information for semi-dark images.
Intelligent sense-making techniques
Le and Scherer [6] performed a comprehensive survey of studies, methods, datasets, and results for human segmentation and tracking in video.
Tran et al. [7] proposed a heuristic attention representation learning framework based on a joint embedding architecture, in which two neural networks are trained to produce similar embeddings for different augmented views of the same image.
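The joint-embedding idea can be illustrated with a toy example: two augmented views of the same input pass through a shared encoder, and the training objective rewards their embeddings for being similar. The sketch below uses a random linear encoder and a negative-cosine-similarity loss purely for illustration; it is not the authors' framework, and every name and parameter here is an assumption.

```python
import numpy as np

def encode(x, W):
    """Toy shared encoder: linear projection plus L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def similarity_loss(z1, z2):
    """Negative mean cosine similarity between paired embeddings.
    Minimizing it pulls embeddings of two views of the same image
    together; its minimum is -1 (perfect alignment)."""
    return -np.mean(np.sum(z1 * z2, axis=-1))

rng = np.random.default_rng(0)
images = rng.normal(size=(8, 16))                       # a batch of "images"
view1 = images + 0.05 * rng.normal(size=images.shape)   # augmentation 1
view2 = images + 0.05 * rng.normal(size=images.shape)   # augmentation 2

W = rng.normal(size=(16, 4))  # shared encoder weights
loss = similarity_loss(encode(view1, W), encode(view2, W))
# Because the two views differ only by small perturbations, the loss
# is already close to its minimum of -1 even for a random encoder.
```

Methods built on this architecture differ mainly in how they prevent the trivial solution where the encoder maps everything to one point; the heuristic attention mechanism in [7] concerns how the augmented views are constructed.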
Zaferani et al. [8] proposed a method for the automatic hyper-parameter tuning of a stacked asymmetric auto-encoder to extract personality perception from speech.
Hu et al. [9] developed two attention modules that work together to extract coordination characteristics during motion and to focus the model's attention on the more important joints.
Oh et al. [10] proposed estimating gaze by detecting eye-region landmarks from a single eye image. Their method learns representations of images at various resolutions and uses a self-attention module to obtain a refined feature map with better spatial information.
Applications of intelligent sensing systems
Song and Lee [11] studied autonomous driving and proposed a novel algorithm for online self-calibration between sensors using voxels and three-dimensional convolution kernels.
Moreno-Armendáriz et al. [12] described a system of deep neural networks that analyzes customers' facial characteristics, together with the ambient temperature, to generate a personalized advertisement for potential buyers passing in front of a beverage establishment.
In conclusion, through the wide range of research presented in this Special Issue, we hope to stimulate fundamental and practical research on applying computer vision and machine learning to intelligent sensing systems.
Conflicts of Interest
The author declares no conflict of interest.
References
1. Kokhanovskiy A., Shabalov N., Dostovalov A., Wolf A. Highly Dense FBG Temperature Sensor Assisted with Deep Learning Algorithms. Sensors. 2021;21:6188. doi:10.3390/s21186188.
2. Shiba S., Aoki Y., Gallego G. Event Collapse in Contrast Maximization Frameworks. Sensors. 2022;22:5190. doi:10.3390/s22145190.
3. Chen F., Wang A., Zhang Y., Ni Z., Hua J. Energy Efficient SWIPT Based Mobile Edge Computing Framework for WSN-Assisted IoT. Sensors. 2021;21:4798. doi:10.3390/s21144798.
4. Niu Z., Fujimoto Y., Kanbara M., Sawabe T., Kato H. DFusion: Denoised TSDF Fusion of Multiple Depth Maps with Sensor Pose Noises. Sensors. 2022;22:1631. doi:10.3390/s22041631.
5. Hashmani M.A., Memon M.M., Raza K., Adil S.H., Rizvi S.S., Umair M. Content-Aware SLIC Super-Pixels for Semi-Dark Images (SLIC++). Sensors. 2022;22:906. doi:10.3390/s22030906.
6. Le V.-H., Scherer R. Human Segmentation and Tracking Survey on Masks for MADS Dataset. Sensors. 2021;21:8397. doi:10.3390/s21248397.
7. Tran V.N., Liu S.-H., Li Y.-H., Wang J.-C. Heuristic Attention Representation Learning for Self-Supervised Pretraining. Sensors. 2022;22:5169. doi:10.3390/s22145169.
8. Zaferani E.J., Teshnehlab M., Khodadadian A., Heitzinger C., Vali M., Noii N., Wick T. Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception. Sensors. 2022;22:6206. doi:10.3390/s22166206.
9. Hu K., Ding Y., Jin J., Xia M., Huang H. Multiple Attention Mechanism Graph Convolution HAR Model Based on Coordination Theory. Sensors. 2022;22:5259. doi:10.3390/s22145259.
10. Oh J., Lee Y., Yoo J., Kwon S. Improved Feature-Based Gaze Estimation Using Self-Attention Module and Synthetic Eye Images. Sensors. 2022;22:4026. doi:10.3390/s22114026.
11. Song J., Lee J. Online Self-Calibration of 3D Measurement Sensors Using a Voxel-Based Network. Sensors. 2022;22:6447. doi:10.3390/s22176447.
12. Moreno-Armendáriz M.A., Calvo H., Duchanoy C.A., Lara-Cázares A., Ramos-Diaz E., Morales-Flores V.L. Deep-Learning-Based Adaptive Advertising with Augmented Reality. Sensors. 2021;22:63. doi:10.3390/s22010063.
