Abstract
In this study, an improved flame edge detector based on a convolutional neural network (CNN) is proposed. The method generates clear edge maps from flame images effectively. The network architecture is based primarily on VGG16: the last two max-pooling operators and all fully connected layers of VGG16 were removed, and the remainder was taken as the backbone. The feature maps output by the five convolution stages are upsampled to the size of the input image and fused into the final edge image. The error between the fused image and the label image is computed and back-propagated, forming a weakly supervised model. Trained on the open BSDS500 dataset, the network reaches an ODS F-measure of 0.810. Experiments were carried out on a variety of flame and fire images, including butane–air, oxygen–ethanol, energetic-material, and oxygen–acetylene premixed jet flames, and the method was also verified on infrared thermograms. The results demonstrate the effectiveness and robustness of the proposed algorithm.
1. Introduction
Flame temperature is an important parameter in the combustion process.1 Temperature is one of the seven base quantities of the international system of units and has important physical and chemical significance; improving the accuracy of flame temperature measurement is therefore of great value in military applications and industrial fields such as aviation, aerospace,2 nuclear explosion,3 metallurgy,4 and thermal power generation.5 However, in complex temperature field measurements subject to high pressure, high spin, high impact, strong noise, and environmental factors that couple strongly into the test system, the temperature often cannot be measured reliably.
As an important step in flame image processing, flame edge detection often lays the foundation for flame temperature measurement and three-dimensional temperature field reconstruction. First, a well-defined flame edge can be used to directly calculate the position, shape, size, and other characteristic parameters of the flame. Second, flame edge processing can filter out background noise, which minimizes its influence and significantly reduces the amount of data in 3D reconstruction. Third, edge information plays an important role in fire safety and management, where abnormal conditions such as fires must be reported in real time during monitoring. In addition, edges can be used to separate a group of flames, which helps in multi-flame monitoring in industry and improves energy efficiency.
In the last few decades, several techniques have been proposed for identifying fire and flame image edges. Bheemul et al.6 proposed an effective flame contour extraction technique, which obtains the complete flame contour by locating the point of greatest gray-level variation as a peak point and connecting successive peak points. However, this technique is only suitable for stable and simple flame images. Toreyin et al.7 proposed an online-learning fire detection method based on the hidden Markov model and a wavelet-based temporal flicker model, which successfully detected fire in real-time video and drastically reduced the false alarms issued by earlier methods. Jiang and Wang8 proposed an improved Canny edge detector based on adaptive smoothing combined with the geometry of the flame, which was used to detect fire regions in large-space fire videos. Chacon-Murguia and Perez-Vargas9 detected and analyzed fire information in video through the analysis of shape regularity and intensity saturation features. Qiu et al.10 proposed an adaptive edge detection algorithm for flame image processing based on the Sobel edge detection algorithm; it adjusts the high and low thresholds according to the scene and has good robustness. Gupta and Gaidhane11 proposed a simple and robust method for flame image edge detection based on the local binary pattern and thresholding. The method first acquires the flame contour through the local binary pattern and then makes the flame edge clearly visible by a double-threshold adjustment. Gaidhane and Hote12 proposed a flame edge detection method based on the local binary pattern, double thresholding, and Levenberg–Marquardt optimization techniques, using non-maximum suppression to obtain thinner and clearer flame edges.
Although these methods have their own advantages for specific application scenarios, such as fire detection or shape reconstruction in complex backgrounds or detecting early fires and triggering fire alarms, they all have limitations. For example, some of the detected flame edges were unclear, discontinuous, or did not match the actual flame shape. To measure the size, shape, and geometric characteristics of a flame, it is necessary to obtain edges that are clear, thin, continuous, and as close as possible to the true flame shape. An effective and efficient technique remains a challenge for image processing researchers.
In this study, an edge detection operator for flame edge detection was designed based on a convolutional neural network (CNN). The network architecture is based primarily on the VGG16 network: the last two max-pooling operators and all fully connected layers were removed, and the remainder forms our backbone. The network was trained on the BSDS500 dataset and validated on a variety of flame images. The results show that our method can effectively detect flame edges.
2. Overview of Edge Detection Techniques
Edges are an important feature of an image. In general, an image is regarded as the input signal, and the edge of the image refers to discontinuous points or a collection of points with drastic changes in the gray value of the image, which can be represented by the first and second derivatives in mathematics. An image can be represented as a two-dimensional function f(x, y). Its gradient can be defined as
∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ (1)
where gx and gy are the derivatives of f in the x and y directions.
The magnitude of this vector can be expressed as
∇f = mag(∇f) = (gx² + gy²)^(1/2) (2)
In general, the magnitude can be approximated by absolute values, ∇f ≈ |gx| + |gy|; this avoids computing squares and square roots while preserving the derivative-like behavior. In practical calculation, the magnitude of the gradient is usually simply called the "gradient".13
A basic property of the gradient vector is that it points in the direction of the maximum rate of change of f at the coordinates (x, y); the angle at which this maximum rate of change occurs is
θ(x, y) = arctan(gy/gx) (3)
Image edge extraction applies convolution kernels to estimate the gradients in x and y. Summing the absolute values of the two convolved images yields the corresponding edge values; this is the Sobel edge detector.14 Figure 1 shows the convolution factors of Sobel in the x-direction and the y-direction.
Figure 1.
Convolution factor of Sobel in the (a) x-direction and the (b) y-direction.
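As a concrete illustration, the gradient computation of eqs 1–2 with the Sobel factors of Figure 1 can be sketched in a few lines of NumPy. The function name `sobel_edges` and the synthetic step image are our own illustrative choices, not code from this study:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude, approximated as |gx| + |gy| (eqs 1-2)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    # Correlate with the 3x3 Sobel factors of Figure 1 via array slicing:
    # right column minus left column for gx, bottom row minus top row for gy.
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
         - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
         - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    return np.abs(gx) + np.abs(gy)

# A vertical step edge: the response peaks at the intensity jump
# and vanishes in the uniform regions on either side.
step = np.zeros((5, 6))
step[:, 3:] = 1.0
mag = sobel_edges(step)
```

The absolute-value approximation keeps the response proportional to the local intensity jump without the cost of squares and square roots, as noted in the text.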
The Sobel and Prewitt operators15 are first-order differential operators developed based on the Roberts operator. They have a certain inhibitory effect on noise, but their edge-positioning accuracy is not high.
Because first-order differential operators cannot effectively identify edges when processing images with slowly varying grayscale, second-order differential operators are often better suited for edge detection. Second-order differential operators mainly include the Laplacian, LOG, DOG, and Canny operators. Among them, the Canny edge detection algorithm16 is a multistage edge detection algorithm developed by John F. Canny in 1986, which is often regarded as the optimal algorithm for edge detection. Compared with other edge detection algorithms, its edge recognition accuracy is much higher.
Generally, the purpose of edge detection is to significantly reduce the image data size while preserving the original image attributes. Currently, many algorithms are available for edge detection. Although the Canny algorithm has a long history, it can be said to be a standard algorithm for edge detection and is still widely used in research.
The brief steps of the Canny operator are as follows: denoise; compute gradients; apply non-maximum suppression to filter out non-edge pixels; apply double thresholding to classify strong and weak edge candidates; and use hysteresis to trace the final boundaries.
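The final two steps above, double thresholding and hysteresis, can be illustrated with a minimal NumPy sketch. The function name and thresholds are illustrative, and a full implementation (e.g., OpenCV's `cv2.Canny`) also performs the smoothing and non-maximum suppression steps omitted here:

```python
import numpy as np

def hysteresis(mag, low, high):
    """Double threshold + hysteresis: keep weak pixels only if they
    connect (8-neighborhood) to a strong pixel."""
    weak = mag >= low
    edges = mag >= high            # strong seeds (a subset of weak)
    while True:
        p = np.pad(edges, 1)
        grown = np.zeros_like(edges)
        for dr in (-1, 0, 1):      # grow seeds into their 8-neighborhoods
            for dc in (-1, 0, 1):
                grown |= p[1 + dr:1 + dr + mag.shape[0],
                           1 + dc:1 + dc + mag.shape[1]]
        new = grown & weak
        if np.array_equal(new, edges):   # fixpoint: no more growth
            return new
        edges = new

# A weak chain (0.3, 0.5) attached to one strong pixel (0.9) survives;
# the isolated zero pixel does not.
mag = np.array([[0.0, 0.0, 0.0, 0.0],
                [0.3, 0.5, 0.9, 0.0],
                [0.0, 0.0, 0.0, 0.0]])
edges = hysteresis(mag, low=0.2, high=0.8)
```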
In the last decade, several edge-detection-based techniques have been applied to flame images and their analysis, as described in the Introduction.6−12 Although significant progress has been made, these studies still have limitations. In the era of deep learning, especially with the emergence of the convolutional neural network,17 CNNs have a powerful ability to automatically learn high-level representations of natural images, and edge detection using CNNs has become a new trend. In 2015, Xie et al.18 proposed holistically nested edge detection (HED) to detect and extract the edges of natural images. HED has the advantage of enlarging the receptive field of the feature maps and is robust; its disadvantage is the increased number of parameters. Because HED uses VGG-1619 as the backbone, only the layer before each pooling operation is taken as a feature, and the characteristics of the intermediate layers are ignored. With the appearance of ResNet, rich convolutional features were applied effectively to various visual tasks. In 2017, Liu et al.20 proposed a fully convolutional network that efficiently utilizes the richer convolutional features (RCF) of every CNN layer and optimizes the loss function, achieving the best edge detection performance of that year.
However, as shown in Figure 2 ((a) original image, (b) Sobel operator, (c) Prewitt operator, (d) LOG operator, (e) Laplacian operator, and (f) Canny operator), although these algorithms perform well on generic images, traditional edge detection methods cannot obtain continuous, clear, and thin flame edges. Because of the complexity of real images, ideal edges cannot always be obtained, which complicates downstream tasks such as temperature measurement and 3D reconstruction.
Figure 2.
Results of traditional edge detection methods. (a) Original images. (b) Sobel method. (c) Prewitt method. (d) LOG method. (e) Laplacian method. (f) Canny method.
3. Improved Flame Edge Detection
3.1. Flame Edge Detection Architecture
In this section, we introduce the architecture of our flame edge detection network. As shown in Figure 3, the architecture is based on the VGG1619 network and ResNet.21 VGG16 comprises 13 convolution layers and three fully connected layers, divided into five stages, with a pooling layer after each stage. VGG16 stacks small convolution kernels to increase the receptive field; this reduces the number of network parameters, while each successive convolution extracts progressively coarser features. The starting point of our architecture is the VGG16 network.
Figure 3.
Our flame edge detection network architecture. The input is an image of arbitrary size, and our network outputs an edge probability map of the same size.
The last stage and the fourth pooling layer are removed in our network. We use the same convolution blocks as VGG16, with a 3 × 3 kernel, a stride of 1, and one pixel of padding at the image border. Each convolution is followed by batch normalization and a ReLU activation. For the max-pooling layers, we use a 3 × 3 kernel with a stride of 2. Owing to the many convolution operations, some important edge information is seriously lost, as described in detail in DeepEdge.22 We therefore added a 1 × 1 convolution21 block after the third convolution block to average the subsequent convolution outputs, shown as the yellow square in the figure.
Our network is a multiscale fusion edge detection algorithm, like HED18 and RCF.20 In the main architecture, the size of a side output decreases after each max pooling, so we upsample each side output to the same size as the input image. The upsampling is realized using bilinear interpolation.
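The bilinear interpolation used for the side outputs can be written from scratch in NumPy; this is purely illustrative (in practice it is a framework built-in), and the function name and the align-corners-style sampling grid are our own choices:

```python
import numpy as np

def bilinear_upsample(x, out_h, out_w):
    """Upsample a 2-D feature map to (out_h, out_w) by bilinear interpolation."""
    h, w = x.shape
    rows = np.linspace(0, h - 1, out_h)   # sampling grid in source coordinates
    cols = np.linspace(0, w - 1, out_w)
    r0 = np.floor(rows).astype(int)       # the four neighboring source pixels
    c0 = np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, h - 1)
    c1 = np.minimum(c0 + 1, w - 1)
    fr = (rows - r0)[:, None]             # fractional offsets for the blend
    fc = (cols - c0)[None, :]
    top = x[np.ix_(r0, c0)] * (1 - fc) + x[np.ix_(r0, c1)] * fc
    bot = x[np.ix_(r1, c0)] * (1 - fc) + x[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr

# Upsampling a 2x2 map to 3x3 interpolates the midpoints linearly.
x = np.array([[0.0, 1.0], [2.0, 3.0]])
up = bilinear_upsample(x, 3, 3)
```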
The feature output of the first convolution stage is semantically weak and noisier, but it contains more location and detail information at a higher resolution.23 We use the output of the first convolution stage and the outputs after each pooling layer as side outputs; in total, five side outputs are concatenated and fused to form the flame edge image.
Compared with HED, our flame edge detection algorithm differs mainly in two points. On the one hand, HED considers only the output of each stage of the VGG network and ignores the information contained in the intermediate layers, whereas our algorithm utilizes the outputs of all the convolution layers, enabling us to capture more edge information. On the other hand, because important features are lost after many successive convolutions in HED, we add a 1 × 1 convolution block after the third convolution block to carry out averaged skip connections.
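The multiscale side-output design described above can be sketched in PyTorch at a heavily reduced scale. This toy three-stage network (`TinyEdgeNet`, our own name) is not the five-stage VGG16-based model of this study; it only demonstrates the pattern of pooling, 1 × 1 side convolutions, bilinear upsampling back to the input size, and fusion:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEdgeNet(nn.Module):
    """Toy 3-stage analogue of the multiscale side-output design."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            # conv -> batch norm -> ReLU, as in the paper's convolution blocks
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        self.stage1, self.stage2, self.stage3 = block(3, 8), block(8, 16), block(16, 16)
        self.pool = nn.MaxPool2d(3, stride=2, padding=1)   # 3x3 pooling, stride 2
        self.sides = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (8, 16, 16)])
        self.fuse = nn.Conv2d(3, 1, 1)                     # fuse concatenated side maps

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [self.stage1(x)]
        feats.append(self.stage2(self.pool(feats[-1])))
        feats.append(self.stage3(self.pool(feats[-1])))
        # each side output is upsampled back to the input size, then fused
        sides = [F.interpolate(side(f), size=(h, w), mode="bilinear",
                               align_corners=False)
                 for f, side in zip(feats, self.sides)]
        return torch.sigmoid(self.fuse(torch.cat(sides, dim=1)))

net = TinyEdgeNet().eval()
with torch.no_grad():
    edge_map = net(torch.rand(1, 3, 64, 80))
```

The sigmoid at the end yields a per-pixel edge probability map of the same spatial size as the input, matching the output described for Figure 3.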
3.2. Loss Functions
We denote the training set as S = {(Xn, Yn)}, n = 1, ..., N, where Xn is an input image, N is the total number of images in the dataset, and Yn = {yj, j = 1, ..., |Xn|} is the pixel-wise ground-truth label map. A sampled pair from the training set is denoted (X, Y).
Then, because the model is supervised and edge detection suffers from severe class imbalance, we apply the class-balancing technique described in HED, as follows
ℓ(W, w) = −β Σ_{j∈Y+} log Pr(yj = 1|X; W, w) − (1 − β) Σ_{j∈Y−} log Pr(yj = 0|X; W, w) (4)
where β balances the losses of the two classes, edge and non-edge: β = |Y−|/|Y| and 1 − β = |Y+|/|Y|, where |Y−| and |Y+| are the numbers of non-edge and edge pixels in the ground truth, respectively. W is the collection of all network parameters, w is the corresponding branch parameter set, and Pr(yj = 1|X; W, w) ∈ [0, 1] is the probability of predicting yj = 1 at pixel j, obtained by applying a standard sigmoid function σ(·).
A mixed-weight layer was added to the architecture to extract powerful high-level features. We denoted the parameters unique to the main branch as wm and the parameters unique to the side branch as ws. We then minimize eq 5 via a standard stochastic gradient descent.
L(W, wm, ws) = ℓm(W, wm) + α ℓs(W, ws) (5)

where ℓm and ℓs are the losses of the main branch and the side branches, and the weight α of the side-branch loss is set at different scales on different datasets.
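The class-balanced loss of eq 4 can be sketched in NumPy, assuming `pred` already holds sigmoid probabilities; the function name and the toy arrays are our own illustrative choices:

```python
import numpy as np

def balanced_bce(pred, label):
    """Class-balanced cross-entropy in the style of eq 4.

    pred: sigmoid probabilities in [0, 1]; label: binary edge ground truth.
    beta = |Y-|/|Y| up-weights the rare edge (positive) class.
    """
    eps = 1e-7                       # guards log(0)
    pos = label == 1
    beta = (~pos).sum() / label.size
    return -(beta * np.log(pred[pos] + eps).sum()
             + (1 - beta) * np.log(1 - pred[~pos] + eps).sum())

label = np.array([[1, 0], [0, 0]])
good = np.array([[0.9, 0.1], [0.1, 0.1]])   # close to the label
bad = np.array([[0.1, 0.9], [0.9, 0.9]])    # inverted prediction
loss_good, loss_bad = balanced_bce(good, label), balanced_bce(bad, label)
```

Because edge pixels are a small fraction of any image, β is close to 1, so the positive (edge) term dominates and the network is not rewarded for predicting "no edge" everywhere.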
4. Experiments on Flame Edge Detection
We implemented our network using the publicly available PyTorch library.24 The entire network was initialized with a pretrained VGG-16 model. Common datasets for image edge detection include MDBD,25 NYUD,26 CID,27 PASCAL,28 BSDS300, and BSDS500.29 In this article, we used the BSDS500 (Berkeley Segmentation Dataset), provided by the Computer Vision Group of the University of California, Berkeley, which can be used for image segmentation and edge detection. It contains 200 training, 100 validation, and 200 test images, each labeled by four to nine annotators. The global learning rate was set to 1 × 10^−4 and divided by 10 after every 1000 iterations. The Adam optimizer was adopted to train the network, with a batch size of eight. In the loss–epoch curve in Figure 4, the x axis is the number of training epochs and the y axis is the training loss on the BSDS500 dataset; our network converges after 4000 epochs.
Figure 4.
Loss–epoch curve on BSDS500 datasets.
We evaluated our flame edge detection performance using commonly used evaluation metrics: average precision (AP), the F-measure at a fixed contour threshold over the dataset (ODS), and the F-measure at the per-image best threshold (OIS). The F-measure is defined as
F = (2 × Precision × Recall)/(Precision + Recall) (6)
Human performance in edge detection corresponds to an ODS F-measure of 0.803; HED achieves 0.788, and the Canny operator 0.611. Figure 5 shows the precision/recall curves on the BSDS500 dataset. Recall, on the x axis, is the fraction of true edge pixels that are detected; precision, on the y axis, is the fraction of predicted edge pixels that are true edges. The F-measure is the harmonic mean of precision and recall, calculated from eq 6. Our flame edge detection method reaches an ODS F-measure of 0.810, better than the average human performance.
Figure 5.
Precision/recall curves on BSDS500 datasets.
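A simplified sketch of the ODS evaluation: one threshold is swept over the whole dataset, and the threshold maximizing the aggregate F-measure of eq 6 is kept. The real BSDS benchmark additionally matches predicted and ground-truth edges with a small spatial tolerance, which this pixel-exact illustration omits; all names are ours:

```python
import numpy as np

def ods_f(pred_maps, gt_maps, thresholds):
    """Fixed-contour-threshold (ODS) F-measure: one threshold shared
    across the dataset, pixel-exact matching."""
    best = 0.0
    for t in thresholds:
        tp = fp = fn = 0
        for p, g in zip(pred_maps, gt_maps):     # aggregate over all images
            b = p >= t
            tp += int(np.sum(b & g))
            fp += int(np.sum(b & ~g))
            fn += int(np.sum(~b & g))
        if tp == 0:
            continue                             # F undefined without hits
        prec, rec = tp / (tp + fp), tp / (tp + fn)
        best = max(best, 2 * prec * rec / (prec + rec))   # eq 6
    return best

# One toy "image": a mid-range threshold separates edges perfectly.
score = ods_f([np.array([[0.9, 0.1], [0.2, 0.8]])],
              [np.array([[True, False], [False, True]])],
              np.linspace(0.1, 0.9, 9))
```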
5. Results and Discussion
After training our flame edge detection network as described in Section 4, we evaluated the algorithm on an ethanol flame, a butane flame, and a quasi-static simulated explosion field. The flame images were taken by a high-sensitivity monochrome Dhyana 400BSI V2 camera with an effective area of 13.3 mm × 13.3 mm, a pixel size of 6.5 μm × 6.5 μm, a resolution of 4 MP (2048 (H) × 2048 (V)), and a frame rate of 74 fps over CameraLink. A narrow-band filter with a central wavelength of 768 nm and a bandwidth of 20 nm was installed in front of the camera. We captured the flame images at 40 frames with an exposure time of 15 μs. In Figure 6, the top row shows the original images of different flames captured by the Dhyana camera, and the following row shows the edge maps obtained by our flame edge detection method. Figure 6a shows the butane–air flame; the combustion device is described in ref 30. Figure 6b shows the oxygen–ethanol flame, Figure 6c the energetic-material flame, and Figure 6d the oxygen–acetylene premixed flame. Compared with the traditional methods shown in Figure 2, our algorithm clearly detects the flame edge, something traditional edge detection methods cannot achieve.
Figure 6.
Some results of flame edge detection. Original images (top row) and edge images (following row): (a) butane–air flame; (b) oxygen–ethanol flame; (c) energetic materials flame; (d) oxygen–acetylene premixed jet flame.
After extracting the edge of the oxygen–ethanol flame shown in Figure 6b, we applied an element doping method30 combined with laser-induced breakdown spectroscopy (LIBS)31,32 to measure the flame temperature. First, LIBS was used to analyze the emission spectrum of the energetic materials, and K2SO4 was selected as an appropriate dopant. A blackbody-furnace calibration experiment was conducted based on Planck's blackbody radiation law, and the relationships between blackbody cavity temperature and image gray value (T–G) and between radiance and image gray value (L–G) were calibrated. Finally, the temperature field of the oxygen–ethanol flame was calculated using the equation given in ref 30 (eq 7).
After a clear edge is extracted, all pixels outside the edge are removed, following our previous work, so that only the effective flame region is computed, greatly reducing the amount of calculation. We measured the flame at 10 points with standard thermocouples and verified our method, obtaining a maximum relative error of less than 4%.
We also validated edge detection on infrared thermography, using a DL-700 infrared thermal imager to capture the oxygen–acetylene flame generated by our high-temperature simulated fireball device. As shown in Figure 7a, the device consists of a main base with three guide rails evenly distributed around it, a lead screw driven by a stepper motor and its drive controller, and a bracket on the lead screw connected to a flame spray gun. Premixed acetylene–oxygen injected through the gun forms a stable spherical flame 30 cm wide and 20 cm high. The working band of the DL-700 infrared thermal imager is 8–14 μm, its standard lens is 25 mm, and its temperature range is 673.2–2273.2 K; it was placed 3 m away from the flame center for shooting. Figure 7 shows, from left to right, the working diagram of the high-temperature simulated fireball device, the system acquisition schematic, the infrared thermal image, and the edge map. From Figure 7d, it can be observed that our edge detection algorithm is also applicable to infrared thermal images.
Figure 7.
High temperature fireball simulation device. (a) Working diagram of high temperature simulation device. (b) Infrared calorimeter measurement principle diagram. (c) Infrared thermography captured by the DL-700. (d) Flame edge extraction by our algorithm.
By detecting the flame edge clearly, many parameters of the flame can be obtained: the flame area can be calculated from the pixels enclosed by the edge, the perimeter can be calculated from the edge points, and the damage radius of a thermobaric projectile can be analyzed effectively. The background can be removed to significantly reduce the amount of data required for flame temperature calculation33 and 3D reconstruction. Furnace flames can also be diagnosed with our flame edge detection so that corresponding measures can be taken to improve production efficiency.
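As an illustration of how area and perimeter follow from a closed edge map, a small NumPy sketch; the function name and the border flood-fill approach are our own (any region-filling routine, e.g., `scipy.ndimage.binary_fill_holes`, would do):

```python
import numpy as np

def flame_area_perimeter(edge):
    """Perimeter = edge pixel count; area = pixels enclosed by the edge,
    found by flood-filling the background from the image border."""
    free = ~edge
    outside = np.zeros_like(edge)
    outside[0, :] = outside[-1, :] = outside[:, 0] = outside[:, -1] = True
    outside &= free                      # seed: free pixels on the border
    while True:
        p = np.pad(outside, 1)
        grown = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                 | p[1:-1, :-2] | p[1:-1, 2:])   # 4-neighborhood growth
        new = grown & free
        if np.array_equal(new, outside):         # fixpoint reached
            break
        outside = new
    area = int(np.sum(~outside & free))  # enclosed interior pixels
    perimeter = int(np.sum(edge))
    return area, perimeter

# A closed 3x3 square ring: 8 edge pixels enclosing 1 interior pixel.
edge = np.zeros((5, 5), dtype=bool)
edge[1, 1:4] = edge[3, 1:4] = True
edge[1:4, 1] = edge[1:4, 3] = True
area, perimeter = flame_area_perimeter(edge)   # area = 1, perimeter = 8
```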
6. Conclusions
In this study, a flame edge detection method based on a convolutional neural network (CNN) was designed. The algorithm improves on HED and combines VGG16 and ResNet. Built with the PyTorch framework and trained on the BSDS500 dataset, our network reached an F-measure of 0.810, higher than that of the HED algorithm. We also compared it with traditional edge detection algorithms. Experimental results show that our algorithm clearly detects continuous flame edges and also extracts edges well from infrared thermal images. Our flame edge detection method can be used to further calculate the flame area, perimeter, and other related parameters, and it easily filters out the flame's background, reducing the amount of data calculation. Combined with the element doping method, the proposed algorithm was used to measure flame temperature with a maximum relative error of less than 4%. At present, our experiments target only small- and medium-sized flames in a laboratory environment; we will conduct further studies on forest fires and thermobaric bombs. Future studies could also fruitfully explore real-time flame edge detection.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (NSFC) (no. 52075504), the Fund for Shanxi 1331 Project Key Subject Construction, the Shanxi Postgraduate Education Innovation Program (no. 2021Y619), and the Fund for Shanxi Key Laboratory of Signal Capturing and Processing (no. ISPT2020-10). The authors thank all the reviewers, editors, and contributors for their contributions and suggestions as well as Xuanda Liu and all the members of the OSEC Laboratory.
The authors declare no competing financial interest.
References
- Chuah L. F.; Aziz A. R. A.; Yusup S.; Jiří J. K.; Awais B. Waste cooking oil biodiesel via hydrodynamic cavitation on a diesel engine performance and greenhouse gas footprint reduction. Chem. Eng. Trans. 2016, 50, 301–306. 10.3303/CET1650051.
- Najafian Ashrafi Z.; Ashjaee M. Temperature field measurement of an array of laminar premixed slot flame jets using Mach–Zehnder interferometry. Opt. Lasers Eng. 2015, 68, 194–202. 10.1016/j.optlaseng.2015.01.002.
- Nie B.; He X.; Zhang C.; Li X.; Li H. Temperature measurement of gas explosion flame based on the radiation thermometry. Int. J. Therm. Sci. 2014, 78, 132–144. 10.1016/j.ijthermalsci.2013.12.010.
- Thiébaud R.; Drezet J.-M.; Jean-Paul L. Experimental and numerical characterisation of heat flow during flame cutting of thick steel plates. J. Mater. Process. Technol. 2013, 214, 304–310. 10.1016/j.jmatprotec.2013.09.016.
- Zhang R.; Cheng Y.; Li Y.; Zhou D.; Cheng S. Image-Based Flame Detection and Combustion Analysis for Blast Furnace Raceway. IEEE Trans. Instrum. Meas. 2019, 68, 1120–1131. 10.1109/TIM.2017.2757100.
- Bheemul H. C.; Lu G.; Yan Y. Three-dimensional visualization and quantitative characterization of gaseous flames. Meas. Sci. Technol. 2002, 13, 1643–1650. 10.1088/0957-0233/13/10/318.
- Toreyin B. U.; Cetin A. E. Online Detection of Fire in Video. 2007 IEEE Conference on Computer Vision and Pattern Recognition.
- Jiang Q.; Wang Q. Large Space Fire Image Processing of Improving Canny Edge Detector Based on Adaptive Smoothing. 2010 International Conference on Innovative Computing and Communication and 2010 Asia-Pacific Conference on Information Technology and Ocean Engineering.
- Chacon-Murguia M. I.; Perez-Vargas F. J. Thermal Video Analysis for Fire Detection Using Shape Regularity and Intensity Saturation Features. Pattern Recognit. 2011, 118–126. 10.1007/978-3-642-21587-2_13.
- Qiu T.; Yan Y.; Lu G. An Autoadaptive Edge-Detection Algorithm for Flame and Fire Image Processing. IEEE Trans. Instrum. Meas. 2012, 61, 1486–1493. 10.1109/tim.2011.2175833.
- Gupta P.; Gaidhane V. A new approach for flame image edges detection. International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014).
- Gaidhane V. H.; Hote Y. V. An efficient edge extraction approach for flame image analysis. Pattern Anal. Appl. 2018, 1139–1150. 10.1007/s10044-018-0717-0.
- Gonzalez R. C.; Woods R. E. Digital Image Processing, 3rd ed.; Prentice Hall, 2008.
- Kittler J. On the accuracy of the Sobel edge detector. Image Vision Comput. 1983, 1, 37–42. 10.1016/0262-8856(83)90006-9.
- Torre V.; Poggio T. A. On Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 147–163. 10.1109/TPAMI.1986.4767769.
- Canny J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. 10.1109/TPAMI.1986.4767851.
- Chen S.; Hao X.; Pan B.; Huang X. Super-Resolution Residual U-Net Model for the Reconstruction of Limited-Data Tunable Diode Laser Absorption Tomography. ACS Omega 2022, 7, 18722–18731. 10.1021/acsomega.2c01435.
- Xie S.; Tu Z. Holistically-Nested Edge Detection. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV); IEEE, 2015; pp 1395–1403.
- Simonyan K.; Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. 2015. https://arxiv.org/abs/1409.1556.
- Liu Y.; Cheng M.-M.; Hu X.; Bian J.-W.; Zhang L.; Bai X.; Tang J. Richer Convolutional Features for Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1939–1946. 10.1109/tpami.2018.2878849.
- He K.; Zhang X.; Ren S.; Sun J. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
- Bertasius G.; Shi J.; Torresani L. DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
- Li C.; Zhong Q. A review of image edge detection algorithms based on deep learning. J. Comput. Appl. 2020, 40, 3280–3288. 10.11772/j.issn.1001-9081.2020030314.
- Paszke A.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems, 2019; pp 8026–8037.
- Mély D. A.; Kim J.; McGill M.; Guo Y.; Serre T. A systematic comparison between visual cues for boundary detection. Vision Res. 2016, 120, 93–107. 10.1016/j.visres.2015.11.007.
- Silberman N.; Hoiem D.; Kohli P.; Fergus R. Indoor Segmentation and Support Inference from RGBD Images. Lect. Notes Comput. Sci. 2012, 7576, 746–760. 10.1007/978-3-642-33715-4_54.
- Grigorescu C.; Petkov N.; Westenberg M. A. Contour detection based on nonclassical receptive field inhibition. IEEE Trans. Image Process. 2003, 12, 729–739. 10.1109/tip.2003.814250.
- Mottaghi R.; Chen X.; Liu X.; Cho N.-G.; Lee S.-W.; Fidler S.; Yuille A. The Role of Context for Object Detection and Semantic Segmentation in the Wild. 2014 IEEE Conference on Computer Vision and Pattern Recognition.
- Arbeláez P.; Maire M.; Fowlkes C.; Malik J. Contour Detection and Hierarchical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. 10.1109/tpami.2010.161.
- Liu X.; Hao X.; Xue B.; Tai B.; Zhou H. Two-Dimensional Flame Temperature and Emissivity Distribution Measurement Based on Element Doping and Energy Spectrum Analysis. IEEE Access 2020, 8, 200863–200874. 10.1109/access.2020.3035798.
- Hao Y. Y. X.; Zhang L.; Ren L. Application of Scikit and Keras Libraries for the Classification of Iron Ore Data Acquired by Laser-Induced Breakdown Spectroscopy (LIBS). Sensors 2020, 20, 1393–1404. 10.3390/s20051393.
- Hao X.; Sun P.; Tian Y.; Pan B. Effect of Plane Mirrors Combined with Au-Nanoparticle Confinement on the Spectral Properties of Fe Plasma Induced by Laser-Induced Breakdown. ACS Omega 2022, 23605. 10.1021/acsomega.2c02199.
- Bing X.; Xiaojian H.; Xuanda L.; Ziqi H.; Hanchang Z. Simulation of an NSGA-III Based Fireball Inner-Temperature-Field Reconstructive Method. IEEE Access 2020, 8, 43908–43919. 10.1109/ACCESS.2020.2977853.