Science Progress
. 2022 Jan 31;105(1):00368504221075465. doi: 10.1177/00368504221075465

Inversion of 2D cross-hole electrical resistivity tomography data using artificial neural network

Kean Thai Chhun 1, Sang Inn Woo 2, Chan-Young Yune 1
PMCID: PMC10364945  PMID: 35099315

Abstract

Geophysical inversion is often ill-posed because of its nonlinearity and the noisy, discrete nature of the measured data. To deal with these problems, artificial neural networks (ANNs), which are capable of handling nonlinear and complex problems, have been introduced for geophysical inversion. This study aims to invert 2D cross-hole electrical resistivity tomography data using a feedforward back-propagation neural network (FBNN) approach. To generate synthetic data to train the model, eighteen forward models (homogeneous media of 100 to 600 Ω.m and three different locations of a 10 Ω.m grouted bulb) with a dipole-dipole array configuration were adopted. The effect of the hyperparameters on the performance of the proposed FBNN model was examined. Datasets from laboratory testing were also inverted with the suggested FBNN model, and the error between the actual and predicted area in each model was determined. The results show that the suggested FBNN model, with the trainrp training function, 4 hidden layers, 75 neurons in each hidden layer, a learning rate of 0.8, a momentum coefficient of 1, and 5400 training data points, achieved higher performance and better accuracy than the other models. The error value of the FBNN model was about 15% to 18% lower than that of the conventional inversion model.

Keywords: Artificial neural network, cross-hole electrical resistivity tomography, inversion, forward, grouted bulb

Introduction

Cross-hole electrical resistivity tomography (CHERT) is a geophysical survey method that has been widely used to evaluate subsurface structures because it is nondestructive and highly cost-effective. 1 CHERT has been applied to tunnels, 2 ground improvement by chemical grouting,3,4 monitoring unsaturated flow and transport, 5 groundwater exploration, 6 geothermal studies, 7 observing seawater intrusion dynamics, 8 detecting underground cavities, 9 long-term CO2 monitoring, 10 and assessing the subsurface condition below foundations. 11 To obtain the electrical resistivity distribution of the subsurface, the finite element method 12 and the finite difference method 13 have typically been employed for apparent resistivity inversion; however, these methods generally require complicated algorithms and substantial computing power. There have been numerous attempts to solve this problem using expensive commercial software incorporating nonlinear inversion models.

Inversion methods are generally applied to the measured resistivity data to deduce the true resistivity distribution of the subsurface. Geophysical inversion is frequently a nonunique, nonlinear, ill-posed problem. The nonuniqueness and nonlinearity arise from the characteristics (noise and discreteness) of the data recorded in field and laboratory tests, while the ill-posedness is caused by insignificant and irrelevant parameters.14,15 For these reasons, interpreting resistivity data may yield unclear and unreliable results. Artificial neural networks (ANNs) have therefore been introduced and have gained popularity, especially for solving ill-posed, nonlinear, and nonunique problems.14,16 Unlike inversion approaches for resistivity data, which apply a fixed algorithm to approximate model parameters, an ANN optimizes a performance criterion based on a statistical approach, using the nonlinear relationship between input and output data, and allows the network to extract useful information about the problem. Since the late 1980s, ANNs have been applied in geophysics to data interpretation in different applications. 17 Spichak and Popova 18 developed an ANN model considering various factors to invert magnetotelluric data. Similarly, Ding et al. 19 utilized neural networks for electromagnetic-based modeling of passive components. Neyamadpour et al. 20 proposed a feed-forward backpropagation network model to invert 2D electrical resistivity tomography data from areas of high electrical resistivity contrast. An ANN approach to estimate porosity, permeability, and intrinsic attenuation from well-log data and seismic attributes was also developed by Iturrarán-Viveros and Parra. 21 Mazurkiewicz et al. 22 verified that a neural network improved the use of ground-penetrating radar (GPR) to locate burial sites.
The use of ANNs to invert seismic waveforms was examined by Fu et al., 23 who reported that the suggested method was fast and accurate compared to the traditional finite difference method. Shiak et al. 24 developed a feed-forward backpropagation network model to predict the life condition of a crude oil pipeline by considering the significant influencing factors, i.e., flow pressure, wall thickness, metal loss anomalies (depth, length, and width), and weld anomalies. As in the above-mentioned geophysics applications, an ANN can achieve a fast rate of convergence and high computational accuracy in solving inverse problems; however, the use of ANNs for the inversion of 2D cross-hole electrical resistivity tomography has not been fully studied and is still not well documented.

In the present study, a multilayer feedforward back-propagation neural network (FBNN) model was developed to invert 2D cross-hole electrical resistivity tomography data. The hyperparameters of the FBNN model, including the training function, number of neurons, learning rate, momentum coefficient, and training dataset, were also optimized. After the model was trained, a real dataset from a laboratory test was used, and the actual configuration in the laboratory test was compared with the areas predicted by both the conventional inversion model and the suggested FBNN model to verify its applicability.

Cross-hole electrical resistivity tomography

CHERT is a technique to generate 2D resistivity images of the subsurface between boreholes.5,8 A large quantity of apparent resistivity data is measured by injecting an electrical current into the ground and measuring the voltage difference between potential electrodes. From the injected current and the measured voltage, the apparent resistivity (ρ) is determined by equation (1).

ρ = G(ΔV/I) (1)

where ρ is the apparent resistivity, G is the geometric factor that depends on the electrode array configuration, I is the electrical current, and ΔV is the potential difference between the two potential electrodes.
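Equation (1) can be read off directly in code. The short Python sketch below is illustrative only (the study's own tooling was MATLAB-based, and the numerical values here are invented for demonstration):

```python
def apparent_resistivity(G, delta_V, I):
    """Equation (1): rho = G * (delta_V / I).
    G: geometric factor (m), delta_V: potential difference (V),
    I: injected current (A); returns apparent resistivity in ohm-m."""
    return G * delta_V / I

# Illustrative numbers only (not from the paper's measurements):
rho = apparent_resistivity(G=12.57, delta_V=0.5, I=0.05)  # about 125.7 ohm-m
```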

Basically, CHERT can be performed between two or more boreholes. In a CHERT survey, there are four basic electrode array configurations: pole-pole, pole-dipole, dipole-pole, and dipole-dipole, as indicated in Figure 1. 12 The geometric factor G for each array configuration can be determined from equations (2)–(5):

Figure 1.

Figure 1.

Electrode array configuration: (a) pole-pole, (b) pole-dipole, (c) dipole-pole, and (d) dipole-dipole.

pole-pole: G = 4π / (1/C1P1 + 1/C′1P1) (2)
pole-dipole: G = 4π / (1/C1P1 − 1/C1P2 + 1/C′1P1 − 1/C′1P2) (3)
dipole-pole: G = 4π / (1/C1P1 − 1/C2P1 + 1/C′1P1 − 1/C′2P1) (4)
dipole-dipole: G = 4π / (1/C1P1 − 1/C1P2 − 1/C2P1 + 1/C2P2 + 1/C′1P1 − 1/C′1P2 − 1/C′2P1 + 1/C′2P2) (5)

where CiPj is the distance between current electrode Ci and potential electrode Pj, and C′iPj is the distance between the image of electrode Ci (mirrored across the ground surface) and electrode Pj.
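For a concrete reading of equation (5), the dipole-dipole geometric factor for buried electrodes can be sketched as below, where each primed term uses the image of the current electrode mirrored across the ground surface. This is an illustrative Python sketch under that mirror-image interpretation, not code from the study:

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, z) electrode positions
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mirror(e):
    # Image electrode reflected across the ground surface z = 0
    return (e[0], -e[1])

def geometric_factor_dipole_dipole(C1, C2, P1, P2):
    # Equation (5): real-electrode terms plus image-electrode (primed) terms
    real = (1 / dist(C1, P1) - 1 / dist(C1, P2)
            - 1 / dist(C2, P1) + 1 / dist(C2, P2))
    image = (1 / dist(mirror(C1), P1) - 1 / dist(mirror(C1), P2)
             - 1 / dist(mirror(C2), P1) + 1 / dist(mirror(C2), P2))
    return 4 * math.pi / (real + image)
```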

Synthetic data preparation

The synthetic data (input, output, and testing data) have a significant influence on the results obtained with the proposed FBNN, which can achieve a fast convergence rate and an accurate configuration of the grouting area. As a primary step of data preparation, as shown in Figure 2, forward modeling was conducted using EM2DMODEL, developed at the Korea Institute of Geoscience and Mineral Resources (KIGAM). 25 The forward model, with 5% Gaussian noise added, was used to determine the electric potential and the apparent resistivity normalized by the applied electric current. Following the laboratory test conditions recently reported by Chhun and Yune, 26 a cylindrical grouted bulb in a model ground was simulated with a dipole-dipole array configuration of 17 electrodes (0.025 m electrode spacing). Homogeneous background soils with electrical resistivities of 100, 200, 300, 400, 500, and 600 Ω.m and a grouted bulb of 10 Ω.m were adopted as the test conditions. The grouted bulb, 10 cm in diameter, was simulated at three different locations (Figure 2); therefore, a total of eighteen datasets (6 background resistivities × 3 grouted bulb locations) were generated.

Figure 2.

Figure 2.

Forward model used to generate the synthetic data.

Based on the results of the forward modeling, the apparent resistivity distribution was determined using TomoDC, developed by Kim. 27 The output data from TomoDC contain the vertical and horizontal coordinates and the apparent resistivity at each location, which can be visualized as shown in Figure 3(a) and Table 1. Because 300 data points were obtained in each dataset, the training data comprised 5400 data points in total. As mentioned above, the apparent resistivity and its vertical (z-axis) and horizontal (x-axis) positions were used as the input data, while the true electrical resistivity at the center of each mesh element was used as the output data. 20 The true electrical resistivity was generated based on the distance between the two borehole electrodes, the depth of the borehole, the homogeneous background apparent resistivity, the electrical resistivity of the grouted bulb, and the dimensions of the grouted bulb (Figure 3(b)).

Figure 3.

Figure 3.

Synthetic data: (a) input data and (b) output data.

Table 1.

The un-normalized training and testing dataset.

Training datasets
i | x-position (m) | z-position (m) | ρ (Ω.m) | True resistivity (Ω.m)
1 | 0.01 | 0.01 | 75.87 | 100
2 | 0.03 | 0.01 | 85.07 | 100
3 | 0.05 | 0.01 | 85.13 | 100
4 | 0.07 | 0.01 | 85.56 | 100
… | … | … | … | …
299 | 0.21 | 0.49 | 102.28 | 100
300 | 0.23 | 0.49 | 92.28 | 100
Total number of data for 18 datasets = 5400

Testing datasets
i | x-position (m) | z-position (m) | ρ (Ω.m)
1 | 0.01 | 0.01 | 232.88
2 | 0.03 | 0.01 | 151.20
3 | 0.05 | 0.01 | 130.72
4 | 0.07 | 0.01 | 139.31
… | … | … | …
299 | 0.21 | 0.49 | 70.22
300 | 0.23 | 0.49 | 76.81
Total number of data for 1 dataset = 300
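The pairing of (x, z, ρ) input rows with true-resistivity targets described above can be sketched as follows. This is a hypothetical Python illustration (the study assembled these matrices in MATLAB, and the dictionary keys here are invented):

```python
def build_training_set(datasets):
    """Stack the forward-model datasets (300 points each) into one
    training matrix: inputs are (x, z, apparent resistivity), targets
    are the true resistivity at the mesh-element center."""
    X, y = [], []
    for ds in datasets:
        for xi, zi, ri, ti in zip(ds["x"], ds["z"], ds["rho_app"], ds["rho_true"]):
            X.append([xi, zi, ri])
            y.append(ti)
    return X, y
```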

Feedforward back-propagation neural networks algorithm

Feedforward back-propagation neural networks (FBNNs) are computational models with a high capability to solve complex and nonlinear problems. 28 The basic structure of an FBNN is inspired by the human brain, processing data through numerous interconnected neurons: the brain learns principally from experience, while a neural network learns from training. Neural network modeling has been applied to classification, prediction, data filtering, and data association. 29 A standard FBNN consists of an input layer, an output layer, and multiple hidden layers, as described in Figure 4 for the electrical resistivity survey. The FBNN solves a problem by repeatedly adjusting the weights of the network, and training proceeds in two main steps (Figure 4): a forward pass that propagates the inputs through the weighted connections, and a backward pass that determines the errors and updates the weights. Equation (6) presents the mathematical model of a single neuron, represented by an activation function (f) applied to the input data (xi), the attributed input weights (wi), and a bias term (b):

y = f(Σ_{i=1}^{n} w_i x_i + b) (6)

where y is the output value, f is the activation function, b is the bias, wi is the weight for input xi, and xi is the input value from node i in the previous layer.

Figure 4.

Figure 4.

Schematic diagram of the FBNN model.

The activation function plays an important role in training the FBNN model and is also used to transform an input layer of a node to an output layer. Moreover, different activation functions can be used depending on the application problems. As shown in Figure 5, the log-sigmoid transfer function, hyperbolic tangent sigmoid transfer function, and linear transfer function are commonly used activation functions for the FBNN. 30 The range of the log-sigmoid and hyperbolic tangent sigmoid transfer function is [0, 1] and [−1, 1], respectively.
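The three transfer functions in Figure 5 correspond to the logsig, tansig, and purelin functions of MATLAB's neural network toolbox; a minimal Python sketch of them is:

```python
import math

def logsig(x):
    # Log-sigmoid transfer function; output range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    # Hyperbolic tangent sigmoid transfer function; output range (-1, 1)
    return math.tanh(x)

def purelin(x):
    # Linear transfer function; output unbounded
    return x
```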

Figure 5.

Figure 5.

Most commonly used activation functions: (a) log-sigmoid transfer function, (b) hyperbolic tangent sigmoid transfer function, and (c) linear transfer function.

Figure 4 presents the schematic diagram of the multi-layer FBNN. The apparent resistivity values and their x-axis and z-axis positions were assigned as the input data, and the true resistivity values were assigned as the output data. Because normalization improves the numerical conditioning of the computations and the output accuracy, the synthetic data (input, output, and test data) were normalized to the range from 0 to 1 using equation (7).31,32 This restricts the size of the inputs passed to the activation function. At the end of the process, the output data are denormalized back to the original scale to obtain the final results. In this study, the log-sigmoid activation function, the most common activation function, was utilized in the hidden layers and the output layer.20,33 The number of hidden layers should be selected carefully, typically between 3 and 5, to avoid underfitting and overfitting; therefore, 4 hidden layers were selected based on the previous study by Haykin. 16 To evaluate the performance of the training model, the mean squared error (MSE) was utilized.

y′ = (y_max − y_min)(x′ − x_min)/(x_max − x_min) + y_min (7)

where y′ is the normalized data, ymax is the maximum normalized data, ymin is the minimum normalized data, x′ is the original data, xmax is the maximum original data, and xmin is the minimum original data.
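Equation (7) and its inverse, used to denormalize the network output back to resistivity units, can be sketched as:

```python
def normalize(x, x_min, x_max, y_min=0.0, y_max=1.0):
    # Equation (7): map x from [x_min, x_max] onto [y_min, y_max]
    return (y_max - y_min) * (x - x_min) / (x_max - x_min) + y_min

def denormalize(y, x_min, x_max, y_min=0.0, y_max=1.0):
    # Inverse of equation (7): recover the original scale
    return (y - y_min) * (x_max - x_min) / (y_max - y_min) + x_min
```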

Figure 6 shows the overall flow chart of the multi-layer FBNN model, which was developed in MATLAB. For the FBNN model to perform the 2D CHERT inversion effectively, the effects of the hyperparameters, including the training algorithm, the number of neurons in each hidden layer, the momentum coefficient, the learning rate, and the training dataset, were investigated. There are many types of training algorithms, and it is difficult to select the best one for a fast convergence rate and high accuracy on a specific problem; the main difference between training algorithms is the way the weight updates are computed. 34 Therefore, four training algorithms, namely Resilient backpropagation (trainrp), Levenberg-Marquardt (trainlm), Fletcher-Powell conjugate gradient (traincgf), and Variable learning rate gradient descent (traingdx), were investigated.

Figure 6.

Figure 6.

Flowchart of the FBNN model.

To investigate the influence of the training dataset on the performance of the FBNN model, three cases with different numbers of data points were examined, as shown in Figure 7. The eighteen datasets (5400 data points) were adopted as Case 1. Numerical procedures for generating the additional data points of Cases 2 and 3 were programmed in MATLAB based on the Case 1 datasets. For Case 2, in each dataset, an element i together with its adjacent elements j, k, l, and m formed the input data, with the value at element i as the output data, and was stored as a local matrix. Similarly, for Case 3, the local matrix consisted of element i with its adjacent elements j, k, l, m, n, o, p, and q as the input data, again with element i as the output. The resulting global matrices consisted of 25,668 and 44,678 data points for Cases 2 and 3, respectively.
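The neighbor-stencil construction behind Cases 2 and 3 can be sketched as follows. This is a hypothetical Python rendering of the idea (the study's version was programmed in MATLAB), assuming the points of a dataset are arranged on a rectangular grid:

```python
def augment_with_neighbors(grid, use_diagonals=False):
    """For each grid element i, pair it with its 4 edge neighbors
    (Case 2) or its 8 edge-plus-diagonal neighbors (Case 3) as input,
    with the value at i as the output. Border cells keep only the
    neighbors that exist."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if use_diagonals:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    rows, cols = len(grid), len(grid[0])
    samples = []
    for r in range(rows):
        for c in range(cols):
            neigh = [grid[r + dr][c + dc] for dr, dc in offsets
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            samples.append(([grid[r][c]] + neigh, grid[r][c]))
    return samples
```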

Figure 7.

Figure 7.

Generating different types of data points.

At the beginning of the algorithm, a performance goal of 1e-6 and a maximum of 1e5 epochs were assigned as stopping criteria for the training phase; no maximum training time was imposed, so training stopped once the performance goal was met within the maximum number of epochs. During the training process, about 85% of the total data were randomly selected for training, and the MSE in equation (8) was determined to statistically assess the performance of the model. After achieving the desired performance goal, the FBNN model proceeded to the validation stage using approximately 20% of the total data. Next, the FBNN model was tested against a laboratory experiment result in the testing stage. The error (|AEst − AAct|/AAct × 100) between the actual and predicted areas was then evaluated for the FBNN and inversion models.

MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)^2 (8)

where N is the number of data points, yi is the measured value, and y^i is the predicted value.
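Equation (8) and the area-error metric used in the testing stage can be sketched as:

```python
def mse(measured, predicted):
    # Equation (8): mean squared error over N data points
    n = len(measured)
    return sum((y - yh) ** 2 for y, yh in zip(measured, predicted)) / n

def area_error_pct(a_est, a_act):
    # Testing-stage error: |A_est - A_act| / A_act * 100
    return abs(a_est - a_act) / a_act * 100.0
```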

Results and discussion

Training

Effect of training function

This study examined the following training functions: Resilient backpropagation (trainrp), Levenberg-Marquardt (trainlm), Fletcher-Powell conjugate gradient (traincgf), and Variable learning rate gradient descent (traingdx). These training functions are based on backpropagation, with the weights updated after each epoch. The models employed herein consist of 18 datasets (5400 data points) and 4 hidden layers with 50 neurons in each hidden layer. The log-sigmoid transfer function was employed, and all FBNN computations were carried out on an Intel(R) Core(TM) i7-4790 CPU with 8 GB of installed memory (RAM).

The relationship between MSE and the number of epochs during training for each training function is plotted in Figure 8. This figure clearly shows that the MSE for all training functions decreases as the number of epochs increases, until the desired performance goal is reached. The analysis results for each training function are summarized in Table 2. The Levenberg-Marquardt training function took the most training time and required more memory than the other training functions, even though it required the fewest epochs and provided the lowest MSE. Based on computational time (Table 2), resilient backpropagation (trainrp) provided the most efficient network optimization, with the highest learning speed (epochs/time), for training the FBNN model. This result is consistent with the previous study by Neyamadpour et al. 20

Figure 8.

Figure 8.

Result of each type of training function.

Effect of neuron number in hidden layers

There is no established theory for selecting the number of neurons in hidden layers but, in general, the error of the FBNN model decreases as the number of neurons increases. 18 Thus, the effect of the number of neurons in the hidden layers of the FBNN model, with the resilient backpropagation training function and four hidden layers, was investigated by varying the number of neurons as 10, 25, 50, 75, and 100.

Figure 9 and Table 3 present the change of MSE with the number of neurons in the hidden layers. Increasing the number of neurons in the hidden layers gradually decreases the learning speed and the number of epochs of the FBNN model. As shown in Table 3, the MSE and the training time decrease as the number of neurons increases from 10 to 75. However, the MSE increases when the number of neurons exceeds 75. This result reveals that the FBNN model with more than 75 neurons may overfit.34,35 Typically, when the network contains many nodes (neurons) in a hidden layer, overfitting occurs because the amount of data in the training set is insufficient to train all neurons in the hidden layers. Therefore, an FBNN model with 75 neurons in each hidden layer was adopted based on the MSE and the learning speed.

Figure 9.

Figure 9.

Result for each number of neurons in each hidden layer.

Table 2.

Result of each type of training function.

Training function | Neurons per hidden layer | Epochs | Time (s) | Learning speed | MSE
Levenberg-Marquardt | 50 | 54 | 1205.56 | 0.045 | 0.0055
Resilient backpropagation | 50 | 1071 | 16.26 | 65.851 | 0.0059
Fletcher-Powell conjugate gradient | 50 | 1621 | 218.99 | 7.402 | 0.0101
Variable learning rate gradient descent | 50 | 10,530 | 166.36 | 63.298 | 0.0111

Effect of learning rate and momentum coefficient

The appropriate selection of the learning rate (η) and momentum coefficient (α) can significantly influence the performance (training speed and accuracy) of the FBNN model. Theoretically, η governs the speed at which the model learns by controlling the size of the correction term applied to adjust the neuron weights during training. The recommended range of η is typically from 0 to 1. 36 A large η allows the model to learn faster through large weight changes, while a small η leads to a significantly longer training time because of small weight changes. The α controls the contribution of the previous iteration's weight change to the current weight update. The effective range of α is between 0 and 1, the same as for η. A larger α commonly results in faster learning but can cause instability if set too high. The FBNN model examined here used the trainrp training function and 4 hidden layers with 75 neurons in each layer. The effects of the learning rate (0.2 to 0.99) and momentum coefficient (0.001 to 1) were investigated.
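The roles of η and α described above correspond to the classic gradient-descent-with-momentum update, sketched below. This is a generic illustration, not the trainrp rule itself (trainrp adapts step sizes per weight):

```python
def momentum_update(w, grad, prev_delta, eta, alpha):
    """One weight update: delta = -eta * dE/dw + alpha * previous delta.
    eta (learning rate) scales the correction applied to the weight;
    alpha (momentum coefficient) carries over part of the previous
    weight change to speed up and smooth convergence."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta
```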

The results of the FBNN model are summarized in Figures 10 and 11. These figures clearly show that both η and α strongly influence FBNN performance; theoretically, η governs the rate at which the model learns, and α controls the amount of weight change to improve the rate of convergence. 37 Increasing α from 0.001 to 1 slightly decreases the learning speed of the FBNN model for all values of η, within a range of 14.50 to 21.26 (Figure 10). According to Figure 11, the FBNN model with an α of 0.001 and an η of 0.9 clearly enabled efficient training of the network based on the statistical criteria. Moreover, compared with the previous studies by Singh et al., 38 Neyamadpour et al., 39 Karmarkar et al., 40 and Winiczenko et al., 41 the suggested values provide a better result.

Figure 10.

Figure 10.

Result of each type of α and η.

Figure 11.

Figure 11.

Result of each type of α and η.

Effect of training dataset

The training dataset can be characterized by its size (number of data points) and its distribution. The result of the FBNN model also depends on the number of datasets used in training.18,20,42 Theoretically, more training data does not necessarily yield a more accurate model, owing to the interaction between the synthetic data and the model; moreover, a high-quality training dataset can provide a high-performance ANN model. To investigate the effects of the training dataset on the performance of the FBNN model, an FBNN model with the trainrp training function, four hidden layers, 75 neurons in each hidden layer, an η of 0.8, and an α of 1 was adopted in this study.

As indicated in Figure 12, the MSE in all cases generally decreased as the number of epochs increased. The results are summarized in Table 4 and reveal that the MSE, the number of epochs, and the training time increase considerably with the number of data points because of over-training. The additional data points cause overfitting, in which the model performs well on the training data but not on the testing data. 43 This phenomenon is a general problem that cannot be completely avoided in supervised learning. Based on these results, Case 1 with 5400 data points provided satisfactory performance with less computation time and effort (Figure 12 and Table 4).

Table 3.

Result for each number of neurons in each hidden layer.

Number of neurons | Training function | Epochs | Time (s) | Learning speed | MSE
10 | trainrp | 1228 | 4.575 | 268.42 | 0.0116
25 | trainrp | 1252 | 10.17 | 123.11 | 0.0076
50 | trainrp | 1107 | 28.25 | 39.18 | 0.0052
75 | trainrp | 1090 | 50.86 | 21.43 | 0.0024
100 | trainrp | 1088 | 90.41 | 12.03 | 0.0037

Figure 12.

Figure 12.

Result of each type of training dataset.

Table 4.

Result of each type of training dataset.

Case study | Training data points | Epochs | Time (s) | Learning speed | MSE
Case 1 | 5400 | 1074 | 59.44 | 18.07 | 0.0021
Case 2 | 25,668 | 13,062 | 1777.48 | 7.35 | 0.0079
Case 3 | 44,678 | 51,299 | 12,278.67 | 4.18 | 0.0100

Testing

To evaluate the performance of the suggested FBNN model, the model (training function: trainrp; hidden layers: 4; number of neurons: 75-75-75-75; η: 0.8; α: 1; training data points: 5400) was tested with the laboratory testing datasets. The comparison between the inversion model and the proposed FBNN model was plotted based on the apparent resistivity in each model (Figures 13 and 14). As shown in Figures 13(a) and 14(a), a laboratory test was carried out to evaluate the grouted region using CHERT measurement. The grouted bulb (10 cm in diameter and 12.9 cm in height), containing 3% or 5% rCB concentration after one day of curing, was buried approximately 30 cm deep in saturated fine sand. The CHERT measurement was taken on a cross-section perpendicular to the grouted bulb (Figures 13(a) and 14(a)). A more detailed description of the experimental method and the calculated area of the grouted bulb region can be found in Chhun and Yune. 26

Figure 13.

Figure 13.

Comparison results in the case of a grouted bulb containing 3% rCB: (a) real test model, (b) inversion model, and (c) FBNN model.

Figure 14.

Figure 14.

Comparison results in the case of a grouted bulb containing 5% rCB: (a) real test model, (b) inversion model, and (c) FBNN model.

In the current study, the MiniSting R1 device with various output currents (1, 2, 5, 10, 20, 50, 100, 200, and 500 mA) was used to measure the apparent resistivity. Figures 13(b) and 14(b) present the laboratory testing results for the electrical resistivity distribution obtained with the inversion model based on the smoothness-constrained least-squares technique, which is commonly used for electrical resistivity inversion. These figures show electrical resistivity values of the saturated fine sand from 70 to 600 Ω.m, while the grouted bulb has an electrical resistivity lower than 50 Ω.m. The testing results of the suggested FBNN model are plotted in Figures 13(c) and 14(c). The location of the grouted bulb can be identified with both models, but the proposed FBNN model yielded a more accurate area of the grouted region (size, shape, and location) than the inversion model. Moreover, the error between the actual and estimated areas of the grouted bulb for the proposed model was about 15% and 18% lower than for the inversion model (Table 5). The mean absolute error (MAE) between the inversion and FBNN models for the grouted bulbs containing 3% and 5% rCB was about 59.75 and 52.23, respectively.

Table 5.

Comparison for the error value in each model.

Grouted bulb condition | Method | Actual area (cm2) | Predicted area (cm2) | Error (%) | MAE
3%rCB_1CT | Inversion model | 78.54 | 48.84 | 37.82 | 59.75
3%rCB_1CT | FBNN model | 78.54 | 62.80 | 20.05 | 59.75
5%rCB_1CT | Inversion model | 78.54 | 56.35 | 28.25 | 52.23
5%rCB_1CT | FBNN model | 78.54 | 68.37 | 12.94 | 52.23

Conclusions

In this study, a new method to invert 2D cross-hole electrical resistivity tomography data using a feedforward back-propagation neural network was developed. To develop the FBNN model, a series of synthetic datasets consisting of 5400 data points was generated using EM2DMODEL forward modeling with the dipole-dipole electrode array configuration. Hyperparameters such as the training function, number of neurons, learning rate, momentum coefficient, and training dataset were also examined. The suggested FBNN model was then compared with the conventional inversion model using laboratory testing results. Based on the training results, the resilient backpropagation (trainrp) algorithm was the most efficient training function for the inversion of 2D CHERT data compared to the other training functions. An FBNN model with 4 hidden layers of 75 neurons each, a learning rate of 0.8, a momentum coefficient of 1, and 5400 training data points yielded the most satisfactory results. The proposed FBNN model estimated the grouted bulb area accurately, with an error value 15% to 18% lower than that of the conventional inversion model. The proposed FBNN model therefore achieved a fast rate of convergence and high accuracy, showing a high capability for inverting 2D CHERT data.

Acknowledgements

This research was supported by a grant (code: 22-SCIP-C151438-04) from Construction Technologies Program funded by Ministry of Land, Infrastructure and Transport of Korean government and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1A6A1A03044326).

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Ministry of Land, Infrastructure and Transport of Korean government (grant number 22-SCIP-C151438-04) and by the Ministry of Education (2021R1A6A1A03044326).

Supplemental material: Supplemental material for this article is available online.

References

1. Green DJ, Ward SH. Preliminary design for multi-array borehole electrical geophysical method. Report DOE/SAN/12196-23. University of Utah Research Institute Earth Sciences Laboratory, 1986.
2. Bellmunt F, Marcuello A, Ledo J, et al. Time-lapse cross-hole electrical resistivity tomography monitoring effects of an urban tunnel. J Appl Geophys 2012; 87: 60–70.
3. Komine H. Evaluation of chemical grouted region by resistivity tomography. Proc Inst Civ Eng Ground Improv 2000; 4: 177–189.
4. Giao P, Cuong QN, Loke MH. Monitoring the chemical grouting in sandy soil by electrical resistivity tomography (ERT). In: 1st international workshop on geoelectrical monitoring, GELMON, Vienna, November 30–December 2, 93, pp.168–178.
5. Looms MC, Jensen KH, Binley A, et al. Monitoring unsaturated flow and transport using cross-borehole geophysical methods. Vadose Zone J 2008; 7: 227–237.
6. McNeill JD. Use of electromagnetic methods for groundwater studies. Geotech Environ Geophys 1990; 1: 191–218.
7. Burger HR, Burger DC, Burger HR. Exploration geophysics of the shallow subsurface (Vol. 8). New Jersey: Prentice Hall, 1992.
8. Palacios A, Ledo JJ, Linde N, et al. Time-lapse cross-hole electrical resistivity tomography (CHERT) for monitoring seawater intrusion dynamics in a Mediterranean aquifer. Hydrol Earth Syst Sci 2020; 24: 2121–2139.
9. Park MK, Park S, Yi MJ, et al. Application of electrical resistivity tomography (ERT) technique to detect underground cavities in a karst area of South Korea. Environ Earth Sci 2014; 71: 2797–2806.
10. Schmidt-Hattenberger C, Bergmann P, Labitzke T, et al. Permanent crosshole electrical resistivity tomography (ERT) as an established method for the long-term CO2 monitoring at the Ketzin pilot site. Int J Greenhouse Gas Control 2016; 52: 432–448.
11. Deceuster J, Delgranche J, Kaufmann O. 2D cross-borehole resistivity tomographies below foundations as a tool to design proper remedial actions in covered karst. J Appl Geophys 2006; 60: 68–86.
12. Bing Z, Greenhalgh SA. Cross-hole resistivity tomography using different electrode configurations. Geophys Prospect 2000; 48: 887–912.
13. Sasaki Y. Resolution of resistivity tomography inferred from numerical simulation. Geophys Prospect 1992; 40: 453–463.
14. Kim Y, Nakata N. Geophysical inversion versus machine learning in inverse problems. Lead Edge 2018; 37: 894–901.
15. Jupp DL, Vozoff K. Stable iterative methods for the inversion of geophysical data. Geophys J Int 1975; 42: 957–976.
16. Haykin S. Neural networks: a comprehensive foundation. New Jersey: Prentice Hall, 1999.
17. Van der Baan M, Jutten C. Neural networks in geophysical applications. Geophysics 2000; 65: 1032–1047.
18. Spichak V, Popova I. Artificial neural network inversion of magnetotelluric data in terms of three-dimensional earth macroparameters. Geophys J Int 2000; 142: 15–26.
19. Ding X, Devabhaktuni VK, Chattaraj B, et al. Neural-network approaches to electromagnetic-based modeling of passive components and their applications to high-frequency and high-speed nonlinear circuit optimization. IEEE Trans Microwave Theory Tech 2004; 52: 436–449.
20. Neyamadpour A, Taib S, Abdullah WW. Using artificial neural networks to invert 2D DC resistivity imaging data for high resistivity contrast regions: a MATLAB application. Comp Geosci 2009; 35: 2268–2274.
21. Iturrarán-Viveros U, Parra JO. Artificial neural networks applied to estimate permeability, porosity and intrinsic attenuation using seismic attributes and well-log data. J Appl Geophys 2014; 107: 45–54.
22. Mazurkiewicz E, Tadeusiewicz R, Tomecka-Suchoń S. Application of neural network enhanced ground-penetrating radar to localization of burial sites. Appl Artif Intell 2016; 30: 844–860.
  • 23.Fu H, Zhang Y, Ma M. Seismic waveform inversion using a neural network-based forward. J Phys Conf Ser 2019; 1324: 012043. [Google Scholar]
  • 24.Shaik NB, Pedapati SR, Taqvi SAA, et al. A feed-forward back propagation neural network approach to predict the life condition of crude oil pipeline. Processes 2020; 8: 61. [Google Scholar]
  • 25.Yi MJ, Kim JH, Chung SH. Enhancing the resolving power of least-squares inversion with active constraint balancing. Geophysics 2003; 68: 931–941. [Google Scholar]
  • 26.Chhun KT, Yune CY. Laboratory study on evaluation of grouted bulb region using cross-hole electrical resistivity tomography. Geosci J 2021: 1–12. [Google Scholar]
  • 27.Kim JH. TomoDC V.1.0 software, 1999.
  • 28.Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition: foundations. Cambridge, MA: MIT Press, 1986, pp.318–362. [Google Scholar]
  • 29.Cha D, Zhang H, Blumenstein M. Prediction of maximum wave-induced liquefaction in porous seabed using multi-artificial neural network model. Ocean Eng 2011; 38: 878–887. [Google Scholar]
  • 30.Demuth H, Beale M. Neural network toolbox. For use with MATLAB. The MathWorks Inc, 1992. [Google Scholar]
  • 31.El-Qady G, Ushijima K. Inversion of DC resistivity data using neural networks. Geophys Prospect 2001; 49: 417–430. [Google Scholar]
  • 32.Madhiarasan M, Deepa SN. A novel criterion to select hidden neuron numbers in improved back propagation networks for wind speed forecasting. Appl Intell 2016; 44: 878–893. [Google Scholar]
  • 33.Werbos PJ. The roots of backpropagation: from ordered derivatives to neural networks and political forecasting. John Wiley & Sons, 1994. [Google Scholar]
  • 34.Sheela KG, Deepa SN. Review on methods to fix number of hidden neurons in neural networks. Math Probl Eng 2013; 2013: 1–11. [Google Scholar]
  • 35.Yilmaz M, Arslan E. Effect of increasing number of neurons using artificial neural network to estimate geoid heights. Int J Phys Sci 2011; 6: 529–533. [Google Scholar]
  • 36.Aristodemou E, Pain C, De Oliveira C. Inversion of nuclear well-logging data using neural networks. Geophy Prospect 2005; 53: 103–120. [Google Scholar]
  • 37.Chen BC, Chen AX, Chai X, et al. A statistical test of the effect of learning rate and momentum coefficient of SGD and its interaction on neural network performance. In: 3rd international conference on data science and business analytics (ICDSBA), Istanbul, Turkey, 11–12 October 2019, pp.276–281. IEEE. 10.1109/ICDSBA48748.2019.00065. [DOI] [Google Scholar]
  • 38.Singh UK, Tiwari RK, Singh SB. One-dimensional inversion of geo-electrical resistivity sounding data using artificial neural networks-a case study. Comput Geosci 2005; 31: 99–108. [Google Scholar]
  • 39.Neyamadpour A, Abdullah WW, Taib S, et al. 3D inversion of DC data using artificial neural networks. Stud Geophys Geod 2010; 54: 465–485. [Google Scholar]
  • 40.Karmakar S, Shrivastava G, Kowar MK. Impact of learning rate and momentum factor in the performance of back-propagation neural network to identify internal dynamics of chaotic motion. Kuwait J Sci 2014; 41: 151–174. [Google Scholar]
  • 41.Winiczenko R, Górnicki K, Kaleta A, et al. Optimisation of ANN topology for predicting the rehydrated apple cubes colour change using RSM and GA. Neural Comput Appl 2018; 30: 1795–1809. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Zho Y, Wu Y. Analyses on influence of training data set to neural network supervised learning performance. In: Advances in computer science, intelligent system and environment, Guangzhou, China, 24–25 September 2011, pp.19–25. 10.1007/978-3-642-23753-9_4. [DOI] [Google Scholar]
  • 43.Tawfiq LNM, Tawfiq MNM. The effect of number of training samples for artificial neural network. Ibn AL-Haitham J Pure Appl Sci 2010; 23: 246–252. [Google Scholar]

Articles from Science Progress are provided here courtesy of SAGE Publications