Heliyon. 2025 Feb 15;11(4):e42640. doi: 10.1016/j.heliyon.2025.e42640

Performance analysis of steel W18CR4V grinding using RSM, DNN-GA, KNN, LM, DT, SVM models, and optimization via desirability function and MOGWO

Sofiane Touati a, Haithem Boumediri b, Yacine Karmi c, Mourad Chitour a, Khaled Boumediri d, Amina Zemmouri e, Athmani Moussa a,f, Filipe Fernandes g,h,⁎⁎
PMCID: PMC11904558  PMID: 40084012

Abstract

This study presents an innovative approach to optimizing the grinding process of W18CR4V steel, a high-performance material used in reamer manufacturing, using advanced machine learning models and multi-objective optimization techniques. The novel combination of Deep Neural Networks with Genetic Algorithm (DNN-GA), K-Nearest Neighbors (KNN), Levenberg-Marquardt (LM), Decision Trees (DT), and Support Vector Machines (SVM) was employed to predict key process outcomes, such as surface roughness (Ra), maximum roughness height (Rz), and production time. The results reveal significant improvements, with Ra values ranging from 0.231 μm to 1.250 μm (up to 81.5 % reduction) and Rz from 1.519 μm to 6.833 μm (up to 77.7 % reduction). The hybrid DNN-GA model achieved R2 > 0.99, reducing prediction errors by 23–45 % compared to traditional models. Optimization via the Desirability Function achieved Ra values around 0.341 μm and Rz around 2.3 μm, with production times ranging from 1181 to 1426 s. The innovative Multi-Objective Grey Wolf Optimization (MOGWO) provided Pareto-optimal solutions, minimizing Ra to 0.3 μm, Rz to 1.5 μm, and production times between 2000 and 3000 s, offering a better balance between surface quality and machining efficiency. This work highlights the unique integration of machine learning models with optimization techniques to significantly enhance grinding performance and manufacturing efficiency in high-precision industries.

Keywords: Grinding process, W18CR4V steel, Surface roughness, Machine learning models, Multi-objective grey wolf optimization (MOGWO), Desirability function

1. Introduction

Metal machining is a key process in manufacturing, used to shape, cut, and refine materials into functional components for industries such as aerospace, automotive, and heavy machinery. Techniques like milling, drilling, turning, and grinding are chosen based on material properties and precision requirements. Grinding, typically used in the finishing stages, is essential for achieving smooth surface finishes and tight tolerances, which are crucial for high-performance applications [[1], [2], [3], [4]]. Advances in machining technology have greatly enhanced the ability to work with harder materials like tool steels and cemented carbides, helping manufacturers meet the increasing demand for precision and efficiency [[5], [6], [7]]. Today, modern machining focuses on optimizing these processes to balance speed, precision, and cost-effectiveness, while also enhancing surface integrity and extending tool life [[8], [9], [10]].

While several machining techniques contribute to shaping components, grinding distinguishes itself by providing exceptional surface quality and precision, especially in applications that require high tolerance levels [[11], [12], [13]]. Unlike rougher processes like turning or milling, which are primarily focused on removing bulk material, grinding specializes in the fine-tuning of surfaces and the enhancement of dimensional accuracy. This characteristic makes grinding indispensable in the final stages of manufacturing, especially when producing complex tools [12,14]. Among its specialized applications, tool grinding is a key operation in machining. Tools like drills and milling cutters, essential for various machining processes, rely on grinding to refine and sharpen their cutting edges. This ensures sustained cutting efficiency and extends tool life, reducing wear and maintaining optimal performance over time [[15], [16], [17]].

Flute grinding is a specialized subset of tool grinding, crucial for the manufacturing of cutting tools such as drills, end mills, and reamers. It focuses on forming the helical grooves or flutes on cutting tools, which are responsible for chip removal and coolant flow during machining operations. Precision in flute grinding is essential, as it directly impacts tool performance and surface finish of machined parts [18]. The process is highly complex due to the need for precision in small surface areas and intricate geometries, requiring specialized grinding wheels and careful control of parameters such as wheel speed, feed rate, and depth of cut [[19], [20], [21]]. The shape and accuracy of flutes are critical for efficient chip evacuation, which in turn affects the tool's cutting performance and longevity. Ensuring optimal performance in flute grinding necessitates attention to not just tool geometry but also the material properties and cooling methods used during grinding [22,23]. Other factors, such as wheel dressing, frequency, and coolant flow rate, also influence the final surface finish and tool wear. Wheel wear and improper dressing can lead to poor surface quality, increased vibration, and reduced tool life, making it critical to carefully control all grinding parameters for optimal results [24,25].

Several advanced techniques are employed to predict and optimize grinding parameters, especially in complex processes. Improving grinding precision and efficiency is essential for maintaining tool quality and reducing wear. The Taguchi method was applied by Sethuramalingam et al. to optimize grinding parameters for AISI D3 tool steel using nanofluids mixed with multiwall carbon nanotubes and SAE20W40 oil. The study showed a significant improvement in surface roughness, reducing it from Ra = 0.34 μm to Ra = 0.11 μm, and minimized the occurrence of microcracks [26]. The Response Surface Methodology (RSM), used by Witold et al., analyzed grinding forces during the machining of CTS20D cemented carbide with diamond grinding wheels of different bonding materials. Analysis of Variance (ANOVA) confirmed that resin bond wheels reduced grinding forces, with grinding speed strongly influencing tangential force, highlighting the importance of bonding material in optimizing performance [27].

More recently, machine learning models such as Artificial Neural Networks (ANN) were implemented by Li et al. to optimize the grinding wheel position in flute grinding, achieving a prediction accuracy of 96 % in non-linear grinding conditions [28]. The ANN model was trained using a backpropagation algorithm, allowing for real-time adjustments to grinding wheel position, which reduced surface roughness to Ra = 0.12 μm. Additionally, iteration-based error compensation methods, applied by Liu et al., employed feedback from real-time monitoring to adjust for tool wear, reducing wear by 15 % and maintaining dimensional accuracy within ±0.02 mm [29]. Guijian et al. optimized the abrasive belt grinding process using the improved non-dominated sorting genetic algorithm (CNSGA-II), which resulted in faster convergence and enhanced Pareto solution diversity. The optimized parameters reduced surface roughness to 0.499 μm and increased the material removal rate to 0.115 mg/min, significantly improving both grinding efficiency and surface quality. The optimized process yielded a defect-free surface with improved fineness and flatness [30]. These techniques ensured improved performance and accuracy in extended grinding operations.

This study introduces a novel approach to optimizing the grinding process of W18Cr4V steel, a material known for its hardness and wear resistance, commonly used in reamer tools. Despite its widespread use, the grinding process for such high-performance materials is rarely studied in detail. This research integrates advanced machine learning models, such as Deep Neural Networks with Genetic Algorithm (DNN-GA), K-Nearest Neighbors (KNN), Levenberg-Marquardt (LM), Decision Trees (DT), and Support Vector Machines (SVM), with multi-objective optimization techniques like Multi-Objective Grey Wolf Optimization (MOGWO) and the Desirability Function. The study examines the impact of key parameters, including cutting speed and feed rate, on the grinding process. It also focuses on comparing wet oil cooling with dry and air cooling, evaluating their effectiveness in improving surface quality, and minimizing environmental impact. By optimizing these parameters, the research aims to achieve a balance between surface integrity, production efficiency, and environmental sustainability. The integration of machine learning and multi-objective optimization marks a significant advancement over traditional methods, providing precise predictions and optimized grinding conditions that enhance surface quality, efficiency, and tool performance for W18Cr4V steel.

2. Materials and methodologies

This section outlines the materials used, the experimental setup, and the procedures adopted for the grinding process. It also details the modeling techniques applied to predict grinding performance and optimize key parameters such as Ra, Rz and production time.

2.1. Materials

The material utilized in this study is W18CR4V steel, also known as tungsten high-speed tool steel, recognized for its superior hardness, wear resistance, and toughness. Despite its hardness, the material maintains good toughness, allowing it to endure shock and impact without fracturing. It also demonstrates excellent heat resistance, preserving its hardness under high temperatures during high-speed machining. The workpieces, which were used to manufacture reamer tools, measure Ø16 × 175 mm and underwent thermal treatment to enhance their mechanical properties, achieving a hardness of HRC 60–62 in accordance with ISO 6508-1. The chemical composition of W18Cr4V steel, determined using Optical Emission Spectroscopy (OES) in accordance with ASTM E415, is presented in Table 1.

Table 1.

The chemical composition of steel W18CR4V.

Element      C      Si     Mn     Cr    W      V     Mo     Ni     Cu      P      S      Nb      Co      P      Fe
Content (%)  0.744  0.238  0.215  3.69  17.36  1.11  0.153  0.093  0.0847  0.028  0.012  0.0421  0.0758  0.028  75.654

2.2. Experimental setup

This subsection describes the equipment and instrumentation used during the grinding experiments, including the grinding machine, profilometer for surface roughness measurements, and tools for maintaining precise process conditions.

2.2.1. Grinding machine

The grinding machine utilized in this study is the 5-axis Smp Numafut CA3+, renowned for its advanced functionality and precision. It is designed to support tools with a maximum diameter of 150 mm, lengths of up to 300 mm for manual loading and 140 mm for automatic loading, and a maximum weight of 25 kg. The A-axis work head, equipped with an ISO 50 attachment, offers X/Z strokes of 360/300 mm and full 360-degree rotation, with a top rotational speed of 800 rpm. The B-axis grinding head, featuring an HSK-E40 attachment, provides a 200 mm Y stroke, a rotational range of −30° to +180°, and accommodates grinding wheels up to 125 mm in diameter, reaching speeds of 20000 rpm powered by a 7 kW motor. The machine also includes storage for up to four grinding wheels and a tool magazine designed for tools ranging from 40 mm to 140 mm in length. Operating at 400 V with a power requirement of 19 kW, the machine's dimensions are 2200 mm in length, 3350 mm in width, and 2600 mm in height, with a total weight of 2500 kg. These features enable high-precision grinding across a wide range of machining applications.

2.2.2. Cutting tools (diamond grinding wheels)

The Norton ASD126100B76 11V2 50X22X10 diamond grinding wheel was chosen for its precision and durability in high-performance grinding. With a maximum speed of 63 m/s and up to 16043 rpm, it excels in tasks requiring fine finishes and tight tolerances. Its design ensures efficiency and safety, making it ideal for precision grinding in demanding environments.

2.2.3. Surface roughness measurements

The MarSurf M 400 from Mahr was used in this study to measure surface roughness according to ISO 4287, providing precise data on parameters such as average roughness (Ra) and maximum roughness height (Rz). Each test sample was measured three times to ensure accuracy and reliability (Fig. 1). A sampling length of 2 mm was used for all measurements, conducted in a controlled environment at an ambient temperature of (23 ± 1)°C.

Fig. 1.

Fig. 1

Measurement process of surface roughness (Ra and Rz) on specimens.

2.3. Experimental procedure

The grinding of W18CR4V steel was conducted using 48 distinct parameter combinations, systematically varying key factors such as the cooling method (Dry, Air, Wet), cutting speed (Vc) ranging from 25 to 40 m/min, and feed rate (f) between 0.002 and 0.020 mm/rev. The input parameters and their corresponding levels used in the grinding process are presented in Table 2. The experimental procedure aimed to evaluate the effects of these variables on three critical outcomes: average surface roughness (Ra), maximum roughness height (Rz), and processing time. Each combination was tested to assess the interactions between the cooling method, cutting speed, and feed rate and their influence on surface quality and grinding efficiency.

Table 2.

Experimental input parameters and their levels for the grinding process.

Input            Level 1   Level 2   Level 3   Level 4
Cooling method   Dry       Air       Wet       –
Vc (m/min)       25        30        35        40
f (mm/rev)       0.020     0.014     0.008     0.002
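The three factors and their levels in Table 2 span a full factorial design: 3 cooling methods × 4 cutting speeds × 4 feed rates = 48 runs, matching the 48 parameter combinations described above. A minimal sketch of how such a design can be enumerated (levels taken from Table 2):

```python
from itertools import product

# Factor levels from Table 2
cooling_methods = ["Dry", "Air", "Wet"]
vc_levels = [25, 30, 35, 40]              # cutting speed Vc, m/min
f_levels = [0.020, 0.014, 0.008, 0.002]   # feed rate f, mm/rev

# Full factorial design: every combination of the three factors
design = list(product(cooling_methods, vc_levels, f_levels))
print(len(design))  # 3 * 4 * 4 = 48 runs
```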

2.4. Modeling techniques

Advanced machine learning models were employed for predictive analysis, including Deep Neural Networks enhanced by Genetic Algorithms (DNN-GA), K-Nearest Neighbors (KNN), and Support Vector Machines (SVM). Additionally, multi-objective optimization techniques were utilized to effectively balance conflicting objectives, ensuring a robust modeling framework.

2.4.1. Response Surface Methodology (RSM)

In this study, Response Surface Methodology (RSM) was combined with a regression model to investigate the relationships between grinding parameters and output responses, such as surface roughness (Ra, Rz) and production time. The regression model quantified the effects of the input variables, providing an in-depth analysis of their influence on the outputs [31]. Using a second-order polynomial equation, RSM effectively identifies optimal grinding conditions by capturing the complex interactions between parameters [32,33]. This method is a valuable tool for predicting surface quality and production efficiency in the grinding process.
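As an illustration of the second-order polynomial form used by RSM, the sketch below fits a quadratic response surface to synthetic data by least squares; the variable names (x1 standing in for Vc, x2 for f) and the coefficients are purely illustrative, not values from this study.

```python
import numpy as np

# Hypothetical second-order response surface:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
rng = np.random.default_rng(0)
x1 = rng.uniform(25, 40, 30)        # e.g. cutting speed range
x2 = rng.uniform(0.002, 0.020, 30)  # e.g. feed rate range
y = 0.5 + 0.01 * x1 + 30.0 * x2 + 0.0002 * x1**2  # synthetic response

# Design matrix with intercept, linear, quadratic, and interaction terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coeffs
print(np.allclose(y, y_hat, atol=1e-6))  # exact model lies in the basis
```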

2.4.2. Deep Neural Network (DNN)

Deep Neural Networks (DNNs) are a type of artificial neural network composed of multiple layers of interconnected neurons. Each layer applies a transformation to the input data, allowing DNNs to identify intricate patterns and relationships. This capability makes them highly effective for tasks such as classification, regression, and prediction [34,35]. The training process of a DNN involves adjusting the network's weights and biases to minimize the error between predicted and actual outputs [36]. For a neural network with L+1 layers, the following quantities are defined:

Y_j^l = \sum_k w_{jk}^l a_k^{l-1} + b_j^l   (1)
a_j^l = f\left( \sum_k w_{jk}^l a_k^{l-1} + b_j^l \right) = f(Y_j^l)   (2)

Here, a_j^0 = x_j (where x_j represents the jth input), w_{jk}^l denotes the weight of the connection between the kth neuron in the (l-1)th layer and the jth neuron in the lth layer, and b_j^l are the biases. The function f corresponds to the activation function.
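Eqs. (1) and (2) describe one layer of the forward pass. A minimal sketch in Python, with toy weights and tanh as an example activation:

```python
import math

def forward_layer(a_prev, weights, biases, activation=math.tanh):
    """One layer of Eqs. (1)-(2): Y_j = sum_k w_jk * a_k + b_j, then a_j = f(Y_j)."""
    return [activation(sum(w * a for w, a in zip(w_row, a_prev)) + b)
            for w_row, b in zip(weights, biases)]

# Toy network: 3 inputs -> 2 hidden neurons -> 1 output (illustrative weights)
x = [0.5, -1.0, 2.0]
W1 = [[0.1, 0.2, -0.3], [0.4, -0.5, 0.6]]
b1 = [0.0, 0.1]
W2 = [[0.7, -0.8]]
b2 = [0.05]

hidden = forward_layer(x, W1, b1)
output = forward_layer(hidden, W2, b2, activation=lambda y: y)  # linear output layer
print(len(hidden), len(output))  # 2 1
```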

2.4.3. Levenberg-Marquardt (LM) algorithm

The Levenberg-Marquardt (LM) algorithm was used to train a feedforward neural network for predicting surface roughness and production time. The LM algorithm is a powerful technique for solving nonlinear least squares problems, balancing the advantages of the gradient descent and Gauss-Newton methods [37]. By fine-tuning the weights of the network, this method improves the convergence speed and accuracy of the neural network predictions, particularly in scenarios involving complex parameter interactions in grinding processes.

The LM algorithm is used to iteratively update the network's weights to minimize the error function. The weight update rule is:

W_{ij}^{new} = W_{ij} - \left( J^T J + \lambda I \right)^{-1} J^T e   (3)

where J is the Jacobian matrix of the partial derivatives of the errors with respect to the network weights, I is the identity matrix, λ is a damping factor, and e is the vector of network errors.

The errors from the output layer are propagated back through the network using the chain rule of calculus, allowing the network to adjust the weights and biases.
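A minimal sketch of the update rule in Eq. (3), applied here to a simple linear model so that the Jacobian is constant; the data and damping factor are illustrative only:

```python
import numpy as np

def lm_step(params, jacobian, errors, lam):
    """One Levenberg-Marquardt update, Eq. (3):
    w_new = w - (J^T J + lam*I)^(-1) J^T e."""
    J = jacobian
    H = J.T @ J + lam * np.eye(J.shape[1])
    return params - np.linalg.solve(H, J.T @ errors)

# Sketch: fit y = a*x + b by iterated LM steps (linear case)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
params = np.zeros(2)                        # initial guess for [a, b]
J = np.column_stack([x, np.ones_like(x)])   # d(residual)/d(params) for a*x + b
for _ in range(20):
    e = (J @ params) - y                    # residual vector (predicted - actual)
    params = lm_step(params, J, e, lam=1e-3)
print(np.round(params, 3))  # converges toward a = 2, b = 1
```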

2.4.4. Deep Neural Network with genetic algorithm (DNN-GA)

A Deep Neural Network (DNN) model was implemented in combination with a Genetic Algorithm (GA) to enhance predictive accuracy. DNNs are well-suited for capturing complex, nonlinear relationships between input variables and output responses in the grinding process. The GA, a nature-inspired optimization technique, was applied to optimize the DNN model by adjusting its architecture such as the number of layers and nodes, its activation functions and the learning algorithm. This hybrid DNN-GA approach allows for improved predictions of surface roughness (Ra) and production time, ensuring a more accurate modeling of the grinding process. The number of layers, the number of nodes, the learning algorithms, and the activation functions employed in this study are presented in Table 3.

Table 3.

Comprehensive overview of neural network architecture.

Hidden layers: min 1, max 10
Hidden layer nodes: min 2, max 10

Learning algorithms:
trainlm: Levenberg-Marquardt backpropagation
trainbr: Bayesian regularization backpropagation
trainbfg: BFGS quasi-Newton backpropagation
traincgb: Conjugate gradient backpropagation with Powell-Beale restarts
traincgf: Conjugate gradient backpropagation with Fletcher-Reeves updates
traincgp: Conjugate gradient backpropagation with Polak-Ribiere updates
traingd: Gradient descent backpropagation
traingda: Gradient descent with adaptive learning rate backpropagation
traingdm: Gradient descent with momentum
traingdx: Gradient descent with momentum and adaptive learning rate backpropagation
trainoss: One-step secant backpropagation
trainrp: Resilient backpropagation (RPROP)
trainscg: Scaled conjugate gradient backpropagation

Activation functions:
compet: Competitive transfer function
elliotsig: Elliot sigmoid transfer function
hardlim: Positive hard limit transfer function
hardlims: Symmetric hard limit transfer function
logsig: Logarithmic sigmoid transfer function
netinv: Inverse transfer function
poslin: Positive linear transfer function
purelin: Linear transfer function
radbas: Radial basis transfer function
radbasn: Normalized radial basis transfer function
satlin: Positive saturating linear transfer function
satlins: Symmetric saturating linear transfer function
softmax: Softmax transfer function
tansig: Symmetric sigmoid (tanh) transfer function
tribas: Triangular basis transfer function

The integration of Genetic Algorithms (GAs) into the optimization of neural networks offers numerous advantages, particularly in exploring complex, high-dimensional search spaces for architecture and hyperparameter tuning. Unlike traditional gradient-based methods, GAs are not limited by the local minima of the optimization landscape, allowing them to explore a wide range of neural network configurations, including the number of layers, neurons per layer, and activation functions. This broader exploration can lead to more optimal architectures that might otherwise be overlooked by conventional techniques. Moreover, GAs are well-suited for handling both continuous and discrete variables, which makes them particularly effective in optimizing categorical hyperparameters, such as the choice of activation function or layer type. Their ability to manage such varied parameter spaces allows for a more comprehensive optimization process [38].

Another key advantage of GAs is their inherent parallelism. Each candidate solution (or chromosome) in the population can be evaluated independently, facilitating the use of parallel or distributed computing resources. This feature is especially valuable in optimizing deep learning models, where evaluating a single candidate can be computationally expensive. As a result, GAs can significantly accelerate the optimization of complex neural networks by efficiently leveraging modern computational infrastructure [39].

Several studies have highlighted the superior performance of GA-optimized networks. For example, research on GA-optimized convolutional neural networks (CNNs) has shown that these models can surpass standard backpropagation-based methods, such as Stochastic Gradient Descent (SGD), in tasks like image classification [38]. Similarly, in this work, it is further demonstrated that GA-optimized Deep Neural Networks (DNNs) deliver superior performance in regression tasks compared to traditional methods. The GA-DNN hybrid approach starts by encoding key hyperparameters of the network, such as the number of layers, the learning rate, the number of neurons per layer, and the activation functions, into chromosomes (Fig. 2). These chromosomes represent candidate solutions, with each gene corresponding to a specific hyperparameter. Through iterative operations such as selection, crossover, and mutation, the GA refines the network structure and hyperparameters, evolving towards an optimal solution. This evolutionary process enables the discovery of architectures that outperform those obtained through standard techniques like grid search or manual tuning [40].

Fig. 2.

Fig. 2

Flow chart of the Hybrid algorithm DNN-GA.
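A simplified sketch of the chromosome encoding and the crossover/mutation operators described above; the gene pools shown are a small subset of the search space in Table 3, and the operator details are illustrative rather than the exact implementation used in this study:

```python
import random

random.seed(42)

# Small subsets of the hyperparameter pools from Table 3 (illustrative)
ALGOS = ["trainlm", "trainbr", "trainscg"]
ACTIVATIONS = ["tansig", "logsig", "purelin"]

def random_chromosome():
    """Each gene encodes one hyperparameter of a candidate DNN."""
    return {"layers": random.randint(1, 10),
            "nodes": random.randint(2, 10),
            "algo": random.choice(ALGOS),
            "activation": random.choice(ACTIVATIONS)}

def crossover(p1, p2):
    """Uniform crossover: each gene is inherited from one of the two parents."""
    return {k: random.choice([p1[k], p2[k]]) for k in p1}

def mutate(chrom, rate=0.25):
    """With probability `rate`, replace a gene by a fresh random value."""
    fresh = random_chromosome()
    return {k: (fresh[k] if random.random() < rate else v) for k, v in chrom.items()}

child = mutate(crossover(random_chromosome(), random_chromosome()))
print(sorted(child))  # ['activation', 'algo', 'layers', 'nodes']
```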

The objective function, or fitness function, used to evaluate each candidate network is based on several performance indices, ensuring that the most effective architectures are selected. In this study, the fitness function incorporates key metrics commonly used in regression tasks to evaluate the predictive accuracy and optimization results of different models. These metrics include: Coefficient of Determination (R2), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and the Scatter Index (SI) [[41], [42], [43]]. These metrics are defined as follows:

R^2 = \left( \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2} \sqrt{\sum_i (y_i - \bar{y})^2}} \right)^2   (4)
RMSE = \sqrt{\frac{\sum_{i=1}^{N} (y_i - x_i)^2}{N}}   (5)
MAE = \frac{\sum_{i=1}^{N} |y_i - x_i|}{N}   (6)
SI = \frac{RMSE}{\bar{y}}   (7)

where x_i and y_i represent the observed and predicted values, respectively; N denotes the total number of data points; and \bar{x} and \bar{y} are the means of the observed and predicted values, respectively. Additionally, an Objective Function (OBJ) was formulated to balance the performance measures across both the training and testing datasets. This function combines the RMSE and MAE metrics to provide an optimized evaluation of the model's performance, accounting for variations between the training and testing data:

OBJ = \left( \frac{N_{tr}}{N_{all}} \times \frac{RMSE_{tr} + MAE_{tr}}{R_{tr}^2 + 1} \right) + \left( \frac{N_{test}}{N_{all}} \times \frac{RMSE_{test} + MAE_{test}}{R_{test}^2 + 1} \right)   (8)

These indices offer a comprehensive evaluation of the model's predictive accuracy, error minimization, and generalization capability. By optimizing against these diverse metrics, the GA ensures that the final DNN architectures achieve a balance between high accuracy and low error, making them robust for real-world regression applications. This multi-criteria optimization process allows the GA to navigate the trade-offs between different performance metrics and select architectures that demonstrate superior predictive power across the board.
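The metrics in Eqs. (4)–(8) can be sketched directly; the observed/predicted values below are illustrative only:

```python
import math

def rmse(pred, obs):
    """Eq. (5): root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mae(pred, obs):
    """Eq. (6): mean absolute error."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def r2(pred, obs):
    """Eq. (4): squared Pearson correlation between observed and predicted."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = (math.sqrt(sum((o - mo) ** 2 for o in obs))
           * math.sqrt(sum((p - mp) ** 2 for p in pred)))
    return (num / den) ** 2

def obj(train_pred, train_obs, test_pred, test_obs):
    """Eq. (8): weighted blend of (RMSE+MAE)/(R^2+1) over train and test splits."""
    n_all = len(train_obs) + len(test_obs)
    term = lambda p, o: (len(o) / n_all) * (rmse(p, o) + mae(p, o)) / (r2(p, o) + 1)
    return term(train_pred, train_obs) + term(test_pred, test_obs)

obs = [0.54, 0.46, 0.42, 0.39, 0.74]   # illustrative observed Ra values
pred = [0.55, 0.45, 0.40, 0.40, 0.73]  # illustrative predictions
print(round(rmse(pred, obs), 4), round(mae(pred, obs), 3))  # 0.0126 0.012
```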

2.4.5. K-Nearest neighbors (KNN)

The K-Nearest Neighbors (KNN) algorithm is a non-parametric, instance-based learning method that is widely used for both classification and regression tasks [44]. The core idea behind KNN is to predict the output for a new observation by finding the "k" closest data points from the training set, based on a specified distance metric, and using their labels or values to infer the result [45]. Given a training set {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)}, where xᵢ ∈ Rn represents the feature vector and yᵢ denotes the corresponding label or output value, the KNN algorithm predicts the output ŷ for a new data point x′. For each data point xᵢ in the training set, the distance d(x′, xᵢ) between the new observation x′ and xᵢ is computed. The Minkowski distance, the most common choice, is employed:

d(x', x_i) = \left( \sum_{j=1}^{n} |x'_j - x_{ij}|^p \right)^{1/p}   (9)

The algorithm selects the k closest data points (neighbors) from the training set based on the computed distances. For Regression, the predicted output ŷ is computed as the average value of the k nearest neighbors.
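A minimal sketch of KNN regression with the Minkowski distance of Eq. (9); the training points are hypothetical, not measurements from Table 4:

```python
def minkowski(a, b, p=2):
    """Eq. (9): Minkowski distance (p=2 gives the Euclidean distance)."""
    return sum(abs(aj - bj) ** p for aj, bj in zip(a, b)) ** (1 / p)

def knn_predict(x_new, data, k=3, p=2):
    """Average the target values of the k nearest training points."""
    nearest = sorted(data, key=lambda pt: minkowski(x_new, pt[0], p))[:k]
    return sum(y for _, y in nearest) / k

# Hypothetical training set: features (Vc, f) -> Ra; values illustrative only
train = [((25, 0.020), 0.54), ((25, 0.002), 0.39),
         ((40, 0.020), 0.55), ((40, 0.002), 0.23),
         ((30, 0.008), 0.58)]
print(round(knn_predict((38, 0.005), train, k=2), 3))  # mean of two nearest: 0.39
```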

2.4.6. Decision trees (DT)

The Decision Tree (DT) method for regression is a supervised learning approach used to predict continuous values rather than discrete class labels [37]. As in classification, the method recursively partitions the feature space into subsets based on decision rules, forming a tree structure in which the terminal nodes (leaves) represent predicted numerical values [46]. This technique is highly interpretable and capable of modeling non-linear relationships. In a regression task, the goal is to predict a continuous target variable y based on a set of input features. The decision tree is built by splitting the feature space into distinct regions, each of which corresponds to a predicted value of y. The root node represents the entire dataset and serves as the starting point of the tree; each internal node makes a decision by splitting the data based on a feature and a threshold value, aiming to reduce the prediction error. The terminal nodes represent the predicted continuous values. For a given leaf, the predicted value is typically the mean of the target values in that region of the feature space. In regression trees, the splitting criterion is based on minimizing the mean squared error (MSE) of the target variable y [35].

MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2   (10)

where y_i is the actual target value, \hat{y}_i is the predicted value, and N is the number of samples in the node. To avoid overfitting, since decision trees can grow too deep and fit noise in the data, pre-pruning techniques help control the complexity of the tree: growth can be limited by specifying a maximum depth, a minimum number of samples per node, or a minimum improvement in error required for further splits.

DTs were applied to model the grinding process by segmenting the dataset into branches based on specific parameter thresholds. This method is useful for identifying the key grinding parameters that influence surface roughness and production time. DTs work by recursively splitting the data into decision nodes, leading to highly interpretable results, but they are prone to overfitting, which may limit their generalization capabilities.
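The split-selection step can be sketched as follows: each candidate threshold is scored by the weighted MSE of the two child nodes (the criterion behind Eq. (10)); the one-dimensional data below are illustrative only:

```python
def node_mse(ys):
    """Squared-error impurity of a node: mean of (y_i - node mean)^2, Eq. (10)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Try each threshold on a single feature; keep the split with the lowest
    sample-weighted child MSE."""
    best = (None, float("inf"))
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * node_mse(left) + len(right) * node_mse(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical 1-D example: feed rate f vs. Ra (illustrative values)
f = [0.002, 0.008, 0.014, 0.020]
ra = [0.39, 0.42, 0.46, 0.54]
threshold, score = best_split(f, ra)
print(threshold)  # the split isolating the roughest (highest-f) run
```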

2.4.7. Support vector machine (SVM)

Support Vector Machines (SVM) is a powerful supervised learning algorithm commonly used for both classification and regression tasks. When applied to regression, it is known as Support Vector Regression (SVR) [47,48]. The SVM method seeks to find a function that best approximates the relationship between input features and the target variable using a margin-based approach. For regression, the goal is to predict a continuous target variable y based on a set of input features x [49].

The SVR algorithm attempts to find a function f(x) that deviates from the actual target values by a small margin ϵ, while also minimizing model complexity [50]. The function f(x) takes the following form:

f(x) = w \cdot \phi(x) + b   (11)

where w is the weight vector, ϕ(x) is the feature transformation and b is the bias term.

The objective of SVR is to minimize the following cost function:

\min_{w,b} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{N} L_\epsilon(y_i, f(x_i))   (12)

Here, C is a regularization parameter that controls the trade-off between margin width and prediction error, and L_ϵ(y_i, f(x_i)) is the epsilon-insensitive loss function, which measures the prediction error only when it exceeds the margin ϵ. It is defined as:

L_\epsilon(y_i, f(x_i)) = \begin{cases} 0 & \text{if } |y_i - f(x_i)| \le \epsilon \\ |y_i - f(x_i)| - \epsilon & \text{otherwise} \end{cases}   (13)

The Radial Basis Function (RBF) kernel allows the algorithm to perform regression in high-dimensional feature spaces without explicitly transforming the input data. It is given by:

K(x_i, x_j) = \exp\left( -\gamma \|x_i - x_j\|^2 \right)   (14)

The kernel function computes the similarity between pairs of data points, allowing SVR to fit complex, non-linear relationships in this case.

SVMs were employed to predict grinding outcomes by mapping the input parameters to a higher-dimensional space in which a linear regression function can be fitted. SVM is particularly effective when the data are not linearly separable in the original space. However, while SVM can provide robust predictions, it tends to perform less effectively on highly nonlinear data, as seen in some aspects of the grinding process.
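The two SVR building blocks defined in Eqs. (13) and (14) can be sketched directly:

```python
import math

def rbf_kernel(xi, xj, gamma=1.0):
    """Eq. (14): K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-gamma * sq)

def eps_insensitive_loss(y, f_x, eps=0.1):
    """Eq. (13): zero inside the eps-tube, linear outside it."""
    return max(0.0, abs(y - f_x) - eps)

print(rbf_kernel((1.0, 2.0), (1.0, 2.0)))           # identical points -> 1.0
print(eps_insensitive_loss(0.50, 0.55))             # inside the tube -> 0.0
print(round(eps_insensitive_loss(0.50, 0.75), 2))   # outside the tube -> 0.15
```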

2.5. Optimization techniques

The optimization approaches employed aim to achieve balanced grinding performance. The study utilizes the Desirability Function and Multi-Objective Grey Wolf Optimization (MOGWO) to minimize Ra, Rz and production time, effectively addressing the trade-offs between surface quality and machining efficiency.

2.5.1. Desirability function (DF)

The desirability function (DF) was used as an optimization method to balance multiple objectives in the grinding process, simultaneously minimizing surface roughness (Ra and Rz) and production time. This technique converts multiple response variables into a single desirability score, with a value of 1 representing the optimal solution [51]. By adjusting the input parameters to maximize this score, the desirability function helps identify a balanced set of conditions for optimal performance.
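A minimal sketch of a smaller-is-better desirability function and the composite score (geometric mean of the individual desirabilities); the bounds below are taken from the response ranges reported in this study, while the weighting exponents of the full method are omitted for simplicity:

```python
def desirability_minimize(y, low, high):
    """Smaller-is-better desirability: 1 at/below `low`, 0 at/above `high`,
    linear in between (unit weighting exponent assumed)."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return (high - y) / (high - low)

def overall_desirability(ds):
    """Composite score D: geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1 / len(ds))

# Bounds from the ranges reported in this study; the evaluated point is
# the Desirability Function optimum quoted in the abstract (illustrative use)
d_ra = desirability_minimize(0.341, 0.231, 1.250)   # Ra, micrometres
d_rz = desirability_minimize(2.3, 1.519, 6.833)     # Rz, micrometres
d_t = desirability_minimize(1300, 870, 15870)       # production time, s
D = overall_desirability([d_ra, d_rz, d_t])
print(round(D, 3))
```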

2.5.2. Multi-Objective Grey Wolf Optimization (MOGWO)

The Multi-Objective Grey Wolf Optimization (MOGWO) method was used in conjunction with the GA-DNN model, shown to be the best-performing prediction model in this study, to further optimize the grinding process. This strategy sought to simultaneously minimize manufacturing time (t) and the surface roughness characteristics (Ra, Rz), reconciling competing goals. MOGWO, a nature-inspired algorithm that mimics the hunting tactics and social structure of grey wolves, was employed to drive the optimization process. Leveraging the GA-DNN model's prediction accuracy, MOGWO identified a Pareto front of optimal solutions, allowing the selection of grinding settings that support particular production priorities, such as operational efficiency or surface quality.

Fig. 3 presents a schematic illustration of the MOGWO algorithm, offering a visual depiction of its underlying structure and operational process. The diagram encapsulates the key stages of the algorithm, highlighting how the grey wolves' leadership hierarchy and social hunting behavior are leveraged to efficiently explore and exploit the solution space, with the goal of optimizing multiple conflicting objectives simultaneously [52,53].

Fig. 3.

Fig. 3

Schematic representation of the multi-objective grey wolf optimization (MOGWO) algorithm.
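At the core of MOGWO's archive maintenance is Pareto dominance: a solution is kept only if no other solution is at least as good in every objective and strictly better in at least one. A minimal sketch (the wolf position-update loop is omitted, and the candidate values are hypothetical):

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated archive, as maintained by MOGWO during the search."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical candidates: (Ra in micrometres, production time in s)
candidates = [(0.30, 3000), (0.34, 1400), (0.50, 1200), (0.45, 1300), (0.60, 2500)]
front = sorted(pareto_front(candidates))
print(front)  # (0.60, 2500) is dominated by (0.34, 1400) and drops out
```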

Fig. 4 illustrates the experimental setup used for the grinding tests, as well as the prediction and optimization methods applied to improve surface roughness and processing time.

Fig. 4.

Fig. 4

Preparation of samples and presentation of the grinding test machine.

3. Results and discussion

The surface roughness (Ra), maximum roughness height (Rz), and production time measurements offer valuable insights into the influence of various process parameters on the grinding of reamer tools. This dataset is crucial for optimizing the grinding process, aiming to achieve the desired surface finish while minimizing machining time. By investigating the interactions between factors such as feed rate (f), cutting speed (Vc), and cooling methods (dry, air, or wet), significant trends in surface quality and process efficiency emerge, guiding the refinement of operational strategies.

The roughness measurements and production times for each parameter combination, as shown in Table 4, reveal clear patterns. Higher feed rates generally lead to increased surface roughness, but significantly reduce production time. For example, in Test 41, with a feed rate of 0.020 mm/rev, the surface roughness is 1.250 μm, while production time is 1026 s. Conversely, lower feed rates result in smoother surfaces but longer production times, as seen in Test 16, where Ra is 0.231 μm with a production time of 9842 s at a feed rate of 0.002 mm/rev.

Table 4.

Results of experimental Test runs with learning data: Training, validation, and testing.

Run Cooling method Vc (m/min) f (mm/rev) Ra (μm) Rz (μm) Time (s) Learning data
1 Dry 25 0.020 0.544 4.440 933 Training
2 Dry 25 0.014 0.464 3.357 2330 Training
3 Dry 25 0.008 0.419 2.758 3660 Validation
4 Dry 25 0.002 0.390 1.982 15870 Validation
5 Dry 30 0.020 0.736 5.445 1199 Training
6 Dry 30 0.014 0.654 4.359 1752 Training
7 Dry 30 0.008 0.584 3.786 3014 Training
8 Dry 30 0.002 0.554 3.204 14806 Training
9 Dry 35 0.020 0.734 5.030 1078 Validation
10 Dry 35 0.014 0.633 4.122 1438 Validation
11 Dry 35 0.008 0.574 3.931 2583 Training
12 Dry 35 0.002 0.505 3.154 10120 Training
13 Dry 40 0.020 0.546 4.192 870 Training
14 Dry 40 0.014 0.517 3.588 1261 Training
15 Dry 40 0.008 0.393 2.617 2559 Training
16 Dry 40 0.002 0.231 1.519 9842 Training
17 Air 25 0.020 0.324 2.357 1650 Test
18 Air 25 0.014 0.305 2.050 2108 Training
19 Air 25 0.008 0.279 1.848 3493 Training
20 Air 25 0.002 0.265 1.665 15733 Validation
21 Air 30 0.020 0.838 5.448 1256 Validation
22 Air 30 0.014 0.660 4.152 1752 Training
23 Air 30 0.008 0.582 3.396 3198 Training
24 Air 30 0.002 0.518 2.912 14806 Training
25 Air 35 0.020 0.899 6.181 1078 Test
26 Air 35 0.014 0.765 4.965 1438 Test
27 Air 35 0.008 0.657 3.662 2583 Training
28 Air 35 0.002 0.475 2.940 9630 Training
29 Air 40 0.020 0.586 4.626 870 Training
30 Air 40 0.014 0.446 3.434 1261 Training
31 Air 40 0.008 0.315 2.321 2428 Validation
32 Air 40 0.002 0.272 2.054 9639 Training
33 Wet 25 0.020 0.451 3.327 1425 Validation
34 Wet 25 0.014 0.428 2.734 2108 Training
35 Wet 25 0.008 0.388 2.129 3435 Test
36 Wet 25 0.002 0.327 1.899 15655 Training
37 Wet 30 0.020 1.000 6.605 1103 Training
38 Wet 30 0.014 0.952 5.846 1693 Training
39 Wet 30 0.008 0.912 5.177 2980 Training
40 Wet 30 0.002 0.768 4.468 14730 Training
41 Wet 35 0.020 1.250 6.833 1026 Training
42 Wet 35 0.014 1.002 6.059 1378 Validation
43 Wet 35 0.008 0.851 4.563 2532 Training
44 Wet 35 0.002 0.806 5.074 9630 Training
45 Wet 40 0.020 1.098 6.474 813 Test
46 Wet 40 0.014 1.004 5.869 1220 Training
47 Wet 40 0.008 0.817 4.614 2428 Validation
48 Wet 40 0.002 0.736 4.332 9632 Validation

Cutting speed also plays a significant role in determining surface roughness. Higher speeds tend to reduce Ra and Rz values, as demonstrated in Test 16 at 40 m/min, where Ra is 0.231 μm and Rz is 1.519 μm. However, the effect of cutting speed on production time varies depending on the feed rate, with high speeds associated with reduced production times in some cases, such as Test 45, where production time is 813 s at 40 m/min and 0.020 mm/rev.

Cooling methods are critical in influencing the outcomes of grinding processes, particularly in enhancing surface quality. In this study, air cooling consistently yields low roughness, with Ra between 0.265 μm and 0.324 μm and Rz between 1.665 μm and 2.357 μm, as observed in Tests 17–20. These findings highlight air cooling's effectiveness in achieving smoother surfaces on W18CR4V steel. However, while the cooling method influences surface quality, it has no pronounced effect on production time.

The dataset, consisting of 48 experimental observations, was divided into three parts, as shown in Table 4, to ensure the effective development and evaluation of all the predictive models used in this study, including DNN, RSM, KNN, LM, DT, and SVM. To train the models, 70 % of the data was allocated, enabling them to learn the relationships between the input parameters and output variables. Another 20 % of the data was set aside for validation, which helped monitor the models' performance during training and fine-tune their parameters to prevent overfitting. The final 10 % was reserved for testing, providing an independent dataset to evaluate the models' accuracy and generalizability. This division was performed randomly using an algorithm implemented in MATLAB to ensure the subsets were representative of the overall dataset. This approach allowed for a fair and consistent comparison of the predictive performance across all modeling techniques.
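The random 70/20/10 partition (performed by the authors in MATLAB) can be sketched as follows; the function name and seed are illustrative, not from the paper.

```python
import random

def split_indices(n, fractions=(0.70, 0.20, 0.10), seed=42):
    """Randomly partition n sample indices into training, validation,
    and test subsets, mirroring the 70/20/10 split of the 48 runs."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # seeded for reproducibility
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])            # remainder goes to testing

train, val, test = split_indices(48)
```

With 48 observations the rounded split gives 34/10/4 indices; the subsets are disjoint by construction, so every run is used exactly once.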

3.1. Surface roughness and production time analysis

The ANOVA results in Table 5 for Ra, Rz, and machining time during the sharpening process of W18CR4V steel demonstrate that the models are highly significant, with P-values below 0.0001 and high F-values. This indicates that the factors included in the models collectively exert a significant influence on surface roughness variability and the optimization of machining time. Notably, factors with P-values greater than 0.05 are considered non-significant [[54], [55], [56]].

Table 5.

ANOVA Results for Ra, Rz, and machining time.

Source Sum of Squares df Mean Square Contribution (%) F-value p-value Remarks
Ra Model 2.75 11 0.2503 95.82 75.97 <0.0001 Significant
A-Cooling 0.8307 2 0.4154 28.94 126.07 <0.0001 Significant
B-Vc 0.2359 1 0.2359 8.21 71.59 <0.0001 Significant
C-f 0.4625 1 0.4625 16.12 140.39 <0.0001 Significant
AB 0.3180 2 0.1590 11.08 48.27 <0.0001 Significant
AC 0.0065 2 0.0032 0.22 0.9793 0.3853 Not significant
BC 0.0546 1 0.0546 1.90 16.59 0.0002 Significant
B2 0.8438 1 0.8438 29.40 256.10 <0.0001 Significant
C2 0.0013 1 0.0013 0.04 0.4016 0.5303 Not significant
Residual 0.1186 36 0.0033
Cor Total 2.87 47
Rz Model 93.68 11 8.52 94.75 59.13 <0.0001 Significant
A-Cooling 17.47 2 8.74 17.67 60.65 <0.0001 Significant
B-Vc 9.20 1 9.20 9.30 63.89 <0.0001 Significant
C-f 31.54 1 31.54 31.90 218.94 <0.0001 Significant
AB 8.31 2 4.15 8.40 28.83 <0.0001 Significant
AC 0.1256 2 0.0628 0.13 0.4358 0.6501 Not significant
BC 0.8511 1 0.8511 0.86 5.91 0.0202 Significant
B2 25.70 1 25.70 25.99 178.45 <0.0001 Significant
C2 0.4848 1 0.4848 0.49 3.37 0.0748 Not significant
Residual 5.19 36 0.1440
Cor Total 98.87 47
Time Model 1.071E+09 11 9.734E+07 94.53 56.16 <0.0001 Significant
A-Cooling 78617.04 2 39308.52 0.007 0.0227 0.9776 Not significant
B-Vc 3.721E+07 1 3.721E+07 3.28 21.47 <0.0001 Significant
C-f 7.545E+08 1 7.545E+08 66.59 435.29 <0.0001 Significant
AB 33249.92 2 16624.96 0.003 0.0096 0.9905 Not significant
AC 1.581E+05 2 79043.30 0.01 0.0456 0.9555 Not significant
BC 3.189E+07 1 3.189E+07 2.81 18.40 0.0001 Significant
B2 4.070E+05 1 4.070E+05 0.03 0.2348 0.6309 Not significant
C2 2.464E+08 1 2.464E+08 21.75 142.18 <0.0001 Significant
Residual 6.240E+07 36 1.733E+06
Cor Total 1.133E+09 47

The most significant main factor for surface roughness Ra is the cooling method (A), with a very high F-value (126.07), a P-value below 0.0001, and the largest percentage contribution among the main factors (28.94 %). It is followed by feed rate (C) and cutting speed (B). Significant interactions include AB (cooling method and cutting speed) and BC (cutting speed and feed rate), both with P-values below 0.001. The quadratic term B2 is particularly important, as evidenced by its high F-value (256.10). In contrast, the interaction AC (cooling method and feed rate) and the quadratic term C2 are not significant, suggesting they have minimal impact on Ra.

Regarding the surface roughness Rz, the feed rate (C) is the most significant factor, with the highest F-value (218.94) and the lowest P-value (<0.0001). This is closely followed by the type of cooling (A) and cutting speed (B), both of which are also highly significant. Notable interactions include those between cooling method and cutting speed (AB), as well as between cutting speed and feed rate (BC), with quadratic terms for B2 indicating non-linear relationships. In contrast, the interaction between cooling type and feed rate (AC) and the quadratic term for C2 are not significant, suggesting they have minimal impact on Rz.

The feed rate (C) is the most influential factor on production time, exhibiting the highest F-value (435.29) and the lowest P-value (<0.0001), indicating its significant impact. It is followed by cutting speed (B), which also plays a crucial role in determining production time. However, the type of cooling (A) is not significant, suggesting it has a minimal impact on production time. This highlights the importance of optimizing feed rate and cutting speed to enhance efficiency, while the cooling type may be less critical in this specific context.

The presented models for surface roughness (Ra), maximum roughness height (Rz), and production time under dry, air, and wet cooling methods are expressed as quadratic regression equations. These models use cutting speed (B) and feed rate (C) as key input variables, with interaction terms (BC) and squared terms (B2, C2) capturing the nonlinear effects and interactions between these factors. Each cooling method has a distinct impact on the grinding performance, reflected in the variation of the coefficients within the equations.

For example, under air cooling, Ra is influenced by cutting speed (B) and feed rate (C), with both linear and quadratic terms impacting the predicted values. Similarly, Rz and production time also depend on these variables, illustrating how changes in process parameters affect the overall grinding performance. These models provide a predictive framework for selecting optimal grinding conditions based on cooling methods, offering a practical tool for balancing surface quality and production efficiency.

Models for dry cooling method:

Ra = -4.63125 + 0.332459·B - 20.2094·C + 0.899778·BC - 0.00530333·B² + 145.833·C²
Rz = -25.9144 + 1.8515·B - 51.4956·C + 3.55089·BC - 0.0292717·B² + 2791.67·C²
Time = 32473.8 - 630.19·B - 2.69408e+06·C + 21736.2·BC + 3.68333·B² + 6.2941e+07·C²

Models for air cooling method:

Ra = -5.01045 + 0.342499·B - 17.0594·C + 0.899778·BC - 0.00530333·B² + 145.833·C²
Rz = -29.044 + 1.94052·B - 49.7497·C + 3.55089·BC - 0.0292717·B² + 2791.67·C²
Time = 32608.4 - 641.72·B - 2.67448e+06·C + 21736.2·BC + 3.68333·B² + 6.2941e+07·C²

Models for wet cooling method:

Ra = -5.53247 + 0.367119·B - 16.1844·C + 0.899778·BC - 0.00530333·B² + 145.833·C²
Rz = -30.5131 + 2.03376·B - 66.7247·C + 3.55089·BC - 0.0292717·B² + 2791.67·C²
Time = 32394.1 - 636.165·B - 2.67786e+06·C + 21736.2·BC + 3.68333·B² + 6.2941e+07·C²
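These quadratic models can be evaluated directly. The sketch below encodes the dry-cooling Ra model; the coefficient signs were inferred by checking the quadratic against the experimental runs in Table 4, since the rendered equations dropped their minus signs, and `ra_dry` is an illustrative helper name.

```python
def ra_dry(vc, f):
    """Quadratic RSM model for Ra under dry cooling (B = Vc, C = f).

    Coefficient signs are inferred by fitting-back against the
    experimental runs in Table 4; the web rendering of the paper
    lost the minus signs in the printed equations.
    """
    return (-4.63125 + 0.332459 * vc - 20.2094 * f
            + 0.899778 * vc * f - 0.00530333 * vc ** 2 + 145.833 * f ** 2)

# Run 16 (dry, Vc = 40 m/min, f = 0.002 mm/rev) measured Ra = 0.231 μm
print(round(ra_dry(40, 0.002), 3))   # prints 0.214
```

The predicted 0.214 μm against the measured 0.231 μm is consistent with the reported model R² of 0.9587.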

Table 6 complements these models by providing statistical metrics that assess the performance of the predictive models for Ra, Rz, and Time. The high R2 values, such as 0.9587 for Ra, indicate that the models explain most of the variance in the data, while the adjusted R2 values ensure that only relevant variables contribute to the model's accuracy. Predicted R2 values confirm the models' robustness when applied to new data. Adequate precision values, like 33.82 for Ra, further validate the reliability of these models, ensuring they are capable of optimizing grinding processes under varying conditions. This comprehensive analysis offers strong predictive power, enabling enhanced control over surface quality and production time.

Table 6.

Statistical metrics for model performance in predicting for the outputs Ra, Rz, and Time.

Output Std. Dev. Mean C.V. % R2 Adjusted R2 Predicted R2 Adeq. Precision
Ra 0.0574 0.6136 9.35 0.9587 0.9461 0.9270 33.8216
Rz 0.3795 3.91 9.72 0.9476 0.9315 0.9057 30.6440
Time 1316.54 4542.21 28.98 0.9449 0.9281 0.9061 22.9572

The 3D surface plots demonstrate the influence of cutting speed (Vc) and feed rate (f) on surface roughness (Ra), maximum roughness height (Rz), and production time (s) under three different cooling methods: dry, air, and wet, as presented in Table 7.

Table 7.

3D Surface plots showing the effects of cutting speed (Vc) and feed rate (f) on Ra, Rz, and production time under dry, air, and wet cooling methods.


For surface roughness (Ra), the plots show that increasing the feed rate generally leads to higher roughness values in all cooling methods, with wet cooling producing the highest Ra values. Air cooling provides the most balanced results, achieving smoother surfaces compared to dry and wet methods. In terms of maximum roughness height (Rz), a similar trend is observed, with higher feed rates increasing Rz. Wet cooling again results in the highest roughness heights, while air cooling maintains a better balance, keeping Rz relatively lower. As for production time, lower feed rates and cutting speeds result in significantly longer production times, particularly under wet cooling conditions. Dry and air cooling methods offer shorter production times, with air cooling achieving a reasonable trade-off between surface quality and efficiency. These findings highlight how different cooling methods impact grinding outcomes, allowing for better optimization of surface quality and production efficiency.

3.2. Comparative performance of machine learning models

The optimization of Deep Neural Network (DNN) architectures using Genetic Algorithms (GA) has led to the development of distinct models tailored for predicting surface roughness parameters (Ra, Rz) and production time (t). The performance of each optimized model was assessed based on these criteria, ensuring the networks are effectively suited for their respective predictive tasks. The results of the GA-driven optimization are summarized in Table 8, which offers a comprehensive overview of the optimal architectures.

Table 8.

Optimal parameters of DNN obtained with GA.

Parameter DNN Model HLayer number HLayer sizes Learning algorithm Act-Fct
Ra DNN-GA 1 9 trainbr elliotsig
Rz DNN-GA 6 7, 13, 6, 10, 12, 14 trainbr purelin, softmax, poslin, radbasn, satlins, tansig
Time DNN-GA 2 5, 2 trainbr elliotsig, radbas

The architecture of the Deep Neural Network (DNN) optimized using the Levenberg-Marquardt (LM) learning algorithm is presented in Table 9.

Table 9.

DNN architecture with LM learning algorithm.

Parameter DNN Model HLayer number HLayer sizes Learning algorithm Act-Fct
Ra LM 3 4, 9 trainlm radbas, radbas
Rz LM 2 5, 10 trainlm elliotsig, elliotsig
Time LM 3 8, 3 trainlm radbas, elliotsig

Fig. 5 presents the regression trees developed for predicting three key outputs: surface roughness parameters Ra, Rz, and production time. Each regression tree provides a visual representation of the decision-making process involved in predicting these outputs based on various input features (X1 = cooling method, X2 = cutting speed and X3 = feed rate). The hierarchical structure of the trees illustrates the sequential splitting of data, where each node represents a decision based on input variables.

Fig. 5.

Fig. 5

Regression Tree Models for: (a) Ra, (b) Rz and (c) Production Time.

In the KNN model, the number of neighbors k was defined as 5, utilizing the Euclidean distance metric to measure proximity. Uniform weighting was applied, ensuring equal contribution from all neighbors in the prediction process. For the SVM model, the Radial Basis Function (RBF) kernel was employed due to its effectiveness in capturing non-linear relationships. The kernel coefficient γ was assigned a value of 0.1.
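The stated KNN and SVM settings can be expressed in a few lines of NumPy. This is an illustrative re-implementation, not the study's code, and `knn_predict` and `rbf_kernel` are hypothetical helper names.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Uniform-weight k-nearest-neighbours regression with the
    Euclidean metric, matching the reported KNN settings (k = 5)."""
    dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dist)[:k]
    return y_train[nearest].mean()               # uniform weights: plain average

def rbf_kernel(x, z, gamma=0.1):
    """RBF kernel used by the SVM model, K(x, z) = exp(-gamma*||x - z||^2),
    with the reported kernel coefficient gamma = 0.1."""
    return np.exp(-gamma * np.linalg.norm(x - z) ** 2)
```

Because the three inputs (coded cooling method, Vc, f) span very different scales, a distance-based predictor like this is normally applied to standardized features; the RBF kernel likewise measures squared Euclidean distance, so γ controls how quickly similarity decays between parameter combinations.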

3.2.1. The linear relationship (LR) analysis

Fig. 6 presents the linear relationship (LR) between the actual and predicted outputs, illustrating how the models performed compared to the experimental data. The blue dashed lines mark the ±5 % error margin, highlighting the accuracy of predictions within this acceptable range. Ideally, in a linear relationship, data points should closely align with the reference line (y = x), indicating a perfect correlation between the observed and predicted values.

Fig. 6.

Fig. 6

Relationship between measured outputs and predicted values using proposed optimization models with ±5 % error margin for (Ra, Rz, and production time).

Among the models, DNN-GA provided the most accurate predictions for surface roughness (Ra), with nearly all data points falling within the ±5 % margin. Its deep learning architecture and genetic-algorithm optimization allowed it to accurately capture the relationships between cutting speed, lubrication type, and feed rate. In contrast, KNN struggled with the data, leaving many points outside the error margin, especially for extreme parameter combinations. LM, though better than KNN, lacked the flexibility to handle the more complex relationships, occasionally deviating beyond the error margin. Decision Trees (DT) were prone to overfitting, leading to weak generalization for Ra, while SVM performed the worst, failing to capture the trends and producing the most deviations beyond the error margin.

For (Rz), DNN-GA again showed the best results, with most predictions falling within the ±5 % margin, demonstrating its ability to model the linear relationship between input parameters and Rz. The genetic algorithm's optimization was crucial in maintaining high reliability across varying conditions. KNN, on the other hand, had significant limitations in handling linearity in Rz, leading to numerous outliers beyond the error margin. LM performed better but still faced issues with underfitting or overfitting, which affected its predictive accuracy for Rz. DT showed poor results due to overfitting, while SVM produced the worst predictions, with many values far outside the acceptable error margin, failing to model linear relationships effectively.

For production time, DNN-GA once again outperformed the other models, with most predictions within the ±5 % error margin. It effectively captured the linear relationships between lubrication, cutting speed, and feed rate, leading to highly accurate predictions. KNN, however, struggled significantly with this task, resulting in many points outside the error bounds. LM offered reasonable accuracy for mid-range values but failed to generalize effectively for extreme values, leading to deviations. DT continued to overfit, making it unsuitable for production time predictions. SVM, meanwhile, performed the worst overall, with predictions far outside the error margin, failing to model linear relationships accurately.

3.2.2. Statistical Indicator (SI) graphs

Fig. 7 presents the Statistical Indicator (SI) values for various models across the training, testing, and validation phases, as well as for the combined dataset ("All"). The SI classification is as follows: SI < 0.1 indicates excellent performance, SI < 0.2 is considered good, SI < 0.3 is fair, and SI > 0.3 reflects poor performance. By comparing the SI values, the figure helps assess how well each model predicts the outputs across different datasets, revealing the strengths and weaknesses of each approach.
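The text gives only the SI thresholds, not the formula. SI is commonly computed as the RMSE normalized by the mean of the observations (the scatter index); under that assumption, the banding used in Fig. 7 can be reproduced as follows. Both function names are illustrative.

```python
import numpy as np

def scatter_index(y_true, y_pred):
    """Scatter index: RMSE normalized by the mean observation.
    (Assumed definition; the paper states only the SI thresholds.)"""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rmse / y_true.mean()

def rate(si):
    """Map an SI value onto the performance bands used in Fig. 7."""
    bands = [(0.1, "excellent"), (0.2, "good"), (0.3, "fair")]
    return next((label for limit, label in bands if si < limit), "poor")
```

Normalizing by the mean is what makes SI comparable across outputs with very different magnitudes, such as Ra in micrometres and production time in seconds.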

Fig. 7.

Fig. 7

Statistical indicator (SI) values across training, testing, validation, and all for Ra, Rz, and production time.

The DNN-GA model consistently achieves SI values below 0.1 for surface roughness (Ra), demonstrating excellent performance across all phases. This shows that DNN-GA maintains high accuracy and generalizes well across training, testing, and validation datasets. In contrast, KNN performs well only during training, but shows a significant decline in testing, validation, and the combined dataset, with SI values exceeding 0.3, indicating poor generalization. This suggests KNN overfits the training data and struggles to adapt to unseen data. LM performs better, with SI values between 0.1 and 0.2, indicating good overall performance, though it does not match the consistency of DNN-GA. Decision Trees (DT) exhibit poor generalization, with SI values consistently above 0.3, indicating a strong tendency to overfit.

For maximum roughness height (Rz), DNN-GA once again leads with SI values below 0.1 across all datasets, confirming its excellent generalization and accuracy. KNN, despite performing well during training, struggles in the testing and validation phases, with SI values exceeding 0.3, making its overall performance on the combined dataset poor. LM maintains good performance, with SI values between 0.1 and 0.2, though it still lags behind DNN-GA. Similar to its performance with Ra, DT struggles with overfitting, resulting in SI values above 0.3 in testing, validation, and the combined dataset, further confirming its inability to generalize effectively.

In terms of production time, DNN-GA consistently delivers excellent performance, with SI values below 0.1 in all phases, showcasing strong predictive ability across datasets. KNN again performs well during training but poorly during testing and validation; its SI values above 0.3 on the combined dataset indicate that it cannot reliably predict production time for unseen data. LM performs well, with SI values ranging from 0.1 to 0.2, indicating good performance but lacking the consistency of DNN-GA. DT continues to show the worst performance, with SI values above 0.3 across all datasets, highlighting significant overfitting issues.

3.2.3. Model performance evaluation through radar charts and metrics

Fig. 8 (Radar Charts), provides a comprehensive comparison of model performance across key metrics, including Mean Absolute Deviation (MAD), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), the R2 (coefficient of determination), and the Objective Function (OBJ) for three outputs: Ra, Rz, and production time. For R2, higher values closer to 1 indicate better predictive accuracy, as this metric represents the proportion of variance explained by the model. For error-based metrics such as MAD, RMSE, and MAPE, lower values are preferable, as they indicate smaller deviations between the predicted and observed values. Similarly, a lower Objective Function (OBJ) value reflects better optimization performance, demonstrating that the model effectively meets the desired objectives. These visualizations, combined with the numerical data, highlight the strengths and weaknesses of each model, with DNN-GA consistently emerging as the superior model compared to KNN, LM, DT, and SVM.
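The error metrics behind the radar charts can be computed as below; the OBJ composite is omitted because its exact formula is not given in the text, and `regression_metrics` is an illustrative helper name.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAD, RMSE, MAPE, and R2 as used in the radar charts.
    MAPE assumes no zero observations (true for Ra, Rz, and time)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_pred - y_true
    return {
        "MAD": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE": np.mean(np.abs(err / y_true)),
        "R2": 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2),
    }
```

Computed on the test subset, these four values reproduce the per-model comparison described below: lower MAD/RMSE/MAPE and R² closer to 1 pull a model toward the center of its radar chart.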

Fig. 8.

Fig. 8

Radar charts comparing model performance across key metrics (MAD, RMSE, MAPE, R2, and OBJ) for Ra, Rz, and production time.

DNN-GA demonstrates the best performance across all metrics and outputs. For Ra, it achieves an R2 of 0.99984, with the lowest RMSE of 0.00435 and MAPE of 0.00373, indicating near-perfect accuracy. The radar charts corroborate this, as DNN-GA remains close to the center, reflecting balanced and minimal error rates across all phases. For both Rz and production time, DNN-GA maintains its high performance, with R2 values of 0.99808 and 0.99999, respectively, solidifying its clear dominance in both accuracy and generalization over the other models.

While KNN performs well in the training phase, it struggles with generalization, as evidenced by its increased error rates in testing and validation. For Ra, KNN achieves an R2 of 0.941, but its RMSE of 0.08322 and MAPE of 0.08522 reflect its poor ability to generalize. The radar charts further illustrate KNN's wider spread, particularly in production time, where its RMSE balloons to 2042.89759, confirming that KNN overfits the training data and fails to maintain accuracy on unseen data.

LM performs moderately well, offering better generalization than KNN, but still falling short of DNN-GA's level of accuracy. For Ra, LM achieves an R2 of 0.97828 with a MAPE of 0.07212. While LM's performance is more consistent than KNN's, the radar charts highlight its larger deviations compared to DNN-GA, particularly for Rz and production time, where its error metrics are higher. These shortcomings limit LM's overall effectiveness despite showing a better balance across phases than KNN.

The weakest performers are DT and SVM. DT shows considerable overfitting, achieving decent results during training but poor generalization, especially for production time, where its RMSE reaches 1507.06105. SVM performs the worst across the board, with the lowest R2 values and the highest error metrics for all outputs, including an RMSE of 5370.85464 for production time. The radar charts further confirm these results, showing that both DT and SVM have the widest spread, making them unsuitable for accurate predictions.

The Taylor plots displayed in Fig. 9 offer a visual representation of the model performance, illustrating the degree of alignment between the predictive models (DNN-GA, KNN, LM, DT, and SVM) and the reference data for three outputs: Ra, Rz, and production time. The plots present significant statistical metrics such as mean, standard deviation, variance, Centered RMS Difference, and correlation coefficient. These measures are used to evaluate the accuracy and variability of each model's predictions compared to the observed data.

Fig. 9.

Fig. 9

Taylor plots comparing model performance for Ra, Rz, and production time.

In predicting Ra, DNN-GA shows exceptional performance with a correlation coefficient of 0.99984 and an exceptionally low Centered RMS Difference of 0.00433, indicating near-perfect accuracy. In comparison, KNN and LM deliver moderate accuracy, with correlation coefficients of 0.90826 and 0.941, respectively. KNN struggles more with higher errors, as reflected by its Centered RMS Difference of 0.10301. While DT achieves a high correlation coefficient of 0.97828, its increased standard deviation suggests overfitting. SVM performs the worst, with the lowest correlation coefficient of 0.65388 and the highest Centered RMS Difference of 0.18543, indicating significant predictive errors.

For Rz, DNN-GA maintains its leading position, with a correlation coefficient of 0.99808 and a Centered RMS Difference of 0.0889, showcasing its ability to model the data with minimal error. KNN and LM perform reasonably well, but their correlation coefficients of 0.90301 and 0.91298 and higher Centered RMS Differences indicate less accurate predictions. DT shows adequate performance with a correlation coefficient of 0.91564, but its higher standard deviation points to overfitting issues. SVM, once again, underperforms with the lowest correlation coefficient of 0.69947 and the highest Centered RMS Difference of 1.04701.

In terms of production time prediction, DNN-GA remains the best-performing model, with an almost perfect correlation coefficient of 0.99999 and a minimal Centered RMS Difference of 20.32882, reflecting excellent generalization. KNN shows poor generalization, with a Centered RMS Difference of 1981.48866 and a lower correlation coefficient of 0.91325. LM performs better than KNN, achieving a correlation coefficient of 0.99352, but it still falls behind DNN-GA in terms of accuracy. DT demonstrates higher variability, as indicated by its Centered RMS Difference of 1472.68042, while SVM performs the worst overall, with the highest Centered RMS Difference of 4833.73791 and a correlation coefficient of 0.83466.

3.3. Optimization results

3.3.1. Desirability function optimization

The optimization process utilized a desirability function to identify the optimal combination of process parameters, including cooling method, cutting speed (Vc), and feed rate (f), to achieve the best results for production time, surface roughness (Ra), and maximum roughness height (Rz). The objective of the desirability function is to minimize both Ra and Rz while simultaneously reducing production time. A desirability value of 1 represents the ideal solution, where all objectives are perfectly optimized.
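A smaller-is-better desirability of the Derringer type is the standard construction behind such scores. The sketch below assumes that form; the paper does not state its exact bounds or weights, so the arguments here are assumptions, and both function names are illustrative.

```python
import math

def d_minimize(y, low, high, weight=1.0):
    """Derringer-type 'smaller is better' desirability: 1 at or below
    the target value `low`, 0 at or above the worst value `high`."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - low)) ** weight

def overall_desirability(ds):
    """Composite desirability: geometric mean of individual scores,
    so any single unacceptable response (d = 0) rejects the solution."""
    prod = math.prod(ds)
    return prod ** (1 / len(ds)) if prod > 0 else 0.0
```

With the observed data ranges as bounds, each candidate parameter set maps to three individual scores (for Ra, Rz, and time) whose geometric mean is the desirability value reported in Table 10.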

Table 10 presents the top 10 solutions identified through this optimization process. Each solution maintains consistent parameter settings, including air cooling, a cutting speed of 25 m/min, and a feed rate of 0.014 mm/rev, with the exception of the tenth solution, where the feed rate slightly increases to 0.015 mm/rev. All solutions achieve a desirability score of 0.912, suggesting a well-balanced outcome between minimizing surface roughness, reducing roughness height, and keeping production time reasonable. The Ra values are tightly clustered between 0.341 and 0.348 μm, while Rz ranges from 2.250 to 2.335 μm, indicating minimal variation in surface quality. The production time varies more significantly, ranging from 1181.298 s to 1426.233 s, reflecting a balance between surface quality and efficiency.

Table 10.

Top 10 solutions identified.

Number Cooling method Vc (m/min) f (mm/rev) Ra (μm) Rz (μm) Time (s) Desirability
1 2 25.000 0.014 0.344 2.285 1315.416 0.912
2 2 25.000 0.014 0.344 2.286 1314.606 0.912
3 2 25.000 0.014 0.344 2.293 1292.090 0.912
4 2 25.000 0.014 0.343 2.277 1339.119 0.912
5 2 25.000 0.014 0.345 2.301 1269.651 0.912
6 2 25.000 0.014 0.342 2.264 1380.572 0.912
7 2 25.000 0.014 0.346 2.316 1229.026 0.912
8 2 25.000 0.014 0.341 2.250 1426.233 0.912
9 2 25.000 0.014 0.347 2.325 1206.019 0.912
10 2 25.000 0.015 0.348 2.335 1181.298 0.912

Fig. 10 provides a detailed visualization of the first solution, where Ra = 0.344 μm, Rz = 2.285 μm, and production time = 1315.42 s. These results reflect an optimal compromise between surface quality and processing time, achieving a high desirability score of 0.912. The plot shows that the ideal solution for Ra and Rz is achieved at a low cutting speed of 25 m/min and a moderate feed rate of 0.014 mm/rev, further confirming that the parameter combinations listed in Table 4 are well optimized for surface finish and efficiency.

Fig. 10.

Fig. 10

Optimal solution for Ra, Rz, and production time using the desirability function (DF = 0.912).

The contour plots in Fig. 11 highlight the optimization results under three different cooling methods: dry, air, and wet. The dry cooling scenario exhibits greater variability in both Ra and Rz, leading to less optimal outcomes compared to air and wet cooling methods. Air cooling achieves the most balanced results, consistently maintaining lower values for surface roughness and roughness height across a range of cutting speeds and feed rates. Meanwhile, wet cooling yields the lowest values for Ra and Rz, indicating superior surface quality. However, wet cooling involves a slight trade-off in production time. Ultimately, air cooling emerges as the most versatile option, providing optimal and consistent results across all outputs, which aligns with the top solutions identified in Table 10, where air cooling was consistently used.

Fig. 11.

Fig. 11

Contour plots for Desirability solution for Ra, Rz, and production time under Dry, Air, and Wet Cooling Methods.

3.3.2. Multi-Objective Grey Wolf Optimization (MOGWO)

The Multi-Objective Grey Wolf Optimization (MOGWO) technique was employed to further optimize the process parameters, offering a sophisticated approach to balancing conflicting objectives such as Ra, Rz, and production time. This nature-inspired algorithm mimics the leadership hierarchy and hunting behavior of grey wolves, making it particularly effective for solving multi-objective optimization problems. The MOGWO algorithm provides a set of solutions known as the Pareto front, highlighting the trade-offs between different objectives without giving priority to any single goal. Unlike the desirability function, which converges to a single optimal solution by weighting the objectives, MOGWO generates multiple solutions, each offering a different balance between the objectives. This allows decision-makers to select the most suitable solution depending on whether they prioritize surface quality, production efficiency, or a compromise between the two. The solutions identified through MOGWO represent various combinations that minimize Ra (ranging from 0.2 to 0.8 μm), reduce Rz (ranging from 0 to 6 μm), and shorten production time (ranging from 1000 to 16000 s), giving manufacturers more flexibility in their choices.
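Extracting such a Pareto front from a set of candidate (Ra, Rz, time) solutions reduces to a non-dominance filter; the brute-force sketch below (an illustrative helper, not the authors' implementation) keeps exactly the solutions no other candidate improves on in every objective.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of candidate solutions, where
    each row is (Ra, Rz, time) and all three objectives are minimized."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is no worse everywhere and better somewhere
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]
```

The surviving rows are the trade-off set plotted in Fig. 12: no member can reduce one objective without worsening another.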

Fig. 12a shows the relationship between Ra and Rz. The plot reveals that when Ra is reduced to around 0.3 μm, Rz also drops to approximately 1.5 μm, indicating that improving surface roughness positively impacts the overall surface texture. However, as Ra increases beyond 0.7 μm, Rz sharply rises beyond 4 μm, suggesting that optimizing both parameters becomes more difficult at higher roughness levels. The most favorable solutions cluster in the lower-left corner, where both Ra and Rz are minimized, leading to optimal surface quality.

Fig. 12.

Fig. 12

Pareto Front Analysis for: a) Rz vs Ra, b) Production Time vs Ra, c) Rz vs Production Time, and d) 3D Pareto Front for Ra, Rz, and Production Time.

Fig. 12b explores the relationship between Ra and production time. The plot shows an inverse relationship between these two objectives: achieving lower Ra values, such as 0.3 μm, requires much longer production times, exceeding 15,000 s. Conversely, as Ra increases slightly to around 0.6 μm, the production time drops to about 2000 s, illustrating the trade-off between surface quality and production efficiency. Small sacrifices in surface roughness can therefore yield substantial gains in production speed.

Fig. 12c illustrates the trade-off between Rz and production time. As production time decreases, Rz tends to rise, with shorter production times (below 2000 s) correlating with higher Rz values (above 4 μm). Beyond a production time of around 10,000 s, however, the curve flattens and Rz stabilizes around 1.5 μm. This suggests that past a certain point, further increases in production time do not significantly improve Rz, allowing a stable balance between surface texture and efficiency.

In Fig. 12d, the 3D plot shows the interaction between Ra, Rz, and production time, helping to identify the best trade-off solutions. The most balanced solution lies in the region where Ra is minimized to approximately 0.3 μm, Rz is reduced to around 1.5 μm, and the production time is kept between 2000 and 3000 s. This solution offers a strong compromise between surface quality and production efficiency: it provides a smooth surface finish while keeping the production time relatively short, making it a practical choice for manufacturers seeking to optimize both quality and productivity without significantly increasing production costs.

The MOGWO technique offers greater flexibility by generating multiple solutions along the Pareto front, allowing decision-makers to choose among different trade-offs for surface roughness (Ra), maximum roughness height (Rz), and production time. In contrast, the desirability function provides a single optimized solution based on a combined score, balancing the objectives but offering less flexibility. MOGWO allows Ra to range from 0.2 to 0.8 μm and Rz from 0 to 6 μm, with production times from 1000 to 16,000 s, whereas the desirability function produces narrower values for Ra (around 0.341 μm) and Rz (around 2.3 μm), with production times between 1181 and 1426 s, offering fewer trade-off options.
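Selecting a single operating point from the Pareto front is ultimately the decision-maker's call. One common heuristic, sketched below under the assumption that all objectives are minimized, normalizes each objective to [0, 1] over the front and picks the point closest to the ideal (all-zeros) corner; the front values are illustrative, not the study's data.

```python
def select_compromise(front):
    """Pick the Pareto point closest (squared Euclidean distance) to the ideal
    point after min-max normalization of each minimized objective."""
    n_obj = len(front[0])
    lo = [min(p[i] for p in front) for i in range(n_obj)]
    hi = [max(p[i] for p in front) for i in range(n_obj)]

    def norm_dist(p):
        # Guard against a zero range (all points equal on that objective).
        return sum(((p[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0)) ** 2
                   for i in range(n_obj))

    return min(front, key=norm_dist)

# Illustrative Pareto front of (Ra um, Rz um, production time s) points
front = [(0.3, 1.5, 15000), (0.6, 3.0, 2000), (0.8, 5.0, 1200)]
best = select_compromise(front)
# The middle point wins: it concedes a little on every objective instead of
# being extreme on one, which matches the "balanced" region noted in Fig. 12d.
```

Other selection rules (weighting objectives by cost, or thresholding Ra before minimizing time) are equally valid; the point is that MOGWO defers this choice instead of baking it into the optimization, as the desirability function does.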

4. Conclusion

This study explored the grinding performance of W18CR4V steel, focusing on optimizing surface roughness (Ra), maximum roughness height (Rz), and production time using advanced modeling and optimization techniques. The findings demonstrate that RSM effectively modeled the nonlinear relationships between grinding process parameters and performance outcomes, achieving high predictive accuracy (R2 > 0.90). The predicted ranges for surface roughness, maximum roughness height, and production time were 0.231–1.250 μm, 1.519–6.833 μm, and 813–15,870 s, respectively, underscoring the robustness of the model in capturing complex interactions.

The ANOVA revealed that the feed rate (f) was the most significant factor influencing grinding outcomes. High F-values of 140.39 for Ra and 218.94 for Rz confirmed the dominant role of feed rate in determining both surface quality and production efficiency, underscoring the need for precise control of this parameter to achieve optimal grinding conditions.

Among the cooling methods tested, air cooling proved to be the most effective in balancing surface quality and production efficiency. With Ra maintained between 0.265 μm and 0.344 μm, Rz ranging from 1.665 μm to 3.434 μm, and production times varying from 870 to 14,806 s, air cooling consistently outperformed dry and wet cooling methods. The results indicate that air cooling minimizes variability while maintaining efficient grinding conditions, making it the preferred choice for achieving consistent outcomes.

The DNN-GA model demonstrated exceptional predictive accuracy across all responses. For Ra, the model achieved a MAD of 0.99748, an RMSE of 0.00435, a MAPE of 0.00373, and an R2 of 0.99984. For Rz, it achieved a MAD of 1.0033, an RMSE of 0.09061, a MAPE of 0.01605, and an R2 of 0.99808. For production time, it achieved a MAD of 1.00117, an RMSE of 20.37818, a MAPE of 0.00193, and an R2 of 0.99999, demonstrating its robustness in accurately predicting and optimizing grinding conditions. These results highlight the reliability and precision of the DNN-GA model for machining optimization.
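The standard definitions of the error metrics quoted above can be sketched as follows. This is a minimal illustration with made-up measurement/prediction pairs, not the study's data; note also that the paper's MAD values cluster near 1, which suggests a ratio-style measure, so only RMSE, MAPE, and R2 are reproduced here with their conventional formulas.

```python
import math

def rmse(y, yhat):
    """Root mean squared error between measured y and predicted yhat."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error, expressed as a fraction
    (multiply by 100 for percent)."""
    return sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

y    = [0.30, 0.45, 0.60, 0.90]   # illustrative measured Ra values (um)
yhat = [0.31, 0.44, 0.62, 0.88]   # illustrative model predictions (um)
```

Reporting several metrics together, as the study does, guards against any single one being misleading: RMSE is scale-dependent, MAPE is scale-free but undefined at zero, and R2 measures explained variance rather than absolute error.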

The Desirability Function and MOGWO proved to be powerful complementary tools for balancing Ra, Rz, and production time. The desirability optimization achieved a score of 0.912, showcasing its efficiency in identifying near-optimal conditions. While the Desirability Function converged on specific targets, MOGWO offered a broader range of trade-off solutions, allowing decision-makers to balance surface quality and production efficiency effectively: Ra ranged from 0.231 to 1.250 μm, Rz from 1.519 to 6.833 μm, and production times from 813 to 15,870 s, demonstrating the flexibility of these techniques in addressing varied manufacturing objectives.

The application of advanced machine learning models and optimization strategies significantly improves process efficiency, reduces tool wear, and promotes sustainable manufacturing practices. This study contributes to the growing body of knowledge on machining optimization, providing valuable insights into reducing resource consumption and supporting greener manufacturing solutions.

CRediT authorship contribution statement

Sofiane Touati: Writing – original draft, Methodology, Investigation, Formal analysis, Data curation. Haithem Boumediri: Writing – original draft, Validation, Methodology, Investigation, Conceptualization. Yacine Karmi: Writing – original draft, Methodology, Formal analysis, Data curation, Conceptualization. Mourad Chitour: Validation, Investigation, Data curation, Conceptualization. Khaled Boumediri: Visualization, Methodology, Formal analysis, Conceptualization. Amina Zemmouri: Writing – review & editing, Validation, Methodology, Formal analysis, Conceptualization. Athmani Moussa: Visualization, Methodology, Conceptualization. Filipe Fernandes: Writing – review & editing, Validation, Resources, Funding acquisition, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

Filipe Fernandes acknowledges the UIDB/00285/2020 and LA/P/0112/2020 projects, sponsored by FEDER Funds through Portugal 2020 (PT2020), the Competitiveness and Internationalization Operational Program (COMPETE 2020), and national funds through the Portuguese Foundation for Science and Technology (FCT).

Contributor Information

Haithem Boumediri, Email: haithem.boumediri@umc.edu.dz.

Filipe Fernandes, Email: fid@isep.ipp.pt.

