ACS Omega. 2025 Sep 1;10(36):41707–41718. doi: 10.1021/acsomega.5c05550

A Synergistic Framework for Hardness Prediction and Design of High-Entropy Alloys Based on Deep Learning and Intelligent Optimization Algorithms

Kai Wang 1,*, Xuewen Zhou 1, Chuan Liu 1, Xiaohui Li 1
PMCID: PMC12444539  PMID: 40978448

Abstract

A synergistic framework combining deep learning with intelligent optimization algorithms has been proposed to predict the hardness and optimize the composition of Al–Ti–Co–Cr–Fe–Ni system high-entropy alloys (HEAs). This forward prediction model systematically refines and selects candidate features through correlation analysis, solid solution strengthening theory, and the NSGA-III algorithm. A hybrid deep learning model integrating transformer attention mechanisms with a multilayer perceptron has been developed, enabling high-accuracy prediction of HEA hardness (R² = 0.9813, RMSE = 10.23577). Furthermore, SHAP analysis was employed to investigate the causal links between features and hardness. An inverse design model based on the Egret Swarm Optimization Algorithm was applied to perform reverse optimization of the forward model, achieving optimal compositional combinations for the specified hardness targets. Validation through laser metal deposition experiments demonstrated that the hardness of the designed alloys matched well with the predicted results, with deviations below 10%. In conclusion, the overall framework for the prediction and design of HEA properties was systematically summarized.



1. Introduction

High Entropy Alloys (HEAs), also referred to as multiprincipal element alloys, are alloys characterized by the presence of multiple principal elements. Owing to their high mixing entropy, HEAs are prone to forming single-phase solid solutions. The distinctive high-entropy effect, lattice distortion, sluggish diffusion, and cocktail effects confer superior mechanical properties, corrosion resistance, and special functional features on HEAs. Nevertheless, the extensive diversity of constituent elements and the immense compositional space have rendered traditional experimental and trial-and-error methods insufficient for the demands of fast and efficient material development. Computational and simulation techniques, including ab initio (first-principles) calculations, CALPHAD, and molecular dynamics, offer partial improvements over traditional trial-and-error strategies. However, the complex formation mechanisms of HEAs make it difficult for these methods to flexibly construct models, resulting in low prediction accuracy and high computational costs. Consequently, the exploration of novel materials design strategies and methods to improve the efficiency and precision of HEA development has emerged as a forefront issue in materials science.

In recent years, advancements in machine learning have offered novel perspectives and tools for the study of materials science. Conventional machine learning techniques exhibit strong data analysis and pattern recognition abilities, effectively revealing the complex correlations between composition, structure, and properties in material systems. Bundela and colleagues utilized eight machine learning algorithms to predict the mechanical properties of high-entropy alloys, with the highest achieved R² reaching 0.91. Lee and co-workers applied artificial neural networks to predict the performance of Al–Co–Cr–Fe–Mn–Ni system high-entropy alloys, achieving an optimal hardness prediction of 650 HV. Nevertheless, conventional machine learning algorithms typically depend on manually crafted features, involving only simple combinations and divisions, thereby lacking automated high-dimensional feature abstraction and limiting their expressiveness for continuous nonlinear relationships, especially in complex nonlinear scenarios. As a major branch of machine learning, deep learning stands out with its superior nonlinear mapping and feature extraction abilities, enabling it to more effectively capture the underlying deep patterns in complex multicomponent materials like high-entropy alloys. Bakr and coauthors employed an ANN model to predict the hardness of high-entropy alloys, resulting in an R² of 0.88. Nazir and his team utilized a deep neural network (DNN) model to predict phase structures and mechanical properties across various high-entropy alloys, achieving an R² of 0.94 and successfully addressing the complexity of multicomponent systems. Despite the notable achievements of traditional machine learning and early deep learning applications, these approaches still exhibit considerable limitations.
These approaches are generally restricted to simple predictions of individual mechanical properties, with the lack of targeted feature selection resulting in models that are neither flexible nor universally applicable. In addition, the sparse and limited nature of HEA data sets constrains current research to specific alloy systems, impeding effective generalization and expansion across various systems and properties. Thus, it is imperative to establish a more systematic, efficient, and interpretable prediction and design framework, enabling the integrated utilization of multidimensional features and wider applicability across alloy systems, to further enhance the precision and practicality of high-entropy alloy performance prediction and design.

Despite the progress made in predicting high-entropy alloy performance, there are still several key challenges that urgently need to be resolved. In this study, we target the limitations of traditional trial-and-error approaches, which are time-consuming, laborious, and incapable of enabling performance-driven inverse compositional optimization for high-entropy alloys. Furthermore, conventional machine learning techniques suffer from random feature selection and inadequate deep nonlinear mapping abilities, limiting their accuracy and generalizability in predicting alloy properties. Initially, we collected and preprocessed the hardness data of 198 Al–Ti–Co–Cr–Fe–Ni system high-entropy alloy samples. Subsequently, an optimized set of candidate feature parameters was developed through feature correlation analysis, solid solution strengthening theory, and the NSGA-III algorithm. A highly accurate and efficient forward prediction model for alloy hardness was then built utilizing deep learning algorithms. Using the SHAP method, we analyzed the contribution and interaction mechanisms of each feature parameter toward alloy hardness. Subsequently, inverse design algorithms combined with intelligent optimization techniques were utilized to propose alloy composition strategies corresponding to target hardness levels. The validity of these design strategies was confirmed through experimental verification. In conclusion, a synergistic prediction and design framework integrating deep learning and optimization algorithms for HEAs was established to facilitate their rapid development. This work enhances both the accuracy and efficiency of performance prediction for HEAs, and offers a generalizable design methodology and theoretical foundation for multicomponent alloy systems, thereby greatly promoting the systematization and intelligent advancement of high-entropy alloy design and development.

2. Methods

2.1. Source Data Set and Candidate Feature Data Set

The size and quality of the data set are essential for the success of data-driven deep learning prediction approaches. Drawing on data collected from published literature, this study employed a source data set for HEAs, comprising the atomic molar ratios of Al, Ti, Co, Cr, Fe, and Ni elements as well as hardness values. Hardness values in the data set were measured under as-cast conditions and within relatively stable alloy phases. In this study, the data set was refined by eliminating duplicate data entries. As for the outliers present in the data set, based on research experience, these were considered to reflect true differences stemming from the intrinsic properties of HEAs rather than measurement errors. In the end, the data set comprised hardness values from 198 samples of Al–Ti–Co–Cr–Fe–Ni system high-entropy alloys. Figure 1a presents the distribution of hardness measurements in the collected experimental data set, and Figure 1b depicts the distribution of atomic molar ratios for each element.

Figure 1. (a) Hardness value distribution. (b) Atomic molar ratio distribution.

Beyond atomic molar ratios, the hardness of HEAs is heavily influenced by the conventional solid solution strengthening effect (SSS), typically related to mismatches in atomic size and modulus. Given that several material feature parameters, including atomic size mismatch (δr), atomic stacking mismatch factor (γ), Young's modulus (E), shear modulus (G), shear modulus difference (δG), lattice distortion energy (μ), Peierls–Nabarro factor (F), and the energy term (A) in strengthening models, are closely related to atomic size and modulus, they were initially included in the candidate material feature set. In Wang et al.'s research, the sixth power of the work function (w⁶) showed a linear correlation with the yield strength of alloys, leading to its inclusion in the material feature set. Moreover, the phases of high-entropy alloys are strongly correlated with their hardness. Guo and colleagues distinguished various phases using empirical parameters γ, Ω, and Λ based on the classic Hume–Rothery rules, and analyzed how parameters such as mixing enthalpy (ΔHmix), mixing entropy (ΔSmix), Gibbs free energy (ΔGmix), average melting point (Tm), electronegativity difference (Δχ), and valence electron concentration (VEC) affect phase formation in high-entropy alloys. These parameters were deemed significant and subsequently added to the candidate material feature set. The proposed HEAs candidate feature set consists of 17 material features. These features, obtained through approximate formulas, have been widely employed in predicting the properties of HEAs.

2.2. Benchmark Deep Learning Algorithms and Validation Metrics

It is necessary to build a forward prediction model that utilizes the 17 candidate feature parameters to predict the hardness of HEAs. Once the forward prediction model is built, an inverse design model targeting the atomic molar ratios of HEAs must also be developed. Selecting the appropriate algorithm is vital for ensuring that the DL model can predict material properties both accurately and efficiently. Because HEA hardness prediction involves intricate nonlinear interactions between elemental compositions and processing parameters, traditional machine learning methods often fail to effectively capture high-order feature interactions; thus, deep learning algorithms are introduced so that the predictive model can autonomously learn the correlation weights among input features, allowing it to uncover synergistic effects among feature parameters that influence hardness. A model handling multidimensional inputs demands robust inference ability and efficient iteration speed; thus, it is crucial to select not only a high-performing algorithm for constructing the model but also an appropriate optimization algorithm to tune the hyperparameters of the final model.

For the inverse design model targeting the atomic molar ratios of HEAs, the first step is to utilize the previously established forward prediction model to build the full HV-Constitution search space. Subsequently, the Egret Swarm Optimization Algorithm is used to locate the HEAs with the desired hardness and their associated atomic molar ratios within the HV-Constitution search space. In view of the above, the following section introduces several traditional machine learning and deep learning algorithms.

Random Forest (RF) and XGBoost are both ensemble learning (EL) algorithms based on decision trees and are widely used traditional machine learning methods for high-entropy alloy hardness prediction (RF relies on Bagging, and XGBoost on Boosting). As EL algorithms aggregate the outputs of multiple base learners, they typically achieve better prediction accuracy than individual learners. However, RF and XGBoost usually depend on manually crafted features or simple recombination and partitioning of existing features, and lack the ability for automated high-dimensional abstract feature extraction. Consequently, their capacity to represent continuously varying nonlinear relationships is limited, and they may perform worse than deep learning algorithms in complex nonlinear mapping tasks. Artificial Neural Networks (ANN) are endowed with strong nonlinear mapping capabilities through their backpropagation and activation function mechanisms. Both the Multilayer Perceptron (MLP) and the recently introduced Kernelized Attention Network (KAN) are neural network-based deep learning models, designed to fit complex functions and data mappings via nonlinear transformations, allowing them to capture intricate nonlinear relationships. They are particularly well-suited for capturing the complex relationships between composition, features, and properties in high-entropy alloys.

Nevertheless, deep learning algorithms were initially not widely applied due to their nature as black-box models, lacking transparency and interpretability. This work applies the SHAP (Shapley Additive Explanations) method to investigate the influence of individual structural features on HV within the optimal prediction model. SHAP, a game theory-based method, represents a novel approach for post hoc interpretation of machine learning models. The essence of SHAP is to compute the marginal contribution (SHAP value) of each feature to the model's output, enabling interpretation of the black-box model at both global and local levels. It enables tracking of substantial useful information, thereby addressing the long-standing issue of poor interpretability in traditional deep learning algorithms, and further uncovering hidden relationships between material features and the data set captured by the model. This study employs Root Mean Square Error (RMSE) and the coefficient of determination (R²) as the evaluation criteria.

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$  (1)

$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$  (2)
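As a minimal sketch, the two metrics in eqs 1 and 2 can be computed directly with NumPy; the hardness arrays below are hypothetical, for illustration only:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Eq 1: root-mean-square error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Eq 2: coefficient of determination
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical hardness values (HV), for illustration only
y_true = np.array([450.0, 520.0, 610.0, 380.0])
y_pred = np.array([460.0, 515.0, 600.0, 395.0])
print(round(rmse(y_true, y_pred), 3), round(r2(y_true, y_pred), 4))
```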

2.3. Reverse Engineering Model

The Egret Swarm Optimization Algorithm (ESOA) belongs to the class of intelligent optimization algorithms (IOA). Inspired by the foraging behavior of egrets in wetland ecosystems, ESOA is known for its fast convergence speed and high optimization accuracy. Within the ESOA framework, each egret individual $X_i$ corresponds to a specific combination of atomic molar ratios and hardness, and the optimization proceeds through iterative global and local searches that guide the solution toward the optimum.

During the initial phase of optimization, the egret individuals conduct a wide-ranging search throughout the parameter space to identify potential high-quality solutions. The position update equation is given as follows:

$X_i^{t+1} = X_i^{t} + r_1\left(X_{\mathrm{best}}^{t} - X_i^{t}\right) + r_2\left(X_{\mathrm{rand}}^{t} - X_i^{t}\right)$  (3)

where $X_i^{t}$ denotes the parameter value of the i-th egret at the t-th generation, $X_{\mathrm{best}}^{t}$ indicates the best-performing individual within the current population, $X_{\mathrm{rand}}^{t}$ corresponds to a randomly chosen individual from the population, and $r_1$ and $r_2$ are stochastic weight factors drawn from a normal distribution. As the egret individuals approach the optimal solution, the ESOA algorithm transitions into a local refinement phase, conducting a fine-scale search within a narrowed region to enhance the solution accuracy:

$X_i^{t+1} = X_i^{t} + \alpha\left(X_{\mathrm{local}}^{t} - X_i^{t}\right)$  (4)

where $X_{\mathrm{local}}^{t}$ denotes the locally optimal individual and α is an adaptive learning rate that dynamically regulates the search step size.
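The two-phase search described by eqs 3 and 4 can be sketched as follows. This is a schematic reading of the update rules, not the authors' implementation: the sphere objective stands in for the (negative) predicted hardness, the phase-switch point and the decaying learning rate α are assumptions, and the locally optimal individual is approximated by the current population best.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Illustrative sphere function; in the paper this role is played by
    # the (negative) hardness predicted by the forward model.
    return float(np.sum(x ** 2))

# Small "egret" population in a 6-D search space
pop = rng.uniform(-5, 5, size=(20, 6))
fitness = np.array([objective(p) for p in pop])
init_best = fitness.min()

for t in range(100):
    best = pop[np.argmin(fitness)]
    for i in range(len(pop)):
        if t < 50:  # global exploration phase (eq 3)
            r1, r2 = rng.normal(size=2)
            rand = pop[rng.integers(len(pop))]
            cand = pop[i] + r1 * (best - pop[i]) + r2 * (rand - pop[i])
        else:       # local refinement phase (eq 4), X_local ~ current best
            alpha = 0.5 * (1.0 - t / 100.0)   # decaying step size (assumed)
            cand = pop[i] + alpha * (best - pop[i])
        f = objective(cand)
        if f < fitness[i]:                    # greedy acceptance
            pop[i], fitness[i] = cand, f

print(init_best, fitness.min())
```

With greedy acceptance, each individual's fitness is non-increasing, so the population best can only improve over the run.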

The composition space of Al_xTi_yCo_zCr_uFe_vNi_w high-entropy alloys maps onto an extensive HV-Constitution space, encompassing the 198 samples from the source data set. The atomic molar fractions x, y, z, u, v, and w are varied with a step size of 1 atomic percent (at. %); given the exploratory nature of HEA design, a finer resolution is unnecessary. Moreover, owing to the large size of the search space, it is necessary to combine the elemental content ranges of the HEAs with the data set distributions to set constraints and generate new atomic molar ratio combinations. These newly generated atomic molar ratios are then fed into the final prediction model to obtain the corresponding predicted hardness values. The HV-Constitution search space is thus constructed by pairing the unknown atomic molar ratios with their predicted hardness values. Finally, the ESOA algorithm is applied to explore the HV-Constitution space and identify the target predicted hardness along with the corresponding optimal composition (atomic molar ratios).

$\max\ \mathrm{Hardness} = D(x, y, z, u, v, w)$  (5)

$\text{s.t.}\ \begin{cases} x + y + z + u + v + w = 100 \\ x + y \in [0, 60] \\ y/x \in \{0.4,\ 0.8,\ 1.2,\ 1.6,\ 2.0\} \\ 40 \le z + u + v + w \le 100\ \text{or}\ z, u, v, w = 0 \end{cases}$  (6)

Here, x, y, z, u, v, and w denote the molar fractions of Al, Ti, Co, Cr, Fe, and Ni, respectively. The mechanical strength and ductility of Al–Ti–Co–Cr–Fe–Ni alloys exhibit marked brittle-to-ductile transitions influenced by the addition levels of Al and Ti elements. To ensure that the final alloy products meet experimental standards, the combined molar ratio of Al and Ti is restricted to the range [0, 60]. Additionally, the synergy between Ti and Al plays a critical role in affecting the hardness. Accordingly, the Ti/Al ratio is constrained to discrete values of 0.4, 0.8, 1.2, 1.6, and 2.0. The boundary conditions for elements such as Co, Cr, Fe, and Ni are primarily determined according to the guidelines proposed by Yeh J. W.
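The constraint logic of eq 6 restricted to the Al–Ti pairs can be enumerated at 1 at. % resolution as follows. This is a sketch only; the helper `al_ti_pairs` and the enumeration strategy are illustrative, not the authors' code:

```python
RATIOS = (0.4, 0.8, 1.2, 1.6, 2.0)  # allowed Ti/Al ratios (eq 6)

def al_ti_pairs(max_sum=60):
    """Enumerate integer (Al, Ti) at. % pairs at 1 at. % steps satisfying
    x + y <= max_sum with y/x restricted to the discrete RATIOS."""
    pairs = []
    for x in range(1, max_sum + 1):            # Al content
        for r in RATIOS:
            y = x * r                          # Ti content from the ratio
            if abs(y - round(y)) < 1e-9 and x + round(y) <= max_sum:
                pairs.append((x, int(round(y))))
    return pairs

pairs = al_ti_pairs()
# Each feasible pair leaves 100 - (x + y) >= 40 at. % for Co/Cr/Fe/Ni,
# to be split subject to the remaining constraints.
print(len(pairs), pairs[:3])
```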

In Figure 2, the Pearson Correlation Coefficient (PCC) is computed to assess the relationship between individual elemental content and the overall hardness of HEAs. The formula for calculating the PCC is given as follows:

$\mathrm{PCC} = \frac{\sum_{i=1}^{n}\left(u_i - \bar{u}\right)\left(v_i - \bar{v}\right)}{\sqrt{\sum_{i=1}^{n}\left(u_i - \bar{u}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(v_i - \bar{v}\right)^2}}$  (7)

Figure 2. Pearson Correlation Coefficient (PCC) between hardness and individual elements.

In the formula, the numerator corresponds to the covariance between features u and v, and the denominator is the product of their respective standard deviations.
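Eq 7 is straightforward to verify against NumPy's built-in implementation; the Al-content and hardness arrays below are hypothetical, for illustration only:

```python
import numpy as np

def pcc(u, v):
    # Eq 7: covariance over the product of standard deviations
    du, dv = u - u.mean(), v - v.mean()
    return float(np.sum(du * dv) / np.sqrt(np.sum(du ** 2) * np.sum(dv ** 2)))

# Hypothetical Al content (at. %) vs hardness (HV), for illustration only
al = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
hv = np.array([150.0, 210.0, 320.0, 410.0, 480.0, 530.0])
print(round(pcc(al, hv), 4))
```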

The analysis indicates that the Al element plays a major role in enhancing the hardness of HEAs. Accordingly, Al was designated as the initial point in the optimization algorithm to monitor the optimization behavior of other alloying elements.

In addition to the dominant contributions of Al and Ti, the SHAP analysis in Figure 3 reveals that Cr and Co also play positive roles in enhancing hardness. Cr promotes BCC phase formation and contributes to lattice distortion when combined with Ti, thus reinforcing the matrix. Co enhances hardness through solid solution strengthening and by increasing the Peierls stress via short-range ordering. In contrast, Ni and Fe exhibit relatively lower SHAP values, consistent with their known function as FCC stabilizers, which enhances ductility but can suppress hardness. These observations are in agreement with both the PCC results (Figure 2) and reported experimental trends in CoCrFeNi-based HEAs fabricated via LMD.

Figure 3. SHAP analysis of six elements.

The compositions corresponding to the target predicted hardness were prepared using Laser Metal Deposition (LMD). The true hardness was then measured and compared to the predicted hardness, and the error was calculated accordingly.

3. Results and Discussion

3.1. Synergistic Framework for Prediction and Design

To achieve the prediction and integrated design of high-hardness HEAs, this study drew inspiration from comprehensive engineering evaluation systems and constructed a synergistic design framework integrating deep learning prediction models and intelligent optimization algorithms. The framework consists of two functional modules: a forward model for predicting hardness, and an inverse compositional optimization model for determining HEA atomic molar ratios.

The forward prediction model is based on a Transformer-MLP hybrid architecture and uses NSGA-III and SHAP methods to identify an optimal feature set for efficient hardness prediction. Based on the output results, R² values, and RMSE metrics, the Transformer-MLP model demonstrated superior performance among all models and was selected as the final prediction model.

Meanwhile, to achieve inverse design of HEAs with target hardness, the aforementioned prediction model was used as the basis, and ESOA was employed to search the HV-Constitution space for optimal HEA compositions. Figure 4 presents the schematic diagram of the framework. This framework is not limited to hardness; it can be extended to deep learning-based prediction of other properties such as alloy phase, tensile strength, and ultimate tensile strength, enabling large-scale intelligent screening of HEAs.

Figure 4. Schematic diagram of the deep learning-based prediction and design framework for HEAs.

3.2. Optimization of Candidate Feature Set

3.2.1. Relevance Analysis for Selecting Candidate Feature Sets

Because of data collinearity, highly correlated feature sets may adversely affect the predictive performance. To assess the presence of redundant features in the candidate set, the Pearson Correlation Coefficients (PCC) between features were computed, with results shown in Figure 5a. Figure 5b presents the feature ranking according to correlation with hardness as determined by PCC. Feature pairs with |PCC| exceeding 0.8 are regarded as highly correlated. As a result, the feature sets [δr, G], [G, Ω], and [VEC, E, G] shown in Figure 5a demonstrate strong correlations. In these cases, it is inappropriate to delete features merely by evaluating their individual importance, as this would overlook their synergistic contributions to the hardness of HEAs. Furthermore, PCC evaluates relationships based on the linear dependence between variables; when nonlinear relationships are present between features, PCC may inadequately capture the true correlations. The formation mechanisms of HEAs are inherently complex, involving significant nonlinear interactions, so relying solely on PCC for feature selection risks neglecting these nonlinearities and missing critical features. Moreover, PCC accounts only for pairwise feature interactions and fails to capture the higher-order complex relationships among multiple features.

Figure 5. (a) Heatmap of the Pearson Correlation Coefficients (PCC) between individual features. (b) Correlation ranking between features and hardness.

The preliminary candidate feature set [VEC, F, G, Δχ, Ω, E, γ] was selected based on the correlation analysis.
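The redundancy screening described above (flagging pairs with |PCC| > 0.8) amounts to a pairwise scan over the feature correlation matrix. The synthetic three-feature demo below, with placeholder feature names, only illustrates the mechanics:

```python
import numpy as np

def redundant_pairs(X, names, threshold=0.8):
    """Return feature pairs whose |PCC| exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)       # feature-feature PCC matrix
    n = len(names)
    return [(names[i], names[j], round(float(corr[i, j]), 3))
            for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]

# Synthetic demo: the third feature is nearly a linear copy of the first,
# while the second is independent noise
rng = np.random.default_rng(1)
f0 = rng.normal(size=200)
f1 = rng.normal(size=200)
f2 = 2.0 * f0 + 0.05 * rng.normal(size=200)
X = np.column_stack([f0, f1, f2])
flagged = redundant_pairs(X, ["delta_r", "Omega", "G"])
print(flagged)
```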

3.2.2. Optimization of Traditional Solid Solution Theory

To further refine the candidate feature set obtained from the PCC analysis, the solid solution strengthening theory explored by Ren W et al. was considered: the hardness enhancement in HEAs arises mainly from solid solution strengthening (SSS), grain boundary strengthening, precipitation strengthening, and phase transformation strengthening. SSS is responsible for enhancing the yield strength (Δσ) or Vickers hardness (HV), and Δσ and HV are positively related through the following equation:

$\Delta\sigma\ (\mathrm{MPa}) = \frac{9.81}{3}\,\mathrm{HV}$  (8)

At the microstructural level, the SSS in HEAs primarily results from lattice distortion caused by solute atoms and the generation of slip dislocations. In particular, lattice distortions due to atomic size mismatch and modulus mismatch among metal elements are key contributors to alloy strengthening. The classical Labusch model provides an in-depth explanation of how atomic size and modulus mismatches influence solid solution strengthening (SSS). A majority of conventional mathematical models for solid solution strengthening are extensions derived from the Labusch model. For instance, Thirathipviwat P et al. demonstrated a strong positive correlation between microhardness enhancement due to pronounced lattice distortion and the atomic size mismatch parameter (δr). Ma et al. reported that atomic size mismatch promotes the development of wavy dislocation structures, thus contributing substantially to the solid solution strengthening of HEAs. Toda-Caraballo et al. expanded the Labusch model from binary alloys to multicomponent dilute alloys through the Gypen model. They evaluated lattice distortion contributions from each element by calculating atomic spacing variations across different components, thereby quantifying their impact on solid solution strengthening, resulting in a more precise evaluation of the solid solution strengthening capacity of alloys. Thus, atomic size and modulus mismatch are crucial factors influencing the solid solution strengthening effect. The solid solution strengthening equation formulated by Toda-Caraballo is expressed as follows:

$\Delta\sigma_{\mathrm{SSS}} = \xi G Z \delta$  (9)

In the formula, $\Delta\sigma_{\mathrm{SSS}}$ represents the parameter used to quantify the extent of solid solution strengthening; ξ denotes the structural factor in the SSS model, assigned as 4 for BCC phases and 1 for FCC phases; Z represents the solid solution strengthening factor; and δ denotes the strengthening factor associated with atomic size mismatch.

Based on the aforementioned theory, the hardness of HEAs correlates with the mismatch in modulus, atomic radius, and electronegativity among the features selected through PCC, implying that these features are linked to the strengthening of HEA hardness. Building on this, the data selected in Section 3.2.1, including E and G, together with the bulk modulus (K), were used as the raw input data. The five mismatch-related features were calculated using the following equations:

$\delta d = \sqrt{\sum_{i=1}^{n} c_i \left(1 - \frac{d_i}{\bar{d}}\right)^2}$  (10)

$\Delta d = \sqrt{\sum_{i=1}^{n} c_i \left(\bar{d} - d_i\right)^2}$  (11)

$D.d = \sum_{i=1}^{n}\sum_{j=1,\, i \ne j}^{n} c_i c_j \left|d_i - d_j\right|$  (12)

$\eta.d = \sum_{i=1}^{n} c_i\, \frac{2\left(d_i - \bar{d}\right)/\left(d_i + \bar{d}\right)}{1 + 0.5\left|2\left(d_i - \bar{d}\right)/\left(d_i + \bar{d}\right)\right|}$  (13)

$M.d = \mathrm{Max}\!\left[c_i\left(1 - \frac{d_i}{\bar{d}}\right)^2\right] - \mathrm{Min}\!\left[c_i\left(1 - \frac{d_i}{\bar{d}}\right)^2\right]$  (14)

In this context, d represents parameters such as Young's modulus (E), shear modulus (G), bulk modulus (K), and electronegativity difference (Δχ); $d_i$ corresponds to the parameter value for each element in the HEAs; $c_i$ indicates the molar fraction of each element; and $\bar{d}$ denotes the composition-weighted average of d. Given that these parameters are linked to atomic size mismatch, the atomic radius is also expanded into size mismatch features using the same method as above. The recalculated features were combined with the feature set selected in Section 3.2.1, forming an expanded feature set containing 32 parameters, including [VEC, F, G, Δχ, Ω, E, γ, δd, Δd, D.d, η.d, M.d]. Feature selection was subsequently carried out using NSGA-III.
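The five mismatch descriptors of eqs 10–14 can be computed for any property vector d. The helper below follows one common reading of those definitions (the placement of the square roots is an assumption), and the shear-modulus values are illustrative rather than measured data:

```python
import numpy as np

def mismatch_features(c, d):
    """Mismatch descriptors of eqs 10-14 for molar fractions c and
    elemental property values d (one common reading of the formulas)."""
    c, d = np.asarray(c, float), np.asarray(d, float)
    dbar = np.sum(c * d)                                   # composition-weighted mean
    delta = np.sqrt(np.sum(c * (1.0 - d / dbar) ** 2))     # eq 10
    Delta = np.sqrt(np.sum(c * (dbar - d) ** 2))           # eq 11
    Dd = sum(c[i] * c[j] * abs(d[i] - d[j])                # eq 12
             for i in range(len(c)) for j in range(len(c)) if i != j)
    t = 2.0 * (d - dbar) / (d + dbar)
    eta = np.sum(c * t / (1.0 + 0.5 * np.abs(t)))          # eq 13
    terms = c * (1.0 - d / dbar) ** 2
    Md = terms.max() - terms.min()                         # eq 14
    return float(delta), float(Delta), float(Dd), float(eta), float(Md)

# Equiatomic five-element example with illustrative shear moduli (GPa);
# these numbers are hypothetical, not measured values.
c = [0.2] * 5
G = [26.0, 44.0, 75.0, 115.0, 82.0]
print([round(v, 4) for v in mismatch_features(c, G)])
```

A useful sanity check is that all five descriptors vanish when every element carries the same property value.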

3.2.3. Selection of Features in the NSGA-III Algorithm

NSGA-III is an evolutionary algorithm suitable for multiobjective optimization tasks, which can find trade-off solutions among conflicting objectives. In this study, the NSGA-III algorithm was implemented as a wrapper around the MLP model, using the MLP’s 10-fold cross-validation RMSE as one objective, and the number of selected features as the other objective. The goal of this method is to minimize feature dimensionality without sacrificing prediction accuracy, thus improving the model’s generalization and interpretability. In the implementation, each feature subset is encoded as a 32-bit binary vector, with each bit representing the selection status of a corresponding feature. At the initialization stage, a population of binary vectors is randomly generated to form the initial pool of feature combinations.

NSGA-III employs nondominated sorting to categorize individuals, prioritizing those that are nondominated across all objectives, thus forming the Pareto-optimal solutions. A Pareto-optimal solution is one that cannot be simultaneously surpassed by others in all objectives, and these collectively form the Pareto front representing optimal diversified trade-offs.

To promote diversity within the population, NSGA-III adopts a reference-point based guidance mechanism that directs selection operations toward achieving diversification. Crossover is performed by randomly selecting two individuals and exchanging portions of their binary genes (feature selections) to facilitate feature information recombination and propagation. Mutation is achieved by randomly flipping certain bits (adding or removing features), which introduces new candidate solutions while maintaining convergence efficiency and improving escape from local optima.

Ultimately, NSGA-III progressively refines the population with each generation, producing a set of feature subsets that optimally balance prediction error and feature quantity. These Pareto-optimal solutions offer multiple candidate models for further development and assist in understanding how different feature combinations influence model performance, thereby supporting practical feature selection strategies.
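The wrapper described above can be sketched by pairing a subset-evaluation function with a nondominated filter. For self-containment, an ordinary least-squares fit stands in for the paper's MLP 10-fold CV RMSE objective, and only the Pareto-filter step of NSGA-III is shown (no reference points, crossover, or mutation); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def evaluate(mask, X, y):
    """Two objectives for a feature subset: a proxy prediction error and the
    feature count. A least-squares fit replaces the MLP CV RMSE here."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return (np.inf, 0)
    A = np.column_stack([X[:, idx], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
    return (rmse, int(idx.size))

def nondominated(objs):
    """Indices of Pareto-optimal points when both objectives are minimized."""
    return [i for i, (a, b) in enumerate(objs)
            if not any((c <= a and d <= b) and (c < a or d < b)
                       for c, d in objs)]

# Synthetic data: the target depends on features 0 and 2 only
X = rng.normal(size=(100, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=100)

# Random binary masks, as in the NSGA-III population initialization
pop = rng.integers(0, 2, size=(40, 6))
objs = [evaluate(m, X, y) for m in pop]
front = nondominated(objs)
print(len(front))
```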

Figure 6a shows the SHAP analysis results of the feature selection performed by the NSGA-III algorithm. As depicted in Figure 6b, the NSGA-III algorithm converges to the optimal solution at the 20th generation, yielding [VEC, G, M.E] as the optimal feature subset. Compared with the 32-feature set initially assembled through correlation analysis and solid solution strengthening theory, the NSGA-III optimization reduces the DL model input to only three features, with G and M.E standing in for many of the mismatch-related features and dramatically reducing redundancy and model complexity. Figure 6c–e illustrates the fitting curves between the actual and predicted hardness values when using the feature sets selected by PCC, SSS, and NSGA-III as MLP inputs, showing that the final feature set [VEC, G, M.E] delivers the best fitting performance, achieving an RMSE of 47.26670 and an R² of 0.8513.

Figure 6. (a) SHAP analysis for the eight features selected by the NSGA-III algorithm; feature importance is ranked in descending order from top to bottom, with VEC identified as the most critical feature for predicting the hardness of HEAs. (b) Feature selection iteration process using the NSGA-III algorithm. (c) Fitting curve between actual and predicted hardness of HEAs using the seven MLP input features selected through PCC optimization in 10-fold cross-validation. (d) Fitting curve between actual and predicted hardness of HEAs using the 32 MLP input features optimized through SSS in 10-fold cross-validation. (e) Fitting curve between actual and predicted hardness of HEAs using the three MLP input features selected by the NSGA-III algorithm under 10-fold cross-validation.

The three ultimately selected candidate material features [VEC, G, M.E] were incorporated into the original data set comprising the molar ratios of six elements, resulting in a final nine-feature data set used as input for the deep learning model.

The selected feature subset is optimized based on the MLP model’s performance. Although these features exhibit strong physical interpretability and may benefit other models, their effectiveness for alternative algorithms has not been verified in this work and could be explored in future studies.

3.3. Construction of Deep Prediction Models

In order to select the most suitable benchmark algorithm and to compare the strengths of traditional machine learning algorithms and deep learning methods, this study utilized ensemble learning algorithms known for strong predictive performance but limited extrapolation ability, namely RF (Bagging-based) and XGBoost (Boosting-based [28]); a single-hidden-layer Artificial Neural Network (ANN) was also incorporated. Regarding deep learning approaches, a Multilayer Perceptron (MLP) and the newly introduced Kernelized Attention Network (KAN) were selected as benchmarks. MLP, as a fundamental and widely applied feedforward neural network, contains multiple hidden layers that enable it to capture deeper feature relationships and possess strong generalization capabilities, especially in high-dimensional and complex feature interaction scenarios. KAN integrates the nonlinear modeling capabilities of kernel methods with the flexibility of attention mechanisms by explicitly incorporating kernel functions into the model, thereby improving its ability to fit complex data patterns and enhancing extrapolation performance. Initially, the nine selected candidate material features served as input variables, and the HEA hardness values were used as outputs to train each of the aforementioned benchmark algorithms sequentially. To ensure that each benchmark algorithm's potential was maximized during training, the Desert Cat Swarm Optimization algorithm was employed to search for hyperparameters that minimize the model's RMSE.

The optimized hyperparameters were applied to the deep learning algorithms, and their RMSE and R² were evaluated via 10-fold cross-validation, with the results presented in Figure 7. As illustrated in Figure 7, the MLP model achieved the highest R² value along with the lowest RMSE, indicating that MLP exhibited the best fitting performance on this data set. Thus, MLP was chosen as the benchmark model for further optimization in the next stages.
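The benchmark comparison above can be sketched as follows. This is a minimal illustration, not the authors' code: the `benchmark` helper is a hypothetical name, and the hyperparameter search via Desert Cat Swarm Optimization is omitted.

```python
from sklearn.model_selection import KFold, cross_validate

def benchmark(models, X, y):
    """Score each candidate regressor by 10-fold cross-validated
    RMSE and R^2 on the feature matrix X and hardness vector y."""
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = {}
    for name, model in models.items():
        res = cross_validate(model, X, y, cv=cv,
                             scoring=("neg_root_mean_squared_error", "r2"))
        # sklearn reports negated RMSE, so flip the sign back
        scores[name] = (-res["test_neg_root_mean_squared_error"].mean(),
                        res["test_r2"].mean())
    return scores  # {name: (mean RMSE, mean R^2)}
```

Each model (RF, XGBoost, ANN, MLP, KAN) would be passed in with its optimized hyperparameters, and the model with the lowest mean RMSE and highest mean R² is retained.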

Figure 7. Fitting performance of five machine learning algorithms on the data set.

Previously, to preserve the physical interpretability of the model, no data preprocessing operations such as feature standardization or dimensionality reduction were applied to the feature set, which also limited the ability of the benchmark MLP to learn complex nonlinear relationships. To address this, a Transformer-based attention module was designed as a feature extractor and integrated with the Multilayer Perceptron (MLP) model, creating a hybrid architecture (HybridModel). The purpose of this design is to improve the model's capacity to learn complex nonlinear interactions, surpassing the performance of the baseline MLP-only regressor. Figure 8 presents scatter plots comparing predicted and actual hardness values for the five benchmark algorithms and the Transformer-MLP hybrid prediction model; the marginal histograms illustrate the distributions of actual and predicted hardness values.
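A minimal PyTorch sketch of such a hybrid architecture is shown below. The layer sizes, number of attention heads, token embedding, and mean pooling are illustrative assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Transformer attention as a feature extractor, feeding an MLP head."""
    def __init__(self, n_features=9, d_model=32, n_heads=4):
        super().__init__()
        # lift each scalar feature to a d_model-dimensional token
        self.embed = nn.Linear(1, d_model)
        enc = nn.TransformerEncoderLayer(d_model, n_heads,
                                         dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Sequential(            # MLP regressor on pooled tokens
            nn.Linear(d_model, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, x):                     # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))  # (batch, n_features, d_model)
        z = self.encoder(tokens).mean(dim=1)  # self-attention + mean pooling
        return self.head(z).squeeze(-1)       # predicted hardness, (batch,)
```

Treating each feature as a token lets the attention layers model pairwise feature interactions explicitly before the MLP head performs the regression.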

Figure 8. Predicted versus actual hardness fitting curves for the five benchmark algorithms and the Transformer-MLP hybrid prediction model, with marginal histograms illustrating the distributions of actual and predicted hardness.

3.4. Reverse Design and Experimental Verification

3.4.1. Reverse Design Algorithms and Principles

The forward prediction model is used to predict the hardness values of specific HEAs samples based on their elemental molar ratios. In practical scenarios, determining the elemental molar ratios for certain HEAs samples at specific hardness levels, especially high hardness, becomes essential. To fulfill this requirement, an inverse design model was developed to find the optimal elemental molar ratios for HEAs samples corresponding to the desired hardness values. To select an appropriate intelligent optimization algorithm (IOA) that ensures rapid and accurate convergence to the optimal target during the optimization process, trials were performed comparing ESOA and several of its improved variants (CL-ESOA, I-ESOA, and P-ESOA).
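Regardless of the variant, the optimizer minimizes the gap between the forward model's prediction and the target hardness. A minimal sketch of that objective, where `make_fitness` is a hypothetical wrapper and `predict_hardness` stands in for the trained Transformer-MLP model:

```python
import numpy as np

def make_fitness(predict_hardness, target_hv):
    """Build the objective the swarm optimizer minimizes: the absolute
    gap between the forward model's prediction and the target hardness."""
    def fitness(molar_ratios):
        x = np.asarray(molar_ratios, dtype=float)
        x = 100.0 * x / x.sum()          # enforce ratios summing to 100 at. %
        return abs(predict_hardness(x) - target_hv)
    return fitness
```

Setting the target to an upper bound above any attainable hardness turns the same objective into a maximization of predicted hardness over the composition space.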

CL-ESOA (Comprehensive Learning ESOA): This variant improves the global search capability of ESOA by introducing a comprehensive learning strategy, which strengthens information sharing among egret individuals. By allowing each individual to learn from the historical best positions of other individuals in the population, the algorithm effectively increases population diversity and avoids premature convergence.

I-ESOA (Improved ESOA): This version integrates multiple mechanisms to enhance exploration and convergence, including chaotic initialization based on logistic mapping, nonlinearly decreasing inertia weights, adaptive learning factors, and genetic-style mutation and crossover operators. These modifications aim to maintain diversity while accelerating convergence speed in complex optimization tasks.

P-ESOA (Predictive ESOA): This variant incorporates a position prediction mechanism that leverages the historical trajectory of each particle to estimate and guide its future search direction. This predictive feature enhances forward-looking capability and reduces inefficient exploration.

As illustrated in Figure 9, ESOA outperformed all of its variants, proving to be the best IOA among those tested: the hardness values optimized through ESOA converged most rapidly to the optimal point. Therefore, ESOA was selected to optimize the Transformer-MLP hybrid prediction model.

Figure 9. Search performance of ESOA and its variants for the optimal value in the HV-Constitution space.

All four algorithms were independently executed for 30 runs to account for stochastic variability. Figure 9 displays the convergence curve of the median-performing run among the 30, while a table in the Supporting Information reports the averaged best hardness value, standard deviation, number of iterations to convergence, and runtime. Although I-ESOA and CL-ESOA achieved comparable final hardness values, ESOA demonstrated superior convergence stability, iteration efficiency, and computational cost. Therefore, ESOA was selected as the optimization strategy for the reverse design module.

Table 1. Experimental and Predicted Hardness Values of the Five Specimens (columns Al–Ni give the Co–Cr–Fe–Ni–Ti–Al molar ratios).

   Al      Ti      Co      Cr      Fe      Ni     predicted hardness (Hv)   experimental hardness (Hv)   error (%)
  42.87    9.66   11.13   13.64   11.75   10.95   207.56                    195.93 ± 3.22                 5.63
  30.07    6.41   15.62   17.03   16.34   14.53   219.01                    199.76 ± 6.02                 8.77
  10.04    7.32   20.95   21.23   20.9    19.57   624.37                    604.21 ± 1.22                 3.23
  23.29   35.21    9.45   12.63   10.49    8.93   628.08                    611.35 ± 8.52                 2.65
  20.71   35      10.34   13.24   10.72    9.98   635.23                    619.65 ± 16.27                2.46

3.4.2. Experimental Verification

To evaluate the capability of the ESOA-optimized Transformer-MLP hybrid prediction model in discovering high-hardness HEA compositions, ESOA was employed to search through the Transformer-MLP hybrid framework. The inverse design model generates candidate atomic molar ratios subject to predefined constraints; these candidate compositions are evaluated by the trained Transformer-MLP hybrid model to yield predicted hardness values, thereby constructing an HV-Constitution search space. The ESOA algorithm then identifies, for each of five distinct Ti/Al ratios, the alloy composition with the highest predicted hardness. In this way, five Co–Cr–Fe–Ni–Ti–Al HEA atomic molar ratios with target hardness values were derived; their compositions are summarized in Table 1. The optimal atomic ratios were used for sample fabrication by laser metal deposition (LMD), followed by experimental measurements for comparison with the model predictions.
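The constraint handling can be illustrated with a simple projection step. `project_composition` is a hypothetical helper, and pinning Ti to a fixed Ti/Al ratio is a simplified stand-in for the actual constraint scheme used in the inverse design.

```python
import numpy as np

def project_composition(x, ti_al_ratio):
    """Project a raw 6-vector [Al, Ti, Co, Cr, Fe, Ni] onto the feasible set:
    non-negative, summing to 100 at. %, with a prescribed Ti/Al ratio."""
    x = np.clip(np.asarray(x, dtype=float), 1e-6, None)
    x[1] = ti_al_ratio * x[0]            # pin Ti to the chosen Ti/Al ratio
    return 100.0 * x / x.sum()           # renormalize to 100 at. %
```

Applying such a projection after every position update keeps every candidate evaluated by the forward model inside the physically meaningful composition space.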

To explore the synergistic effect of the Ti and Al contents and their ratio, the Ti/Al ratios were set to 0.4, 0.8, 1.2, 1.6, and 2.0. All five HEA samples were fabricated via laser metal deposition (LMD), and each specimen underwent four remelting cycles to ensure chemical homogeneity. Vickers hardness was measured at 12 randomly selected positions on the surface of each sample, and the mean value and corresponding standard deviation were documented. The measurements confirmed the predicted hardness values: all tested samples exceeded 200 Hv, and three samples surpassed 600 Hv. The errors reported in Table 1 give the percentage deviation between predicted and experimental hardness.
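The per-sample statistics in Table 1 follow directly from the indent measurements; a small sketch (with a hypothetical `report` helper) of the mean, sample standard deviation, and percentage error:

```python
import numpy as np

def report(indents_hv, predicted_hv):
    """Mean and sample standard deviation over the 12 Vickers indents,
    plus the percentage deviation from the model's predicted hardness."""
    m = float(np.mean(indents_hv))
    s = float(np.std(indents_hv, ddof=1))   # ddof=1: sample std deviation
    err = abs(predicted_hv - m) / m * 100.0
    return m, s, err
```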

Both predicted and measured hardness results show that hardness increases markedly once the Ti/Al ratio exceeds 0.8; a further increase in the ratio yields only a marginal rise, and beyond a Ti/Al ratio of 1.2 the hardness remains nearly unchanged. This behavior is attributed to the relatively large atomic radii of Ti and Al: upon entering the lattice they induce lattice distortions, and the resulting stress fields impede dislocation motion, thereby increasing hardness. Evidently, the deep learning model's predictions conform well to the underlying physical mechanisms.

Based on the predictive model and SHAP interpretation, a strategic approach to optimizing Al–Ti–Co–Cr–Fe–Ni HEAs for high hardness includes: (i) increasing Al and Ti contents, which are dominant in enhancing hardness via lattice distortion and precipitation strengthening; (ii) maintaining a Ti/Al ratio between 1.2 and 2.0 to favor duplex phase formation (FCC + L12 or FCC + BCC); (iii) retaining Cr and Co at moderate levels (10–20 at. %) to assist in solid solution and phase strengthening; and (iv) minimizing Ni and Fe content when hardness is prioritized over ductility, as both tend to stabilize FCC phases that reduce hardness. These guidelines, derived from both data-driven insights and physical interpretation, provide a practical reference for composition design under laser additive manufacturing constraints.

4. Conclusions

Centered on the research goal of predicting and inversely designing hardness in high-entropy alloys (HEAs), this work proposed a collaborative framework that combines deep learning and intelligent optimization algorithms to enable high-precision prediction and inverse design of HEAs with target hardness. In constructing the forward prediction model, the source data set and candidate features were systematically developed. Feature selection was refined using Pearson correlation analysis and solid solution strengthening theory, and the optimal subset was obtained using the NSGA-III algorithm. A Transformer-MLP hybrid model was then established based on MLP and attention mechanisms. The model surpassed traditional ML methods (such as RF and XGBoost) and standard DL models in predictive performance, with R² reaching 0.9813 and RMSE 10.23577. It also exhibited strong generalization capability and interpretability from a physical perspective. Incorporating SHAP explanations enabled a clearer understanding of the causal links between input features and predicted outputs, thus providing theoretical backing for HEA property modeling. Regarding inverse design, this study adopted the Transformer-MLP model as the decision core and utilized the Egret Swarm Optimization Algorithm (ESOA) to explore the HV-Constitution space, achieving atomic molar ratio optimization under target hardness conditions. Five sets of HEA samples were prepared using laser metal deposition (LMD), and experimental validation demonstrated that the error between predicted and measured hardness was within an acceptable range (less than 10%), thereby verifying the framework's reliability and applicability. Ultimately, this study proposed a synergistic prediction and design framework based on deep learning and optimization algorithms to facilitate the efficient design of high-entropy alloys.

Supplementary Material

ao5c05550_si_001.pdf (659.3KB, pdf)

Acknowledgments

This work was supported by the Guangdong Basic and Applied Basic Research Program (2022A1515010761, 2022A1515140028, 2022A0505050081), the Guangdong Provincial Ocean Economy Development Special Fund ([2024]32), the Foshan Technology Project (1920001000409), and the Key Laboratory of Guangdong Regular Higher Education (2017KSYS012). We also acknowledge the contributions of X.Z., C.L., X.L., and K.W. in various aspects of the research, including software development, validation, investigation, data curation, and manuscript editing.

Data are contained within the article.

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.5c05550.

  • The feature calculation methods (atomic radii, molar ratios, and mixing enthalpy) and the key hyperparameter settings of the NSGA-III algorithm. It also compares the performance of four ESOA algorithms over 30 independent runs and provides data on 198 Al–Ti–Co–Cr–Fe–Ni alloy samples with their compositions and hardness values (PDF)

Software, X.Z. and C.L.; validation, X.L.; investigation, X.Z. and K.W.; data curation, K.W. and C.L.; writing (review and editing), X.Z. and K.W. All authors have read and agreed to the published version of the manuscript.

The authors declare no competing financial interest.

References

  1. Cantor B., Chang I. T., Knight P., Vincent A.. Microstructural development in equiatomic multicomponent alloys. Materials Science and Engineering: A. 2004;375:213–218. doi: 10.1016/j.msea.2003.10.257. [DOI] [Google Scholar]
  2. Kumar N., Waghmare U. V.. Entropic stabilization and descriptors of structural transformation in high entropy alloys. Acta Mater. 2023;255:119077. doi: 10.1016/j.actamat.2023.119077. [DOI] [Google Scholar]
  3. Chen T., Shun T., Yeh J., Wong M.. Nanostructured nitride films of multi-element high-entropy alloys by reactive DC sputtering. Surf. Coat. Technol. 2004;188:193–200. doi: 10.1016/j.surfcoat.2004.08.023. [DOI] [Google Scholar]
  4. Wang Z., Chen S., Yang S., Luo Q., Jin Y., Xie W., Zhang L., Li Q.. Light-weight refractory high-entropy alloys: A comprehensive review. Journal of Materials Science & Technology. 2023;151:41–65. doi: 10.1016/j.jmst.2022.11.054. [DOI] [Google Scholar]
  5. Wang W.-R., Wang W.-L., Wang S.-C., Tsai Y.-C., Lai C.-H., Yeh J.-W.. Effects of Al addition on the microstructure and mechanical property of AlxCoCrFeNi high-entropy alloys. Intermetallics. 2012;26:44–51. doi: 10.1016/j.intermet.2012.03.005. [DOI] [Google Scholar]
  6. Wang J., Zhao N., Yan M., Kou Z., Fu S., Wu S., Liu S., Lan S., You Z., Wang D.. et al. Phase engineering in nanocrystalline high-entropy alloy composites to achieve strength-plasticity synergy. Scripta Materialia. 2023;229:115374. doi: 10.1016/j.scriptamat.2023.115374. [DOI] [Google Scholar]
  7. Jeon J., Kim G., Seo N., Choi H., Kim H.-J., Lee M.-H., Lim H.-K., Son S. B., Lee S.-J.. Combined data-driven model for the prediction of thermal properties of ni-based amorphous alloys. Journal of Materials Research and Technology. 2022;16:129–138. doi: 10.1016/j.jmrt.2021.12.003. [DOI] [Google Scholar]
  8. Beniwal D., Singh P., Gupta S., Kramer M., Johnson D., Ray P.. Distilling physical origins of hardness in multi-principal element alloys directly from ensemble neural network models. npj Comput. Mater. 2022;8(1):153. doi: 10.1038/s41524-022-00842-3. [DOI] [Google Scholar]
  9. Ma D., Grabowski B., Körmann F., Neugebauer J., Raabe D.. Ab initio thermodynamics of the CoCrFeMnNi high entropy alloy: Importance of entropy contributions beyond the configurational one. Acta Mater. 2015;100:90–97. doi: 10.1016/j.actamat.2015.08.050. [DOI] [Google Scholar]
  10. Jiang C., Uberuaga B. P.. Efficient ab initio modeling of random multicomponent alloys. Physical review letters. 2016;116(10):105501. doi: 10.1103/PhysRevLett.116.105501. [DOI] [PubMed] [Google Scholar]
  11. Zheng S.-M., Feng W.-Q., Wang S.-Q.. Elastic properties of high entropy alloys by maxent approach. Comput. Mater. Sci. 2018;142:332–337. doi: 10.1016/j.commatsci.2017.09.060. [DOI] [Google Scholar]
  12. Senkov O., Miller J., Miracle D., Woodward C.. Accelerated exploration of multi-principal element alloys for structural applications. Calphad. 2015;50:32–48. doi: 10.1016/j.calphad.2015.04.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Zhang C., Zhang F., Chen S., Cao W.. Computational thermodynamics aided high-entropy alloy design. Jom. 2012;64:839–845. doi: 10.1007/s11837-012-0365-6. [DOI] [Google Scholar]
  14. Saal J. E., Berglund I. S., Sebastian J. T., Liaw P. K., Olson G. B.. Equilibrium high entropy alloy phase stability from experiments and thermodynamic modeling. Scripta Materialia. 2018;146:5–8. doi: 10.1016/j.scriptamat.2017.10.027. [DOI] [Google Scholar]
  15. Li H., Yuan R., Liang H., Wang W. Y., Li J., Wang J.. Towards high entropy alloy with enhanced strength and ductility using domain knowledge constrained active learning. Materials & Design. 2022;223:111186. doi: 10.1016/j.matdes.2022.111186. [DOI] [Google Scholar]
  16. He S., Wang Y., Zhang Z., Xiao F., Zuo S., Zhou Y., Cai X., Jin X.. Interpretable machine learning workflow for evaluation of the transformation temperatures of TiZrHfNiCoCu high entropy shape memory alloys. Materials & Design. 2023;225:111513. doi: 10.1016/j.matdes.2022.111513. [DOI] [Google Scholar]
  17. Bundela A. S., Rahul M.. Machine learning-enabled framework for the prediction of mechanical properties in new high entropy alloys. J. Alloys Compd. 2022;908:164578. doi: 10.1016/j.jallcom.2022.164578. [DOI] [Google Scholar]
  18. Chang Y.-J., Jui C.-Y., Lee W.-J., Yeh A.-C.. Prediction of the composition and hardness of high-entropy alloys by machine learning. Jom. 2019;71(10):3433–3442. doi: 10.1007/s11837-019-03704-4. [DOI] [Google Scholar]
  19. Liu S., Bocklund B., Diffenderfer J., Chaganti S., Kailkhura B., McCall S. K., Gallagher B., Perron A., McKeown J. T.. A comparative study of predicting high entropy alloy phase fractions with traditional machine learning and deep neural networks. npj Comput. Mater. 2024;10(1):172. doi: 10.1038/s41524-024-01335-1. [DOI] [Google Scholar]
  20. Veeresham M., Sake N., Lee U., Park N.. Unraveling phase prediction in high entropy alloys: A synergy of machine learning, deep learning, and Thermo-Calc, validation by experimental analysis. Journal of Materials Research and Technology. 2024;29:1744–1755. doi: 10.1016/j.jmrt.2024.01.145. [DOI] [Google Scholar]
  21. Bakr M., Syarif J., Hashem I. A. T.. Prediction of phase and hardness of HEAs based on constituent elements using machine learning models. Materials Today Communications. 2022;31:103407. doi: 10.1016/j.mtcomm.2022.103407. [DOI] [Google Scholar]
  22. Nazir T., Shaukat N., Shahid R. N., Bhatti M. H.. et al. A comprehensive strategy for phase detection of high entropy alloys: Machine learning and deep learning approaches. Materials Today Communications. 2023;37:107525. doi: 10.1016/j.mtcomm.2023.107525. [DOI] [Google Scholar]
  23. Yasuda H. Y., Miyamoto H., Cho K., Nagase T.. Formation of ultrafine-grained microstructure in Al0.3CoCrFeNi high entropy alloys with grain boundary precipitates. Mater. Lett. 2017;199:120–123. doi: 10.1016/j.matlet.2017.04.072. [DOI] [Google Scholar]
  24. Gwalani B., Soni V., Choudhuri D., Lee M., Hwang J., Nam S., Ryu H., Hong S. H., Banerjee R.. Stability of ordered L12 and B2 precipitates in face centered cubic based high entropy alloys-Al0.3CoFeCrNi and Al0.3CuFeCrNi2. Scripta Materialia. 2016;123:130–134. doi: 10.1016/j.scriptamat.2016.06.019. [DOI] [Google Scholar]
  25. Gwalani B., Soni V., Lee M., Mantri S., Ren Y., Banerjee R.. Optimizing the coupled effects of Hall–Petch and precipitation strengthening in a Al0.3CoCrFeNi high entropy alloy. Materials & Design. 2017;121:254–260. doi: 10.1016/j.matdes.2017.02.072. [DOI] [Google Scholar]
  26. Wang W.-R., Wang W.-L., Yeh J.-W.. Phases, microstructure and mechanical properties of AlxCoCrFeNi high-entropy alloys at elevated temperatures. J. Alloys Compd. 2014;589:143–152. doi: 10.1016/j.jallcom.2013.11.084. [DOI] [Google Scholar]
  27. Lu Y., Gao X., Jiang L., Chen Z., Wang T., Jie J., Kang H., Zhang Y., Guo S., Ruan H.. et al. Directly cast bulk eutectic and near-eutectic high entropy alloys with balanced strength and ductility in a wide temperature range. Acta Mater. 2017;124:143–150. doi: 10.1016/j.actamat.2016.11.016. [DOI] [Google Scholar]
  28. Wang Q., Ma Y., Jiang B., Li X., Shi Y., Dong C., Liaw P. K.. A cuboidal B2 nanoprecipitation-enhanced body-centered-cubic alloy Al0.7CoCrFe2Ni with prominent tensile properties. Scripta Materialia. 2016;120:85–89. doi: 10.1016/j.scriptamat.2016.04.014. [DOI] [Google Scholar]
  29. Joseph J., Jarvis T., Wu X., Stanford N., Hodgson P., Fabijanic D. M.. Comparative study of the microstructures and mechanical properties of direct laser fabricated and arc-melted AlxCoCrFeNi high entropy alloys. Materials Science and Engineering: A. 2015;633:184–193. doi: 10.1016/j.msea.2015.02.072. [DOI] [Google Scholar]
  30. Zhao Y., Wang M., Cui H., Zhao Y., Song X., Zeng Y., Gao X., Lu F., Wang C., Song Q.. Effects of Ti-to-Al ratios on the phases, microstructures, mechanical properties, and corrosion resistance of Al2–xCoCrFeNiTix high-entropy alloys. J. Alloys Compd. 2019;805:585–596. doi: 10.1016/j.jallcom.2019.07.100. [DOI] [Google Scholar]
  31. Wen C., Wang C., Zhang Y., Antonov S., Xue D., Lookman T., Su Y.. Modeling solid solution strengthening in high entropy alloys using machine learning. Acta Mater. 2021;212:116917. doi: 10.1016/j.actamat.2021.116917. [DOI] [Google Scholar]
  32. Wang W. Y., Shang S. L., Wang Y., Han F., Darling K. A., Wu Y., Xie X., Senkov O. N., Li J., Hui X. D.. et al. Atomic and electronic basis for the serrations of refractory high-entropy alloys. npj Comput. Mater. 2017;3(1):23. doi: 10.1038/s41524-017-0024-0. [DOI] [Google Scholar]
  33. Guo S.. Phase selection rules for cast high entropy alloys: an overview. Mater. Sci. Technol. 2015;31(10):1223–1230. doi: 10.1179/1743284715Y.0000000018. [DOI] [Google Scholar]
  34. Chen Z., Li S., Francis A., Liao B., Xiao D., Ha T., Li J., Ding L., Cao X.. Egret swarm optimization algorithm: An evolutionary computation approach for model free optimization. Biomimetics. 2022;7(4):144. doi: 10.3390/biomimetics7040144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Shao G., Lei J., Zhang F., Wang S., Hu H., Wang K., Tan P., Yi J.. A study of the microstructure and mechanical and electrochemical properties of CoCrFeNi high-entropy alloys additive-manufactured using laser metal deposition. Coatings. 2023;13(9):1583. doi: 10.3390/coatings13091583. [DOI] [Google Scholar]
  36. Wang K., Song D., Li L., Shao G., Mi Y., Hu H., Liu C., Tan P.. Microstructure and properties of CoCrFeNiTix high-entropy alloys fabricated by laser additive manufacturing. Coatings. 2024;14(9):1171. doi: 10.3390/coatings14091171. [DOI] [Google Scholar]
  37. Yeh J.-W.. Alloy design strategies and future trends in high-entropy alloys. Jom. 2013;65:1759–1771. doi: 10.1007/s11837-013-0761-6. [DOI] [Google Scholar]
  38. Ren W., Zhang Y.-F., Wang W.-L., Ding S.-J., Li N.. Prediction and design of high hardness high entropy alloy through machine learning. Materials & Design. 2023;235:112454. doi: 10.1016/j.matdes.2023.112454. [DOI] [Google Scholar]
  39. Labusch R.. A statistical theory of solid solution hardening. physica status solidi (b) 1970;41(2):659–669. doi: 10.1002/pssb.19700410221. [DOI] [Google Scholar]
  40. Thirathipviwat P., Sato S., Song G., Bednarcik J., Nielsch K., Jung J., Han J.. A role of atomic size misfit in lattice distortion and solid solution strengthening of TiNbHfTaZr high entropy alloy system. Scripta Materialia. 2022;210:114470. doi: 10.1016/j.scriptamat.2021.114470. [DOI] [Google Scholar]
  41. Ma E., Wu X.. Tailoring heterogeneities in high-entropy alloys to promote strength–ductility synergy. Nat. Commun. 2019;10(1):5623. doi: 10.1038/s41467-019-13311-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Toda-Caraballo I., Rivera-Díaz-del Castillo P. E.. Modelling solid solution hardening in high entropy alloys. Acta Mater. 2015;85:14–23. doi: 10.1016/j.actamat.2014.11.014. [DOI] [Google Scholar]
  43. Huang X., Jin C., Zhang C., Zhang H., Fu H.. Machine learning assisted modelling and design of solid solution hardened high entropy alloys. Materials & Design. 2021;211:110177. doi: 10.1016/j.matdes.2021.110177. [DOI] [Google Scholar]
  44. Zhang Y.-F., Ren W., Wang W.-L., Li N., Zhang Y.-X., Li X.-M., Li W.-H.. Interpretable hardness prediction of high-entropy alloys through ensemble learning. J. Alloys Compd. 2023;945:169329. doi: 10.1016/j.jallcom.2023.169329. [DOI] [Google Scholar]
