Abstract
The service life of large battery packs can be significantly influenced by only one or two abnormal cells with faster aging rates. However, the early-stage identification of lifetime abnormality is challenging due to the low abnormality rate and imperceptible initial performance deviations. This work proposes a lifetime abnormality detection method for batteries based on few-shot learning, using only the first-cycle aging data. Verified with the largest known dataset of 215 commercial lithium-ion batteries, the method identifies all abnormal batteries with a false alarm rate of only 3.8%. It is also found that any capacity- and resistance-based approach can easily fail to screen out a large proportion of the abnormal batteries, a risk that deserves serious attention. This work highlights the opportunity to diagnose lifetime abnormalities via "big data" analysis, without requiring additional experimental effort or battery sensors, thereby leading to extended battery life, increased cost-benefit, and improved environmental friendliness.
Keywords: big data, early‐stage detection, few‐shot learning, lifetime abnormality, lithium‐ion battery
The lifetime of large battery packs can be influenced by only one or two abnormal cells with faster aging rates. This work proposes a method to predict battery lifetime abnormality using only the first-cycle battery aging data, achieving a typical accuracy above 90%. It can be used to screen out abnormal batteries before grouping.

1. Introduction
The lithium-ion battery is widely regarded as a promising device for achieving a sustainable society.[ 1 , 2 ] Nevertheless, its manufacturing process is always accompanied by high consumption of energy and raw materials.[ 3 , 4 ] Therefore, a sufficiently long service life is critical to achieving net-zero carbon emissions and making positive environmental impacts,[ 5 , 6 ] especially given that the electricity for battery manufacturing and charging is today mostly (70–80% globally) generated from fossil fuels.[ 7 , 8 ]
The long service life of individual batteries may not necessarily guarantee the satisfactory life‐cycle performance of a battery pack, in which hundreds of battery cells are connected in series and parallel to meet the power and energy requirements of applications such as electric vehicles and renewable energy storage.[ 9 ] Without proper maintenance, the service life of a large battery pack can be significantly reduced by only one or two abnormal cells with faster aging rates,[ 10 , 11 , 12 ] even if the majority of the candidates in the pack have normal aging behaviors. Screening out these few batteries with abnormal lifetime performances prior to battery grouping and pack assembly can improve the capacity, lifetime, and cost‐benefit of a battery pack with immediate effect.[ 13 ]
Many previous studies have addressed battery screening; their core idea is to group batteries with similar key parameters into a pack. To date, the most widely used screening method in industry is the capacity-resistance (CR) method,[ 14 ] in which batteries with similar capacity and resistance values are assumed to have similar performance. In addition to these two indexes, incremental capacity peaks,[ 15 ] pulse charging responses,[ 16 ] voltage trajectories,[ 17 ] and electrochemical impedance spectroscopy[ 18 ] have also been applied to battery screening. These tests are fast, generally requiring no more than 12 h. In addition, they can effectively screen out abnormalities that are immediately observable, e.g., high resistance. However, lifetime abnormality, involving the long-term decay of the battery's future capacity, is not considered.
Identifying batteries' lifetime abnormality is challenging, especially at the beginning of their service life.[ 19 ] First, abnormal aging behaviors are more likely to be perceived in the latter part of the battery life, whereas much less information can be extracted in the first few cycles.[ 20 ] As a result, even the best existing algorithms still need data collected from the first 3–5 cycles of the aging process for abnormality detection.[ 21 , 22 ] In addition to the increased testing time, their prediction errors are as large as 10–15%. Second, the low abnormality rate itself poses challenges to dataset establishment.[ 23 , 24 ] To collect sufficient abnormal samples, long-term aging tests must be carried out on a large number of batteries, making the experiments costly and time-consuming. Finally, the correctness of a classification made at the beginning of the battery life can only be experimentally verified after long-term battery usage. This delayed feedback hinders algorithm development. With these issues in mind, the early-stage identification of battery lifetime abnormality remains an unsolved problem in the field of battery manufacturing and management.
In this work, we make the first attempt to identify the lifetime abnormality of lithium-ion batteries using only the first-cycle aging data. A few-shot learning network is developed to detect the lifetime abnormality, without requiring prior knowledge of degradation mechanisms. We generate the largest known dataset for lifetime-abnormality detection, which contains 215 commercial lithium-ion batteries with an abnormality rate of 3.25%. Our method accurately identifies all abnormal batteries in the dataset, with a false alarm rate of only 3.8%; the overall accuracy reaches 96.4%. In addition, we find that the widely used capacity-resistance-based methods are not suitable for identifying lifetime abnormality, which deserves serious attention from the battery community. Our proposed identification algorithm offers a reliable and cost-effective way to immediately improve the lifetime of multi-cell battery packs, without requiring additional experimental effort, battery sensors, or knowledge of aging mechanisms, thereby leading to extended battery life, increased cost-benefit, and improved environmental friendliness.
2. Results and Discussion
2.1. Data Generation
A group of 215 commercial batteries was tested in this work (type: 18650, chemistry: LiNi0.8Co0.1Mn0.1O2/graphite). These batteries sequentially underwent an initial resistance test, an initial capacity test, and an accelerated aging test with the current rate increased to 3C. During accelerated aging, seven out of the 215 batteries exhibited abnormal aging behaviors. The generated dataset is shared publicly for further battery research and development, as described in the Data Availability Statement. The full experimental details are provided in the Supporting Information.
2.2. Limitations of the Capacity‐Resistance Method
We first check the initial capacity and resistance of all the batteries, with the results shown in Figure 1. All 215 batteries share a similar initial capacity (2.53 ± 0.05 Ah) and resistance (13.7 ± 0.5 mΩ). However, their aging trajectories, as shown in Figure 1b, can be highly different even though these batteries underwent the same aging test. Specifically, in the last 25% of the aging test (90th to 120th cycle), seven of the 215 batteries exhibit a significantly faster local aging rate (>10 mAh/cycle on average) than the others (<5 mAh/cycle). These seven batteries are, therefore, defined as "abnormal". From the data-monitoring point of view, these abnormal samples are also termed "positive samples", while the normal batteries are termed "negative samples" in the following discussions.
Figure 1.

Illustration of our battery aging data. a) Initial resistance versus capacity of 215 batteries. b) Capacity degradation trajectories of 215 batteries. Here, batteries aging at an average rate below 10 mAh/cycle after the 90th cycle are labeled "normal", while the others are labeled "abnormal".
In the CR method, the classification relies on two parameters: capacity (C) and resistance (R). A battery is classified as normal if both parameters fall into a certain tolerance range. For instance, when the allowable parameter ranges are selected as C ∈ [2.48, 2.58] Ah and R ∈ [13.2, 14.2] mΩ, the CR method reports all batteries as "normal". In other words, none of the abnormal individuals shown in Figure 1a can be identified, and both the true positive rate (TPR, the rate of successfully identifying the abnormal batteries) and the false negative rate (FNR, the rate of reporting normal batteries as abnormal) are 0%. By narrowing down the parameter ranges, the TPR can be improved at the cost of an increased FNR. As shown in Figures 2 and 3, when the ranges are tuned so that the CR method identifies all abnormal batteries (TPR = 100%), the minimum FNR is 68.27%, implying that at least 68.27% of the normal batteries are predicted to be abnormal. On the other hand, when the FNR is kept below 10%, the highest achievable TPR is only 28.57%, implying that more than 70% of the abnormal batteries cannot be identified. Thus, even with fine-tuned parameters, the performance of the CR method remains unsatisfactory.
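The tolerance-range logic above can be sketched in a few lines of Python. The data below are synthetic stand-ins drawn to mimic the ranges of Figure 1a (the real measurements are in the shared dataset), and the label assignment is hypothetical; only the screening rule itself follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured data (the real dataset has
# 215 cells, 7 of them abnormal); values mimic Figure 1a's ranges.
capacity = rng.normal(2.53, 0.017, 215)     # Ah
resistance = rng.normal(13.7, 0.17, 215)    # mOhm
is_abnormal = np.zeros(215, dtype=bool)
is_abnormal[:7] = True                      # hypothetical labels

def cr_screen(cap, res, c_range, r_range):
    """CR method: a cell is 'normal' iff both parameters are in range."""
    normal = (c_range[0] <= cap) & (cap <= c_range[1]) \
           & (r_range[0] <= res) & (res <= r_range[1])
    return ~normal                          # True = flagged as abnormal

flagged = cr_screen(capacity, resistance, (2.48, 2.58), (13.2, 14.2))
tpr = flagged[is_abnormal].mean()           # share of abnormal cells caught
fnr = flagged[~is_abnormal].mean()          # share of normal cells false-alarmed
print(tpr, fnr)
```

Sweeping `c_range` and `r_range` over a grid of tolerances reproduces the TPR/FNR trade-off illustrated in Figures 2 and 3.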
Figure 2.

True positive rate (in %) of the predictions with different capacity and resistance ranges. For instance, the value in the 2nd row 8th column is 85.71, which means when i = 2 and j = 8, the true positive rate of the prediction is 85.71%.
Figure 3.

False negative rate (in %) of the predictions with different capacity and resistance ranges. For instance, the value in the 2nd row 8th column is 60.1, which means when i = 2 and j = 8, the false negative rate of the prediction is 60.1%.
2.3. Identifying Lifetime Abnormality via Few‐Shot Learning Network
The overall performance of our few-shot learning network is summarized in Figure 4a, in which 7 abnormal and 104 normal batteries are utilized for testing (see Section 4.1 for details). The network accurately screens out all seven abnormal batteries. In addition, 100 out of 104 normal batteries are correctly identified. The overall prediction accuracy reaches 96.4%, and the false alarm rate is only 3.8%. As a statistical index considering both the precision and recall of a model, the F2-score (see Section 4.1 for details) of our prediction reaches 89.74%.
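The headline numbers follow directly from the confusion matrix in Figure 4a (7 abnormal cells caught, 0 missed, 100 normal cells correctly identified, 4 false alarms), as this short check shows:

```python
# Counts read off the Figure 4a confusion matrix, with "abnormal"
# treated as the positive class.
TP, FN, TN, FP = 7, 0, 100, 4

accuracy = (TP + TN) / (TP + TN + FP + FN)          # 107 / 111
false_alarm_rate = FP / (FP + TN)                   # 4 / 104
precision = TP / (TP + FP)
recall = TP / (TP + FN)
beta = 2                                            # recall weighted higher
f2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(round(accuracy * 100, 1), round(false_alarm_rate * 100, 1), round(f2 * 100, 2))
# -> 96.4 3.8 89.74
```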
Figure 4.

Classification results of the proposed method. a) Confusion matrix; b) Prediction scores of the abnormal batteries (the lower, the better); c) Aging trajectories of the abnormal batteries, where the one with the highest prediction score is highlighted with dark green; d) Aging trajectories of the normal batteries, in which the four batteries with the prediction score <50% are highlighted with yellow; e) Prediction scores of the normal batteries (the higher, the better).
More specifically, the prediction scores (see Section 4 for details) of all abnormal batteries are given in Figure 4b. The scores of all these batteries are lower than the predefined threshold (50% in this work), implying that all abnormal batteries are correctly predicted to be "abnormal". In our test, the first abnormal battery has the highest score (44.6%), and its aging trajectory is given in Figure 4c. Compared with the other abnormal batteries, its average aging rate between the 90th and 120th cycles is indeed the lowest.
For the normal batteries, the prediction scores are given in Figure 4e. 100 out of 104 batteries receive a score >50%, meaning they are classified as "normal". Further, the scores of 93 batteries are higher than 99%, indicating that our predictions are highly confident. For the false-alarmed batteries with a score <50%, their aging trajectories are given in Figure 4d, and their resistance-versus-capacity plot is given in Figure 5. The resistance of 5 out of 7 abnormal batteries lies between 13.45 and 13.57 mΩ, and 3 out of 4 false-alarmed batteries also fall within this range. In addition, the capacity of 3 out of 4 false-alarmed batteries lies in the range of 2.551 to 2.553 Ah. In short, these false-alarmed batteries are somewhat similar to the abnormal ones, posing challenges to our few-shot learning-based classifier.
Figure 5.

Comparison between the abnormal batteries and the normal batteries with prediction score <50%.
2.4. Discussion
2.4.1. Results of Other Commonly Seen Algorithms
To further illustrate the superiority of the proposed method, we test the classification performance of six commonly used abnormality detection algorithms: the one-class support vector machine, auto-encoder, density-based spatial clustering of applications with noise (DBSCAN), isolation forest, K-nearest neighbor, and local outlier factor model. The detailed results of these algorithms are given in Sections S.2.1–S.2.6 (Supporting Information). In our tests, the best conventional algorithm achieves only an F2-score of 37.8%, with an accuracy of 83.7% and a false alarm rate of 15.9%. Such performance is significantly worse than that of the proposed method and cannot meet general engineering requirements.
As detailed in Section 4, the proposed few-shot learning is a supervised learning algorithm. The benefit of supervised learning is that it has a clear training target, resulting in a more effective training process. However, in an abnormality detection problem, the training set is unlikely to cover all possible abnormal cases, posing challenges to the generalization of our method. In our solution, we design networks that tell whether any two batteries in the pool come from the same group, rather than directly telling whether a battery is normal. In this way, an effective classifier can still be obtained. In retrospect, our method outperforms six mainstream unsupervised abnormality detection algorithms. We use the minimum possible input (the first-cycle data only) and achieve top-class lifetime abnormality detection performance on the largest known dataset.
2.4.2. Results of Different Cell Systems
The battery's behavior changes significantly with chemistry and load profile.[ 25 ] To examine the effectiveness of the proposed framework for various cell systems with different testing procedures, an additional dataset from Ref. [26] was used. First, the ten batteries with the longest cycle life and the ten with the shortest cycle life were selected and assigned to two classes. The aging trajectories of these 20 batteries are shown in Figure 6a. Then, our algorithm is applied to perform the classification, with the results given in Figure 6b. On this dataset, the overall accuracy reaches 100%. Such high accuracy indicates that the proposed framework can be readily used for different battery types and duty cycles. In contrast, the conventional unsupervised methods listed in Section 2.4.1 achieve only limited accuracy: the best result has an F2-score of 86.5% and an accuracy of only 71.4%. Full details about the data and validation results can be found in Section S.2.7 (Supporting Information).
Figure 6.

Additional results. a) Aging trajectories of different batteries; b) Confusion matrix of the prediction.
It is worth noting that there is no well-established theory that can accurately and physically explain how the battery's aging behavior over its lifetime is related to attributes measured at the beginning of life. However, the aging-related attributes can be statistically identified. According to Ref. [27], the top ten attributes are mostly associated with the battery's pulse resistance or electrochemical impedance spectroscopy (EIS) for the batteries in Ref. [26]. As confirmed by Ref. [28] and our recent work,[ 29 ] neural networks are capable of extracting EIS and general resistance information from the battery's operating data (e.g., constant-current-constant-voltage charging data, see Section 4 for details). Therefore, it can be inferred that a powerful data-driven machine can implicitly build a pathway from the first-cycle testing data to the battery lifespan.
3. Conclusion
Early-stage lifetime abnormality prediction is critical to prolonging the service life of a battery pack, but it is technically challenging due not only to the limited information that can be extracted in the first few cycles but also to the inherently low rate of battery abnormality. In this paper, we use the few-shot learning method to predict the lifetime abnormality of batteries with only first-cycle aging data. Verified with the largest known dataset, generated in this work, the proposed method successfully identifies all abnormal batteries, with a false alarm rate of only 3.8%. Our method can not only be used on new batteries but also unlocks exciting future research opportunities to assist in the screening of retired batteries and facilitate their second-life usage, further extending the battery's service life. Given the limited experimental resources, the dataset used in this work was generated with accelerated aging tests for only one battery type (although it is the largest known dataset). When applying the results of this work to highly diverse usage scenarios associated with different aging mechanisms or even different batteries, model migration or transfer learning techniques could be a solution to maintain high accuracy and robustness. The first-cycle data used in this work will also be extended to formation data so that the proposed method can be used by battery manufacturers without carrying out additional tests.
4. Experimental Section
Overview
The proposed battery lifetime abnormality detection method was a supervised, data-driven algorithm based on few-shot learning, consisting of two steps: training and testing. The overall scheme of the algorithm is given in Figure 7. In the training phase, a group of 1000 neural networks was first trained with known data collected through the experiment. In the testing phase, these well-trained networks were utilized to predict whether the lifetime of an unknown battery was normal. Technical details are provided in the following subsections.
Figure 7.

The framework of the proposed method.
Training Phase
In the training phase, networks were trained to tell whether any two randomly selected batteries belong to the same group. As described in Table 1, when battery i and battery j are both normal or both abnormal, the network is expected to provide a "true" response. Conversely, if one battery is normal while the other is abnormal, the network is expected to provide a "false" response.
Table 1.
Desired output of the few‐shot network.
|  | Battery i: normal | Battery i: abnormal |
|---|---|---|
| Battery j: normal | [1;0] (true) | [0;1] (false) |
| Battery j: abnormal | [0;1] (false) | [1;0] (true) |
To implement the above‐mentioned functions, the input of the network should consist of the data collected from two batteries:
IN = [Γ_i, Γ_j]^T   (1)
where Γ_i and Γ_j are column vectors containing the key features of the i-th and j-th battery, respectively.
To maximize the data-mining capability of the neural network, the battery feature here was directly obtained from the raw experimental data. Specifically, the voltage data measured in the constant-current charging phase and the current data measured in the constant-voltage charging phase were combined into a column vector to serve as the input:
Γ = [V(1), …, V(L_1), I(L_1 + 1), …, I(L_2)]^T   (2)
where L_1 is the duration (in samples) of the constant-current phase, and L_2 is that of the entire charging process. For the considered batteries and charging profile, L_1 = 65 and L_2 = 100.
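Equations (1) and (2) can be sketched as follows. The ramp-shaped voltage and current curves are hypothetical placeholders for real first-cycle measurements; only the vector construction follows the text.

```python
import numpy as np

L1, L2 = 65, 100   # samples in the CC phase / in the whole charge (as in the text)

def build_feature(voltage_cc, current_cv):
    """Eq. (2): stack CC-phase voltage and CV-phase current into one
    feature vector Gamma of length L2."""
    assert len(voltage_cc) == L1 and len(current_cv) == L2 - L1
    return np.concatenate([voltage_cc, current_cv])

def build_pair_input(gamma_i, gamma_j):
    """Eq. (1): concatenate two cells' features as the network input IN."""
    return np.concatenate([gamma_i, gamma_j])

# hypothetical first-cycle curves for two cells
g1 = build_feature(np.linspace(3.0, 4.2, L1), np.linspace(1.5, 0.05, L2 - L1))
g2 = build_feature(np.linspace(3.0, 4.2, L1), np.linspace(1.4, 0.05, L2 - L1))
IN = build_pair_input(g1, g2)
print(IN.shape)   # (200,)
```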
Finally, a three‐layer neural network was employed to map the input to the output. The hidden layer of the network was activated by the radbas() function:
radbas(n) = exp(−n²)   (3)
and the output layer was activated by the softmax() function:
softmax(z_j) = exp(z_j) / Σ_{k=1}^{N} exp(z_k)   (4)
where Z = [z_1, z_2, …, z_N]^T is the input of the softmax function, N is the number of output neurons, and z_j, z_k ∈ Z with j, k ∈ [1, N]. Following Table 1, N = 2 is selected in this work. The network's output is binarized with the following rule: [y] = 1 if 0.5 ⩽ y ⩽ 1, and [y] = 0 if 0 ⩽ y < 0.5.
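A minimal sketch of one forward pass through such a network. The weights here are randomly initialized and the hidden-layer width of 20 is a hypothetical choice (the text does not state the layer size); only the radbas/softmax structure follows Equations (3) and (4).

```python
import numpy as np

def radbas(n):
    """MATLAB-style radial basis activation: exp(-n^2), Eq. (3)."""
    return np.exp(-n**2)

def softmax(z):
    """Eq. (4), with a max-shift for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, W1, b1, W2, b2):
    """Three-layer network: RBF-activated hidden layer, softmax output."""
    h = radbas(W1 @ x + b1)
    return softmax(W2 @ h + b2)   # length-2 output, per Table 1

rng = np.random.default_rng(1)
x = rng.normal(size=200)                      # paired input IN from Eq. (1)
W1, b1 = 0.05 * rng.normal(size=(20, 200)), np.zeros(20)
W2, b2 = 0.1 * rng.normal(size=(2, 20)), np.zeros(2)
y = forward(x, W1, b1, W2, b2)
print(y.sum())                                # softmax output sums to 1
```

Binarizing `y` with the 0.5 threshold then yields the [1;0]/[0;1] responses of Table 1.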
Our networks were trained by a conjugate gradient backpropagation algorithm with Fletcher-Reeves updates,[ 30 ] and full details are provided in the uploaded code. To minimize the influence of random network initialization,[ 31 ] 1000 networks were trained in the training phase, and their averaged output was used to compensate for random errors in the testing phase (as detailed in the Testing Phase subsection).
Testing Phase
In the testing phase, the well-trained networks are used to tell whether an unknown battery from the testing dataset and a known battery (also called the "supporting battery") belong to the same group. In other words, we test whether the network's output corresponding to IN = [Γ_u, Γ_s]^T is "true", where Γ_u is the feature vector extracted from the unknown battery and Γ_s is that of the supporting battery. As detailed in Table 2, an unknown battery is classified as "normal" if it is in the same class as a normal supporting battery or in a different class from an abnormal supporting battery.
Table 2.
Prediction result for the unknown battery when the network's input is IN = [Γ u , Γ s ] T .
|  | Network's output: [0;1] (false) | Network's output: [1;0] (true) |
|---|---|---|
| (Known) Supporting battery: abnormal | Normal | Abnormal |
| (Known) Supporting battery: normal | Abnormal | Normal |
Given a well-trained network, M supporting batteries (with M1 known normal samples and M2 known abnormal samples, M = M1 + M2, M1 > 0, M2 > 0, M ⩾ 2) can be used to improve the reliability and robustness of the prediction. In this case, M predictions are carried out with all the supporting batteries. Each time the unknown battery is predicted to be normal by a normal supporting battery, a score of (1/M1)·50% is added to its rating; each time it is predicted to be normal by an abnormal supporting battery, a score of (1/M2)·50% is added. After all predictions are carried out, the unknown battery is deemed normal if its final score is greater than 50%.
The averaged prediction results of multiple networks can also be used to further minimize the random prediction error. With H (H ⩾ 1) well-trained networks, the abnormality detection for an unknown battery can be carried out H times. Each time the unknown battery is predicted to be normal, a score of (1/H)·100% is added to the battery's rating. Again, the unknown battery is predicted to be normal if the final score is greater than 50%. As mentioned in the Overview, H = 1000 is selected in this work.
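The supporting-battery voting rule can be sketched as below. `normality_score` is a hypothetical helper name; averaging over the H networks of the ensemble would simply average this score across networks.

```python
def normality_score(pred_same, support_is_abnormal):
    """Score one unknown cell against M supporting cells (Table 2 logic).

    pred_same[k] is True when the network says the unknown cell and
    supporting cell k belong to the same group; support_is_abnormal[k]
    labels that supporting cell. Votes from normal and abnormal
    supporters each carry 50% of the total weight.
    """
    # "normal" verdicts: same group as a normal supporter ...
    normal_votes = [same for same, ab in zip(pred_same, support_is_abnormal) if not ab]
    # ... or a different group from an abnormal supporter
    abnormal_votes = [not same for same, ab in zip(pred_same, support_is_abnormal) if ab]
    return 0.5 * sum(normal_votes) / len(normal_votes) \
         + 0.5 * sum(abnormal_votes) / len(abnormal_votes)

# Two normal supporters say "same group", one abnormal supporter says "different":
s = normality_score([True, True, False], [False, False, True])
print(s)   # 1.0, i.e., confidently "normal"
```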
4.1. Data Preparation for Network Training
As is common in abnormality detection problems, the percentage of abnormal samples is usually significantly lower than that of normal ones. Therefore, a careful data preparation method for network training was required. The data preparation method contained three parts, namely, data segmentation, normal-data drop-out, and abnormal-data creation.
Data‐Segmentation
Given that there were only seven abnormal batteries in the dataset, the leave-one-out method[ 32 ] was adopted for these batteries. Specifically, when training the network, data from six abnormal batteries were selected for training, leaving only one for testing. This process was repeated seven times so that every abnormal battery was tested once, and the seven sub-tests were combined to provide the final results. For the 208 normal batteries, the first half was simply used for training and the remaining half for testing.
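The leave-one-out segmentation of the seven abnormal cells can be sketched as:

```python
def leave_one_out_splits(abnormal_ids):
    """One split per abnormal cell: train on the other six, test on it."""
    for held_out in abnormal_ids:
        train = [i for i in abnormal_ids if i != held_out]
        yield train, held_out

splits = list(leave_one_out_splits(list(range(7))))
print(len(splits))   # 7 sub-tests; each abnormal cell is held out exactly once
```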
Drop‐Out of Normal Samples
With the above configurations, there were more than a hundred normal batteries but only six abnormal ones in the training set. Consequently, the training set was heavily unbalanced, posing challenges to accurate predictions.[ 33 ] In this case, a drop-out technique was employed to remove some normal batteries from the training set. Specifically, for each abnormal battery in the training set, we check its aging trajectory and find the N1 normal batteries whose aging trajectories are closest to it. Here, the distance between two aging trajectories is defined by:
D_{i,j} = Σ_{k=1}^{L} (C_{i,k} − C_{j,k})²   (5)
where D_{i,j} is the distance between the trajectories of the i-th battery and the j-th battery, C_{i,k} is the capacity of cell i measured at the k-th cycle, and L is the total number of cycles evaluated. By selecting a suitable N1 (N1 = 3 in this work), a large number of normal batteries can be removed from the training set. In addition, the remaining normal samples can be regarded as the "boundary" separating the normal and abnormal batteries, which improves the generalization of the network.[ 34 ]
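A sketch of this drop-out step, assuming the trajectory distance of Equation (5) is a sum of squared per-cycle capacity differences (the exact metric is reconstructed from context); `nearest_normals` is a hypothetical helper name, and the toy trajectories below are illustrative only.

```python
import numpy as np

def trajectory_distance(cap_i, cap_j):
    """Assumed Eq. (5): sum of squared per-cycle capacity differences."""
    return float(np.sum((np.asarray(cap_i) - np.asarray(cap_j))**2))

def nearest_normals(abnormal_traj, normal_trajs, n1=3):
    """Keep only the N1 normal cells whose trajectories are closest."""
    d = [trajectory_distance(abnormal_traj, t) for t in normal_trajs]
    return sorted(range(len(normal_trajs)), key=d.__getitem__)[:n1]

# toy linear trajectories: the abnormal cell fades fastest
cycles = np.arange(120)
abnormal = 2.53 - 0.004 * cycles
normals = [2.53 - r * cycles for r in (0.0035, 0.0030, 0.0020, 0.0010)]
print(nearest_normals(abnormal, normals))   # -> [0, 1, 2]
```

All other normal cells are then dropped from the training set, leaving these nearest "boundary" cells.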
Creation of Abnormal Samples
In addition to the unbalanced proportion, the low absolute number of abnormal batteries was also an important issue. Here, we propose to solve this issue by "creating" more abnormal data. The aim of this work is to use the data collected from the first cycle of the aging test to identify lifetime abnormality. However, as shown in Figure 1 and many other battery aging datasets,[ 22 , 35 , 36 ] the battery's behaviors in the first few cycles are highly similar. Therefore, we propose to use the data collected in the first N2 cycles to enrich the training set. In other words, having actually collected one abnormal battery, we can pretend to have collected N2 abnormal (fake) batteries. The "first-cycle data" of these N2 fake batteries are obtained from the data of the abnormal battery collected from cycle 1 to cycle N2. In short, each abnormal battery collected generates N2 feature vectors (Γ) in the training phase.
Several issues are worth pointing out. First, the strategy here is to "create" data from experimental facts. It differs from conventional resampling- or oversampling-based data creation, where rule-based methods are used to draw random samples around the known positive samples.[ 33 ] Second, the data-creation method is also applied to the selected normal batteries, to minimize the influence of slight battery aging between cycle 1 and cycle N2. Third, the data creation is only applied in the training phase; in the testing phase, our method compares only the first-cycle data of an unknown battery with the known data in the supporting set to implement the classification. Finally, for the specific dataset generated in this work, N2 = 4 is selected.
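The data-creation idea reduces to relabeling early-cycle data, as the minimal sketch below shows; `create_abnormal_samples` is a hypothetical helper name, and the two-element feature vectors are toy placeholders for the full Γ of Equation (2).

```python
def create_abnormal_samples(cycle_features, n2=4):
    """Treat each of the first N2 cycles of one measured abnormal cell
    as the 'first cycle' of a separate (fake) abnormal cell.

    cycle_features[k] is the feature vector Gamma from cycle k+1.
    """
    return [cycle_features[k] for k in range(n2)]

# one real abnormal cell with per-cycle features -> four training samples
fake = create_abnormal_samples(
    [[3.0, 1.50], [3.0, 1.49], [3.0, 1.48], [3.0, 1.47], [3.0, 1.46]]
)
print(len(fake))   # 4
```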
Performance Indicators
Some terminologies and key performance indicators used in this work are introduced here. First, the definitions of true positive, true negative, false positive, and false negative are given in Table 3. Then, the accuracy of the prediction can be calculated by:
Accuracy = (TP + TN) / (TP + TN + FP + FN)   (6)
Table 3.
Definition of the terminologies.
|  | Actual class: abnormal | Actual class: normal |
|---|---|---|
| Predicted class: abnormal | True Positive (TP) | False Positive (FP) |
| Predicted class: normal | False Negative (FN) | True Negative (TN) |
The Precision and Recall can be defined respectively as:
Precision = TP / (TP + FP)   (7)
Recall = TP / (TP + FN)   (8)
The F β score of the prediction is defined as:
F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall)   (9)
When β = 1, Precision and Recall are given the same weight; when β < 1, Precision is weighted more heavily; and when β > 1, Recall becomes more important. β = 2 is utilized in this work since the consequences of missed alarms are more severe than those of false alarms in battery lifetime abnormality detection.
Conflict of Interest
The authors declare no conflict of interest.
Supporting Information
Acknowledgements
The first author would like to thank the continuing support from the Guangzhou HKUST Fok Ying Tung Research Institute during Hong Kong's unrest and the outbreak of Covid-19. The authors would like to thank Dr. Weihan Li from RWTH Aachen University, the first author of Ref. [26], for providing technical support when we tested our method on their dataset. We would also like to thank Ms Wenying Huang and Mr Jun Yuan from Far East Battery for their recommendations on aging experiment design when using batteries produced by their company. This work was supported, in part, by the Hong Kong RGC Postdoctoral Fellowship Scheme (PDFS2122-6S06), the Hong Kong Research Grant Council under grant 16208520, the National Natural Science Foundation of China (51977131, 52277223), the Natural Science Foundation of Shanghai (19ZR1435800), the Guangdong Scientific and Technological Project (2019A050516002), the Foshan-HKUST Projects Program (FSUST20-FYTRI12F), and the Swedish Energy Agency under the Vehicle Strategic Research and Innovation Program (Grant No. 50187-1).
Tang X., Lai X., Zou C., Zhou Y., Zhu J., Zheng Y., Gao F., Detecting Abnormality of Battery Lifetime from First‐Cycle Data Using Few‐Shot Learning. Adv. Sci. 2024, 11, 2305315. 10.1002/advs.202305315
Contributor Information
Changfu Zou, Email: changfu.zou@chalmers.se.
Furong Gao, Email: kefgao@ust.hk.
Data Availability Statement
The code and data can be downloaded from: https://drive.google.com/drive/folders/1wMoXu‐G9zC6CPyo‐t742XNEkHYbPnWmb?usp=sharing
References
- 1. Desaulty A.‐M., Climent D. M., Lefebvre G., Cristiano‐Tassi A., Peralta D., Perret S., Urban A., Guerrot C., Nat. Commun. 2022, 13, 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2. Manthiram A., Lutkenhaus J. L., Fu Y., Bai P., Kim B. G., Lee S. W., Okonkwo E., Penner R. M., One Earth 2022, 5, 203. [Google Scholar]
- 3. Hou L.‐P., Zhang X.‐Q., Yao N., Chen X., Li B.‐Q., Shi P., Jin C.‐B., Huang J.‐Q., Zhang Q., Chem 2022, 8, 1083. [Google Scholar]
- 4. Chen M., Ma X., Chen B., Arsenault R., Karlson P., Simon N., Wang Y., Joule 2019, 3, 2622. [Google Scholar]
- 5. Lai X., Chen Q., Tang X., Zhou Y., Gao F., Guo Y., Bhagat R., Zheng Y., eTransportation 2022, 12, 100169. [Google Scholar]
- 6. Zeng A., Chen W., Rasmussen K. D., Zhu X., Lundhaug M., Müller D. B., Tan J., Keiding J. K., Liu L., Dai T., Wang A., Liu G., Nat. Commun. 2022, 13, 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. IEA , World Energy Outlook 2022, IEA, Paris: 2022, https://www.iea.org/reports/world-energy-outlook-2022. [Google Scholar]
- 8. Mutezo G., Mulopo J., Renewable Sustainable Energy Rev. 2021, 137, 110609. [Google Scholar]
- 9. Liu M.‐C., Chang H.‐C., Huang C.‐H., Hsu F.‐R., Int. J. Adv. Des. Manuf. Technol. 2022, 123, 3361. [Google Scholar]
- 10. Hoekstra F. S. J., Bergveld H. J., Donkers M. C. F., IEEE Trans. Control Syst. Technol. 2022, 30, 2759. [Google Scholar]
- 11. Gong Z., Chowdhury A., Trescases O., IEEE Trans. Power Electron. 2022, 37, 13264. [Google Scholar]
- 12. Zhou Z., Liu Y., You M., Xiong R., Zhou X., Green Energy Intell. Transportation 2022, 1, 100008. [Google Scholar]
- 13. Lee C., Jo S., Kwon D., Pecht M. G., IEEE Trans. Ind. Electron. 2021, 68, 2659. [Google Scholar]
- 14. Zhou Z., Duan B., Kang Y., Shang Y., Cui N., Chang L., Zhang C., J. Cleaner Prod. 2020, 267, 121882. [Google Scholar]
- 15. Jiang Y., Jiang J., Zhang C., Zhang W., Gao Y., Guo Q., J. Power Sources 2017, 360, 180. [Google Scholar]
- 16. Lai X., Qiao D., Zheng Y., Ouyang M., Han X., Zhou L., J. Cleaner Prod. 2019, 213, 776. [Google Scholar]
- 17. Bai X., Tan J., Wang X., Wang L., Liu C., Shi L., Sun W., J. Cleaner Prod. 2019, 233, 429. [Google Scholar]
- 18. Liao Q., Mu M., Zhao S., Zhang L., Jiang T., Ye J., Shen X., Zhou G., Int. J. Hydrogen Energy 2017, 42, 18817. [Google Scholar]
- 19. Diao W., Naqvi I. H., Pecht M., J. Energy Storage 2020, 32, 101710. [Google Scholar]
- 20. Kim S., Park H. J., Choi J.‐H., Kwon D., IEEE Trans. Ind. Electron. 2021, 68, 3485. [Google Scholar]
- 21. Fermín‐Cueto P., McTurk E., Allerhand M., Medina‐Lopez E., Anjos M. F., Sylvester J., dos Reis G., Energy and AI 2020, 1, 100006. [Google Scholar]
- 22. Severson K. A., Attia P. M., Jin N., Perkins N., Jiang B., Yang Z., Chen M. H., Aykol M., Herring P. K., Fraggedakis D., Bazant M. Z., Harris S. J., Chueh W. C., Braatz R. D., Nat. Energy 2019, 4, 383. [Google Scholar]
- 23. Li S., He H., Li J., Appl. Energy 2019, 242, 1259. [Google Scholar]
- 24. Finegan D. P., Zhu J., Feng X., Keyser M., Ulmefors M., Li W., Bazant M. Z., Cooper S. J., Joule 2021, 5, 316. [Google Scholar]
- 25. Attia P. M., Bills A., Planella F. B., Dechent P., dos Reis G., Dubarry M., Gasper P., Gilchrist R., Greenbank S., Howey D., Liu O., Khoo E., Preger Y., Soni A., Sripad S., Stefanopoulou A. G., Sulzer V., J. Electrochem. Soc. 2022, 169, 060517. [Google Scholar]
- 26. Li W., Sengupta N., Dechent P., Howey D., Annaswamy A., Sauer D. U., J. Power Sources 2021, 506, 230024. [Google Scholar]
- 27. Baumhöfer T., Brühl M., Rothgang S., Sauer D. U., J. Power Sources 2014, 247, 332. [Google Scholar]
- 28. Duan Y., Tian J., Lu J., Wang C., Shen W., Xiong R., Energy Storage Mater. 2021, 41, 24. [Google Scholar]
- 29. Tang X., Lai X., Liu Q., Zheng Y., Zhou Y., Ma Y., Gao F., iScience 2023, 26, 106821. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Scales L. E., Introduction to Non‐Linear Optimization, Macmillan Education, UK: 1985. [Google Scholar]
- 31. Mehrer J., Spoerer C. J., Kriegeskorte N., Kietzmann T. C., Nat. Commun. 2020, 11, 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Shao J., J. Am. Stat. Assoc. 1993, 88, 486. [Google Scholar]
- 33. Kitchenham B., IEEE Trans. Soft. Eng. 1998, 24, 278. [Google Scholar]
- 34. Burges C. J., Data Min. Knowl. Discov. 1998, 2, 121. [Google Scholar]
- 35. Tang X., Liu K., Li K., Widanage W. D., Kendrick E., Gao F., Patterns 2021, 2, 100302. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. He W., Williard N., Osterman M., Pecht M., J. Power Sources 2011, 196, 10314. [Google Scholar]