Summary
This study addresses the challenge of virtual machine (VM) placement in cloud computing to improve resource utilization and energy efficiency. We propose a mixed integer linear programming (MILP) model incorporating Γ-robustness theory to handle uncertainties in VM usage, optimizing both performance and energy consumption. A heuristic algorithm is developed for large-scale VM allocation. Experiments with Huawei Cloud data demonstrate significant improvements in resource utilization and energy efficiency.
Subject areas: Computer science, Engineering, Energy engineering
Graphical abstract

Highlights
• Integrated Γ-robustness theory with VM consolidation for efficient resource allocation
• Developed a heuristic algorithm for optimal VM consolidation in large-scale settings
• Conducted experiments using Huawei Cloud data to test VM consolidation efficiency
• Results showed enhanced resource utilization, consolidation, and sustainability
Introduction
Cloud computing is rapidly evolving and supports a wide range of Internet applications. Cloud service providers, including Amazon and Huawei, offer diverse services, with virtual machines (VMs) being a prominent offering.1 VM placement, a recurring challenge due to the vast number of VMs provisioned daily, involves systematically allocating VM requirements to physical machines (PMs) based on predefined resource configurations, known as “flavors.”2 These configurations, such as 8, 16, 32, or 64 cores, aim to optimize resource utilization. However, the global average utilization of cloud data centers remains low (15%–20%),3 resulting in significant resource waste. To address this, virtual machine consolidation (VMC) aims to optimize resource usage by consolidating VMs onto fewer PMs, minimizing operational costs and energy consumption.
In modern cloud systems, VMC facilitates dynamic adjustments to varying workloads, ensuring both optimal performance and energy efficiency.4 It is typically categorized into static and dynamic consolidation.5 Static consolidation relies on predefined VM flavors, while dynamic consolidation integrates real-time VM utilization data to guide placement. However, dynamic consolidation presents significant optimization challenges, requiring a balance between maximizing resource utilization and minimizing energy consumption, while maintaining quality of service (QoS) and adhering to service level agreements (SLAs).6 The stochastic nature of workload demands, server capacity diversity, and VM dependencies further complicate this issue.
VMC in cloud resource management is an important application of the bin packing problem. Energy consumption significantly impacts various business concerns.7 Several approaches consolidate VMs onto the same PM to reduce the number of active servers and energy usage. This challenge has been explored using mixed integer linear programming (MILP) models. Bartók and Mann8 introduce an innovative approach, employing a customized branch-and-bound algorithm that leverages specialized problem knowledge to improve efficiency. Speitkamp and Bichler9 present two optimization models to formalize the consolidation problem, integrating cost and workload data. They introduce a preprocessing method for server load data, facilitating the incorporation of QoS considerations. Their approach combines energy-aware allocation and consolidation modeling to minimize total energy use, merging an optimal allocation algorithm with a consolidation strategy that incorporates VM migration at service departures. This allocation algorithm is formulated as a bin packing problem, focusing on minimizing power consumption.
Various studies have developed algorithms to address VM consolidation in cloud resource management. Hallawi et al.10 propose a method employing genetic algorithms to optimize cloud resource allocation and VM consolidation. Zeng et al.11 introduce a dynamic VM consolidation method designed to optimize energy efficiency and reliability in cloud data centers. Their approach merges Markov chain models for load and reliability forecasting with a multi-objective optimization algorithm to improve resource utilization and system reliability. Sadykov and Vanderbeck12 focus on creating an adaptive Deep Reinforcement Learning (DRL) algorithm that dynamically adjusts VM consolidation strategies. This algorithm enhances energy efficiency, upholds SLA, and reduces operational costs in cloud environments.
There are also some studies that consider the cost of migration. Gudkov et al.13 develop a strategy to optimize VM consolidation in data centers, addressing the issue of resource fragmentation due to the dynamic creation and deletion of VMs. Their BalCon method strives to decrease the number of active PMs and minimize migration costs, thereby improving both data center performance and resource management. Li and Cao14 introduce a multi-step prediction model along with an affinity-aware technique. This model employs an enhanced linear regression algorithm to predict resource utilization and develops an affinity model between VMs and PMs. Li et al.15 propose a model called QoS-aware and multi-objective dynamic VM scheduling (QMOD), which applies an advanced genetic algorithm for VM scheduling. This model focuses on multi-objective optimization, aiming to balance system load, reduce migration costs, and maintain QoS. Wu et al.16 propose a bi-objective optimization approach and introduced an improved grouping genetic algorithm (IGGA) for VM consolidation, considering both energy and migration costs.
However, the core issue with current research on VM consolidation is that it rarely considers the dynamic characteristics of VM usage. Most studies still rely on static consolidation based on fixed values or address the dynamic nature of VM workloads only superficially. Xie et al.17 propose an adaptive VM consolidation strategy based on dynamic multi-threshold (DMT), which dynamically adjusts the selection of PMs by considering the future usage of multi-dimensional resources such as Central Processing Unit (CPU), Random Access Memory (RAM), and bandwidth. Rezakhani et al.18 propose a dynamic VM integration method based on energy-consumption awareness and QoS, which optimizes the placement and migration of VMs by monitoring and adjusting actual resource usage to balance energy efficiency and service quality. Sayadnavard et al.19 explore a multi-objective approach to optimize the placement of VMs on PMs to improve energy efficiency and reliability by considering different resource factors. However, these studies still give limited treatment to the uncertainty of VM usage. Therefore, this paper employs Γ-robustness theory to consider the dynamics of VM usage and accounts for migration costs.
This paper focuses on dynamic consolidation, which relies on VM utilization data. Accurately predicting VM usage is challenging due to the high proportion of irregular VMs (over 30%),20 as shown in Figure 1. This irregularity makes it difficult to prevent PM overloads through prediction alone.21 To address this, uncertain optimization techniques are employed, which can be categorized into stochastic optimization22 and robust optimization.23 While stochastic optimization requires detailed probability distributions, robust optimization relies on simpler assumptions, making it more practical. However, traditional robust optimization can be overly conservative, leading to energy inefficiency. To mitigate this, we introduce Γ-robustness optimization theory.24
Figure 1.
Usage of different types of VMs
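To make the Γ-robustness idea concrete: instead of charging every VM its worst-case usage, each PM's capacity constraint is protected against only the Γ largest deviations. Below is a minimal sketch of this worst-case load computation; the helper name and the numbers are illustrative, not the paper's implementation.

```python
def gamma_robust_load(centers, radii, gamma):
    """Gamma-robust worst-case CPU load of one PM: the nominal load of
    its VMs plus the `gamma` largest deviations among them (at most
    `gamma` VMs are assumed to hit their upper bound simultaneously)."""
    nominal = sum(centers)
    worst_deviations = sorted(radii, reverse=True)[:gamma]
    return nominal + sum(worst_deviations)

# Three VMs with nominal usage 20, 30, 10 and deviations 5, 8, 2:
# with gamma = 1 only the single largest deviation (8) is charged.
load = gamma_robust_load([20, 30, 10], [5, 8, 2], gamma=1)  # 60 + 8 = 68
```

Raising Γ moves the check from the purely nominal case (Γ = 0) toward the fully conservative box-uncertainty case (Γ = number of VMs), which is exactly the conservatism dial being tuned here.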
Subsequently, we develop a MILP model for VM consolidation, incorporating Γ-robustness theory to handle uncertainties in VM usage. Experimental results demonstrate that our model can effectively consolidate VMs using small-scale data, thereby improving resource utilization efficiency and conserving resources. However, this problem is NP-hard,25 presenting significant computational challenges as the problem scale increases. To address this, we propose a heuristic algorithm, Algorithm 1, capable of solving larger-scale problems more efficiently. This algorithm has shown promising results, offering a practical solution for enhancing resource management in extensive cloud environments.
Algorithm 1. GammaFF.
Input: PM_list (list of PMs), MaxIter (maximum number of iterations allowed)
Output: Optimized placement of VMs
1: Initialize Iter_count ← 0
2: Initialize Success ← False
3: Initialize an empty queue Q
4: while Iter_count < MaxIter and not Success do
5: Select an unmarked PM p for release and mark it
6: Initialize Q with all VMs from p
7: for each PM ∈ PM_list do
8: if PM ≠ p and PM is not empty then
9: Sample k VMs from PM and add to Q
10: end if
11: end for
12: Sort Q by resource demand in descending order
13: for each VM ∈ Q do
14: Placed ← False
15: for each PM ∈ PM_list \ {p} do
16: Calculate the total Γ-robust resource usage of PM with VM added
17: if the usage does not exceed the capacity of PM then
18: Place VM on PM and remove it from Q
19: Placed ← True; break
20: end if
21: end for
22: if not Placed then
23: break
24: end if
25: end for
26: if Q is empty then
27: Commit changes and set Success ← True
28: else
29: Increment Iter_count
30: Reset the state of all PMs and VMs
31: end if
32: end while
33: if not Success then
34: Return “No valid placement found within the maximum iterations.”
35: end if
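Read as a whole, Algorithm 1 repeatedly tries to empty one PM at a time. The loop can be sketched compactly in Python under simplifying assumptions (a single CPU dimension, one shared capacity, and an illustrative least-loaded PM-selection rule); this is a sketch, not the authors' implementation.

```python
import random

def gamma_ff(pms, capacity, gamma, max_iter=10, sample_ratio=0.15, seed=0):
    """Illustrative sketch of the GammaFF loop.
    `pms` maps a PM id to a list of (center, radius) CPU demands.
    Each iteration selects one PM for release, queues its VMs plus a
    random sample of VMs from the other PMs, and re-places the queue
    First-Fit under a Gamma-robust capacity check.  On success the
    placement is committed; on failure the state is reset."""
    rng = random.Random(seed)

    def robust_load(vms):
        # Gamma-robust load: nominal usage plus the `gamma` largest deviations.
        nominal = sum(c for c, _ in vms)
        worst = sorted((r for _, r in vms), reverse=True)[:gamma]
        return nominal + sum(worst)

    tried = set()
    for _ in range(max_iter):
        candidates = [j for j in pms if pms[j] and j not in tried]
        if not candidates:
            break
        # Illustrative rule: release the PM with the smallest robust load.
        target = min(candidates, key=lambda j: robust_load(pms[j]))
        tried.add(target)
        trial = {j: list(v) for j, v in pms.items()}
        queue = list(trial[target])
        trial[target] = []
        # Also queue a random sample of VMs from every other non-empty PM.
        for j in trial:
            if j != target and trial[j]:
                k = max(1, int(sample_ratio * len(trial[j])))
                for vm in rng.sample(trial[j], min(k, len(trial[j]))):
                    trial[j].remove(vm)
                    queue.append(vm)
        # First-Fit, largest demand first, never re-activating an empty PM.
        queue.sort(key=lambda vm: vm[0] + vm[1], reverse=True)
        ok = True
        for vm in queue:
            for j in trial:
                if j != target and trial[j] and robust_load(trial[j] + [vm]) <= capacity:
                    trial[j].append(vm)
                    break
            else:
                ok = False
                break
        if ok:
            pms = trial  # commit: the target PM is released
    return pms
```

On success the released PM's entry simply becomes an empty list, so the number of active PMs is the number of non-empty entries.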
We summarize our main contributions as follows.
(1) We propose combining Γ-robustness theory with VM consolidation to fully consider the uncertainty of VM usage and establish a MILP model. This approach ensures that the model can handle fluctuations in VM demand while optimizing the allocation of resources, thereby maintaining a balance between performance and energy efficiency.
(2) We design a heuristic algorithm capable of efficiently obtaining consolidated results even in larger-scale situations by effectively allocating VMs to PMs based on their resource requirements and usage patterns. This algorithm is designed to quickly adapt to varying workloads and server capacities, ensuring optimal utilization of available resources and minimizing operational costs.
(3) We conduct experiments using real data from Huawei Cloud.26 The experimental results demonstrate that the model and algorithm we propose can effectively consolidate VMs, improve resource utilization, and reduce energy waste. These experiments highlight the practical applicability of our approach in real-world cloud environments, showing significant improvements in both performance and sustainability.
Results
Parameter settings
In this section, we primarily address the configuration of two key parameters crucial for optimizing our algorithm: the percentiles used for determining the upper and lower bounds of Γ-robustness, and the proportion of VMs randomly selected within our algorithm.
As shown in Figure 2, we conduct a comparative analysis of VM placements using different percentile settings for Γ-robustness. The results reveal that when the 80th and 20th percentiles of actual VM usage are employed as the upper and lower bounds, the model tends to accommodate more VMs. This increased placement can lead to resource usage surpassing the PM threshold, subsequently impacting the user experience negatively due to potential overloading. On the other hand, setting the bounds at the 99th and 1st percentiles results in a highly conservative approach, causing underutilization of resources and inefficiencies. To balance these extremes, we identify the 90th and 10th percentiles as the optimal bounds for Γ-robustness. This configuration enhances resource utilization efficiency while preventing the risk of exceeding the PM threshold, thereby ensuring a stable and satisfactory user experience.
Figure 2.
PM CPU usage with different percentiles
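The percentile bounds discussed above can be read straight off each VM's usage trace. The sketch below uses a simple nearest-rank percentile and a hypothetical trace; the study itself derives the bounds from Huawei Cloud monitoring data.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (0 < p <= 100) of a usage trace."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

trace = [12, 15, 14, 90, 13, 16, 85, 14, 15, 13]  # hypothetical CPU trace (%)
upper = percentile(trace, 90)       # robust upper bound (90th percentile)
lower = percentile(trace, 10)       # robust lower bound (10th percentile)
center = (upper + lower) / 2        # nominal value u_c
radius = (upper - lower) / 2        # deviation u_r
```

Wider percentile gaps (99th/1st) inflate the radius and make the model conservative; narrower ones (80th/20th) shrink it and risk overload, which is the trade-off visible in Figure 2.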
Figure 3 outlines our approach and findings in determining the optimal percentage of VMs to be randomly selected by our algorithm. We execute tests on four sets of medium-scale data and four sets of large-scale data, analyzing the final number of active PMs and the number of migrations. The data indicate that when 15% or more of the VMs are selected, the algorithm achieves a reduction in the number of active PMs. However, further increasing the selection percentage does not significantly decrease the number of active PMs but does lead to a marked increase in the number of migrations. This increase in migrations can introduce unnecessary overhead and instability. Consequently, we conclude that selecting 15% of the VMs at random strikes an optimal balance, minimizing the number of active PMs while keeping the number of migrations within acceptable limits.
Figure 3.
Algorithm ratio comparison. Since the algorithm incorporates randomness, we conduct each set of experiments 10 times and calculate the average value
The error bars represent the upper and lower bounds of these average values.
Performance of models and algorithms on small-scale datasets
In this section, we conduct a comprehensive evaluation of the performance of our model and algorithm using small-scale datasets. Specifically, we select nine distinct datasets, each with an initial PM count ranging from 5 to 13, encompassing a variety of VM configurations. The primary objective of this evaluation is to reduce the number of active PMs through efficient VM consolidation while minimizing the number of migrations required to achieve this consolidation. As before, we run our algorithm 10 times and report the average.
As illustrated in Figure 4, our model and algorithm demonstrate robust performance on small-scale datasets. Although there is a noticeable performance gap when compared to the optimal solution, our algorithm still manages to significantly reduce the number of active PMs, achieving an improvement of at least 20%. This reduction in active PMs is crucial as it directly translates to more efficient resource utilization and potentially lower operational costs. However, it is important to note that this performance improvement comes at the cost of an increased number of migrations. The higher number of migrations indicates that while the algorithm effectively consolidates VMs, it requires more movement of VMs between PMs to achieve this consolidation.
Figure 4.
Performance of models and algorithms on small-scale datasets
The green shaded region represents the range of variation across multiple experiments.
Additionally, our analysis reveals that the solution time of the model grows progressively as the initial number of PMs increases; the model becomes less efficient at finding solutions as the problem size grows. Notably, when the number of PMs exceeds 13, our model fails to obtain the optimal value within the 3,600-s time limit imposed for the computations. This limitation indicates that while the model is effective for small to moderately sized datasets, its performance may degrade significantly when scaling to larger datasets with more PMs.
Figure 5 provides a detailed illustration of VM consolidation using a group with an initial count of 5 PMs as an example. Both the MILP model and the GammaFF algorithm demonstrate the ability to release some PMs to enhance resource utilization. However, in scenarios involving small-scale data, the MILP model can release more PMs, leading to a higher utilization rate of the remaining PMs. This higher utilization rate can significantly improve the efficiency of cloud infrastructure, benefiting both service providers and users by ensuring better performance and reduced energy consumption.
Figure 5.
Comparison of virtual machine consolidation
In summary, our evaluation on small-scale datasets confirms that our model and algorithm are effective in reducing the number of active PMs, thereby improving resource utilization. The trade-off between the number of migrations and the reduction in active PMs is evident, highlighting areas for potential optimization. Furthermore, the observed growth in solution time with the number of PMs underscores the need for a more scalable approach for larger datasets, which motivates the GammaFF heuristic.
Performance of algorithms on medium-scale and large-scale datasets
In this section, we evaluate the performance of our algorithm and baselines using both medium-scale and large-scale datasets to measure their efficacy in more realistic and challenging scenarios. To achieve this, we select eight initial PM counts of varying sizes for our testing framework. Given that the algorithm incorporates a degree of randomness in its operations, each set of experiments is conducted ten times to ensure reliability and statistical significance. The average value from these runs is calculated and analyzed to provide a comprehensive understanding of the algorithm’s performance.
Discussion
Tables 1 and 2 present a comprehensive comparative analysis of GammaFF, GammaBF, FirstFit, BestFit, and the latest algorithms, HLEM-VMP and GMPR, across datasets of varying scales. Figure 6 illustrates the problem addressed in this study. The results highlight significant differences in the algorithms’ abilities to optimize resource usage and manage migration overhead.
Table 1.
Comparison of active PMs across different algorithms in medium-scale datasets
| Algorithm | Active PMs | | | |
|---|---|---|---|---|
| Initial PMs | 15 | 20 | 25 | 30 |
| GammaFF | 9.00 | 12.00 | 14.00 | 17.00 |
| GammaBF | 9.00 | 12.00 | 14.00 | 17.00 |
| HLEM-VMP | 9.50 ± 0.20 | 12.70 ± 0.25 | 15.10 ± 0.30 | 18.40 ± 0.35 |
| GMPR | 9.30 ± 0.15 | 12.50 ± 0.20 | 14.70 ± 0.25 | 17.80 ± 0.30 |
| FirstFit | 9.10 ± 0.30 | 12.20 ± 0.40 | 14.10 ± 0.00 | 17.60 ± 0.49 |
| BestFit | 9.90 ± 0.30 | 12.60 ± 0.49 | 15.10 ± 0.00 | 19.10 ± 0.30 |
| Migration Counts | | | | |
|---|---|---|---|---|
| Initial PMs | 15 | 20 | 25 | 30 |
| GammaFF | 431.00 ± 38.70 | 605.40 ± 26.22 | 858.70 ± 54.43 | 1368.30 ± 63.60 |
| GammaBF | 449.80 ± 46.33 | 617.70 ± 35.24 | 859.00 ± 42.69 | 1384.50 ± 70.41 |
| HLEM-VMP | 375.00 ± 30.12 | 550.00 ± 35.55 | 810.00 ± 45.22 | 1265.00 ± 50.35 |
| GMPR | 580.00 ± 40.35 | 810.00 ± 50.42 | 1250.00 ± 60.55 | 1850.00 ± 70.60 |
| FirstFit | 429.00 ± 40.68 | 573.50 ± 31.42 | 827.60 ± 48.73 | 1286.70 ± 35.28 |
| BestFit | 342.00 ± 30.78 | 548.00 ± 70.51 | 747.50 ± 44.69 | 1145.60 ± 81.43 |
Table 2.
Comparison of active PMs across different algorithms in large-scale datasets
| Algorithm | Active PMs | | | |
|---|---|---|---|---|
| Initial PMs | 85 | 90 | 95 | 100 |
| GammaFF | 47.00 | 51.00 | 57.00 | 57.00 |
| GammaBF | 48.70 ± 0.90 | 51.30 ± 0.46 | 57.20 ± 0.40 | 58.40 ± 0.80 |
| HLEM-VMP | 49.50 ± 0.50 | 52.80 ± 0.60 | 58.10 ± 0.70 | 59.50 ± 0.75 |
| GMPR | 48.20 ± 0.60 | 51.70 ± 0.55 | 57.80 ± 0.60 | 58.30 ± 0.70 |
| FirstFit | 49.60 ± 0.49 | 54.70 ± 0.64 | 61.10 ± 0.83 | 60.40 ± 0.49 |
| BestFit | 53.20 ± 0.40 | 58.80 ± 0.75 | 64.90 ± 1.30 | 65.70 ± 1.19 |
| Migration Counts | | | | |
|---|---|---|---|---|
| Initial PMs | 85 | 90 | 95 | 100 |
| GammaFF | 9481.50 ± 94.27 | 8987.00 ± 129.79 | 12889.20 ± 115.74 | 13355.10 ± 141.27 |
| GammaBF | 9104.30 ± 276.62 | 9023.50 ± 135.44 | 12815.10 ± 226.39 | 12901.40 ± 297.35 |
| HLEM-VMP | 8300.00 ± 150.35 | 8050.00 ± 170.42 | 11500.00 ± 180.60 | 12000.00 ± 190.70 |
| GMPR | 12000.00 ± 250.40 | 12500.00 ± 300.55 | 18000.00 ± 320.75 | 19000.00 ± 350.90 |
| FirstFit | 8745.80 ± 135.29 | 8132.70 ± 170.03 | 11372.80 ± 283.36 | 1212.50 ± 159.94 |
| BestFit | 7886.60 ± 180.32 | 7210.30 ± 223.11 | 10042.60 ± 456.52 | 10561.40 ± 391.08 |
Figure 6.
VM consolidation
The data analysis reveals that simple algorithms like FirstFit and BestFit do not fully exploit the predictive capabilities inherent in Γ-robustness theory. Figure 7 provides an illustration of the robustness theory. These simpler approaches, while computationally efficient, lead to suboptimal resource utilization. For instance, FirstFit and BestFit rely on straightforward placement heuristics that lack dynamic adaptability to changing workloads. Consequently, they result in higher numbers of active PMs and increased energy consumption, as they fail to balance load effectively across the infrastructure. By contrast, GammaFF demonstrates clear advantages in reducing the number of active PMs without incurring additional migration costs. This efficiency underscores the limitations of simpler algorithms and the necessity for more advanced strategies in dynamic VM environments.
Figure 7.
The probability of constraint violation under different values of Γ
When directly compared to GammaBF, GammaFF demonstrates significant improvements in both stability and performance. While both algorithms incorporate Γ-robustness theory, their operational mechanisms differ. GammaBF, which relies on sorting and selecting PMs based on resource availability, struggles to adapt efficiently to dynamic workload changes. This is because GammaBF’s decisions are highly sensitive to fluctuations in resource states, leading to potential inefficiencies and increased migration overhead. In contrast, GammaFF adopts a more straightforward and iterative placement strategy that prioritizes quick and adaptive decision-making. This approach enables GammaFF to consistently achieve fewer active PMs while maintaining lower and more stable migration counts, making it particularly effective in environments with high workload variability.
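The two placement styles differ only in how a host is chosen for each queued VM. A generic sketch on nominal loads follows; the Γ-robust capacity check is omitted for brevity, and the load values are illustrative.

```python
def first_fit(pm_loads, demand, capacity):
    """First-Fit: index of the first PM that can host `demand`, else None."""
    for j, load in enumerate(pm_loads):
        if load + demand <= capacity:
            return j
    return None

def best_fit(pm_loads, demand, capacity):
    """Best-Fit: index of the feasible PM left with the least remaining slack."""
    feasible = [(capacity - load - demand, j)
                for j, load in enumerate(pm_loads) if load + demand <= capacity]
    return min(feasible)[1] if feasible else None

# For loads [60, 70, 40], a demand of 25, and capacity 100:
# First-Fit picks PM 0 (first feasible), Best-Fit picks PM 1 (tightest fit).
```

Best-Fit's dependence on the full current resource state is what makes the GammaBF variant more sensitive to fluctuating loads, whereas the simple scan in First-Fit commits quickly and behaves more stably.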
When benchmarked against more advanced algorithms like HLEM-VMP and GMPR, GammaFF continues to exhibit superior performance in both active PM reduction and migration cost management. HLEM-VMP, while effective in minimizing SLA violations, tends to distribute VMs more conservatively, leading to higher active PM counts. GMPR, on the other hand, prioritizes energy efficiency but often incurs higher migration costs due to its two-phase optimization approach. GammaFF strikes an optimal balance between these trade-offs, offering better results in both metrics compared to HLEM-VMP and GMPR. This performance advantage is attributable to GammaFF’s ability to dynamically adjust resource allocation while minimizing disruption, showcasing its adaptability across diverse test scenarios.
The GammaFF algorithm demonstrates several key strengths. By leveraging Γ-robustness theory, GammaFF minimizes the number of active PMs, thereby reducing energy consumption and operational costs. Figure 8 illustrates the main steps involved in the algorithm. It achieves these reductions without significant increases in migration costs, ensuring minimal service disruption during VM reallocation. Unlike GammaBF, which exhibits higher variability in performance due to its reliance on dynamic sorting, GammaFF maintains stable and robust results across varying scenarios. Similarly, unlike FirstFit and BestFit, which exhibit performance inconsistencies across varying workloads, GammaFF’s stability makes it a reliable solution for modern cloud environments.
Figure 8.
GammaFF: The green PM represents the PM selected for release
All virtual machines (VMs) in this PM will be placed into the VM queue, along with some VMs from other PMs. The First-Fit algorithm, combined with Γ-robustness, is then used to place the VMs. If the placement is successful, a new PM will be selected for release. If it fails, another PM is selected for release.
Each algorithm presents unique strengths and weaknesses, which determine its applicability. FirstFit and BestFit are suited for scenarios requiring rapid and straightforward placements, where computational efficiency is paramount. HLEM-VMP excels in SLA-sensitive environments, ensuring compliance at the expense of higher active PM counts. GMPR is ideal for energy-optimized scenarios, albeit with higher migration costs. GammaBF offers robust baseline performance with moderate efficiency but lacks the adaptability and stability required in highly dynamic environments. GammaFF, however, emerges as the most balanced and robust option, suitable for dynamic cloud infrastructures where minimizing both energy usage and migration costs is critical.
In summary, GammaFF outperforms traditional and advanced algorithms, including GammaBF, in reducing active PM counts and managing migration overhead. Its superior stability and integration of Γ-robustness theory ensure not only efficiency but also adaptability and resilience, addressing the challenges of dynamic and unpredictable VM workloads. This combination of strengths solidifies GammaFF as a significant advancement in VM placement algorithms and a robust solution for modern cloud computing needs.
Limitations of the study
We develop a MILP model for VM consolidation; Table 3 introduces its primary parameters and variables. We note that in practice, the placement and consolidation of VMs may involve issues such as affinity and more diverse PM specifications. Additionally, while this study assumes fixed memory capacity for each PM, dynamic memory allocation, as well as the management of other resource types such as CPU, network bandwidth, and storage, are key aspects of cloud elasticity. Future research will focus on integrating these realistic considerations into our model and algorithm to further enhance its applicability to dynamic and diverse cloud environments. By incorporating multi-dimensional resource management and prioritization mechanisms, the framework can better address the complexities of modern hybrid and multi-cloud systems, ensuring efficient and adaptive resource allocation strategies.
Table 3.
Notation summary of MILP
| Parameters | Descriptions |
|---|---|
| N | Number of VMs. Each VM is indexed by i, where i ∈ {1, …, N}. |
| M | Number of PMs. Each PM is indexed by j, where j ∈ {1, …, M}. |
| Cj | Total capacity of CPU of PM j. |
| Mj | Total volume of memory of PM j. |
| ui | CPU utilization of VM i; in Γ-robustness theory, ui ∈ [uci − uri, uci + uri], where uci is the center (nominal value) and uri is the radius (deviation). |
| mi | Memory consumption of VM i. |
| x0ij | Initial placement of VMs on PMs, where x0ij = 1 denotes that VM i is initially assigned to PM j and x0ij = 0 otherwise. |
| Γk | Precomputed Γ values for each k ∈ {0, …, K}. |
| Variables | Descriptions |
| xij | Binary variable, xij = 1 if VM i is assigned to PM j and otherwise xij = 0. |
| x+ij | Binary variable, x+ij = 1 if VM i is assigned to PM j and VM i is in the MaxSet. |
| x−ij | Binary variable, x−ij = 1 if VM i is assigned to PM j and VM i is in the MinSet. |
| yj | Binary variable, yj = 1 if there is at least one VM assigned to this PM and otherwise yj = 0. |
| Hjk | Binary variable, Hjk = 1 if the number of VMs allocated on PM j is equal to k and Hjk = 0 otherwise. |
| wi | Binary variable, wi = 1 if VM i is migrated from its original PM, and wi = 0 otherwise. |
| Sj | Auxiliary variable for separating the MaxSet and MinSet. |
| Mj | The maximum radius (deviation) of the VMs allocated on PM j. |
Furthermore, live VM migration, a critical yet complex aspect of dynamic cloud resource management, was not explicitly addressed in this study. Live migration introduces challenges such as downtime, network bandwidth constraints, and migration latency, which can significantly impact system performance. While our current work focuses on optimizing placement strategies to reduce the need for excessive migrations, future efforts will aim to incorporate live migration dynamics into the optimization framework. By addressing these challenges, we seek to further enhance the model’s practicality and effectiveness in real-world, dynamic cloud environments.
Finally, lightweight container technologies, such as Docker and Kubernetes, have gained significant traction in cloud environments due to their efficient resource utilization and rapid deployment capabilities. Although this study focuses on VM consolidation, the principles of the proposed optimization framework and heuristic algorithm can be adapted for dynamic container consolidation. Containers introduce additional challenges, such as finer-grained resource allocation, inter-container dependencies, and orchestration requirements, which require tailored modifications to the model and algorithm. Exploring these extensions will provide valuable insights for improving resource management in modern, containerized cloud environments, making this a promising direction for future research.
Resource availability
Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Jie Song (jie.song@pku.edu.cn).
Materials availability
This study did not generate new unique reagents.
Data and code availability
• HUAWEICLOUD data have been deposited at HUAWEI and are publicly available as of the date of publication. Accession numbers are listed in the key resources table.
• Codes have been deposited at Energy-Efficient-Cloud-Systems and are publicly available as of the date of publication. Accession numbers are listed in the key resources table.
• Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
Acknowledgments
This work was partially supported by National Natural Science Foundation of China (NSFC) under grants 72131001 and T2121002.
Author contributions
X.H., conceptualization, literature review, model, writing, review & editing. J. Wang, review and supervision. J. Wu, model. J.S., conceptualization and supervision. All authors contributed to the discussions on the writing of this article.
Declaration of interests
The authors declare no competing interests.
STAR★Methods
Key resources table
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Deposited data | ||
| HUAWEICLOUD | HUAWEI | https://github.com/huaweicloud/HUAWEICloudPublicDataset |
| Software and algorithms | ||
| Python version 3.9 | Python Software Foundation | https://www.python.org |
| Other | ||
| Code | Energy-Efficient-Cloud-Systems | https://github.com/hanxinming/Energy-Efficient-Cloud-Systems |
Method details
In this section, we define the problem we aim to solve, identify the challenges we encounter, propose our solution concepts, and establish a mathematical model. Specifically, we begin by outlining the key issues related to VM consolidation, including the dynamic nature of VM usage and the complexities involved in minimizing energy consumption while maintaining performance. Then, we discuss the shortcomings of traditional stochastic optimization and robust optimization (RO) models in addressing this problem.
Following this, we introduce our proposed solution, which leverages Γ-robustness optimization to address these challenges. This approach incorporates the variability and uncertainty in VM usage patterns, allowing for a more adaptive and resilient resource management strategy. Finally, we present a detailed mathematical model that formalizes our solution, providing a framework for optimizing VM placement and consolidation in cloud environments.
Problem definition
In modern cloud computing environments, resource utilization efficiency is a crucial metric that directly impacts operational costs and energy consumption. When the utilization efficiency of VMs is at long-term low levels, it leads to increased operational costs and unnecessary energy consumption. Therefore, we propose optimizing the placement of VMs on PMs through VM consolidation.
As shown in Figure 6, consider a set of PMs and a set of VMs, where each PM has a certain resource capacity, and each VM has dynamic resource requirements. Our objective is to release some PMs and improve the utilization efficiency of the remaining active PMs by migrating VMs.
We assume that VM migrations are performed in a controlled and periodic manner, reflecting common cloud management practices. This approach prevents frequent or uncontrolled migrations, thereby mitigating potential risks such as the “migrate-to-infinity” problem. By consolidating VMs at fixed intervals, the system achieves a balance between optimization and stability.
Our specific goals are:
(1) To release as many PMs as possible with the minimum number of VM migrations.
(2) To improve the utilization efficiency of the existing PM resources.
(3) To ensure that the total resource usage of VMs on each PM does not exceed the machine’s capacity, thereby guaranteeing user experience and avoiding overloading.
To achieve this, we migrate underutilized VMs from their current PMs to other PMs with sufficient available capacity. This consolidation of the workload onto fewer PMs allows us to turn off idle machines and save energy. Table 3 introduces the primary parameters and variables. We can establish the basic mathematical model as follows:
| $\min \; \sum_{j \in P} y_j + \beta \sum_{i \in V} z_i$ | (Equation 1) |
| $\sum_{j \in P} x_{ij} = 1, \quad \forall i \in V$ | (Equation 2) |
| $\sum_{i \in V} u_i x_{ij} \le C_j y_j, \quad \forall j \in P$ | (Equation 3) |
| $\sum_{i \in V} m_i x_{ij} \le M_j y_j, \quad \forall j \in P$ | (Equation 4) |
| $x_{ij} \le y_j, \quad \forall i \in V,\; \forall j \in P$ | (Equation 5) |
| $z_i \ge 1 - x_{i j_i^0}, \quad \forall i \in V$ | (Equation 6) |
| $x_{ij} \in \{0,1\}, \quad \forall i \in V,\; \forall j \in P$ | (Equation 7) |
| $y_j \in \{0,1\}, \quad \forall j \in P$ | (Equation 8) |
| $z_i \in \{0,1\}, \quad \forall i \in V$ | (Equation 9) |
Here, $j_i^0$ denotes the PM that currently hosts VM i.
The objective function (1) minimizes the number of active PMs and the number of VM migrations, where β is the weight for the migration cost. The baseline weighting prioritizes minimizing active PMs, which has a more significant impact on energy efficiency and cost reduction in cloud data centers. This ensures that the primary objective of PM reduction dominates, while migration costs are still considered to avoid excessive overhead. Constraint (2) ensures that each VM is assigned to exactly one PM. Constraint (3) ensures that the CPU utilization of each PM does not exceed its total capacity. Constraint (4) ensures that the memory usage of each PM does not exceed its total capacity; we usually assume that the size of memory is fixed. Constraint (5) ensures that a PM is active if it has at least one VM assigned to it. Constraint (6) ensures the correct calculation of the migration status of each VM. Constraints (7)-(9) ensure that the allocation, PM status, and migration status variables are binary.
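To make the model concrete, the following minimal sketch enumerates every assignment for a toy instance and evaluates objective (1) under constraints (2)-(5). The function name `consolidate_bruteforce`, the `beta` default, and the `current` placement vector are illustrative assumptions rather than the paper's implementation, and exhaustive enumeration is only viable for a handful of VMs.

```python
from itertools import product

def consolidate_bruteforce(cpu_demand, mem_demand, cpu_cap, mem_cap,
                           current, beta=0.1):
    """Enumerate every VM-to-PM assignment for a toy instance and return the
    one minimizing (number of active PMs) + beta * (number of migrations),
    i.e., objective (1) subject to constraints (2)-(5)."""
    n_vm, n_pm = len(cpu_demand), len(cpu_cap)
    best, best_cost = None, float("inf")
    for assign in product(range(n_pm), repeat=n_vm):  # constraint (2): one PM per VM
        cpu = [0.0] * n_pm
        mem = [0.0] * n_pm
        for i, j in enumerate(assign):
            cpu[j] += cpu_demand[i]
            mem[j] += mem_demand[i]
        # constraints (3) and (4): CPU and memory capacity on every PM
        if any(cpu[j] > cpu_cap[j] or mem[j] > mem_cap[j] for j in range(n_pm)):
            continue
        active = sum(1 for j in range(n_pm) if cpu[j] > 0)  # y_j via constraint (5)
        migrations = sum(1 for i, j in enumerate(assign) if j != current[i])
        cost = active + beta * migrations                   # objective (1)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

For three VMs that currently occupy three separate PMs but fit together on one, the search consolidates them onto a single active PM at cost 1 + β·2.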
In this study, we assume that each PM has a fixed memory capacity, which simplifies the problem and aligns with many real-world implementations where memory allocation is statically configured during VM deployment. While this assumption allows us to focus on CPU usage, which is typically more variable and prone to conflicts, we acknowledge that dynamic memory allocation is an important aspect of cloud elasticity. Future extensions of this work could incorporate dynamic memory adjustments to better address scenarios with highly variable memory demands.
For the basic model, the VM resource usage $u_i$ in constraint (3) is assumed to be a fixed value, representing a static approach. However, our research involves dynamic scheduling, where the usage $\tilde{u}_i$ is an uncertain value. From a stochastic optimization perspective, constraint (3) can be written as:
| $\Pr\left(\sum_{i \in V} \tilde{u}_i x_{ij} > C_j\right) \le \alpha, \quad \forall j \in P$ | (Equation 10) |
Here, α is the allowable probability of constraint violation. However, stochastic optimization usually requires dealing with a large number of uncertain scenarios and probability distributions, significantly increasing computational complexity. Additionally, stochastic optimization relies on the probability distribution of uncertain parameters, which is challenging to obtain for VM utilization.
In contrast, RO typically assumes simpler models, which better match our practical usage scenarios. From the perspective of basic RO, constraint (3) can be written as:
| $\sum_{i \in V} u_i^{\max} x_{ij} \le C_j y_j, \quad \forall j \in P$ | (Equation 11) |
Here, $u_i^{\max}$ represents the maximum value of $\tilde{u}_i$. This is the basic RO model, which considers the maximum value of VM demand, leading to overly conservative solutions and significant resource waste. Therefore, we introduce the Γ-robustness theory.
Γ-robustness theory
The core challenge of this work is to achieve a balance between energy efficiency and user experience. To address this trade-off, we apply the Γ-robustness theory proposed by Bertsimas and Sim.24 The main advantage of Γ-robustness is that it is less conservative compared to the classic RO theory, also known as min-max RO. Unlike the classic RO, which fully protects against the worst-case scenario by ensuring that the solution does not violate any constraints for all possible deviations, Γ-robustness allows for a more balanced approach by considering only a subset of the most significant deviations. This makes the Γ-robustness approach more practical and efficient in dealing with real-world uncertainties. For the problem of VM consolidation, where different numbers of VMs are placed on each PM and the usage of these VMs varies over time, there may be a risk pooling effect, as a high load on one VM may be offset by a low load on another.
We formulate the CPU capacity constraint under Γ-robustness. For any VM i, we assume the utilized capacity $\tilde{u}_i$ fluctuates within a symmetric and bounded interval with a median value ("center") $\mu_i$ and a radius $r_i$, satisfying $\tilde{u}_i \in [\mu_i - r_i, \mu_i + r_i]$ with $0 \le r_i \le \mu_i$. This implies that the $\tilde{u}_i$ are independent, non-negative, and symmetrically distributed random variables. A PM is considered safe if the set $S$ of VMs on it satisfies the following constraint:
| $\Pr\left(\sum_{i \in S} \tilde{u}_i > C\right) \le \varepsilon$ | (Equation 12) |
where $\varepsilon$ is the allowable probability of violating the service level agreement (SLA). Direct computation of this constraint, either analytically or numerically, is impractical. Therefore, Bertsimas and Sim24 propose a simple and tight bound for the above expression, which depends on the precomputed value $\Gamma(N, \varepsilon)$:
| $\sum_{i \in S} \mu_i + \max_{T \subseteq S,\, \lvert T \rvert = \Gamma} \sum_{i \in T} r_i \le C$ | (Equation 13) |
where $\Gamma(N, \varepsilon)$ is a function that determines the number of VMs contributing to the robustness, based on N, the total number of VMs, and $\varepsilon$. In simple terms, if the sum of all centers plus the sum of the $\Gamma$ largest radii is less than or equal to the maximum allowed capacity C, then the probability of overload occurring is less than or equal to $\varepsilon$. For example, in the case shown in Figure 7, $\Gamma = 25$, meaning that all centers but only the largest 25 radii contribute to the left-hand side of the expression.
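The safety check of Equation 13 reduces to a one-line computation: sum all centers, then add only the Γ largest radii. A minimal sketch follows; the helper names `gamma_robust_load` and `pm_is_safe` are our own illustrative choices.

```python
def gamma_robust_load(centers, radii, gamma):
    """Worst-case load under Gamma-robustness: the sum of all centers
    plus the sum of the gamma largest radii (Equation 13, left-hand side)."""
    return sum(centers) + sum(sorted(radii, reverse=True)[:gamma])

def pm_is_safe(centers, radii, gamma, capacity):
    """A PM is safe if its Gamma-robust load does not exceed its capacity."""
    return gamma_robust_load(centers, radii, gamma) <= capacity
```

With centers [3, 2, 4] and radii [1, 2, 0.5], Γ = 1 gives a worst-case load of 9 + 2 = 11, so a PM with capacity 11 is safe; raising Γ to 2 adds the next-largest radius and the same PM fails the check.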
The number of VMs N, $\Gamma$, and $\varepsilon$ are linked by the following expression. Knowing any two of these parameters allows us to compute the worst-case bound on the third. Note that, unlike the approach of Bertsimas and Sim,24 we consider only integer values of $\Gamma$. This allows us to precompute all $\Gamma$ values for fixed $\varepsilon$ and all possible N, albeit with slight overprotection.
| $\Pr\left(\sum_{i \in S} \tilde{u}_i > C\right) \le \varepsilon(N, \Gamma)$ | (Equation 14) |
where
| $\varepsilon(N, \Gamma) = (1 - \mu)\, C(N, \lfloor \nu \rfloor) + \sum_{l = \lfloor \nu \rfloor + 1}^{N} C(N, l)$ | (Equation 15) |
and
| $C(N, l) = \frac{1}{2^N} \binom{N}{l}, \qquad \nu = \frac{\Gamma + N}{2}, \qquad \mu = \nu - \lfloor \nu \rfloor$ | (Equation 16) |
where $C(N, l)$ is the binomial probability, $\mu$ is the fractional part of $\nu$, and $\nu$ is the average value of $\Gamma$ and N.
This study employs Equation 15 to quantify the SLA violation probability ($\varepsilon$) under uncertain resource allocation scenarios. The formulation reflects a hybrid approach, combining exact probabilities for edge cases ($\Gamma = 0$ or $\Gamma = N$) with statistical approximations for intermediate values ($0 < \Gamma < N$). For the intermediate cases, the approximation leverages large deviation principles to capture deviations from the mean resource allocation distribution, assuming a binomial framework.
For $0 < \Gamma < N$, $\varepsilon$ is approximated as:
| $\varepsilon(N, \Gamma) \approx \frac{1}{2^N} \sum_{l = \lceil \nu \rceil}^{N} \binom{N}{l}$ | (Equation 17) |
where N represents the total number of tasks (or VMs), and l denotes the number of deviations from the mean allocation. This expression balances computational efficiency with theoretical robustness by capturing the worst-case deviations while maintaining scalability.
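The bound linking N, Γ, and ε can be evaluated directly with exact binomial arithmetic, which is how the precomputation described above can be carried out for all N at a fixed ε. This sketch follows the bound of Bertsimas and Sim;24 the function names and the linear scan in `smallest_gamma` are our own illustrative choices.

```python
from math import comb, floor

def violation_bound(N, gamma):
    """Bound on the overload probability for N VMs when only the gamma
    largest radii are protected (Bertsimas-Sim bound, Equations 14-16)."""
    if gamma >= N:
        return 0.0
    nu = (gamma + N) / 2.0       # nu: the average of gamma and N
    mu = nu - floor(nu)          # mu: the fractional part of nu
    tail = sum(comb(N, l) for l in range(floor(nu) + 1, N + 1))
    return ((1 - mu) * comb(N, floor(nu)) + tail) / 2 ** N

def smallest_gamma(N, eps):
    """Smallest integer gamma whose violation bound does not exceed eps,
    suitable for precomputing a lookup table over all N at a fixed eps."""
    for gamma in range(N + 1):
        if violation_bound(N, gamma) <= eps:
            return gamma
    return N
```

For N = 10, protecting no radii (Γ = 0) leaves an overload bound above 0.5, while Γ = 7 is the smallest integer budget that pushes the bound below ε = 0.05, illustrating the trade-off between protection level and conservatism.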
Mathematical model
Based on the previous discussion, by combining Γ-robustness, constraint (3) can be written as:
| $\sum_{i \in V} \mu_i x_{ij} + \sum_{i \in V} r_i w_{ij} \le C_j y_j, \quad \forall j \in P$ | (Equation 18) |
Here, Γ-robustness is utilized to account for the variability and uncertainty in VM usage, ensuring that the solution is not overly conservative while still providing robust performance under uncertain conditions. The parameter $w_{ij}$ is a binary variable equal to 1 if VM i is assigned to PM j and VM i is in the MaxSet. All parameters and variables are described in Table 3. The MILP model we finally constructed is as follows:
Objective: minimize the number of active PMs and VM migrations, as in Equation (1).
Subject to:
| (Equation 19) |
| (Equation 20) |
| (Equation 21) |
| (Equation 22) |
| (Equation 23) |
| (Equation 24) |
| (Equation 25) |
| (Equation 26) |
| (Equation 27) |
| (Equation 28) |
| (Equation 29) |
Constraint (19) states that if a VM is deployed on a PM, then it is in the MaxSet or MinSet of that PM. Constraint (20) defines the total number of VMs deployed on a PM. Constraint (21) specifies the total number of VMs in the MaxSet of a PM. Constraint (22) defines the maximum radius of VMs on a PM. Constraint (23) separates the MaxSet and MinSet of VMs on a PM. Constraint (24) links the MaxSet size and the maximum radius on each PM. Constraints (25)-(29) define the domains of the decision variables.
Heuristic solutions for larger-scale problems
Given the NP-hard nature of the VM consolidation problem, traditional MILP models are inadequate for solving larger-scale instances. To address this, we introduce a heuristic approach, the Gamma Robust First-Fit (GammaFF) algorithm. This algorithm is designed to optimize resource allocation in cloud computing environments by minimizing the number of active PMs and reducing the frequency of VM migrations. Our primary objective is to enhance resource utilization efficiency while satisfying the CPU demands of VMs. The GammaFF algorithm incorporates a robust scheduling methodology that considers uncertainties in VM resource requirements, thereby providing a more resilient and efficient solution. As shown in Figure 8, the algorithm can be roughly divided into four parts, which will be introduced in detail below.
Selection of PM to be emptied: In this stage, a PM with active VMs is randomly selected for emptying. This selection is crucial as it sets the basis for the subsequent steps in the consolidation process. The random selection ensures that the consolidation process does not favor any particular PM, promoting a more balanced and efficient overall resource utilization. By targeting underutilized PMs, the algorithm can effectively reduce the number of active PMs, thereby saving energy and optimizing resource allocation. This stage enhances the flexibility of the VM consolidation process by allowing different PMs to be selected for emptying in each iteration.
VM Queue Formation: The VMs from the selected PM are added to a queue, along with a sample of VMs from other PMs to facilitate redistribution. This queue ensures that there are enough VMs to redistribute efficiently across the remaining active PMs. Let Q denote the queue.
Including a sample of VMs from other PMs helps create a buffer that can be used to balance the load more effectively. The formation of this queue is a critical step that prepares the VMs for the First-Fit Placement stage, ensuring that all potential candidates for migration are considered. The percentage of VMs selected for migration from the chosen PM is determined experimentally, as will be discussed in detail in subsequent sections. Selecting VMs from each PM to enter the queue increases the flexibility of allocation, thereby improving the overall consolidation process.
First-Fit Placement with Γ-Robustness
In this stage, we reallocate the VMs in the previous queue. The most famous heuristic is the first-fit (FF) policy, which keeps all bins open to new items and assigns the item to the first bin that can accommodate it. As an improvement, the best-fit (BF) policy assigns the item to the bin that can accommodate the item with the most remaining space. Both heuristics have the same performance guarantee in the deterministic setting of
| $\mathrm{FF},\, \mathrm{BF} \le \frac{17}{10}\, \mathrm{OPT} + O(1)$ | (Equation 30) |
where OPT represents the optimal number of PMs required for the allocation. However, the usage of VMs is not certain. Taking the first-fit algorithm as an example, the classic method will consider the maximum value of VM usage for placement. Therefore, we use the Γ-robustness theory to make certain corrections to this.
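For reference, the classic deterministic baseline places each VM by its maximum possible usage. A minimal sketch follows; the name `first_fit_max` is our own illustrative choice.

```python
def first_fit_max(vms, capacity):
    """Classic first-fit: place each VM, by its maximum possible usage,
    into the first PM with enough residual capacity; open a new PM if none fits."""
    pms = []          # residual capacity of each open PM
    placement = []    # PM index chosen for each VM
    for u_max in vms:
        for j, free in enumerate(pms):
            if u_max <= free:
                pms[j] -= u_max
                placement.append(j)
                break
        else:
            pms.append(capacity - u_max)     # open a fresh PM
            placement.append(len(pms) - 1)
    return placement, len(pms)
```

With maximum demands [4, 4, 4, 6, 6, 6] and capacity 10, first-fit opens four PMs while an optimal packing (pairing each 4 with a 6) needs only three, showing how reserving for the maximum usage wastes capacity.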
The VMs in the queue are sorted by their upper bound usage, $\mu_i + r_i$, and placed onto PMs using a modified First-Fit strategy combined with Γ-robustness. This strategy involves placing each VM into the first PM that has sufficient resources to accommodate it, based on the robust CPU consumption and memory constraints calculated using Γ-robustness. The constraints that need to be met during the placement process are the same as mentioned above:
| $\sum_{i \in S_j} \mu_i + \max_{T \subseteq S_j,\, \lvert T \rvert = \Gamma} \sum_{i \in T} r_i \le C_j, \qquad \sum_{i \in S_j} m_i \le M_j$ | (Equation 31) |
where $S_j$ denotes the set of VMs placed on PM j.
By sorting the VMs by their upper bound usage, the most demanding VMs are placed first. This approach reduces the likelihood of overloading a PM later in the process and ensures that higher resource-consuming VMs are allocated to suitable PMs early on. If a VM cannot be placed on any PM, the iteration is deemed unsuccessful, and the placement process must be repeated. This sorting and placement strategy helps maintain an efficient and balanced resource distribution across the PMs.
The modified First-Fit strategy is chosen for its ability to handle uncertainties in VM resource requirements while maintaining simplicity and effectiveness in managing resource allocation. It operates with a time complexity that is linear relative to the number of VMs and PMs, making it suitable for larger-scale cloud environments. Additionally, by prioritizing VMs with higher resource demands, the strategy mitigates the risk of subsequent VMs being unable to find suitable PMs, thereby improving overall placement success rates. The integration of Γ-robustness ensures that the placement is resilient to variations in resource usage, providing a more reliable and stable allocation.
Resource Update and Iteration: If all VMs are successfully placed, the changes are committed, ensuring that the current configuration is preserved. The iteration counter is then reset to zero. This step confirms that the algorithm has found a valid configuration that satisfies all constraints and optimizes resource usage. If, however, any VM cannot be placed, the process is repeated until a maximum number of iterations is reached. This iterative approach ensures that the consolidation process continues until an optimal or near-optimal configuration is found, or the iteration limit is reached, providing a balance between thoroughness and computational efficiency.
Committing successful changes ensures that progress made during each iteration is not lost, preserving the best configuration found so far. The iteration mechanism acts as a fail-safe, preventing the algorithm from entering an endless loop and ensuring that it terminates within a reasonable time frame. This balance between commitment and iteration helps the algorithm achieve an efficient and robust VM consolidation strategy. By continually refining the VM placement through multiple iterations, the algorithm can adapt to varying workload conditions and improve overall resource utilization and energy efficiency. The complete algorithm is Algorithm 1.
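The four stages above can be sketched as a single loop. This is a simplified reading of Algorithm 1, not the exact implementation: the helper names, the snapshot-based rollback, and the `sample_frac` default are our own assumptions, and actual migration costs, memory constraints, and the per-PM Γ lookup are omitted.

```python
import random

def robust_load(group, gamma):
    """Worst-case load of a PM: all centers plus the gamma largest radii."""
    radii = sorted((r for _, r in group), reverse=True)
    return sum(c for c, _ in group) + sum(radii[:gamma])

def try_place(queue, pms, capacity, gamma):
    """Gamma-robust first-fit: sort by upper bound, place each VM into the
    first PM whose robust load stays within capacity; fail if any VM is left."""
    for vm in sorted(queue, key=lambda v: v[0] + v[1], reverse=True):
        for group in pms:
            if robust_load(group + [vm], gamma) <= capacity:
                group.append(vm)
                break
        else:
            return False
    return True

def consolidate(pms, capacity, gamma, sample_frac=0.25, max_iter=50, seed=0):
    """Repeatedly pick a random PM to empty, queue its VMs plus a sample from
    the other PMs, and re-place the queue; commit on success, roll back on failure.
    Each VM is a (center, radius) pair; each PM is a list of VMs."""
    rng = random.Random(seed)
    it = 0
    while it < max_iter and sum(1 for g in pms if g) > 1:
        it += 1
        snapshot = [list(g) for g in pms]           # kept for rollback
        victim = rng.choice([g for g in pms if g])  # stage 1: PM to be emptied
        queue = list(victim)                        # stage 2: queue formation
        victim.clear()
        for g in pms:
            if g:
                picked = rng.sample(g, int(len(g) * sample_frac))
                queue.extend(picked)
                for vm in picked:
                    g.remove(vm)
        targets = [g for g in pms if g]             # remaining active PMs
        if try_place(queue, targets, capacity, gamma):  # stage 3: placement
            it = 0                                  # stage 4: commit, reset counter
        else:
            pms[:] = snapshot                       # failed attempt: roll back
    return [g for g in pms if g]
```

Starting from two lightly loaded PMs whose (center, radius) VMs all fit on one machine under Γ = 1, the loop empties one PM and commits the smaller configuration.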
The proposed heuristic algorithm for VM consolidation involves three main computational steps, and its time and space complexity are analyzed as follows.
- (1) Sorting: The algorithm begins by sorting the list of N VMs based on their resource requirements. This step has a time complexity of $O(N \log N)$.
- (2) VM Placement: For each VM, the algorithm iterates over M PMs to determine the most suitable placement. This process requires $O(N \cdot M)$ operations in the worst case.
- (3) Iterations: The sorting and placement steps are repeated iteratively until convergence. Assuming a maximum number of iterations K, the overall time complexity of the algorithm is $O(K \cdot (N \log N + N \cdot M))$.
The space complexity is determined by the storage requirements for the states of N VMs and M PMs, which scale as $O(N + M)$. Temporary variables used during sorting and placement contribute negligible additional space.
Given this analysis, we conclude that the time complexity of our algorithm is competitive and efficient, particularly in comparison to other approaches in the literature. The sorting step, with its $O(N \log N)$ complexity, ensures that the initial placement is optimized with minimal overhead, while the iterative placement phase is designed to handle large datasets efficiently. This places our algorithm among the most time-efficient solutions for VM consolidation.
Quantification and statistical analysis
In terms of dataset selection, we used real data from Huawei Cloud, which we have made open source. This dataset consists of 117 instances, each named in the format X-1.json, X-2.json, and X-3.json. In this naming convention, X represents the number of PMs in that particular instance, and the suffixes 1, 2, and 3 denote different replications of the same scenario. Each instance includes multiple PMs, each hosting a certain number of VMs. The trace provides historical utilization data for each VM, detailing their resource consumption over time.
This comprehensive dataset enables us to perform VM consolidation through live migration to achieve various objectives. These objectives include balancing overload risks by redistributing VMs, saving energy by consolidating VMs to free up more PMs, or densely packing existing VMs to create space for larger upcoming VMs. By leveraging this real-world data, we can validate our models and algorithms in a practical context, ensuring that our solutions are robust and effective in dynamic cloud environments.
In our subsequent numerical experiments, we utilized three datasets in total. The first is a small-scale dataset, designed to test the feasibility of our model and algorithm, and primarily includes 5 to 13 PMs. The second is a medium-scale dataset, consisting of 15, 20, 25, and 30 PMs. The third is a large-scale dataset, containing 85, 90, 95, and 100 PMs.
We compared the performance of six algorithms, including both classic and recent state-of-the-art approaches, to establish a comprehensive benchmark for evaluating VM consolidation and resource optimization. Specifically, we included the classic FirstFit and BestFit algorithms, as well as the more advanced GammaFF, GammaBF, and the newly introduced HLEM-VMP27 and GMPR.28 This allows us to assess the effectiveness of our approach in comparison to both well-established methods and the latest advancements in the field. For HLEM-VMP and GMPR, we made necessary simplifications to adapt them to our specific scenarios, ensuring a consistent and fair comparison with the other algorithms across medium-scale and large-scale datasets. All experiments were conducted using Python version 3.9.
- (1) FirstFit: This algorithm does not utilize the Γ-robustness theory for allocation. Instead, it uses the maximum value of user usage for allocation and places VMs on the first PM that can accommodate the requested size. This approach is straightforward but may not always yield the most efficient resource utilization.
- (2) BestFit: Similar to FirstFit, this algorithm does not employ the Γ-robustness theory. It uses the maximum value of user usage for allocation but selects the PM that can meet the requested size and has the smallest remaining space after placement, optimizing for minimal leftover capacity. This strategy aims to make better use of available resources compared to FirstFit.
- (3) GammaFF: This is the algorithm proposed in this paper, which has been previously discussed in detail. It incorporates the Γ-robustness theory with a FirstFit approach, aiming to balance between resource utilization and robustness to varying VM demands. This method leverages the strengths of the Γ-robustness theory to enhance placement decisions.
- (4) GammaBF: This algorithm builds upon the GammaFF algorithm by changing the allocation strategy from FirstFit to BestFit. By combining the Γ-robustness theory with the BestFit approach, GammaBF seeks to further optimize resource utilization while maintaining robustness.
- (5) HLEM-VMP: This algorithm employs heuristic load evaluation metrics to guide VM placement. It determines VM allocation based on a calculated metric that evaluates resource load, aiming to optimize placement strategies under dynamic workloads.
- (6) GMPR: This algorithm uses a two-phase optimization approach for VM placement. The first phase focuses on selecting energy-efficient placements for VMs, and the second phase consolidates resources to further enhance energy optimization, ensuring efficient use of available infrastructure.
Published: January 27, 2025
References
- 1.Lin W., Xiong C., Wu W., Shi F., Li K., Xu M. Performance interference of virtual machines: A survey. ACM Comput. Surv. 2023;55:1–37. [Google Scholar]
- 2.Calcavecchia N.M., Biran O., Hadad E., Moatti Y. 2012 IEEE Fifth International Conference on Cloud Computing. IEEE; 2012. Vm placement strategies for cloud scenarios; pp. 852–859. [Google Scholar]
- 3.Hsieh S.-Y., Liu C.-S., Buyya R., Zomaya A.Y. Utilization-prediction-aware virtual machine consolidation approach for energy-efficient cloud data centers. J. Parallel Distr. Comput. 2020;139:99–109. [Google Scholar]
- 4.Magotra B., Malhotra D., Dogra A.K. Adaptive computational solutions to energy efficiency in cloud computing environment using vm consolidation. Arch. Comput. Methods Eng. 2023;30:1789–1818. doi: 10.1007/s11831-022-09852-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Ahmad R.W., Gani A., Hamid S.H.A., Shiraz M., Yousafzai A., Xia F. A survey on virtual machine migration and server consolidation frameworks for cloud data centers. J. Netw. Comput. Appl. 2015;52:11–25. [Google Scholar]
- 6.Serrano D., Bouchenak S., Kouki Y., Ledoux T., Lejeune J., Sopena J., Arantes L., Sens P. 2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing. IEEE; 2013. Towards qos-oriented sla guarantees for online cloud services; pp. 50–57. [Google Scholar]
- 7.Li Z., Yu X., Yu L., Guo S., Chang V. Energy-efficient and quality-aware vm consolidation method. Future Generat. Comput. Syst. 2020;102:789–809. [Google Scholar]
- 8.Bartók D., Mann Z.Á. Proceedings of the 3rd HPI cloud symposium “operating the cloud. 2015. A branch-and-bound approach to virtual machine placement; pp. 49–63. [Google Scholar]
- 9.Speitkamp B., Bichler M. A mathematical programming approach for server consolidation problems in virtualized data centers. IEEE Trans. Serv. Comput. 2010;3:266–278. [Google Scholar]
- 10.Hallawi H., Mehnen J., He H. Multi-capacity combinatorial ordering ga in application to cloud resources allocation and efficient virtual machines consolidation. Future Generat. Comput. Syst. 2017;69:1–10. [Google Scholar]
- 11.Zeng J., Ding D., Kang X.K., Xie H., Yin Q. Adaptive drl-based virtual machine consolidation in energy-efficient cloud data center. IEEE Trans. Parallel Distrib. Syst. 2022;33:2991–3002. [Google Scholar]
- 12.Sadykov R., Vanderbeck F. Bin packing with conflicts: a generic branch-and-price algorithm. Inf. J. Comput. 2013;25:244–255. [Google Scholar]
- 13.Gudkov A., Popov P., Romanov S. Balcon—resource balancing algorithm for vm consolidation. Future Generat. Comput. Syst. 2023;147:265–274. [Google Scholar]
- 14.Li P., Cao J. Virtual machine consolidation with multi-step prediction and affinity-aware technique for energy-efficient cloud data centers. Comput. Mater. Continua (CMC) 2023;76:81–105. [Google Scholar]
- 15.Li J., Zhang R., Zheng Y. Qos-aware and multi-objective virtual machine dynamic scheduling for big data centers in clouds. Soft Comput. 2022;26:10239–10252. [Google Scholar]
- 16.Wu Q., Ishikawa F., Zhu Q., Xia Y. Energy and migration cost-aware dynamic virtual machine consolidation in heterogeneous cloud datacenters. IEEE Trans. Serv. Comput. 2019;12:550–563. [Google Scholar]
- 17.Xie L., Chen S., Shen W., Miao H. A novel self-adaptive vm consolidation strategy using dynamic multi-thresholds in iaas clouds. Future Internet. 2018;10:52. [Google Scholar]
- 18.Rezakhani M., Sarrafzadeh-Ghadimi N., Entezari-Maleki R., Sousa L., Movaghar A. Energy-aware qos-based dynamic virtual machine consolidation approach based on rl and ann. Cluster Comput. 2024;27:827–843. [Google Scholar]
- 19.Sayadnavard M.H., Toroghi Haghighat A., Rahmani A.M. A multi-objective approach for energy-efficient and reliable dynamic vm consolidation in cloud data centers. Eng. Sci. Technol. Int. J. 2022;26 [Google Scholar]
- 20.Wu J., Yang W., Han X., Qiu Y., Gudkov A., Song J. Hotspot resolution in cloud computing: A γ-robust knapsack approach for virtual machine migration. J. Parallel Distr. Comput. 2024;186 [Google Scholar]
- 21.Wang Y., Wang J., Gao F., Song J. Unveiling value patterns via deep reinforcement learning in heterogeneous data analytics. Patterns. 2024;5 doi: 10.1016/j.patter.2024.100965. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Birge J.R., Louveaux F. Springer Science & Business Media; 2011. Introduction to Stochastic Programming. [Google Scholar]
- 23.Ben-Tal A., Nemirovski A. Robust solutions of uncertain linear programs. Oper. Res. Lett. 1999;25:1–13. [Google Scholar]
- 24.Bertsimas D., Sim M. The price of robustness. Oper. Res. 2004;52:35–53. [Google Scholar]
- 25.Beloglazov A., Buyya R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurr. Comput. 2012;24:1397–1420. [Google Scholar]
- 26.Huawei Cloud. HUAWEI Cloud public dataset. 2022. https://github.com/huaweicloud/HUAWEICloudPublicDatase
- 27.Jing Y., Shen L., Yao C., Fan F., Wang X. Hlem-vmp: An effective virtual machine placement algorithm for minimizing sla violations in cloud data centers. IEEE Syst. J. 2024;1:1. [Google Scholar]
- 28.Wang J., Yu J., Zhai R., He X., Song Y. Gmpr: a two-phase heuristic algorithm for virtual machine placement in large-scale cloud data centers. IEEE Syst. J. 2023;17:1419–1430. [Google Scholar]
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Data Availability Statement
- • HUAWEICLOUD data have been deposited at HUAWEI and are publicly available as of the date of publication. Accession numbers are listed in the key resources table.
- • Codes have been deposited at Energy-Efficient-Cloud-Systems and are publicly available as of the date of publication. Accession numbers are listed in the key resources table.
- • Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.