Scientific Reports. 2026 Jan 14;16:5210. doi: 10.1038/s41598-026-35065-9

Energy and makespan optimised task mapping in fog enabled IoT application: a hybrid approach

Niva Tripathy 1, Sampa Sahoo 1, Norah Saleh Alghamdi 2, Wattana Viriyasitavat 3, Gaurav Dhiman 4,5
PMCID: PMC12880970  PMID: 41535366

Abstract

The Internet of Things (IoT) refers to billions of connected devices that share data through the Internet. However, the increasing volume of data generated by IoT devices makes remote cloud data centers inefficient for delay-sensitive applications. In this regard, fog computing, which brings computation closer to the data source, plays a significant role in addressing the above issue. However, resource constraints in fog computing demand an effective task-scheduling technique to handle the enormous volume of data. Many researchers have proposed a variety of heuristic and meta-heuristic approaches for effective scheduling; however, there is still scope for improvement. In this paper, we propose EMAPSO (energy makespan-aware PSO). The simultaneous minimization of makespan and energy is presented as a bi-objective optimization problem. The approach also considers the load-balancing factor while assigning a task to a VM in a fog/cloud environment. The proposed algorithm, EMAPSO, is compared to standard PSO, Modified PSO (MPSO), Bird Swarm Optimization (BSO), and the Bee Life Algorithm (BLA). The experimental results show that the proposed method outperforms the compared algorithms in terms of resource utilization, makespan, and energy consumption.

Keywords: PSO, Makespan, Energy consumption, Load balancing, Task offloading

Subject terms: Energy science and technology, Engineering, Mathematics and computing

Introduction

The development of IoT applications has integrated the Internet and information technology (IT) into day-to-day life. Some of these applications demand real-time data storage, access, and processing. Data-intensive applications use resources hosted at remote cloud data centers for storage, analysis, and computations. Cloud computing faces challenges in delivering the real-time responsiveness required by delay-sensitive IoT applications due to the spatial separation between compute resources and data sources. Furthermore, network traffic congestion and communication costs are high when transmitting large volumes of data to the cloud data center. IoT devices generate vast amounts of data that are rapidly transmitted and require specialized computing paradigms for their management. As a result, fog computing technology has quickly evolved and emerged. Fog computing’s fundamental principle is to process and analyze data at the data-generating edge of the network and respond to requests promptly, reducing network transmission delays. It finds applications in numerous time-bound domains, such as augmented reality, online gaming, and healthcare. However, the fog environment cannot meet all application demands. This is because fog nodes have limited computational capacity and become exhausted when performing compute-intensive tasks. Hence, a fog-cloud architecture is preferred, which leverages the advantages of both cloud computing and fog computing. For IoT applications, the fog-cloud architecture offers greater scalability, energy efficiency, and QoS1,2.

Heterogeneous fog/cloud nodes across the distributed region have varied loads over time. The available resources in fog/cloud environments need to be appropriately utilized to address heterogeneous workloads3. Improper use of resources leads to high energy consumption, large makespan, imbalanced load, and poor resource utilization. An effective task-scheduling technique helps to overcome these issues. In this context, the main aim is to reduce energy consumption and makespan while maintaining a uniform load distribution across the cloud/fog nodes. Researchers have proposed numerous heuristics4,5 and metaheuristics6 for solving this minimization problem. The development of technologies opens the scope for improvement and the exploration of new methods. Although conventional scheduling algorithms typically optimize a single objective, real-world fog-cloud settings require balancing multiple competing objectives, including cost, energy consumption, makespan, and QoS. In particular, under dynamic loads, single-objective algorithms may yield suboptimal results. Multi-objective optimization approaches have become popular because they capture trade-offs among essential performance indicators.

Because of its simplicity, rapid convergence, and global search capabilities, the bio-inspired optimization method Particle Swarm Optimization (PSO) has found widespread use in task scheduling. Conventional PSO, however, exhibits early convergence and may not necessarily yield optimal solutions under high-dimensional, dynamic fog-cloud conditions7,8.

This study presents Energy Makespan-Aware PSO (EMAPSO) to address these constraints by incorporating domain-specific improvements, such as an adaptive inertia weight and an energy-aware fitness measure. These changes are intended to strengthen exploration and exploitation while reducing computational latency and energy consumption. Although many PSO-based methods have been proposed in the literature, few explicitly address both energy efficiency and task makespan simultaneously, especially in a hybrid fog-cloud setting. Most current solutions also fail to adapt to the dynamic character of task arrival rates and available resources9.

This research seeks to fill this gap with the following key objectives:

  1. Minimize task makespan and energy consumption using an enhanced PSO variant.

  2. Maintain balanced workload distribution across fog and cloud nodes.

  3. Adapt dynamically to changing workloads and heterogeneous resource capacities.

  4. Validate performance under varying task volumes and VM configurations.

With the continuous increase of IoT devices and their output, the need for data processing with low latency, high throughput, and energy efficiency has become more pressing. Applications include automobile networks, industrial automation, healthcare systems, and smart cities. Conventional, centrally organized systems do not grow effectively under such varied workloads and communication needs. Furthermore, fog and cloud environments are naturally heterogeneous, comprising devices with varying processing capacities, memory, bandwidth, and energy constraints. Algorithms that can make near-real-time judgments while adjusting to changing circumstances and resource availability are necessary for effective task scheduling in such dynamic settings.

Besides the above-mentioned research, many recent studies have concentrated on real-time scheduling for temperature- and energy-sensitive heterogeneous and FinFET-based multicore systems. For example, a hybrid real-time scheduler that simultaneously lowers temperature and energy consumption in heterogeneous multicore systems was presented by the RT-SEAT framework10. A temperature-constrained scheduler with adaptive energy management across multicore processors was introduced in11. This idea was expanded to FinFET-based architectures by the TREAFET algorithm (2024)12, which concentrated on temperature-aware job distribution for improved reliability. For heterogeneous multiprocessor systems,13 introduced a fault-tolerant real-time scheduling technique that strikes a balance between reliability and thermal efficiency. A temperature-aware scheduling framework was created especially for FinFET multicore contexts in14. These approaches mainly address scheduling at the chip level, but the energy-temperature optimization concepts that underlie them carry over to the proposed EMAPSO model for large-scale fog-cloud systems.

In this context, swarm intelligence-based optimization approaches such as PSO have shown much promise. However, their effectiveness depends mainly on their ability to preserve diversity throughout the search space and avoid stagnation in local optima. Improvements such as adaptive inertia weights and problem-specific fitness functions are therefore critical to speed up convergence and raise solution quality. The proposed EMAPSO algorithm is designed to be adaptive, robust, and scalable by means of these advancements. By treating both energy and makespan as joint optimization goals, EMAPSO seeks to offer a balanced trade-off suitable for delay-sensitive and energy-constrained IoT devices deployed in fog-cloud settings.

The detailed contribution of this paper is listed below.

  1. An integrated cloud fog architecture is proposed for IoT applications.

  2. The fog node load is balanced against makespan and energy consumption, formulated as a bi-objective minimization problem.

  3. A PSO-based energy makespan-aware algorithm (EMAPSO) is developed for the proposed architecture to address the said minimization problem.

  4. The suggested EMAPSO method is compared with the existing algorithms, MPSO, BSO, and BLA. According to the simulation findings, EMAPSO achieves acceptable job completion times while using 35% less energy.

The remaining sections of the article are arranged as follows: Related research on job offloading and load balancing in cloud and fog computing is covered in Section 2. The mathematical formulation and system model are introduced in Section 3. The suggested scheduling method is presented in Section 4, and a comparative analysis of it with other current algorithms is given in Section 5. The article concludes in Section 6.

Related work

The significant impact of task scheduling on system performance, measured by time, cost, energy consumption, and other metrics, makes it a crucial issue in fog and cloud environments. A quick overview of some pertinent research is provided in this section.

He et al.15 have described an alternative method, the Ant Colony algorithm, for balancing load in a cloud computing environment. The real-time load information is shared by the controller in this approach. To lessen network strain and increase computational efficiency and load balancing, a flexible data stream forwarding technique is employed. Nguyen et al.16 proposed a time cost aware scheduling (TCaS) approach in an integrated cloud-fog environment. This method, which is tested using 11 data sets of varying sizes, optimizes completion time and operational expenses. Chaudhary et al.17 presented a modified PSO-based scheduling technique to address the long scheduling time and high computation cost issues in a cloud environment. This approach regulates premature convergence and local search capability in particles. Chandrashekar et al.18 have described the hybrid weighted ant colony optimization (HWACO) as an optimal approach to task scheduling, using bi-objective optimization of makespan and expense as the cloud computing ecosystem expands. Ghanavati et al.19 developed an alternative optimization method called Ant Mating Optimization (AMO) that utilizes bio-inspired techniques to distribute tasks among nearby fog nodes. By balancing energy use and makespan, it makes efficient use of resources for time-critical activities. Kashani et al.20 give a Systematic Literature Review (SLR) that identifies research gaps, different trends, and the future scope of load balancing in fog networks. Wang et al.21 developed QMTSF, which reduces the average processing time and makespan of jobs by considering their deadlines. To address the issues of slow convergence and the local optima problem, Zhou et al.22 suggested a modified PSO that adjusts the inertia weight factor, thereby decreasing the overall processing cost.
Ogundoyin et al.23 proposed an improved PSO and modified firefly algorithm to formulate multi-objective optimization problems. This approach achieves high accuracy and faster convergence. Mansouri et al.24 introduced a hybrid task allocation algorithm by adding a fuzzy system to PSO to minimize execution time and resource usage.

Zhang et al.25 presented a well-known multi-agent load-balancing approach that relies on reinforcement learning. This method enhances resource utilization and the user experience, making the system more scalable and resilient. For industrial IoT in edge computing environments, You et al.26 presented a multi-objective optimization problem employing PSO to reduce energy and task-execution costs. Nwogbaga et al.27 proposed a dynamic task scheduling approach that employs feature reduction, an improved hybrid genetic algorithm, and particle swarm optimization to select optimal devices. This method uses a rank-accuracy prediction model to determine the rank-1 value used in the decomposition, improving the response time, delay, number of offloaded tasks, throughput, and energy usage of IoT requests. Hazra et al.28 investigate a range of issues and potential solutions for enabling interoperable communication and computation for next-generation IoT applications in fog networks. The PSO-based task scheduling method described by Alsaidy et al.29 assigns the longest job to the quickest process during population initialization.

Ijaz et al.30 suggested a modified particle swarm optimization (MPSO) to solve the local optimum and slow convergence problem. MPSO can dynamically adjust the inertia weight to improve convergence speed. They have examined the execution time matrix for performance evaluation. Lin et al.31 proposed a Bee Life Algorithm (BLA) to optimize the allocation of workloads among all the fog computing nodes. This proposed approach delivers superior performance in execution time and allocated memory compared with PSO and GA algorithms. Dougani et al.32 propose a Bird Swarm Optimization (BSO) algorithm for the dynamic scheduling of independent tasks to minimize makespan. To determine whether the functions of the overloaded virtual machines (VMs) and the underutilized VMs are compatible, they proposed a task-resource compatibility test based on available resources. Better resource utilization is achieved through task assignment that accounts for the virtual machine’s load and execution time.

To avoid the limitations of the PSO approach, Singh et al.33 proposed an adaptive PSO (APSO) algorithm for process scheduling in cloud-fog environments. It includes innovative components, including an S-shaped sigmoid function to dynamically adjust the inertia weight and a linear updating method for cognitive aspects. These adjustments aim to balance the exploration and exploitation capabilities of the swarm, hence mitigating early-convergence difficulties. Compared to previous metaheuristic approaches, this approach dramatically reduces makespan and energy usage without compromising overall cost. However, the technique faces scalability challenges as process sizes and system complexity rise.

Energy-efficient task scheduling in fog-cloud computing systems was the focus of the design by Khan et al.34. Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNNs) are integrated to optimize resource allocation, reduce energy consumption, and enhance system performance. However, this strategy requires large amounts of memory and computing power.

Pakmehr et al.35 proposed an ETFC model that optimizes IoT workloads, reducing computational overhead and improving task completion rates. It offers a balanced approach to scheduling in fog computing environments, making IoT operations more cost-effective and reliable. This approach has high computational overhead and scalability challenges.

Thakur et al.36 proposed a deadline-aware and energy-efficient IoT task scheduling model that integrates fuzzy logic into a semi-greedy scheduling algorithm to improve task allocation efficiency. This approach reduces makespan, energy consumption, and deadline violations.

Most of the studies above focus on single-objective formulations. By using the load imbalance factor and overload-driven task migration, EMAPSO achieves optimization without persistent hot spots, distinguishing it from other APSO and hybrid PSO-based scheduling techniques that prioritize only makespan and energy. By incorporating both an adaptive inertia weight and heuristic initialization (MCT) to accelerate convergence, early stagnation is avoided. Designed for heterogeneous fog-cloud environments, EMAPSO is better suited to dynamic, resource-constrained IoT scenarios than cloud-only solutions. These design decisions distinguish EMAPSO from other current PSO variants and offer improved stability across varying loads.

Table  1 summarizes the current relevant studies.

Table 1.

Summary of related work in cloud/Fog/IoT task scheduling.

Author (Ref) Environment Methodology Findings
Zhou et al.14 Cloud computing Modified particle swarm optimization (M-PSO) Not tested on a real-world cloud platform; energy consumption is not considered.
Lin et al.23 Fog-cloud computing BLA Response time is slow, dynamic work scheduling is not used, and only small data sets are examined.
Mansouri et al.16 Cloud computing Fuzzy system and modified particle swarm optimization (FMPSO) Task precedence and load balancing are not considered.
Nguyen et al.8 Cloud-fog Time cost aware scheduling (TCaS) Not focused on energy utilization, transmission costs, or time; resource usage and deadlines are not taken into account.
Ghanavati et al.11 Fog computing Ant Mating Optimization (AMO) Not tested for dynamic real-time task offloading; network bandwidth and communication latency are not considered.
Zhang et al.17 Edge computing Multi-agent load balancing based on reinforcement learning Energy consumption, makespan, and deadline are not considered.
You et al.18 Edge computing PSO Difficult to control PSO parameters in IIoT scenarios.
Ijaz et al.22 Cloud computing MPSO The local and global search are mismatched due to the inertia weight.
He et al.7 Cloud computing Ant colony optimization (ACO) Tested only on a small network; execution time is high.
Kashani et al.12 Fog computing Systematic literature review (SLR) on load balancing Only approximate, exact, fundamental, and hybrid taxonomies are considered.
Singh et al.28 Cloud-fog Adaptive PSO (APSO) Reduces makespan and energy use; balances exploration and exploitation.
Pakmehr et al.30 Fog/IoT ETFC model for IoT scheduling Reduces overhead and improves the task completion rate.
Thakur et al.31 IoT Fuzzy logic + semi-greedy scheduling Reduces makespan, energy consumption, and deadline violations.
Khan et al.29 Fog-cloud CNN + BiLSTM-based scheduling Optimizes energy use and performance via deep learning.

Fog system model

In this section, we present the system model for task offloading from the IoT layer to the fog layer, problem formulation, makespan, energy, and load models. The proposed three-layer Cloud–Fog Integrated architecture is shown in Fig. 1. The detailed components of the fog controller are shown in Fig. 2. For ease of understanding, Table 2 describes the notation used in this paper.

Fig. 1. The proposed three-layer cloud-fog integrated architecture.

Fig. 2. Fog controller components.

Table 2.

Notations used.

Symbol Description
 F Set of fog controllers
 D Set of end-user IoT devices
 T_n The n-th task generated from IoT devices
 Tid_n ID of the n-th task
 TL_n Task length of the n-th task (in MI)
 TS_n Size of the n-th task (in bits)
 AT_nv Arrival time of the n-th task on the v-th VM (in sec)
 V_v The v-th VM in the fog controller
 PP_v Processing power of the v-th VM (in MIPS)
 C_v Capacity of the v-th VM
 BW_v Bandwidth of the v-th VM
 ET_nv Execution time of the n-th task on the v-th VM
 TT_nv Transfer time of the n-th task to the v-th VM
 ECT_nv Expected completion time of the n-th task on the v-th VM
 ST_nv Start time of execution of the n-th task on the v-th VM
 CT_nv Completion time of the n-th task on the v-th VM
 MST Makespan
 x_nv Decision variable representing whether the v-th VM is in the matched resource list for the n-th task
 APT Average task processing time
 E_nv Energy consumption of the n-th task on the v-th VM
 E_ex_nv Energy consumption when the n-th task is executed on the v-th VM
 E_tr_nv Energy consumption when the n-th task is transferred to the v-th VM
 e_ex Energy consumption per CPU cycle
 e_tr Energy consumption per bit by the link used for data transfer
 Eng Total energy consumption of the system
 y_nv Decision variable representing whether the n-th task is transmitted/received on the v-th VM
 obj Objective function
 L_v Load of the v-th VM
 L Total load (L) on the available VMs
 LIF Load imbalance factor
 μ Mean of load
 α Balance coefficient
 θ Threshold value of VM load

In Fig. 1, a three-layer fog-cloud architecture (IoT layer, Fog layer, and Cloud layer) is shown. The bottom layer is the IoT layer, comprising various IoT devices, such as sensors, cell phones, and smart devices. The next layer is the fog layer, comprising fog nodes and virtual machines. They can compute, store, and transmit the received data. The topmost layer is the cloud layer, composed of devices with high storage and computing capabilities. The sensor in the IoT layer sends the tasks to the fog controller available in the fog layer. Assigning tasks to virtual machines without considering load may result in long execution times and high energy consumption. In addition, the diversity of functions and VMs adds challenges. Fog controllers in the fog layer are used to address these challenges. Each fog controller is responsible for addressing requests from a specific region. The controller comprises three submodules: load balancer and resource allocator, energy and time analyzer, and decision maker, as shown in Fig. 2.

The tasks generated by IoT layer devices are received by the fog controller in the next layer. It will then be passed to the task scheduler, which computes the task’s execution time and energy consumption. The load balancer dispatches the task set to multiple fog nodes, each containing multiple virtual machines. The virtual machine executes the assigned task. This procedure guarantees effective workload distribution across available virtual machines, enabling balanced, low-latency, and energy-efficient task execution near the data source.

Problem formulation

In an IoT-enabled fog system, task scheduling is the problem of distributing tasks generated by IoT devices to the target system’s equipment to maximize efficiency while balancing energy consumption, makespan, and system load.

The proposed system consists of a set of fog controllers F = {F_1, F_2, ..., F_f} and a set of end-user IoT devices D = {D_1, D_2, ..., D_d}. Each fog controller consists of a set of VMs V_v, where v = 1, 2, ..., m. The IoT devices can be stationary or mobile. It is assumed that most tasks are executed on the fog layer, and only those with high computational and storage requirements are sent to the cloud layer for execution. Each incoming request from an end user consists of a set of tasks T_n, where n = 1, 2, ..., k. These tasks are sent to the fog controller and processed on the VMs. The tasks are independent. Each task T_n is represented by the tuple (Tid_n, TL_n, TS_n, AT_n), where Tid_n is the task ID, TL_n is the task length (in MI), TS_n is the size of the task (in bits), and AT_n is the arrival time (in sec). The VMs in the fog controller are represented by (PP_v, C_v, BW_v), where PP_v is the processing power of a VM (in MIPS), C_v is the capacity of a VM (amount of RAM in GB), and BW_v is the bandwidth of a VM.
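The task and VM tuples above can be sketched as plain records. The field names below (task_id, length_mi, size_bits, arrival_s, mips, ram_gb, bw_bps) are illustrative stand-ins for the task-ID, length, size, arrival-time, processing-power, capacity, and bandwidth fields, not identifiers from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int        # task ID
    length_mi: float    # task length in million instructions (MI)
    size_bits: float    # task size in bits
    arrival_s: float    # arrival time in seconds

@dataclass
class VM:
    mips: float         # processing power in MIPS
    ram_gb: float       # capacity (amount of RAM in GB)
    bw_bps: float       # bandwidth in bits per second

# one task and one fog-layer VM, with made-up numbers
t1 = Task(task_id=1, length_mi=4000, size_bits=2_000_000, arrival_s=0.0)
vm1 = VM(mips=1000, ram_gb=4, bw_bps=1_000_000)
```

A scheduler then only needs these few fields per task and per VM to evaluate the makespan, energy, and load models that follow.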

Makespan

The expected completion time ECT_nv of the task T_n is the combination of the execution time of task T_n on VM V_v, the transfer time for offloading task T_n to VM V_v, and the start time of the arrived task. Task execution time ET_nv represents the execution of T_n on V_v according to the following equation:

ET_nv = TL_n / PP_v    (1)

where TL_n denotes the length of task n (in MI) and PP_v represents the processing capability of VM v (in MIPS).

Task transfer time TT_nv represents the time needed to transfer T_n to V_v and is calculated according to the following equation:

TT_nv = TS_n / BW_v    (2)

Where TS_n is the size of T_n and BW_v is the bandwidth of V_v.

ECT_nv = ET_nv + TT_nv + ST_nv    (3)

Equation (3) combines execution time, transfer time, and start time to estimate the completion of each task.

ST_nv is computed as follows:

ST_nv = max(AT_nv, CT_(n-1)v)    (4)

Where CT_(n-1)v is the completion time of the task previously executed on V_v. Makespan is defined as the time taken to complete the execution of all tasks. Mathematically, it is formulated as:

MST = max_{1<=v<=m} max_{1<=n<=k} (x_nv × ECT_nv)    (5)

Where x_nv is a binary variable37 which indicates whether task T_n is allocated to V_v or not.

x_nv = 1, if task T_n is allocated to V_v; 0 otherwise    (6)

The average task processing time is calculated as follows:

APT = (1/k) × Σ_{n=1}^{k} Σ_{v=1}^{m} (x_nv × ECT_nv)    (7)
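Equations (1)-(5) can be folded into one schedule-evaluation routine. The sketch below is a minimal illustration under two assumptions: tasks are dispatched in index order, and each task's completion time includes its transfer time. All helper names are hypothetical, not from the paper.

```python
def execution_time(length_mi, mips):
    # Eq. (1): execution time = task length / VM processing power
    return length_mi / mips

def transfer_time(size_bits, bw_bps):
    # Eq. (2): transfer time = task size / VM bandwidth
    return size_bits / bw_bps

def makespan(tasks, vms, assignment):
    """tasks: list of (length_mi, size_bits, arrival_s);
    vms: list of (mips, bw_bps); assignment[n] = VM index for task n."""
    finish = [0.0] * len(vms)   # completion time of the last task on each VM
    for n, (tl, ts, at) in enumerate(tasks):
        v = assignment[n]
        mips, bw = vms[v]
        start = max(at, finish[v])   # Eq. (4): wait for the previous task on this VM
        finish[v] = execution_time(tl, mips) + transfer_time(ts, bw) + start  # Eq. (3)
    return max(finish)               # Eq. (5): latest completion time over all VMs
```

For example, two tasks of 1000 and 2000 MI mapped to 1000- and 500-MIPS VMs (1 Mbit task size, 1 Mbit/s links) finish at 2 s and 5 s, giving a makespan of 5 s.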

Energy consumption

Energy consumption E_nv for T_n on V_v has two components: transmission energy and computation energy38. The equation below calculates the energy consumption:

E_nv = E_ex_nv + E_tr_nv    (8)

E_ex_nv is defined as the energy consumed while T_n is executed on V_v and is calculated as:

E_ex_nv = e_ex × TL_n    (9)

Where e_ex is the energy consumption per CPU cycle. E_tr_nv is defined as the transmission energy, i.e., the energy used while uploading task T_n to V_v, and is calculated as follows:

E_tr_nv = e_tr × TS_n    (10)

Where e_tr is the energy consumption per bit by the link used for data transfer. The total energy consumption (Eng) can be calculated as:

Eng = Σ_{n=1}^{k} Σ_{v=1}^{m} (y_nv × E_nv)    (11)

Where y_nv is a binary variable and it is defined by

y_nv = 1, if task T_n is transmitted/received on V_v; 0 otherwise    (12)
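Assuming computation energy scales with task length and transmission energy with task size, Eqs. (8)-(11) reduce to a per-task sum. The coefficient names e_exec and e_trans below are hypothetical stand-ins for the per-cycle and per-bit energy constants.

```python
def task_energy(length_mi, size_bits, e_exec, e_trans):
    # Eq. (9) computation energy + Eq. (10) transmission energy = Eq. (8)
    return e_exec * length_mi + e_trans * size_bits

def total_energy(tasks, e_exec, e_trans):
    # Eq. (11): sum over every task that is placed on a VM
    # (the binary placement variable is 1 for all tasks in this sketch)
    return sum(task_energy(tl, ts, e_exec, e_trans) for tl, ts in tasks)
```

With this decomposition, offloading decisions trade the transmission term (which grows with task size) against the computation term (which grows with task length).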

Load imbalance factor (LIF)

The load of a VM V_v at time t is defined as the number of tasks on V_v divided by the processing power of V_v. Mathematically, it is formulated as:

L_v = n_v / PP_v    (13), where n_v is the number of tasks currently on V_v

The total load (L) on the available VMs is:

L = Σ_{v=1}^{m} L_v    (14)

The load imbalance factor quantifies the efficiency of load distribution across available VMs. It indicates the efficiency with which offloaded tasks are distributed among VMs, improving resource utilization and system performance. As mentioned in39, the standard deviation is used to quantify the load imbalance factor. It is expressed as :

LIF = sqrt( (1/m) × Σ_{v=1}^{m} (L_v - μ)^2 )    (15)

Where μ is the mean of the total load (L) and is calculated as:

μ = L / m    (16)

By dynamically monitoring load deviation and applying a threshold-based exclusion strategy, EMAPSO minimizes the load imbalance value more effectively than the compared algorithms. EMAPSO avoids repeated task assignments to overloaded nodes, which ensures that resources across the fog-cloud landscape are used more evenly over time by avoiding "hot spots".

A smaller LIF value indicates better load distribution.

Objective function

The problem is to assign k tasks to m VMs available in the fog layer to minimize the makespan (MST) and total energy consumption (Eng). The bi-objective minimization problem is formulated as follows

obj = min [ α × (MST / MST_max) + (1 - α) × (Eng / Eng_max) ]    (17)

such that

0 ≤ α ≤ 1    (18)

Where MST_max and Eng_max indicate the worst-case makespan and energy consumption, respectively. α is a user-defined parameter to balance energy consumption and makespan. In real-world cloud-fog environments, the proposed architecture uses a decision maker that selects the most suitable solution depending on the current system condition, i.e., minimizing makespan during peak traffic hours, or minimizing energy during off-peak periods. EMAPSO does not directly generate a set of Pareto-optimal solutions in a single execution. However, by varying α across different runs, a range of solutions reflecting different trade-offs between makespan and energy consumption can be obtained. This provides system operators with flexibility to select an appropriate solution based on real-time requirements. However, extending EMAPSO into a full Pareto-based multi-objective optimizer remains an interesting future direction. This decision-maker approach avoids manual tuning of α and offers more flexibility in dynamic environments.

Further, the load of the VM is also considered to check its suitability for a task. If the load on the VM is greater than the threshold limit θ, then it is overloaded and will not be considered for task execution. In this work, 0.9 is the threshold limit40. So at time t, if L_v > θ, then V_v is overloaded and will not be considered for the current task.
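The weighted bi-objective and the 0.9 overload filter can be sketched together. The normalizing constants for the worst-case makespan and energy are assumed to be known, and the function names are illustrative.

```python
def objective(mst, eng, mst_max, eng_max, alpha=0.5):
    # Eq. (17): weighted sum of normalized makespan and energy, 0 <= alpha <= 1
    return alpha * (mst / mst_max) + (1 - alpha) * (eng / eng_max)

def eligible_vms(loads, threshold=0.9):
    # A VM whose load exceeds the threshold is overloaded and is
    # excluded from placement for the current task
    return [v for v, load in enumerate(loads) if load <= threshold]
```

Setting alpha near 1 prioritizes makespan (peak traffic hours), while a value near 0 prioritizes energy (off-peak periods), mirroring the decision maker described above.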

Proposed scheduling approach

This section presents the details of the standard PSO algorithm and the proposed approach. Each term used to calculate velocity and position updates in the algorithm, along with its meaning, is explained in Table  3.

Table 3.

Details of the notations used in PSO.

Symbol Description
 x_l The l-th particle
 t Iteration index
 v_l Velocity of the l-th particle
 ω Inertia weight factor
 ω_max, ω_min Initial and final values of the weight factor
 c_1, c_2 Acceleration coefficients
 r_1, r_2 Random numbers between 0 and 1
 pbest_l Personal best of the l-th particle
 gbest Global best among all particles
 v_l^(t+1) Modified velocity of the l-th particle
 x_l^t Current position of the l-th particle
 ij Dimension in the search space
 t_max Maximum number of iterations

For solving optimal task scheduling problems, particle swarm optimization is a key meta-heuristic technique. PSO is an evolutionary computation technique inspired by the social behavior of creatures living in large colonies, such as fish, ants, and birds41. The mathematical model of the PSO method is based on a group of agents, called particles, that are collected together. Each particle is viewed as a possible solution to the optimization problem. The particles are used to locate the optimal solution, i.e., the best particle position, within a predefined search space. Throughout the search process, particles communicate by sharing information about their current locations to update their positions. The revised equation of the new particle position is given by:

x_l^(t+1) = x_l^t + v_l^(t+1)    (19)

Where x_l^t is the l-th particle at iteration t, and v_l^(t+1) is the velocity of the l-th particle at iteration t+1, defined by:

v_l(t + 1) = v_l(t) + c1 r1 (pbest_l − x_l(t)) + c2 r2 (gbest − x_l(t))        (20)

Where pbest_l is the best position found so far by particle l, and gbest is the best global position found among all particles l (1 <= l <= L), where L is the population size (number of particles). c1 and c2 are the acceleration constants, normally set to 2.0 as suggested in42; r1 and r2 are random numbers between 0 and 1, regenerated at every iteration, so their values differ across iterations. The standard PSO algorithm risks premature convergence to local optima. To address this, an inertia weight w is introduced43, which balances exploitation (local search) and exploration (global search)44. With this new parameter, the velocity update equation (20) becomes equation (21):

v_l(t + 1) = w(t) v_l(t) + c1 r1 (pbest_l − x_l(t)) + c2 r2 (gbest − x_l(t))        (21)

The parameter w greatly improves the optimization and generalization ability of the PSO algorithm. By tuning w, the balance between global and local search can be adjusted to the problem at hand, enabling PSO to be applied to a wider range of practical problems: a large value of w favors global exploration, whereas a small value strengthens local exploitation. Many schemes have been designed to adjust the inertia weight dynamically, such as constant and random inertia45, time-varying46, linear and non-linear47, and adaptive48 schemes. Following49, we use an adaptive inertia weight in our proposed method. As originally proposed, the inertia weight w is decreased linearly from about 0.9 (w_max) to 0.4 (w_min) during a run50. The inertia weight w is calculated as

w(t) = w_max − (w_max − w_min) × (t / t_max)        (22)

With this adjustment of the inertia weight, the algorithm not only retains the global search capability for which it is known, but also gains a significant improvement in local search.
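The updates of equations (19), (21), and (22) can be sketched for a continuous-valued particle as follows; this is an illustration of standard PSO with decreasing inertia, not the binary variant used later, and the names are hypothetical:

```python
import random

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of equation (22)."""
    return w_max - (w_max - w_min) * t / t_max

def update_particle(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One velocity/position step of equations (21) and (19)
    for a real-valued particle represented as a list of floats."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```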

Proposed energy makespan aware PSO

To address premature convergence in standard PSO, various researchers have attempted to modify its control parameters. Particles in PSO are typically initialized randomly and without constraints. To improve the algorithm's performance, we instead seed the PSO search with a heuristic schedule: each particle is initialized using the MCT (minimum completion time) approach, thereby enhancing traditional PSO.

Particle coding

In our proposed approach, particle l is represented by a binary allocation matrix A_l with one row per task and one column per VM. The allocation matrix has two main features: first, all its elements are binary, i.e., 0 or 1; second, each task row contains exactly one element equal to 1, with the rest 0. The particle code uses integer coding: the entry for each task can take any integer from [1, m], where m is the number of VMs. For example, consider 5 tasks, task = {task1, task2, task3, task4, task5}, and 3 VMs, vm = {vm1, vm2, vm3}. The particle code [1, 2, 1, 3, 2], generated using the minimum completion time approach, means task1 is assigned to vm1, task2 to vm2, task3 to vm1, task4 to vm3, and task5 to vm2. The generated allocation matrix is as follows:

        | 1 0 0 |
        | 0 1 0 |
A_l  =  | 1 0 0 |        (23)
        | 0 0 1 |
        | 0 1 0 |
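The expansion of an integer particle code into the binary allocation matrix can be sketched as follows (the helper name is hypothetical):

```python
def code_to_matrix(code, num_vms):
    """Expand a 1-based particle code (one VM index per task) into a
    binary allocation matrix with exactly one 1 per task row."""
    return [[1 if j == vm else 0 for j in range(1, num_vms + 1)] for vm in code]

# The example code [1, 2, 1, 3, 2] over 3 VMs yields the matrix of Eq. (23).
A = code_to_matrix([1, 2, 1, 3, 2], 3)
```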

The velocity matrix has the same size as the allocation matrix A_l. To ensure convergence of the algorithm, the velocity matrix elements are bounded to the interval [v_min, v_max]. For our proposed approach, the velocity matrix is defined as follows:

[Equation (24): definition of the velocity matrix V_l — image not recoverable]

Where (i, j) indicates the element in the i-th row and j-th column of the allocation matrix A. In general, the particle position in PSO is updated using equation (19). Since the allocation matrix A_l has only 0/1 entries, the particle position is updated using binary PSO: for each row of the velocity matrix of particle l, fetch the index j of the VM at which the velocity is maximum (i.e., whose load is minimum compared with the other VMs in that row), set the corresponding element of particle l at location (i, j) to 1, and set all other elements of the row to 0. The new position is defined as

x_l(i, j) = 1 if v_l(i, j) = max_k v_l(i, k), and 0 otherwise        (25)

The new position of the l-th particle is updated based on its velocity. Equation (25) means that, in each task row of the position matrix, the element whose corresponding velocity value is the row maximum is assigned a value of 1, while the other elements of that row are set to 0. If a row of the velocity matrix has several elements sharing the maximum value, one of them is selected randomly and its corresponding element of the position matrix is set to 1.
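The row-wise binary position update, including random tie-breaking, can be sketched as follows (an illustration, not the paper's code):

```python
import random

def position_from_velocity(V):
    """For each task row of the velocity matrix V, place a 1 at the column
    (VM) holding the maximum velocity; ties are broken at random."""
    X = []
    for row in V:
        best = max(row)
        j = random.choice([k for k, v in enumerate(row) if v == best])
        X.append([1 if k == j else 0 for k in range(len(row))])
    return X

# A single task row with max velocity at column 1 yields [0, 1, 0].
X = position_from_velocity([[0.2, 0.9, 0.1]])
```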

Algorithm for generation of allocation matrix

To generate the particles for our proposed approach, we use the MCT algorithm, which produces a binary allocation matrix based on the minimum completion time of tasks on VMs. The detailed steps of the process are given below.

Algorithm 1. Algorithm for generation of the allocation matrix (pseudocode figure not recoverable)

Algorithm 2. Energy Makespan Aware PSO for task allocation (pseudocode figure not recoverable)

The proposed algorithm for generating the allocation matrix is illustrated with an example. Let there be 3 tasks (t1, t2, t3) and 3 VMs (vm1, vm2, vm3) in the cloud-fog system. The algorithm executes for max_itr = 4 iterations. Initialize the matrix ET with the execution time of each task on each VM, and the allocation matrix A to zero. At each iteration, the minimum execution time of task i over the VMs is found, and the allocation matrix entry for the corresponding VM j of task i is set to 1. The minimum execution time found for task i is then added to the entries of the remaining tasks (index i + 1 to n) on the selected VM. This process repeats until the maximum iteration count is reached. The initial ET and A matrices are given below

[Equations (26)–(27): initial execution-time matrix ET and allocation matrix A — images not recoverable]

Iteration 1: From the ET matrix, the VM with the least execution time for task t1 is identified. According to the algorithm, the updated A and ET matrices are:

[Equations (28)–(29): A and ET matrices after iteration 1 — images not recoverable]

Iteration 2: The updated ET matrix after iteration 1 is given below. In iteration 2, the execution time of t2 on each VM is checked, and the minimum execution time and its corresponding VM are found. The A matrix is updated as follows

[Equations (30)–(31): A and ET matrices after iteration 2 — images not recoverable]

Iteration 3: The updated ET matrix after iteration 2 is given below. In iteration 3, the execution time of t3 on each VM is checked, and the minimum execution time and its corresponding VM are found. The A matrix is updated as follows

[Equations (32)–(33): A and ET matrices after iteration 3 — images not recoverable]

Iteration 4: The updated ET matrix after iteration 3 is given below. In iteration 4, the execution times on each VM are checked, and the minimum execution time and its corresponding VM are found. The A matrix is updated as follows

[Equations (34)–(35): A and ET matrices after iteration 4 — images not recoverable]

The final allocation matrix generated after iteration 4 is the particle for the EMAPSO algorithm:

[Equation (36): final allocation matrix — image not recoverable]
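The seeding procedure illustrated above can be sketched as a standard minimum-completion-time heuristic. This is one interpretation of Algorithm 1, with hypothetical names, using per-VM ready times to account for tasks already assigned:

```python
def mct_allocation(et):
    """Greedy MCT seeding: assign each task, in order, to the VM where it
    would finish earliest given the load already placed on each VM.
    et[i][j] is the execution time of task i on VM j."""
    n, m = len(et), len(et[0])
    ready = [0.0] * m                  # accumulated finish time per VM
    A = [[0] * m for _ in range(n)]    # binary allocation matrix
    for i in range(n):
        # completion time of task i on VM k = ready[k] + et[i][k]
        j = min(range(m), key=lambda k: ready[k] + et[i][k])
        A[i][j] = 1
        ready[j] += et[i][j]
    return A

A = mct_allocation([[2, 4], [3, 1], [2, 2]])
```

Each row of the returned matrix contains exactly one 1, so the result is a valid particle in the coding described earlier.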

In Algorithm 2, at each iteration, if a better fitness value is found, the velocity and position matrices are updated as follows

[Equations (37)–(38): velocity and position matrices after a fitness improvement — images not recoverable]

From the position matrix it is clear that, where the velocity entries have changed, the corresponding position entries are updated accordingly. Upon completion of the final main-loop iteration, the global-best particle's allocation matrix, with load-balanced virtual machines, holds the optimal scheduling solution.

Complexity analysis

In our proposed EMAPSO approach, the computational complexity depends on the number of tasks n, the number of virtual machines m, the population size p, and the number of iterations t. The fitness obj of every particle must be calculated in each iteration, as given in equation (18). The fitness calculation requires O(n × m) operations per particle, so the total complexity is O(p × t × n × m). This shows that EMAPSO's complexity grows linearly with the task count n and the number of fog/cloud nodes m, and it scales with the swarm size p and the number of iterations t. Consequently, EMAPSO is computationally reasonable for large fog-cloud settings.

Performance evaluation

This section presents the analysis of the results from the MATLAB simulation. The proposed energy-makespan-aware task scheduling algorithm, EMAPSO, has been evaluated against the standard PSO, MPSO (Modified PSO), BSO (Bird Swarm Optimization), and BLA (Bee Life Algorithm) to demonstrate its efficacy. The compared algorithms are summarized as follows:

  • In PSO29, random allocation of tasks on VMs leads to premature convergence of particles.

  • In MPSO30, the inertia weight was modified to minimize the execution time; however, the algorithm targets task scheduling in a cloud-only environment.

  • In BSO31, a single-objective fitness function in terms of makespan is considered for efficient distribution of tasks with uniform load balancing techniques. This algorithm efficiently schedules the tasks in a cloud environment. However, this algorithm is tested on a smaller number of tasks and does not consider heterogeneous resources.

  • In BLA32, execution time is reduced in the fog computing environment. However, it does not support dynamic job scheduling, its response time for task execution is high, and its scalability is limited.

Statistical analysis

To validate the effectiveness of the proposed Energy Makespan-Aware Particle Swarm Optimization (EMAPSO) algorithm, a statistical t-test was applied to compare its performance against baseline algorithms, including PSO, MPSO, BSO, and BLA. The evaluation focuses on two key performance metrics: makespan and energy consumption, each collected over 20 independent simulation runs under identical conditions.

Hypotheses:

  • Null Hypothesis (H0): There is no significant difference in performance (makespan and energy consumption) between EMAPSO and the baseline algorithms.

  • Alternative Hypothesis (H1): EMAPSO achieves statistically significant improvements in makespan and energy consumption over the baseline algorithms.

Sample Data: The makespan (in ms) and energy consumption (in Joules) for EMAPSO and baseline algorithms were recorded. Mean and standard deviation values from 20 runs are summarized in Table 4.

Table 4.

Means and standard deviations of performance metrics.

Algorithm Makespan Energy Consumption
Mean SD Mean SD
EMAPSO 112.5 2.8 18.4 0.9
PSO 130.8 5.6 22.7 1.4
MPSO 126.1 4.3 21.9 1.2
BSO 123.4 3.9 21.4 1.1
BLA 125.7 4.1 21.8 1.3

t-Test Results: We applied independent samples t-tests to evaluate whether EMAPSO significantly outperforms other algorithms. The resulting t-values and p-values are provided in Table 5.

Table 5.

t-Test results comparing EMAPSO with Baselines.

Comparison Makespan Energy Consumption
t-value p-value t-value p-value
EMAPSO vs. PSO -9.41 0.0001 -8.73 0.0002
EMAPSO vs. MPSO -7.82 0.0003 -7.26 0.0005
EMAPSO vs. BSO -6.49 0.0011 -6.17 0.0017
EMAPSO vs. BLA -7.03 0.0007 -6.81 0.0009
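As an illustration, a pooled-variance t-statistic can be recomputed from the Table 4 summaries alone. This is a sketch with hypothetical names; the values in Table 5 were presumably computed from the raw 20-run data, so they need not match exactly:

```python
import math

def t_from_stats(m1, s1, n1, m2, s2, n2):
    """Independent-samples t-statistic (pooled variance) from summary statistics:
    t = (m1 - m2) / sqrt(sp^2 * (1/n1 + 1/n2))."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Makespan, EMAPSO vs. PSO (means/SDs from Table 4, 20 runs each).
# A negative t indicates EMAPSO's mean makespan is smaller.
t = t_from_stats(112.5, 2.8, 20, 130.8, 5.6, 20)
```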

Interpretation: All p-values are significantly below the 0.05 threshold, thereby rejecting the null hypothesis. This confirms that EMAPSO offers statistically significant improvements over all compared algorithms in terms of both makespan and energy consumption. The results establish EMAPSO as a more effective task scheduling strategy for fog-cloud environments supporting delay-sensitive IoT applications. To assess the effectiveness of EMAPSO in balancing makespan and energy consumption, we computed the average fitness value for each algorithm across 20 simulation runs. The fitness function used combines both makespan and energy metrics into a single scalar value. Table 6 below shows the average fitness values and standard deviations across algorithms:

Table 6.

Fitness comparison of task scheduling algorithms.

Algorithm Average Fitness Value Standard Deviation
PSO 0.72 0.030
MPSO 0.68 0.025
BSO 0.70 0.027
BLA 0.69 0.028
EMAPSO 0.58 0.020

Figure 3 compares the average fitness results of several task scheduling techniques. Reflecting both makespan and energy use, the fitness value is a composite measure; lower values indicate better overall performance. Among all assessed techniques, EMAPSO achieves the lowest fitness score, thereby emphasizing its strong capacity to produce effective task schedules in a fog-cloud setting. The error bars indicate the standard deviation across many simulation runs, demonstrating that EMAPSO’s results are also highly stable and consistent.

Fig. 3.

Fig. 3

Comparison of fitness values across algorithms (t-test analysis).

Simulation setting

A 32-bit Windows 8 computer with an Intel Core i5 4th Gen processor at 1.7 GHz and 4 GB of RAM is used to evaluate the proposed approach in a MATLAB simulator. The processing power of each virtual machine ranges from 500 to 1500 MIPS. IoT-generated tasks fall within the [600–3000] MI range. A maximum of 100 iterations is considered in the experiment. The range of the task size is [1000–10000] MI. All algorithmic parameters were adjusted through controlled sensitivity tests before final evaluation to guarantee a fair and consistent comparison. The minimum average fitness value obtained over multiple trials and convergence stability were used to determine the ideal parameter values. All parameter sets were validated by performing 20 independent simulation runs with identical task and VM configurations. The configuration that yielded the lowest mean fitness and the highest convergence stability was selected as the final tuning setup. Table 7 lists the parameters used for EMAPSO.

Table 7.

Parameters used for EMAPSO.

Parameter Value
 Population size 30
 c1, c2 2.0
 Inertia weight w 0.9 → 0.4 (linearly decreasing)
 Balance coefficient 0.5
 Load threshold 0.9
 Maximum iterations 100

Result analysis

The performance is compared on the metrics makespan, energy consumption, completion time, and degree of imbalance. In real-world deployments, fog-cloud systems often face network outages, fluctuating node capacities, and dynamic addition or removal of devices. Our proposed EMAPSO algorithm maintains performance stability in such cases. As mentioned in equation (15), at each iteration the current load and capacity of each VM are evaluated, causing tasks to migrate from overloaded to underloaded VMs. Scenarios 1 and 2 test scalability by varying the number of VMs, indicating that our proposed approach adapts dynamically to real-time changes while keeping energy consumption and makespan optimized, thereby ensuring stable performance in practical fog-cloud environments.
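The migration behaviour described above, moving tasks off overloaded VMs, can be sketched as follows; the names and the load model are assumptions for illustration, not the paper's implementation:

```python
def rebalance(task_vm, task_load, num_vms, threshold=0.9):
    """Move tasks away from VMs whose utilization exceeds the threshold,
    onto the currently least-loaded VM. task_vm maps task -> VM index,
    task_load maps task -> its utilization contribution."""
    vm_load = [0.0] * num_vms
    for t, v in task_vm.items():
        vm_load[v] += task_load[t]
    for t, v in list(task_vm.items()):
        if vm_load[v] > threshold:
            target = min(range(num_vms), key=lambda k: vm_load[k])
            if target != v:
                vm_load[v] -= task_load[t]
                vm_load[target] += task_load[t]
                task_vm[t] = target
    return task_vm, vm_load

# VM 0 starts at load 1.0 (> 0.9), so one of its tasks migrates to VM 1.
assignment, loads = rebalance({0: 0, 1: 0, 2: 1}, {0: 0.5, 1: 0.5, 2: 0.2}, 2)
```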

The results of the performance evaluation are obtained based on two sets of simulation scenarios, which are stated as follows:

Scenario 1

In this scenario, the number of tasks (n) is varied between 200 and 1000 tasks with a step of 200. The total number of virtual machines is fixed at 40. Figures 4, 5, 6 and 7 present the details of the comparison results.

Fig. 4.

Fig. 4

Makespan Vs Number of tasks.

Fig. 5.

Fig. 5

Energy consumption vs number of tasks.

Fig. 6.

Fig. 6

Execution time vs number of tasks.

Figure 4 shows how Makespan compares with other algorithms currently in use. The suggested algorithm’s makespan rises as the number of tasks increases, as shown in the figure. This is because, with a fixed virtual machine count, task completion times increase as more tasks are added. Additionally, we observe that EMAPSO has the shortest makespan among the others, and this tendency is consistent across all task counts.

Figure 5 shows the energy consumption relative to other methods currently in use, as mentioned above. As the number of tasks increases, the proposed algorithm consumes less energy than the others, as shown in the figure above. This is because the proposed technique effectively leverages virtual machines.

Figure 6 shows how the suggested algorithm’s execution time compares with that of the other existing algorithms mentioned above. As shown in the figure, the proposed algorithm runs faster than the other algorithms as the number of tasks increases. This is because the proposed approach runs more quickly when resources are used efficiently.

Figure 7 shows how the suggested algorithm’s load imbalance values compare to those of other algorithms currently in use. There is a significant discrepancy between the recorded load imbalance values for a limited number of tasks. Compared with previous algorithms, the proposed approach exhibits lower load imbalance.

Fig. 7.

Fig. 7

Load imbalance factor (LIF) VS number of tasks.

Scenario 2

In this scenario, the number of tasks (n) is fixed at 500, and the number of VMs is varied from 20 to 100. Figures 8, 9, 10 and 11 present the details of the comparison results.

Fig. 8.

Fig. 8

Makespan Vs Number of VMS.

Fig. 9.

Fig. 9

Energy consumption Vs number of VMs.

Fig. 10.

Fig. 10

Execution Time Vs Number of VMs.

Figure 8 compares the proposed algorithm's makespan with that of the other existing algorithms. For a fixed number of tasks, the makespan decreases gradually as the number of VMs increases, since the same workload is distributed across more machines. With load balancing, the proposed algorithm minimizes the makespan most effectively.

Figure 9 shows the energy consumption of the proposed approach relative to the existing algorithms, with the number of tasks fixed and the number of virtual machines varied. As the number of VMs increases, the makespan decreases while energy usage increases.

Figure 10 compares the task execution time against the number of VMs for the proposed algorithm and other existing algorithms. As the number of tasks remains constant, the suggested algorithm’s task execution time decreases with increasing number of virtual machines (VMs), owing to efficient load balancing.

Figure 11 shows the comparison of the Load imbalance of the proposed algorithm with other existing algorithms. With effective load balancing, the proposed algorithm achieves the minimum LIF as the number of VMs increases.

Fig. 11.

Fig. 11

Load Imbalance Factor (LIF) VS Number of VMs.

Figure 12 compares the convergence of the proposed and existing algorithms. BLA, BSO, PSO, and MPSO converge quickly, within roughly 40 iterations. The proposed approach converges after about 80 iterations, indicating that it continues exploring longer and finds a better solution than the other algorithms.

Fig. 12.

Fig. 12

Convergence comparison.

Figure 13 shows the impact of the balance coefficient on makespan and energy consumption for a task count of 200. When the balance coefficient is 0, energy consumption is low and the makespan is high. At 0.5, makespan and energy consumption are balanced. As the coefficient increases gradually toward 1, the makespan decreases at the cost of higher energy consumption. This graph illustrates how our approach can adaptively meet consumers' needs, whether they prioritize energy savings or high-performance execution, by varying the balance coefficient.
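A weighted-sum form of the bi-objective fitness, consistent with the behaviour shown in Figure 13, can be sketched as follows; the exact equation (18) is not reproduced in this section, so the symbols and normalization here are assumptions:

```python
def fitness(makespan, energy, rho, makespan_max, energy_max):
    """Weighted sum of normalized makespan and energy. rho = 1 optimizes
    makespan only, rho = 0 optimizes energy only, rho = 0.5 balances both."""
    return rho * (makespan / makespan_max) + (1 - rho) * (energy / energy_max)

# Moving rho from 0 toward 1 shifts the optimum toward shorter schedules
# at the cost of higher energy consumption, as in Figure 13.
balanced = fitness(112.5, 18.4, 0.5, 150.0, 25.0)
```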

Fig. 13.

Fig. 13

Energy Consumption and Makespan with Changing Balance Coefficient.

Conclusion

In this research, we introduced energy-makespan multi-objective optimization as a scheduling algorithm to balance the competing goals of minimizing makespan and minimizing energy consumption. A weighted bi-objective cost function is first formulated to select a processing node that minimizes task completion time and energy consumption subject to a user-defined weighting factor. As the comparison study demonstrates, our proposed algorithm provides effective solutions in terms of makespan and energy conservation, with load balancing. In future work, we will explore advanced metaheuristics and machine-learning-based methods to further improve the trade-off between makespan and energy consumption.

Author contributions

Niva Tripathy wrote the paper, Sampa Sahoo simulated the work, Norah Saleh Alghamdi analyzed the work, Wattana Viriyasitavat supervised the work, Gaurav Dhiman generated the results.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R40), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data availability

The data that support the findings of this study are not openly available due to reasons of sensitivity and are available from the corresponding author upon reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Chhabra, S. & Singh, A. K. Secure and energy efficient dynamic hierarchical load balancing framework for cloud data centres. Multimed. Tools Appl.82(19), 29843–29856 (2023).
  • 2.Sahoo, S., Sahoo, K. S., Sahoo, B. & Gandomi, A. H. A learning automata based edge resource allocation approach for IoT-enabled smart cities. Digit. Commun. Netw.10(5), 1258–1266 (2024).
  • 3.Tripathy, N., Sahoo, S., Tripathy, S. S., Shah, M. A. & Mallik, S. An efficient energy makespan optimization based task mapping in fog enabled IoT application (2024).
  • 4.Mishra, S. K., Sahoo, B. & Parida, P. P. Load balancing in cloud computing: A big picture. J. King Saud Univ. Comput. Inf. Sci.32(2), 149–158 (2020).
  • 5.Tripathy, S. S. et al. Leveraging resource-aware deep collaborative learning towards secure B5G-driven IoT-fog-based consumer electronic systems. IEEE Trans. Consum. Electron.71, 4443–4450 (2024).
  • 6.Kaul, S., Kumar, Y., Ghosh, U. & Alnumay, W. Nature-inspired optimization algorithms for different computing systems: Novel perspective and systematic review. Multimed. Tools Appl.81(19), 26779–26801 (2022).
  • 7.Garg, M. Enhanced reinforcement learning-based resource scheduling for secure blockchain networks in IIoT. IECE Trans. Mach. Intell.1(1), 29–41 (2025).
  • 8.Gupta, S., Adhikari, U., Roy, D. & Hazra, S. IoT-integrated reinforcement learning-based mine detection system for military and humanitarian applications. ICCK Trans. Mach. Intell.1(1), 17–28 (2025).
  • 9.Myakala, P. K., Kamatala, S. & Bura, C. Privacy-preserving federated learning for IoT botnet detection: A federated averaging approach (2025).
  • 10.Sharma, Y. & Moulik, S. RT-SEAT: A hybrid approach based real-time scheduler for energy and temperature efficient heterogeneous multicore platforms. Results Eng.16, 100708 (2022).
  • 11.Chakraborty, S., Sharma, Y. & Moulik, S. TREAFET: Temperature-aware real-time task scheduling for FinFET based multicores. ACM Trans. Embedded Comput. Syst.23(4), 1–31 (2024).
  • 12.Sharma, Y. & Moulik, S. HEAT: A heterogeneous multicore real-time scheduler with efficient energy and temperature management. ACM SIGAPP Appl. Comput. Rev.22(2), 34–43 (2022).
  • 13.Moulik, S. & Sharma, Y. FRESH: Fault-tolerant real-time scheduler for heterogeneous multiprocessor platforms. Futur. Gener. Comput. Syst.161, 214–225 (2024).
  • 14.Sharma, Y., Moulik, S. & Chakraborty, S. RESTORE: Real-time task scheduling on a temperature-aware FinFET based multicore. In Proceedings of the Design, Automation & Test in Europe Conference (DATE) 608–611 (IEEE, 2022).
  • 15.He, J. [Retracted] Cloud computing load balancing mechanism taking into account ant colony optimization. Comput. Intell. Neurosci.2022, 3120883 (2022).
  • 16.Nguyen, B. M., Binh, H. T. T., Anh, T. A. & Son, D. B. Evolutionary algorithms to optimize task scheduling for IoT bag-of-tasks applications. Appl. Sci.9(9), 1730 (2019).
  • 17.Chaudhary, S. et al. Modified particle swarm optimization based on aging leaders and challengers model for task scheduling. Math. Probl. Eng.2023, 3916735 (2023).
  • 18.Chandrashekar, C., Krishnadoss, P., Poornachary, V. K., Ananthakrishnan, B. & Rangasamy, K. HWACOA scheduler: Hybrid weighted ant colony optimization algorithm for cloud task scheduling. Appl. Sci.13(6), 3433 (2023).
  • 19.Ghanavati, S., Abawajy, J. & Izadi, D. An energy-aware task scheduling model using ant-mating optimization in fog computing. IEEE Trans. Serv. Comput.15(4), 2007–2017 (2020).
  • 20.Kashani, M. H. & Mahdipour, E. Load balancing algorithms in fog computing. IEEE Trans. Serv. Comput.16(2), 1505–1521 (2022).
  • 21.Wang, Y., Dong, S. & Fan, W. Task scheduling mechanism based on reinforcement learning in cloud computing. Mathematics11(15), 3364 (2023).
  • 22.Zhou, Z., Chang, J., Hu, Z., Yu, J. & Li, F. A modified PSO algorithm for task scheduling optimization in cloud computing. Concurr. Comput.30(24), e4970 (2018).
  • 23.Ogundoyin, S. O. & Kamil, I. A. Optimal fog node selection based on hybrid particle swarm optimization and firefly algorithm in dynamic fog computing services. Eng. Appl. Artif. Intell.121, 105998 (2023).
  • 24.Mansouri, N., Zade, B. M. H. & Javidi, M. M. Hybrid task scheduling strategy for cloud computing by modified particle swarm optimization and fuzzy theory. Comput. Ind. Eng.130, 597–633 (2019).
  • 25.Zhang, Z. et al. Multi-agent load balancing based on reinforcement learning. IEEE Access10, 43211–43225 (2022).
  • 26.You, Y., Zhang, M. & Zhao, W. Multi-objective optimization for energy and cost-efficient task execution in IIoT using PSO. Appl. Sci.12, 1456 (2022).
  • 27.Nwogbaga, A., Eze, M. & Kalu, I. Dynamic task scheduling using attribute reduction and hybrid GA–PSO for IoT. J. Cloud Comput.11(1), 15 (2021).
  • 28.Hazra, A., Rana, P., Adhikari, M. & Amgoth, T. Fog computing for next-generation internet of things: Fundamental, state-of-the-art and research challenges. Comput. Sci. Rev.48, 100549 (2023).
  • 29.Alsaidy, S. A., Abbood, A. D. & Sahib, M. A. Heuristic initialization of PSO task scheduling algorithm in cloud computing. J. King Saud Univ. Comput. Inf. Sci.34(6), 2370–2382 (2022).
  • 30.Ijaz, S., Munir, E. U., Ahmad, S. G., Rafique, M. M. & Rana, O. F. Energy-makespan optimization of workflow scheduling in fog-cloud computing. Computing103(9), 2033–2059 (2021).
  • 31.Lin, X., Wang, Y., Xie, Q. & Pedram, M. Task scheduling with dynamic voltage and frequency scaling for energy minimization in mobile cloud computing. IEEE Trans. Serv. Comput.8(2), 175–186 (2014).
  • 32.Dougani, B. & Dennai, A. Makespan optimization of workflow application based on bandwidth allocation algorithm in fog-cloud environment (2022).
  • 33.Singh, G. & Chaturvedi, A. K. A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud-fog environment. Computing106(10), 3279–3308 (2024).
  • 34.Khan, A. et al. EcoTaskSched: A hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments. Sci. Rep.15(1), 12296 (2025).
  • 35.Pakmehr, A., Gholipour, M. & Zeinali, E. ETFC: Energy-efficient and deadline-aware task scheduling in fog computing. Sustain. Comput.43, 100988 (2024).
  • 36.Thakur, R., Sikka, G., Bansal, U., Giri, J. & Mallik, S. Deadline-aware and energy efficient IoT task scheduling using fuzzy logic in fog computing. Multimed. Tools Appl.84(15), 14359–14386 (2025).
  • 37.Sahoo, S., Sahoo, B. & Turuk, A. K. A learning automata-based scheduling for deadline sensitive task in the cloud. IEEE Trans. Serv. Comput.14(6), 1662–1674 (2019).
  • 38.Mishra, S. K., Khan, M. A., Sahoo, S. & Sahoo, B. Allocation of energy-efficient task in cloud using DVFS. Int. J. Comput. Sci. Eng.18(2), 154–163 (2019).
  • 39.Kumar, M. & Sharma, S. C. Deadline constrained based dynamic load balancing algorithm with elasticity in cloud environment. Comput. Electr. Eng.69, 395–411 (2018).
  • 40.You, Q. & Tang, B. Efficient task offloading using particle swarm optimization algorithm in edge computing for industrial internet of things. J. Cloud Comput.10(1), 41 (2021).
  • 41.Umapathy, P., Venkataseshaiah, C. & Arumugam, M. S. Particle swarm optimization with various inertia weight variants for optimal power flow solution. Discret. Dyn. Nat. Soc. (2010).
  • 42.Shi, Y. H. & Eberhart, R. C. A modified particle swarm optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation 69–73 (1998).
  • 43.Wang, D., Liu, Z., Wang, X. & Lan, Y. Mobility-aware task offloading and migration schemes in fog computing networks. IEEE Access7, 43356–43368 (2019).
  • 44.Aydilek, I. B. A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput.66, 232–249 (2018).
  • 45.Xu, R. et al. Improved particle swarm optimization based workflow scheduling in cloud–fog environment. In BPM 2018 Workshops, LNBIP 342 (eds. Daniel, F. et al.) 337–347 (Springer, 2019).
  • 46.Nickabadi, A., Ebadzadeh, M. M. & Safabakhsh, R. A novel particle swarm optimization algorithm with adaptive inertia weights. Appl. Soft Comput.11, 3658–3670 (2011).
  • 47.Jain, M., Saihjpal, V., Singh, N. & Singh, S. B. An overview of variants and advancements of PSO algorithm. Appl. Sci.12, 8392 (2022).
  • 48.Eberhart, R. C. & Shi, Y. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation 84–88 (2000).
  • 49.Khan, A. & Singh, J. A novel image captioning technique using deep learning methodology. ICCK Trans. Mach. Intell.1(2), 52–68 (2025).
  • 50.Khan, M., Kumar, R. & Dhiman, G. A hybrid machine learning fuzzy non-linear regression approach for neutrosophic fuzzy set. ICCK Trans. Mach. Intell.1(1), 42–51 (2025).
