Abstract
DevOps represents the fusion of cultural philosophies, tools, and practices that rapidly enhance an organization's capacity to deploy services and applications. Cloud-based tools, a subset of DevOps services, facilitate collaboration between development and operations teams within an organization. However, persistent challenges such as inadequate security management, substantial leakage of sensitive data, and system/service unavailability pose significant threats to sustainability. We propose an end-to-end enhanced security framework to fortify DevOps resilience by implementing authentication and vulnerability management through the Slide-Block methodology. Our approach comprises four sequential processes: pattern-based authentication, tri-level access control, privacy-focused data storage, and vulnerability management and correction. Initially, we establish candidate legitimacy through pattern-based authentication using the Magnificent ChaCha-Poly 1305 (MCha-Poly 1305) algorithm. Subsequently, we devise effective access policies using the Enhanced Deep Deterministic Policy Gradient (EDDPG) algorithm, employing tri-level access control based on trust value, attributes, and roles for optimal user and developer selection via the African Vulture Optimization Algorithm (AVOA). Moreover, we encrypt data in transit and at rest using MCha-Poly 1305, considering sensitivity, and store it in a blockchain to enhance data privacy. Our approach incorporates a sliding window blockchain for secure data transmission and storage. Finally, we identify and address attack and application-based issues using the Tweak Naive Bayes (Tweak-NB) algorithm and Intruder Vulnerability Scanner (IVS). Our Slide-Block framework demonstrates superior performance in detection rate, authentication time, packet loss, security strengthening, communication overhead, and latency compared to existing models.
Index Terms: Security in DevOps-Cloud, Vulnerability management, Pattern-based authentication, Tri-level access control, Secure data storage, Attack detection and mitigation
1. Introduction
In an Information Technology (IT) environment, "DevOps" is a software development culture meant to integrate the development and operations teams in order to accelerate development activities and increase delivery rates [[1], [2], [3]]. By utilizing DevOps-based software development, we can achieve continuous deployment, continuous monitoring, continuous development, security, flexibility, and expandability [4]. More specifically, DevOps helps deliver the developed software, either on the customer side or the server side, at a continuous pace in short, fast delivery cycles [5]. Cloud computing is another paradigm that provides on-demand services and infrastructure [6,7].
The combined adoption of cloud computing and DevOps enhances production speed and resilience [[8], [9], [10]]. Specifically, incorporating cloud computing in software development empowers developers to manage their tools and offers additional components for continuous application automation, building, testing, and monitoring [11,12]. Despite these benefits, careful consideration and research are necessary to address potential drawbacks and design improved models. Security emerges as a primary concern, as the lack of empirical research in DevOps has led the research community to hesitate in integrating security practices [13].
Only a limited number of recent works provide recommendations on security practices, and they do not offer end-to-end security coverage [14]. Authentication and access control are basic security operations that restrict unauthorized access [15]. However, existing works on authentication and access control are limited, as they consider only a few metrics when authenticating users, developers, and operators [16,17]. Existing works also lack security during continuous monitoring, exposing systems to several cyber security attacks [18]. Machine learning and data engineering are emerging technologies that enable intelligence [19,20]. Thus, the proposed work addresses the challenges and research gaps of the former works and provides an improved solution for secure and practical application in a cloud environment with DevOps.
1.1. Research aim & objectives
The main aim of this research is to enhance security and thereby improve the resilience of DevOps by integrating cloud computing, machine learning, and data engineering. In addition, the research identifies the problems of improper security management, ineffective authorization, extensive sensitive data leakage, and system and service unavailability. The central objective is to achieve this enhanced resilience through effective vulnerability management based on continuous monitoring.
• We have integrated an effective authentication mechanism to enhance secret management, allowing only legitimate candidates access to the applications. We have developed significant access control policies to improve authorization and implement trust-level-based access control.
• To enhance data privacy and prevent sensitive data leakage, we have implemented two categories of data encryption. Continuous monitoring and rectification, aided by a security monitoring agent, have been employed to address application-based issues and amplify system and service availability.
• To strengthen security, we have adopted attack detection and an enhanced blockchain to improve Quality of Service (QoS).
1.2. Research Motivations
To enhance the security of cloud-based applications with integrated DevOps, most existing works perform continuous monitoring for attack detection. However, current works are limited by improper security management, ineffective authorization, extensive sensitive data leakage, and system and service unavailability.
I. Improper Security Management: Most existing works rely on insufficient metrics and ineffective authentication mechanisms, leading to improper security management. In addition, several earlier works authenticate only the application users; the failure to authenticate developers and resource owners likewise results in improper security management.
II. Ineffective Authorization: In most existing works, access control policies are provided only for application users, and several previous works do not construct effective or secure access policies; this ineffective authorization leads to severe security breaches. Furthermore, the access control policies are stored without any security measures, so attackers can easily tamper with and modify them, degrading the security of cloud-based applications with integrated DevOps.
III. Enormous Sensitive Data Leakage: In most previous works, data privacy and sensitive data were not secured effectively, and communication between the cloud and users was performed without any privacy safeguards, leading to high data leakage. Moreover, data at rest, such as organization policies, access control policies, passwords, and other sensitive data, was stored in the cloud without encryption or any other security measure, so attackers could access it effortlessly.
IV. System and Service Unavailability: In several existing works, the application users' side is continuously monitored, but the failure to consider application settings, bugs, data errors, etc., leads to high system and service unavailability. Besides, bugs and data errors are monitored manually and immediate remediation is not taken; this ineffective continuous monitoring further increases system and service unavailability.
1.3. Research contributions
Improving the resilience of DevOps, and thereby enhancing security, is the primary target of our research. To achieve this, we make the following contributions.
• All candidates undergo pattern-based authentication to secure the application and network, with password encryption using the MCha-Poly 1305 encryption algorithm. This algorithm operates efficiently, allowing only legitimate candidates access to the service, and their credentials are stored securely in the blockchain.
• We generate tri-level access control policies using EDDPG, assigning them to candidates based on their attributes, roles, and trust values. AVOA is employed to optimize access control.
• To ensure communication and transaction certainty, we implement a privacy-focused data storage mechanism that encrypts data using the MCha-Poly 1305 encryption algorithm, minimizing the risk of highly sensitive data leakage.
• Continuous monitoring amplifies system and service availability, identifying attacks and application-based issues through an intruder vulnerability scanner and the Tweak-NB algorithm.
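To make the pattern-based authentication step concrete, the sketch below shows a generic enroll-then-verify flow using standard-library primitives. Since MCha-Poly 1305 is our own construction (detailed in Section IV), the PBKDF2/HMAC calls here are merely stand-ins for it, and the function names (`enroll`, `authenticate`) and the sample secret are hypothetical:

```python
import hashlib
import hmac
import os

def enroll(password: str) -> dict:
    """Derive a storage-safe verifier from a candidate's secret pattern."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt.hex(), "verifier": digest.hex()}

def authenticate(password: str, record: dict) -> bool:
    """Recompute the verifier and compare in constant time."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(record["salt"]), 100_000)
    return hmac.compare_digest(digest.hex(), record["verifier"])

record = enroll("pattern-9163")          # hypothetical candidate secret
assert authenticate("pattern-9163", record)
assert not authenticate("wrong-pattern", record)
```

Only the derived verifier and salt, never the raw secret, would be written to the blockchain-backed credential store.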
1.4. Paper organization
The rest of the paper is organized as follows: Section II illustrates state-of-the-art research with its gaps. Section III delineates the foremost problem statement faced by cloud-based applications. Section IV articulates the proposed Slide-Block framework, encompassing an appropriate diagram, mathematical equations, algorithm, and pseudocode. Section V exemplifies the experimental analysis of simulation setup, comparative analysis, and research summary. Finally, Section VI concludes the proposed Slide-Block framework.
2. Literature survey
In this section, we briefly describe the state-of-the-art approaches and the limitations they face in DevOps-cloud-based applications. This section is divided into three sub-sections, defined below.
2.1. Analysis of attack & malware detection
The paper [21] introduced an auto-encoder-based mechanism for detecting advanced persistent threat (APT) attacks. Users underwent authentication through a two-factor authentication system based on the OTP scheme. The authors employed an auto-encoder neural network to analyze informative features derived from unsupervised network traffic, performing feature extraction and dimensionality reduction with Principal Component Analysis (PCA). A softmax regression layer was then added on top of the auto-encoder network to classify APT attacks. Detection of an attack prompted the strengthening of cloud-based security.
In a different study [22], the author introduced an approach to detect DDoS attacks in a cloud computing environment that reduces misclassification errors during detection. Initially, the study applied feature selection schemes, including mutual information (MI) and random forest feature importance (RFFI). It then utilized various algorithms for classifying DDoS attacks, such as random forest, gradient boosting, weighted voting ensemble (WVE), k nearest neighbour, and logistic regression. Ultimately, the random forest algorithm demonstrated superior performance compared to the others. The work in [23] implemented a secure SaaS approach for detecting and mitigating attacks. A Deep Belief Network (DBN) is utilized for attack detection, with the weights and activation function fine-tuned through the Median Fitness-oriented Sea Lion Optimization Algorithm (MFSLnO). Upon detecting an attack, the system transitions control to a lightweight bait mechanism, ensuring reliable mitigation of the most common attack nodes without disrupting routine connections. The evaluation of this work focused on packet loss ratio and throughput.
In [24], researchers proposed a method for real-time detection of attacks in the cloud environment. The study initially identified attacks in the application layer by employing multiple machine learning algorithms, such as the multi-layer perceptron (MLP) and random forest (RF), utilizing the Scikit ML library and big data architecture. The researchers optimized the model's performance to decrease prediction time, achieving superior accuracy with the random forest algorithm compared to other approaches.
The author offered an intelligent behavior-based malware detection framework in a cloud environment [25]. Multiple virtual machines initially collect malware data, examining distinctive features and selecting effective ones. The selected features are then fed into learning-based and rule-based detection agents, which use several machine learning algorithms to decide whether the data is normal or malware. This methodology can detect both known and unknown attacks effectively, and it enhances security through effective attack detection using random forest. An effective IDS based on a chronological salp swarm algorithm entrenched deep belief network was designed to detect suspicious intrusions in the cloud [26]. The method integrates the chronological concept with the salp swarm algorithm and establishes a fitness function to seek an optimal solution with a low error value. The weights are then optimally tuned to identify an efficient solution for detecting intruders. The designed network achieved enhanced performance through its exploitation and exploration facilities in the search space. Researchers employed a hybrid deep learning approach to create an efficient intrusion detection system [27]. This work focused on enhancing the efficiency of the Intrusion Detection System (IDS) in analyzing abnormal network traffic. The Pearson correlation feature selection algorithm was utilized for efficient feature selection. Intrusion detection used a recurrent neural network with gated recurrent units (GRU) and enhanced long short-term memory (LSTM), forming Cu-LSTMGRU; the system then classified network flows as either malicious or benign. The limitations of attack and malware detection are outlined in Table 1.
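To make the Pearson correlation feature selection step surveyed above concrete, the following minimal sketch scores each flow feature by its correlation with the label and keeps only strongly correlated ones. The toy flow records and the 0.5 threshold are illustrative assumptions, not values from the cited work:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Toy flow records: [duration, packet_count, noise]; label 1 = malicious.
flows = [[1.0, 10, 0.3], [2.0, 20, 0.9], [8.0, 80, 0.1], [9.0, 95, 0.7]]
labels = [0, 0, 1, 1]

# Score each feature column against the label; keep |r| above a threshold.
scores = [abs(pearson([f[i] for f in flows], labels)) for i in range(3)]
selected = [i for i, s in enumerate(scores) if s > 0.5]
print(selected)  # [0, 1] -- the noise column is dropped
```

The same scoring idea underlies the criticism in Table 1: PCC measures only linear association, so dependent and independent variables with non-linear relationships are not distinguished.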
Table 1.
Limitations of attack & malware detection.
| Reference | Objective | Methods/Algorithms | Limitations |
|---|---|---|---|
| [21] | To utilize an auto-encoder-based mechanism for APT attack detection. | Auto-encoder neural network & PCA | However, this algorithm misinterprets significant variables, leading to ineffective attack detection. |
| [22] | To perform DDoS attack detection in cloud computing using MI. | RFFI, MI, GB, WVE, KNN, RF & LR | However, the MI and RF methods select redundant and irrelevant features, leading to a high false positive rate. |
| [23] | To design a method for attack detection and mitigation using a secure SaaS approach. | DBN, MFSLnO & lightweight bait mechanism (LBM) | Once an intrusion is detected, the LBM performs mitigation, but ineffective attack mitigation and countermeasures affect security. |
| [24] | To propose a method for real-time attack detection in the cloud environment. | MLP & RF | However, RF generates numerous trees during classification, which increases complexity. |
| [25] | To offer an intelligent behavior-based malware detection framework. | RF | However, the lack of user-legitimacy verification affects network security. |
| [26] | To perform an effective IDS using a CSS algorithm entrenched DBN for detecting suspicious intrusions. | Chronological salp swarm (CSS), DBN & Fuzzy entropy | However, considering inadequate features for the IDS affects the detection rate. |
| [27] | To introduce a hybrid DL approach for an efficient intrusion detection system. | Cu-LSTMGRU & Pearson's correlation coefficient (PCC) | PCC cannot effectively differentiate between dependent and independent variables. |
2.2. Analysis of access control mechanism & secure data sharing
A blockchain-based multi-authority access control mechanism (BMAC) for secure data sharing in a cloud environment was incorporated [28]. Initially, the Shamir secret sharing (S3) technique and a permissioned blockchain were utilized so that individual attributes are executed and jointly supervised by several authorities. A smart contract then generates tokens for attributes handled across several management domains, minimizing the communication and computation overhead on data users. The blockchain records the access control process in an auditable and secure way, and the security of the proposed algorithm was examined. Several mechanisms were combined to provide fine-grained access control and secure data sharing [29]. Here, blockchain, ciphertext-policy attribute-based encryption (CP-ABE), and the interplanetary file system (IPFS) – BSSPD – were utilized for secure personal data sharing. A user-centric approach was employed in which the data owner encrypts the shared data and stores it in IPFS, increasing the approach's decentralization. The decryption key and the address of the transmitted data were encrypted using CP-ABE according to the specific access policy; the data owner used the blockchain to publish his data-related information and distribute keys to data users. A data user whose attributes satisfy the access policy can download and decrypt the data. Finally, ciphertext keyword search was utilized to protect users' data privacy while retrieving data. A blockchain-entrenched access control technique in a cloud computing environment was also introduced [30]. In this environment, the data owner (DO) manages an access matrix, stored in the blockchain, that expresses the access policy. The public keys of all nodes and the access matrix are likewise stored in the blockchain to assure the security of the system.
Here, the DO encrypts frequently shared files once using a long-lived symmetric key, and the public keys of authorized users are encrypted with the symmetric key in parallel within minimal time. Finally, the proposed mechanism enhanced security while reducing computation overhead.
The author introduced the Blockchain-based Multi-Authority Access Approach (BMAC) to enhance secure data sharing [31]. They utilized the Shamir secret sharing approach and the permissioned blockchain Hyperledger Fabric to execute individual attributes; this execution was a collaborative effort supervised by multiple authorities, effectively avoiding a single point of failure. Moreover, blockchain technology established trust between authorities, and tokens for attributes were generated from smart contracts. These contracts were overseen across various management domains, reducing computation and communication overhead on the data user side. The access control process was auditably and securely recorded.
In the realm of cloud-based applications, the author implemented a secure data-sharing mechanism by introducing access control [32]. Hyperledger Fabric with Attribute-Based Access Control (Fabric-ABAC) was proposed for secure data sharing across domains. To address data security issues, a trusted central organization was implemented, and a distributed environment involving stakeholders across parties was developed. The multi-environment was integrated with smart contracts, constructing a unified attribute model. The proposed Fabric-ABAC achieved multi-level, auditable access control and fine-grained data security by automatically examining permissions. Finally, smart contracts exploited Proxy Re-Encryption (PRE) to enable ciphertext communication without involving a third party. Table 2 outlines the limitations of access control mechanisms and secure data sharing.
Table 2.
Limitations of access control & secure data sharing.
| Reference | Objective | Methods/Algorithms | Limitations |
|---|---|---|---|
| [28] | To develop the BMAC mechanism for secure data sharing in the cloud. | BMAC, Permissioned blockchain & S3 technique | The BMAC mechanism performs secure access control; however, the use of a traditional blockchain leads to ineffective immutability. |
| [29] | To combine several mechanisms for providing fine-grained access control and secure data sharing. | CP-ABE & IPFS-BSSPD | The lack of authentication increases malicious traffic in the network, which misleads access control. |
| [30] | To design blockchain-entrenched access control techniques in a cloud computing environment. | Encryption & Access control | Access control policies are randomly generated; ignoring roles and attributes leads to high data leakage. |
| [31] | To develop the blockchain-based multi-authority access approach (BMAC) for secure data sharing. | S3 & BMAC | Considering only attributes when providing access control limits its efficiency. |
| [32] | To introduce a secure data-sharing mechanism by implementing access control in cloud-based applications. | Fabric-ABAC & PRE | However, the traditional blockchain suffers from scalability issues. |
2.3. Analysis of authentication mechanism & secure data sharing
A mechanism for generating enhanced secure keys was explicitly designed for encrypting data in a cloud environment [33]. To begin, security keys are generated using segments of an identity bit string to enable an enhanced identity-based encryption approach. This method ensures that the user's identity remains concealed, preventing any possible adversary or attacker from decoding the key or decrypting the data. The key benefit of this method is that it leverages a polynomial interpolation function consisting of a Lagrange coefficient to hide the user's identity. Additionally, the system's security depends on the computing complexity of the bilinear Diffie-Hellman problem. Ultimately, this mechanism efficiently performs the encryption and decryption processes, thereby reducing latency. Another secure authentication scheme based on blockchain was introduced in cloud computing [34]. Initially, all users were registered with the authentication server (AS) and obtained their secret key through AS using Harmony search optimization (HSO). The elliptic curve integrated encryption scheme (ECIES) was then utilized to encrypt the data packets in mobile nodes and transfer them to a cloud server.
The SDN controller oversees the blockchain to protect evidence gathered from the users' signatures and data, which are embedded in the cryptographic hash algorithm of the SHA-256. The authorized investigator then conducts various processes such as identification, evidence gathering, examination, and report generation using the Logical Graph of Evidence (LGoE). A searchable encryption technique has been included for authentication and authorization in cloud computing [35]. This work consists of three components: classic user authentication (based on username, password, and a message with a code sent via SMS), a searchable encryption scheme, and biometric authentication. The first two components comprise two-factor authentication (2FA), with the second component illustrating the initialization process of the searchable encryption technique. Special attention has been given to the trapdoor function, which generates a value that can be used to execute the search process and function. Table 3 outlines the limitations of the authentication mechanism and secure data sharing.
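The trapdoor function described in [35] is not specified in detail; a common way to realize such a searchable index is with a keyed hash, sketched below under that assumption. The key, keywords, and document identifiers are all hypothetical:

```python
import hashlib
import hmac

KEY = b"shared-search-key"  # hypothetical key agreed during setup

def trapdoor(keyword: str) -> str:
    """Keyed digest of a keyword; the server never sees the plaintext term."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Server-side index: trapdoor value -> encrypted-document identifiers.
index = {
    trapdoor("invoice"): ["doc-17", "doc-42"],
    trapdoor("audit"): ["doc-03"],
}

def search(keyword: str):
    """Look up matching documents without revealing the search term."""
    return index.get(trapdoor(keyword), [])

print(search("invoice"))  # ['doc-17', 'doc-42']
print(search("payroll"))  # []
```

Because the index maps digests rather than plaintext keywords, the server can execute the search while learning only which trapdoors repeat, which is the privacy trade-off such schemes accept.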
Table 3.
Limitations of authentication & secure data sharing.
| Reference | Objective | Methods/Algorithms | Limitations |
|---|---|---|---|
| [33] | To propose an enhanced secure key generation mechanism for encryption in a cloud environment. | Lagrange coefficient & Bilinear Diffie-Hellman | Here, insecure channel selection for critical transformation leads to a greater risk of disclosure. |
| [34] | To propose a blockchain-based secure authentication scheme in cloud computing. | HSO, ECIES, SHA-256 & LGoE | Considering insufficient credentials for authentication, and failing to verify the legitimacy of developers and resource owners, increases security breaches. |
| [35] | To implement a searchable encryption technique for authentication and authorization in cloud computing. | SMS code, 2FA, searchable encryption & trapdoor function | The 2FA approach enhances security, but lost factors can turn it against legitimate users, limiting QoS. |
3. Problem statement
Network traffic analysis for anomaly detection in the integrated environments of cloud computing and DevOps was introduced [36]. Initially, the weight agnostic neural networks (WANNs) framework was designed to automate the detection of malicious intent through darknet traffic examination and network management. It was then utilized as an intelligent forensics tool for analyzing network traffic, classifying malware traffic, and identifying encrypted traffic in real time. After that, features are extracted and feature selection is performed using the predictive power score (PPS) method. The automated searching neural-net scheme was then implemented to detect zero-day attacks. Finally, through malicious-intent detection, the most critical assets of many organizations were protected effectively while reducing effort barriers. The major limitations of this work are described below.
• All users are considered legitimate users and developers and are permitted to access the cloud application; the resulting presence of many illegitimate users leads to high complexity and communication overhead in both the network and the application.
• Although intrusion detection was performed to enhance security in DevOps, permission to access the applications was granted to all users and developers, leading to high leakage of sensitive data.
• Features are extracted and appropriate features are selected using the predictive power score, which executes effectively; however, this method consumes considerable time for its calculations, leading to high latency.
• Although intrusion detection was implemented using WANNs to minimize security threats in DevOps, the lack of effective and secure data storage affects data privacy and leads to security breaches.
Efficient feature extraction is incorporated for intrusion detection in a cloud computing environment [37]. Initially, the set of significant features is selected using a univariate ensemble feature selection mechanism, which utilizes five dissimilar filter feature selection mechanisms to acquire the subset of optimal features from the collected data. A feature map is then generated from the set of filtered features for classification. Finally, ensemble majority voting is performed using several machine learning algorithms, classifying instances into two classes: intruder and normal. The major issues of this work are delineated below.
• All users in this work are considered legitimate candidates and granted access to the cloud-based applications without any limitations or permission policies, which leads to severe security breaches.
• Feature extraction employed five different filter selection mechanisms, while an ensemble of machine learning algorithms was used for intrusion detection, increasing system complexity.
• The intruder was detected using an ensemble learning algorithm, which improves security; however, the lack of countermeasures and of handling other technical issues (i.e., bugs, network settings, etc.) led to system and service unavailability in the cloud environment.
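The ensemble majority voting used in [37] can be sketched in a few lines; the per-flow votes below are illustrative, not taken from the cited work:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most base classifiers (ties -> first seen)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-flow votes from three base learners (1 = intruder, 0 = normal).
votes_per_flow = [
    [1, 1, 0],   # two of three flag the flow as an intruder
    [0, 0, 0],
    [0, 1, 1],
]
decisions = [majority_vote(v) for v in votes_per_flow]
print(decisions)  # [1, 0, 1]
```

The complexity criticism above follows directly: every flow must be scored by all base learners before a vote can be taken.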
A hybrid optimized deep learning approach was developed to improve security in DevOps by detecting attacks. The approach consists of two phases: feature extraction and classification. Initially, network traffic is monitored, and data from individual applications are processed to extract features. These extracted features are then fed into the classification model for attack detection. To execute the classification model, an optimized deep belief network (DBN) algorithm was proposed. Finally, the activation function was optimized using a hybrid optimization algorithm, the Firefly Alpha-Evaluated Grey Wolf Optimization Algorithm (FAE-GWO). The limitations of this approach are explained below.
• Communication and data sharing were implemented without any privacy safeguards, leading to leakage of users' sensitive information and organizational policies.
• Network traffic is monitored, data flows are collected, and statistical features are extracted for attack classification. However, the consideration of limited features leads to ineffective attack detection.
• Attack detection was accomplished by an optimized deep belief network whose complexity was reduced by optimizing the activation function; however, the traditional drawback of this algorithm is that it is unsuitable for handling large data volumes, which increases latency.
• Even though attack detection was performed to enhance the security of DevOps effectively, the lack of consideration of bugs, network settings, etc., leads to service unavailability, thereby limiting QoS.
Fast and continuous monitoring (F&CM) effectively improves system availability and security in DevOps, as stated in Ref. [39]. Initially, this mechanism was executed using the software and system process engineering metamodel (SPEM). The real-time scenario demonstrated that the execution of F&CM availability mechanism helps teams to detect and remedy outage problems and attacks better. By promptly detecting and identifying outage problems and attacks, teams can quickly and effectively apply the necessary remediation. However, some drawbacks of this research are mentioned below,
• Continuous monitoring was carried out to track system outages and attacks, which was effective; however, improper monitoring and remediation failed to meet QoS requirements. Although fast and continuous monitoring enhanced system availability and security in DevOps, the lack of secure data communication and information sharing resulted in high leakage of organizational policies and sensitive application information.
• Continuous monitoring was employed to detect attacks and outage issues in a cloud-based application, but packet features were not considered, leading to ineffective attack detection. The Software and System Process Engineering Metamodel (SPEM) was used for continuous monitoring, yet no intelligence was applied to detect outages and attacks, resulting in security issues.
A method of fine-grained access control using an attribute-based searchable encryption approach was proposed in Ref. [40]. The framework includes an attribute-entrenched searchable encryption scheme allowing precise access control. The data owner stores the access rules with a searchable encryption service provider (SESP). When a user requests access, the SESP returns the encrypted search results using the SHA algorithm within a specified timeframe. If the user has any disputes, they can initiate an arbitration request; the blockchain handles such requests but arbitrates only on entrenched details. The main issues addressed by this research are also defined. The neural network approach suggested by Liu et al. [41], leveraging mixed mode-dependent time delays, was also found relevant.
In a cloud environment, all users are considered legitimate. However, malicious users increase the amount of malicious traffic, which negatively affects security. Access control is provided using a searchable encryption scheme, which can be complicated to implement and may not be fully effective given its limited consideration of attributes. Searchable encryption service providers (SESPs) preserve data and privacy during transmission to address this issue. While this method is effective, traditional blockchain technology may not provide sufficient confidentiality and scalability. In this work, the SESP stores and serves user requests through encryption using the SHA algorithm, which is effective but can result in high latency due to its time consumption.
3.1. Research solutions
We have proposed an end-to-end security-amplified framework to overcome these disputes. Initially, the users, developers, and resource owners (the candidates) are authenticated by the TCA using the Mcha-Poly 1305 algorithm based on their credentials. After that, the access control policies are effectively generated by EDDPG. Based on the trust value evaluated by the SMA, the attributes, and the role, the optimal user and developer are selected using AVOA and access control is provided, thus minimizing leakage of highly sensitive data. Then, data privacy in transit and at rest is enhanced by performing encryption before transmission using the Mcha-Poly 1305 algorithm.
Moreover, based on data sensitivity, it is decided whether to store the data in the blockchain or in cloud servers, improving the application's security. Since system and service unavailability are the foremost issues, we have proposed vulnerability management and emendation to address them. In this process, we identify and mitigate application-based issues and attacks using the IVS and the Tweak-NB algorithm, respectively, considering several parameters.
4. Proposed work
In this research, we concentrate on amplifying the security of cloud-based applications by integrating DevOps. In addition, data engineering, DevOps, and machine learning are combined to ensure automation of the DevOps cycle in the production environment. A sliding-window blockchain is employed to increase data privacy and immutability. Fig. 1 represents the architectural flow of the proposed Slide-Block framework. The proposed work consists of several entities, which are elaborated below:
•
Users: Users in the physical layer (fundamental layer) seek cloud applications, perform data collection from several sensors, and store their data in a cloud server. Users can access cloud applications from any location through laptops, mobile phones, computers, etc.
•
Developers & Resource Owners: Developers in the network are responsible for developing the cloud-based application. The resource owners are those who own the cloud-based applications.
•
Trust Certification Authority (TCA): The Trust Certification Authority, deployed in the physical layer, is one of the blockchain nodes; it verifies candidates' authenticity by analysing their credentials and issues security keys to them.
•
Edge Server: The edge layer encompasses several servers responsible for collecting network traffic. Furthermore, it continuously monitors the network for attack detection to strengthen the security and privacy of users, developers, and resource owners.
•
Cloud Server: The cloud layer incorporates a blockchain to increase network security and minimize the computational burden by providing adequate access control. Moreover, it is responsible for accommodating servers for users.
•
Security Management Agent (SMA): A Security Management Agent is deployed in the network to amplify network security. This agent constantly monitors and maintains the candidate records in the blockchain based on each candidate's historical data. Furthermore, it evaluates a trust value for each user when affording access control and reports application-based issues if they occur.
Fig. 1.
Overall architecture of proposed slide-block framework.
4.1. Pattern-based authentication
Initially, the legitimacy of users, developers, and resource owners is ensured by performing authentication. These registration candidates register by providing their credentials, such as user name, user ID, device ID, role, password, mail ID, and a grid-based pattern selection, to the Trust Certification Authority (TCA), which sends the candidate credentials to the blockchain to improve network security. This scheme consists of two stages; the registration phase and the authentication phase are detailed as follows.
4.1.1. Registration phase
•Step 1: At first, the candidate registers with the TCA by providing the credentials listed above (user name, user ID, device ID, role, password, mail ID, and grid pattern), which can be formulated as,
where the expression denotes the registration of the candidate with those credential parameters.
•
Step 2: Once the candidate is registered in this stage, the TCA displays the registration Compute Alphabets (CA), from which the candidate selects two Compute Alphabets that comprise hidden mathematical operations.
Of these, the first alphabet denotes addition and the second defines subtraction; this mapping is known only to the registered candidate.
•
Step 3: Then the candidate must select a Compute Integer (CI) between 0 and 5, where the CA and CI generate a new password pattern each time.
where the user-selected CI is an integer in the range 0–5.
4.1.2. Authentication phase
•Step 4: In the second stage, the authentication phase, the registered candidate must enter the required credentials, which can be expressed as,
•
Step 5: After that, the authenticate pointer (AP) displays a single letter, which was selected by the candidate during the registration phase, together with two random numbers between 1 and 6.
where the terms denote the displayed single letter and the two random numbers displayed from the range 1–6.
•
Step 6: Furthermore, the candidate executes the mathematical operation hidden in the AP's displayed CA letter, i.e., either addition or subtraction, between the candidate-selected CI and the digits displayed by the TCA.
•
Step 7: The candidate thus obtains two numbers by performing the mathematical operation. After that, the TCA displays a 7 × 11 grid on which the candidate must draw the pattern using the obtained numbers (i.e., the first number is taken as the column and the second as the row).
•
Step 8: During this process, behavioural features of the person, such as finger velocity and stroke time, are also extracted for authenticating the person.
where the parameters refer to the finger velocity and stroke-time features of the candidate.
•
Step 9: Here, the password and the selected CA and CI are encrypted by the TCA, which provides a security key to the legitimate candidate; the encrypted data is then stored in the blockchain using the MCha-Poly 1305 algorithm.
where the term represents the security key afforded to the legitimate candidate.
Likewise, registration and authentication are performed for both developers and resource owners. The employed MCha-Poly 1305 algorithm [detailed in Section C] improves resistance to cryptanalysis with low complexity. Through this pattern-based authentication, shoulder-surfing and smudge attacks are resisted by the TCA: the AP is displayed while the candidate touches the screen and disappears when the finger is lifted, enhancing security. Fig. 2 illustrates the workflow of pattern-based authentication using the MCha-Poly 1305 algorithm.
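The dynamic-pattern computation of Steps 3–7 can be sketched as follows. This is a hypothetical illustration: the letter-to-operation mapping and the modulo wrap onto the 7 × 11 grid are our assumptions, not details given in the paper.

```python
# CA letters hiding the two operations (Step 2); the mapping is illustrative.
HIDDEN_OPS = {"A": lambda a, b: a + b, "B": lambda a, b: a - b}

def pattern_cell(ci, displayed_digits, ca_letter):
    """Apply the hidden operation between the Compute Integer (CI) and the two
    digits displayed by the TCA; the first result is the grid column, the
    second the row (wrapped onto the 7x11 grid, an assumption of this sketch)."""
    op = HIDDEN_OPS[ca_letter]
    d1, d2 = displayed_digits
    return op(ci, d1) % 7, op(ci, d2) % 11

# CI = 3, TCA displays digits (2, 5), and the shown letter hides addition:
cell = pattern_cell(3, (2, 5), "A")  # -> (5, 8): draw at column 5, row 8
```

Because the CA letter and displayed digits change at every login, the drawn pattern never repeats, which is what defeats shoulder-surfing and smudge attacks.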
Fig. 2.
Workflow of pattern-based authentication.
4.2. Tri-level access control
After successful authentication, access control is implemented using the Enhanced Deep Deterministic Policy Gradient (EDDPG) reinforcement learning algorithm, which effectively fabricates the access policies. Based on the actor-critic structure, the DDPG algorithm is built on a dual deep neural network (DNN): a critic network and a policy network. The policy network acts as the actor, mapping the state space to a continuous action, while the value network acts as the critic, periodically estimating the performance of the policy function and providing feedback for enhancement. Target networks are utilized to track the original critic and policy networks, thereby mitigating the impact of incorrect estimation. The DDPG action at a given timestamp contemplates both the inherent policy and exploration, which is mathematically expressed as,
where the terms refer to the state space, the parameters of the policy network, and the Gaussian noise that occurs only in the training phase. Subsequently, the policy is evaluated in the training phase; the ideology of offline training is illuminated hereafter. Then, policy estimation is executed by means of Bellman's principle as,
where the terms represent the optimal value function, the single-step reward, and the discount factor. From the equation, it is clear that the optimal estimate of the current composition can be acquired iteratively. It is anticipated that the deep networks can repeat this task precisely. To realize it, the updating error of the critic network can be estimated by,
where the first two terms in (9) represent the anticipated value referring to (10), and the last term represents the actual output of the current critic network. In this way, the squared error is acquired, and gradient-descent updating is performed to enhance the ability of policy evaluation. An ideal critic network is anticipated to generate an effective policy, so that the actor network can modify its policies to abandon the action with the worst value feedback. Thus, the performance objective of the policy network can be represented as,
where the operator defines the expectation. Next, the policy network repeatedly updates itself in the direction that promotes the performance objective. Consequently, the updating error, expressed as the objective gradient with respect to the network, can be written as,
A soft-updating strategy is employed for the target networks, which can be expressed as,
Here, the experience replay method is exploited to avoid back-and-forth correlations during training, which enhances the stability and efficiency of learning. The probability of a sampled experience can be defined as,
where the sum runs over the whole experience pool and the hyperparameter computes the degree of priority, ranging from 0 to 1; a lower value leads to uniform conventional DDPG sampling. The prominence degree of a stored experience can be estimated by,
By exploiting replay experience, experiences that induce large, important variations to the policy estimate are allocated large weights and are therefore mostly selected and replayed in the training process. Once the policies are generated, access is provided; the steps involved are articulated as follows. Initially, the legitimate candidate (user or developer) initiates a request to the TCA for accessing the cloud-based application. The TCA is responsible for generating the access policies and providing access control based on attribute, role, and trust value using a smart contract. Once the TCA receives the request, it redirects the request to the Security Management Agent (SMA), which monitors and maintains the candidate records in the blockchain based on the candidate's historical data, historical access behaviour, and constant monitoring, which can be represented as,
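The target-network soft update and the priority-based replay sampling just described can be sketched as below; this is a minimal illustration, and the tau and alpha values are assumptions, not values from the paper.

```python
import numpy as np

def soft_update(target, source, tau=0.005):
    """Soft update: theta_target <- tau * theta_source + (1 - tau) * theta_target."""
    return {k: tau * source[k] + (1.0 - tau) * target[k] for k in target}

def replay_probabilities(priorities, alpha=0.6):
    """P(i) = p_i^alpha / sum_k p_k^alpha; alpha = 0 recovers uniform DDPG sampling."""
    p = np.asarray(priorities, dtype=float) ** alpha
    return p / p.sum()

# Experiences causing large policy changes get proportionally higher weight:
probs = replay_probabilities([1.0, 2.0, 4.0], alpha=1.0)  # -> [1/7, 2/7, 4/7]
```

Sampling with these probabilities replays the most policy-relevant experiences more often, which is the stability mechanism the text attributes to EDDPG.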
Furthermore, it calculates the trust value based on the information stored in the blockchain, which can be evaluated as,
where the terms denote the current time period and the trust estimation of the candidate records, which can be expressed as,
where the equation defines the proportion of the candidate record set, which decreases with time. Here, permission denotes an entity collection describing the degree and scope of candidate operations on resources, including writing, deleting, data reading, etc. Then, the SMA sends the candidate's trust value and historical data to the TCA. Based on the attribute, role, and trust value of the candidate, the TCA assigns access control to the candidate through the resource owner. The resource owner signs a token to prove that the resource is issued by the owner. Fig. 3 demonstrates the tri-level access control based on EDDPG and AVOA. To provide access control optimally, the African Vulture Optimization Algorithm (AVOA) is employed, as illustrated below.
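The time-decayed trust evaluation can be illustrated as follows. This is a sketch under assumptions: the exponential half-life decay and the record format are ours, reflecting only the stated property that older records contribute less to the current trust value.

```python
def trust_value(records, now, half_life=24.0):
    """Weighted mean of per-record trust scores, exponentially decayed by age
    (in hours) so that older behaviour weighs less than recent behaviour."""
    num = den = 0.0
    for score, timestamp in records:
        weight = 0.5 ** ((now - timestamp) / half_life)  # decay with time
        num += weight * score
        den += weight
    return num / den if den else 0.0

# A fresh good record (weight 1.0) outweighs a 48-hour-old bad one (weight 0.25):
t = trust_value([(1.0, 48.0), (0.0, 0.0)], now=48.0)  # -> 0.8
```

The SMA would compare such a value against an access threshold before the TCA grants one of the three permission levels.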
Fig. 3.
Tri-level access control.
4.2.1. Preliminary stage
At first, the primary population is formed, and then the fitness of every solution is determined. Here, there are two kinds of candidate solutions, and each group's optimal solution is determined. In our work, the TCA acts as a vulture that searches for the optimal candidate.
4.2.2. Starvation rate
To provide access control, the vulture keeps searching for food (i.e., the optimal candidate) and becomes aggressive when starving, which can be represented as,
where the terms define the vulture's satiation status, the present and maximum iterations, a random number in the range [-1,1], a random number in the range [-2,2], and a random number between 0 and 1, respectively. The vulture is predicted to be starved if the satiation value falls below 0, and satisfied if the value increases. A fixed threshold on this value governs the switch between the exploration and exploitation phases of AVOA.
4.2.3. Exploration phase
In this stage, the vulture scans diverse areas arbitrarily, which can be achieved by two dissimilar strategies. In AVOA, a parameter with a value between 0 and 1 is adopted to choose one of the two strategies, selected using the following equation,
where the terms designate one of the best vultures, the distance the vulture moves to shield food from others, a random number in [0,1], the fitness value, and the upper and lower bounds of the search space, respectively.
4.2.4. Exploitation stage
In this stage, the vulture has adequate energy for searching for food. At such times, the vulture with extreme physical strength generates a rotational flight in a typical spiral motion. The first stage of exploitation can be formulated as follows,
where the terms denote random numbers in [0,1]. In the second stage, the movements of the vulture attract various other vultures. The second stage of the exploitation phase is mathematically formulated as,
where the term denotes the levy-flight function. After performing AVOA for optimal candidate selection, if the candidate is not suitable for certain access or has a low trust value, the TCA terminates the request and sends a notification to the specific candidate. Here, the permission level, i.e., the degree of operations on a resource, is instantiated in three levels: operator (developer), subscribed, and unsubscribed. These permission levels are determined as,
•
Operator: The developer who developed the program coding and implementation; permitted operations are data reading, program reading and changing, program-only reading, and program-only deleting.
•
Subscribed users (authorized users): Access the application within certain limits.
•
Unsubscribed users: Candidates who log in as guests; only the demo or basic instructions for the specific application can be viewed by these users, for a limited time, using a Just-in-Time (JIT) mechanism. Once the time is over, a robust warning notification is displayed, and the demo session is concluded.
The access control policies are encrypted and stored in the blockchain to make them tamper-proof. In this way, the access control policies enhance security, thereby reducing leakage of highly sensitive information.
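The three permission levels and the JIT guest session can be sketched as below. This is a hedged illustration: the operation sets and the demo duration are our assumptions, not values fixed by the framework.

```python
import time

PERMISSIONS = {
    "operator":     {"data_read", "program_read", "program_change", "program_delete"},
    "subscribed":   {"data_read", "app_access"},
    "unsubscribed": {"view_demo"},
}

def authorize(role, operation, demo_started=None, demo_limit_s=300, now=None):
    """Grant an operation by permission level; guest (unsubscribed) access
    expires after demo_limit_s seconds, mimicking the JIT mechanism."""
    now = time.time() if now is None else now
    if role == "unsubscribed" and demo_started is not None:
        if now - demo_started > demo_limit_s:
            return False  # demo time over: session concluded
    return operation in PERMISSIONS.get(role, set())
```

In the framework, such a check would run inside the smart contract after the TCA has matched the candidate's attribute, role, and trust value.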
4.3. Privacy attentive data storage
After accommodating access control, privacy-attentive data storage and communication are executed to evade sensitive data leakage. Here, two types of data are encrypted and stored: data in transmission and data at rest. Initially, the data transmitted between the cloud, developers, users, and resource owners is encrypted. Then, whenever a candidate must store data in the cloud, a query is raised as to whether the data is sensitive or non-sensitive. Sensitive data, such as passwords, hard-coded passwords, rules and regulations, organizational policies, and access control policies, is stored in the blockchain. Moreover, data is classified as sensitive or non-sensitive based on a certain threshold, which can be determined as follows,
If the data is sensitive, it is encrypted and stored in the blockchain, which can be illustrated as,
where the symbol denotes the blockchain. If the data is non-sensitive, it is encrypted and stored in the cloud server, which can be exemplified as,
For encryption, we have adapted the Magnificent Chacha algorithm, which encrypts the data effectively, and incorporated the Poly 1305 algorithm. Henceforth, the proposed algorithm is known as the Magnificent Chacha-Poly 1305 (MCha-Poly 1305) algorithm. It is employed for its randomization characteristics and rotation technique, securing data while executing at a low duty cycle, while Poly 1305 is employed for confidentiality and integrity. Both algorithms are combined to effectively encrypt the data. MCha-Poly 1305 takes as input the plaintext and a 12-byte nonce, which is depicted as,
where the inputs form a 256-bit tuple. This tuple is provided as input to MCha-Poly 1305. With the 256-bit tuple, MCha-Poly 1305 executes randomized zig-zag rounds, which can be mathematically formulated as,
In the equation, the Magnificent Chacha algorithm's quarter-round function implements randomized zig-zag approaches to update the input in each round. Through this randomized zig-zag update, security is strengthened, and attackers face high complexity in tampering with the data. The generated output is then provided as input to Poly 1305 for acquiring the authentication tag, which is computed from the polynomial coefficients and can be expressed as,
Through this form of encryption, data privacy is enhanced, thereby reducing sensitive data leakage. Furthermore, the cloud-based application is secured, and attackers cannot tamper with the policies and candidates' information.
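For reference, the quarter-round of the underlying ChaCha design (as standardized in RFC 8439) is shown below. The paper's "randomized zig-zag" variant would change which state words each round mixes; we do not attempt to reproduce that modification, only the standard round primitive it builds on.

```python
MASK = 0xFFFFFFFF  # 32-bit arithmetic

def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """Standard ChaCha quarter-round (RFC 8439, section 2.1):
    add-rotate-xor mixing of four 32-bit state words."""
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

# RFC 8439 section 2.1.1 test vector:
out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
```

Twenty such rounds over a 16-word state give ChaCha20 its diffusion; Poly 1305 then authenticates the resulting ciphertext, which is the confidentiality-plus-integrity pairing the text describes.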
4.4. Vulnerability management and emendation
Once the data is securely stored and data transmission is successfully encrypted, vulnerability management and emendation are performed. In DevOps, continuous integration, continuous delivery, and continuous monitoring are the vital processes for enhancing the QoS and security of a DevOps-integrated cloud-based application. To improve security, continuous monitoring is established. In this process, both application-based issues and attacks are detected and rectified. Application-based issues, such as bugs, application errors, network-setting problems, and data errors, are constantly detected through IP configuration, user privileges, security protocols, file-system infrastructure, and patch levels by the Intruder Vulnerability Scanner (IVS). Furthermore, packet features and behavioural features are continuously collected from the gateway and monitored to detect attackers using the Tweak Naive Bayes (Tweak-NB) algorithm, which significantly analyses the packet features and detects attackers accurately.
Here, the linear relationships among the attributes are eradicated by an orthogonal matrix to minimize their correlations, thereby enhancing the algorithm's performance. Consider the set of all samples associated with a class in the network traffic, where each sample is an n-dimensional vector. The covariance matrix is determined as,
where the terms denote the mean of each attribute value in the class, the number of samples in the class, and the covariance matrix. Consider the eigenvalues and eigenvectors of this matrix; each eigenvector is combined via equation (36) to acquire a standard orthogonal basis and to fabricate the orthogonal matrix.
The covariance matrix is thereby diagonalized by the orthogonal matrix of combined eigenvectors; all elements except the diagonal are 0. Specifically, the linear relationships among the attributes are eradicated, which comes closer to the conditional independence assumption of naïve Bayes (NB). The samples are transformed by the orthogonal matrix, and then the mean and variance of each attribute are evaluated. After this transformation, the mean is 0 and the variances are modified accordingly. On this basis, the attribute weights can be used to optimize NB, and Tweak-NB is formulated as,
where the term denotes the new sample acquired by the transformation of the original sample.
Here, the parameters denote the class labels and the means of each attribute value in the two classes. Furthermore, the variance of each attribute value in the two classes is considered. The attribute variance can be interpreted as network traffic concentration, reflecting the classification pre-eminence of attributes under the class while minimizing noise interference. The product of the absolute mean differences between different classes for the same attribute enhances the classification performance. Once application-based issues are detected, the SMA notifies the specific developer to amend the issues, and it blocks the attackers. Furthermore, feedback from users and developers is collected to enhance continuous delivery. By executing this, system and service availability as well as security are amplified.
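The decorrelation step underpinning Tweak-NB can be sketched as follows: projecting samples onto the eigenvectors of their covariance matrix yields a diagonal covariance, i.e., no remaining linear relationships among attributes. The mixing matrix below is synthetic test data, not traffic from the paper.

```python
import numpy as np

def decorrelate(samples):
    """Project samples onto the orthogonal eigenvector basis of their
    covariance matrix, removing linear relations among attributes."""
    X = np.asarray(samples, dtype=float)
    cov = np.cov(X, rowvar=False)
    _, Q = np.linalg.eigh(cov)  # Q is orthogonal: Q.T @ Q = I
    return X @ Q

rng = np.random.default_rng(0)
mix = np.array([[1.0, 0.8, 0.0], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]])
X = rng.normal(size=(500, 3)) @ mix        # correlated attributes
Z = decorrelate(X)
cov_Z = np.cov(Z, rowvar=False)
off_diag = cov_Z - np.diag(np.diag(cov_Z))  # numerically zero after transform
```

With the attributes decorrelated, the per-attribute means and variances feed the weighted NB decision rule described above.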
5. Experimental results
In this section, we illustrate the experimental results of the proposed Slide-Block framework using artificial intelligence (AI) approaches and blockchain technologies. This experimental research comprises three parts: implementation setup, comparison analysis, and research summary. The results show that the proposed work attains better performance compared with existing models.
5.1. Implementation setup
In the implementation setup, the experimental environment of the proposed Slide-Block framework is demonstrated. The server utilized for the proposed Slide-Block framework is WampServer 2.0, with MySQL 5.1.36 as the backend. Furthermore, we use the Windows 10 OS, and the Java programming language is adopted with the JDK 1.8 development kit. We conducted our work in the NetBeans 8.2 Integrated Development Environment (IDE). The above software/hardware requirements were installed on a PC with an Intel (R) Core (TM) i5-4590S CPU @ 3.00 GHz.
5.2. Comparative analysis
In this section, several metrics of the proposed Slide-Block framework are compared with existing methods. The following performance metrics are evaluated: detection rate, authentication time, packet loss rate, security strength, latency, and communication overhead. The proposed Slide-Block framework is compared with several existing works, namely Darknet [36], HOD-Net [38], and SPEM [39], to prove the efficacy of the proposed work.
5.2.1. Result of detection rate
The detection rate is the metric utilized to evaluate the rate of attack detection in the network. Generally, it is characterized as the number of detected attacks relative to the increasing number of candidates, which can be described as,
where the terms designate the detected attacks and the increasing candidates. The graphical plot in Fig. 4 shows the comparison of the proposed work and existing models in terms of detection rate with respect to the number of candidates. It clearly shows that the proposed work achieves a higher detection rate than the existing Darknet, HOD-Net, and SPEM works. The main reason for achieving such a high detection rate is the execution of the vulnerability management and emendation process, in which we focus on both application-based issues and attack detection through continuous monitoring. Data errors, application issues, bugs, and network settings are some of the application-based issues detected using the IVS based on user privileges, patch levels, file-system infrastructure, etc. Furthermore, packet and behavioural features are considered for detecting attacks using the Tweak-NB algorithm, which provides effective and accurate results, thus contributing to the high detection rate. Meanwhile, the existing works' lack of consideration of application-side issues and their manual monitoring render them ineffective. Besides, inadequate features are considered during attack detection, which leads to a low detection rate.
Fig. 4.
# of Candidates vs Detection Rate.
The numerical results show that the detection rate of the proposed work increases to 97% when the number of candidates is increased to 100. In contrast, for the same number of candidates, the existing works Darknet, HOD-Net, and SPEM achieve detection rates of 86%, 78%, and 70.3%, respectively. Overall, the detection rate of the proposed Slide-Block framework is about 11%–27.3% higher than that of the existing works.
5.2.2. Result of authentication time
Authentication time is a vital metric utilized to estimate the amount of time taken to authenticate users, developers, and resource owners. It is calculated from the time taken to grant access to each request of the corresponding user relative to the total time, and can be mathematically expressed as,
where the terms indicate the time taken to access the request and the total amount of time. The graphical plot in Fig. 5 displays the comparison of the proposed work and existing models in terms of authentication time with respect to the number of candidates. It clearly shows that the proposed work attains a lower authentication time than the existing Darknet, HOD-Net, and SPEM works. The main reason for attaining such a minimal authentication time is the proposed pattern-based authentication. In the proposed Slide-Block framework, pattern-based authentication is performed for users, developers, and resource owners by contemplating several credentials, implemented by the TCA using the MCha-Poly 1305 algorithm. Once authentication is completed, the credentials are stored in the sliding-window blockchain. Authentication with minimal time consumption is achieved by the sliding-window blockchain, which takes less time to execute authentication than the other existing approaches. In the existing works, authentication is performed only for users, with insufficient credentials and a traditional blockchain, which tends to increase the authentication time.
Fig. 5.
# of Candidates vs Authentication Time.
The numerical results show that the authentication time of the proposed work is reduced to 550 ms when the number of candidates is increased to 100. Meanwhile, for the same number of candidates, the authentication times of the existing works Darknet, HOD-Net, and SPEM reach 820 ms, 880 ms, and 920 ms, respectively. Overall, the authentication time of the proposed Slide-Block framework is about 270 ms–370 ms lower than that of the other existing works.
5.2.3. Result of packet loss rate
The packet loss rate is the number of packets dropped relative to the total number of packets transmitted, which can be formulated as,
where the terms represent the average number of packets lost during transmission and the total transmitted packets. The graphical plot in Fig. 6 indicates the comparison of the proposed work and existing models in terms of packet loss rate with respect to the number of candidates. It is clearly visible that the proposed work reaches a lower packet loss rate than the existing Darknet, HOD-Net, and SPEM works. The main reason for reaching such a low packet loss rate is the proposed tri-level access control and privacy-attentive data storage. In our work, the access control policies are first generated effectively using EDDPG. Requests are initiated by the candidates to the TCA, which is responsible for providing access based on attribute, role, and trust value by utilizing a smart contract. The TCA redirects candidate requests to the SMA, which evaluates the trust value through constant monitoring as well as historical candidate data. The evaluated trust values are transmitted to the TCA; the optimal candidates are then selected by AVOA, and access control is provided at three permission levels (operator, subscribed users, and unsubscribed users), which improves the data privacy level, thereby minimizing the packet loss rate. Besides, in privacy-attentive data storage, both data in transmission and at rest are encrypted using the Mcha-Poly 1305 algorithm, which minimizes the packet loss rate. Meanwhile, in the existing works, data communication is performed without any privacy concern, leading to high data leakage. Moreover, sensitive data is stored in cloud servers without encryption, which tends to cause high data leakage, thereby increasing the packet loss rate.
Fig. 6.
# of Candidates vs Packet Loss Rate.
The numerical results show that the packet loss rate of the proposed work is reduced to 53.5% when the number of candidates is increased to 100. Meanwhile, for the same number of candidates, the packet loss rates of the existing works Darknet, HOD-Net, and SPEM increase to 75%, 80%, and 85%, respectively. Overall, the packet loss rate of the proposed Slide-Block framework is about 21.5%–31.5% lower than that of the other existing works.
5.2.4. Result of security strength
Security strength is a vital metric utilized to estimate the security level of cloud-based applications during authentication, access control, data storage, and vulnerability analysis. High security strength enriches the resistance of cloud-based applications against several vulnerabilities. It is denoted in (%).
The graphical plot in Fig. 7 specifies the comparison of the proposed work and existing models in terms of security strength with respect to the number of candidates. It is clearly observable that the proposed work achieves the maximum security strength compared with the existing Darknet, HOD-Net, and SPEM works. The main reason for achieving such maximum security strength is the proposed authentication and vulnerability analysis. In our work, pattern-based authentication is performed for users, developers, and resource owners by contemplating several credentials, implemented by the TCA using the MCha-Poly 1305 algorithm. Once authentication is completed, the credentials are stored in the sliding-window blockchain; furthermore, the data is encrypted and stored in the blockchain, and data is encrypted before transmission, thus improving data privacy. Additionally, packet and behavioural features are taken into account for detecting attacks using the Tweak-NB algorithm, which increases security. However, in the existing works, insufficient metrics, ineffective authentication, and the lack of authentication for resource owners and developers lead to improper security management. In addition, inadequate features are considered during attack detection, which reduces the security strength.
Fig. 7.
# of Candidates vs Security Strength.
The numerical results show that the security strength of the proposed work increases to 98% when the number of candidates is increased to 100. In contrast, for the same number of candidates, the security strengths of the existing works Darknet, HOD-Net, and SPEM reduce to 87%, 80%, and 74%, respectively. Overall, the security strength of the proposed Slide-Block framework is about 11%–24% higher than that of the other existing works.
5.2.5. Result of communication overhead
Communication overhead is described as the ratio of overhead packets during transmission to the receiver, which can be mathematically expressed as,
where the terms denote the overhead packets occurring during transmission and the transmitted packets. The graphical plot in Fig. 8 shows the comparison of the proposed work and existing models in terms of communication overhead with respect to the number of candidates. It is clearly visible that the proposed work achieves the lowest communication overhead compared with the existing Darknet, HOD-Net, and SPEM works. The main reason for achieving such a low communication overhead is the pattern-based authentication, in which the legitimacy of candidates is ensured by the TCA using the MCha-Poly 1305 algorithm based on their credentials. Furthermore, vulnerability analysis is performed by the Tweak-NB algorithm considering several constraints, which tends to minimize malicious traffic in the network and thus reduces the communication overhead. In the existing works, authentication is performed only for users, and the vulnerability analysis is ineffective, which leads to a high communication overhead.
Fig. 8.
# of Candidates vs Communication Overhead.
The numerical result shows that the communication overhead of the proposed work is minimized to 0.54 ms when the number of candidates is increased to 100. In contrast, for the same number of candidates, the communication overhead of the existing works Darknet, HOD-Net, and SPEM increases to 0.78 ms, 0.8 ms, and 0.84 ms, respectively. Overall, the communication overhead of the proposed Slide-Block framework is about 0.24 ms–0.3 ms lower than that of the other existing works.
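As an aside, the section defines communication overhead as a packet ratio yet reports it in milliseconds; the sketch below implements the stated ratio directly (function and variable names are ours, not the paper's):

```python
def communication_overhead(overhead_packets: int, transmitted_packets: int) -> float:
    """CO = P_oh / P_tx: share of transmitted packets that are overhead."""
    if transmitted_packets <= 0:
        raise ValueError("transmitted_packets must be positive")
    return overhead_packets / transmitted_packets
```

For example, 54 overhead packets out of 100 transmitted packets gives CO = 0.54.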
5.2.6. Result of latency
Latency is utilized to evaluate the amount of time taken to perform data encryption, authentication, access control, and vulnerability analysis. It is defined as the difference between the total amount of time and the time taken to perform a specific task from the above-mentioned tasks, and can be expressed as

$L = T_{tot} - T_{sp}$

where $T_{sp}$ refers to the time taken for performing the specific task and $T_{tot}$ is the total time. The graphical plot in Fig. 9 compares the proposed work with the existing models in terms of latency with respect to the number of candidates. It is clearly noticeable that the proposed work attains lower latency than the existing Darknet, HOD-Net, and SPEM works. The main reason for this is the adoption of appropriate algorithms and techniques in each individual process. Initially, authentication with minimal time consumption is achieved by executing the sliding-window blockchain, which takes only a small amount of time to perform authentication. Then, the access control policies are generated by EDDPG, and tri-level access control is provided by the TCA based on attributes and the trust level estimated by the SMA. Furthermore, the data in transit and at rest are encrypted using MCha-Poly 1305, which encrypts the data effectively with a minimized number of rounds. In the vulnerability analysis, the Tweak-NB algorithm is employed for attack detection; it removes linear relationships among features, which tends to reduce latency. In the existing works, however, the use of ineffective algorithms and techniques leads to high latency.
Fig. 9.
# of Candidates vs Latency.
The numerical result shows that the latency of the proposed work is reduced to 4700 ms when the number of candidates is increased to 100. In contrast, for the same number of candidates, the latency of the existing works Darknet, HOD-Net, and SPEM increases to 7800 ms, 8400 ms, and 9000 ms, respectively. Overall, the latency of the proposed Slide-Block framework is about 3100 ms–4300 ms lower than that of the other existing works.
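Under this definition (total time minus the time of one specific task), latency reduces to a subtraction; a minimal sketch with illustrative names:

```python
def latency_ms(total_time_ms: float, task_time_ms: float) -> float:
    """L = T_tot - T_sp, both in milliseconds.

    Illustrative helper for the reconstructed definition; the per-task
    timing source (encryption, authentication, ...) is an assumption.
    """
    if not 0 <= task_time_ms <= total_time_ms:
        raise ValueError("task time must lie within the total time")
    return total_time_ms - task_time_ms
```

For instance, a 5000 ms run in which one task took 300 ms yields a latency of 4700 ms.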
5.3. Research summary
In this section, we summarize the experimental results that demonstrate the superior performance of the proposed Slide-Block framework compared to existing approaches. We evaluate the performance of our work in terms of several metrics, including detection rate, authentication time, packet loss, security strength, communication overhead, and latency, presented in Figs. 4–9. Additionally, Table 4 provides a numerical analysis of the performance metrics for the proposed and existing works. Finally, we list the research highlights as follows.
• For enhancing application and network security, pattern-based authentication is performed for all candidates, and the passwords are encrypted using the MCha-Poly 1305 encryption algorithm. This algorithm executes effectively and permits only legitimate candidates to access the service.
• For efficient authorization, the tri-level access control policies are fabricated using the enhanced deep deterministic policy gradient (EDDPG) algorithm and provided to the candidate who meets the control policy. The African vulture optimization algorithm is adopted to provide optimal access policies.
• To ensure communication and transaction certainty, the privacy-attentive data storage mechanism is executed, encrypting the data using the MCha-Poly 1305 encryption algorithm, which minimizes highly sensitive data leakage.
• For amplifying system and service availability, continuous monitoring is established, in which the Intruder Vulnerability Scanner detects application-based issues and the Tweak-NB algorithm detects attacks.
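The MCha-Poly 1305 primitive used for credential protection is not publicly specified, so as a stand-in the sketch below shows only the shape of the pattern-based authentication step, using a standard salted PBKDF2 hash and a constant-time comparison; all function names and parameters are illustrative assumptions.

```python
import hashlib
import hmac
import os

def enroll(pattern: str, iterations: int = 100_000) -> dict:
    """Store a salted hash of the candidate's pattern, never the pattern itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pattern.encode(), salt, iterations)
    return {"salt": salt, "digest": digest, "iterations": iterations}

def authenticate(pattern: str, record: dict) -> bool:
    """Recompute the hash from the submitted pattern and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", pattern.encode(), record["salt"], record["iterations"])
    return hmac.compare_digest(candidate, record["digest"])
```

In the framework itself, the verified record would then be written to the sliding-window blockchain rather than kept in a plain dictionary.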
Table 4.
Performance analysis of proposed & existing works.
| Performance metric | SPEM | HOD-Net | Darknet | Slide-Block |
|---|---|---|---|---|
| Detection Rate (%) | 37.86 | 45.4 | 52.6 | 66.4 |
| Authentication Time (ms) | 640 | 582 | 522 | 348 |
| Packet Loss (%) | 62.2 | 55.2 | 49 | 35.5 |
| Security Strength (%) | 43.2 | 47 | 53.2 | 65.2 |
| Communication Overhead (ms) | 0.644 | 0.598 | 0.54 | 0.34 |
| Latency (ms) | 5860 | 5180 | 4500 | 3200 |
6. Conclusion
Improper security management, enormous sensitive data leakage, and system and service unavailability are the primary concerns in DevOps-based cloud applications, and they are addressed and resolved by our proposed work. Initially, the candidates (users, developers, and resource owners) are authenticated based on their credentials by the TCA using the MCha-Poly 1305 algorithm. After that, the access control policies are effectively generated by EDDPG. Based on the trust value evaluated by the SMA, the attributes, and the roles, the optimal user and developer are selected using AVOA and access control is provided, thus minimizing the leakage of highly sensitive data. Then, data privacy in transit and at rest is enhanced by performing encryption before transmission using the MCha-Poly 1305 algorithm.
Furthermore, based on the sensitivity, the data are stored either in the blockchain or in cloud servers, improving the application's security. System and service unavailability is a foremost issue, and we have proposed vulnerability management and emendation to address it. In this process, we have identified and mitigated application-based issues and attacks using the IVS and the Tweak-NB algorithm, respectively, considering several parameters. The proposed work is implemented in Java/JDK 1.8 to prove its efficacy in several performance metrics, such as detection rate, authentication time, packet loss, security strength, communication overhead, and latency, where our proposed Slide-Block achieves better performance.
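The Tweak-NB detector is not fully specified in this summary, so the sketch below shows a plain Gaussian Naive Bayes baseline over numeric packet/behaviour features, the classifier family the framework builds on; the class name, feature layout, and labels are our assumptions.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Tiny Gaussian Naive Bayes for labelled packet/behaviour feature rows.

    Baseline sketch only: the paper's Tweak-NB modifications are not
    reproduced here.
    """

    def fit(self, X, y):
        grouped = defaultdict(list)
        for row, label in zip(X, y):
            grouped[label].append(row)
        self.params, n = {}, len(y)
        for label, rows in grouped.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # Floor the variance to avoid division by zero on constant features.
            variances = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-9)
                         for c, m in zip(cols, means)]
            self.params[label] = (math.log(len(rows) / n), means, variances)
        return self

    def predict(self, row):
        def log_posterior(label):
            prior, means, variances = self.params[label]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
                for x, m, v in zip(row, means, variances))
        return max(self.params, key=log_posterior)
```

Trained on benign versus attack traffic features (e.g., inter-arrival time and packet rate), the classifier assigns a new flow to the class with the highest log-posterior.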
Data availability statement
The data used in this research will be made available on request.
Ethics statement
Informed consent was not required for this study because no specific information from humans is used.
CRediT authorship contribution statement
Gopalakrishnan Sriraman: Writing – review & editing, Writing – original draft, Software, Methodology, Data curation, Conceptualization. Shriram R: Validation, Supervision, Project administration.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Contributor Information
Gopalakrishnan Sriraman, Email: gopalakrishnan.sriraman2019@vitbhopal.ac.in.
Shriram R, Email: shriram.r@vitbhopal.ac.in.
References
- 1. Khan M.S., Khan A.W., Khan F., Khan M.A., Whangbo T.K. Critical challenges to adopt DevOps culture in software organizations: a systematic review. IEEE Access. 2022;10:14339–14349.
- 2. Battina D.S. The challenges and mitigation strategies of using DevOps during software development. 2022.
- 3. Almeida F.L., Simões J., Lopes S. Exploring the benefits of combining DevOps and agile. Future Internet. 2022;14:63.
- 4. Rafi S., Akbar M.A., Sánchez-Gordón M., Palacios R.C. DevOps practitioners' perceptions of the low-code trend. Proceedings of the 16th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 2022.
- 5. Altunel H., Say B. Software product system model: a customer-value oriented, adaptable, DevOps-based product model. SN Computer Science. 2021;3. doi: 10.1007/s42979-021-00899-9.
- 6. Azad N. Understanding DevOps critical success factors and organizational practices. 2022 IEEE/ACM International Workshop on Software-Intensive Business (IWSiB). 2022:83–90.
- 7. Al-Marsy A., Chaudhary P., Rodger J.A. A model for examining challenges and opportunities in use of cloud computing for health information systems. Applied System Innovation. 2021.
- 8. Zeb S., Mahmood A., Khowaja S.A., Dev K., Hassan S.A., Qureshi N.M., Gidlund M., Bellavista P. Industry 5.0 is coming: a survey on intelligent NextG wireless networks as technological enablers. arXiv:2205.09084. 2022.
- 9. Al-Marsy A., Chaudhary P., Rodger J.A. A model for examining challenges and opportunities in use of cloud computing for health information systems. Applied System Innovation. 2021.
- 10. Camacho C., Cañizares P.C., Llana L., Núñez A. Chaos as a Software Product Line—a platform for improving open hybrid-cloud systems resiliency. Software Pract. Ex. 2022;52:1581–1614.
- 11. Werner C., Li Z.S., Lowlind D., Elazhary O., Ernst N.A., Damian D.E. Continuously managing NFRs: opportunities and challenges in practice. IEEE Trans. Software Eng. 2021;48:2629–2642.
- 12. Elazhary O., Werner C., Li Z.S., Lowlind D., Ernst N.A., Storey M.D. Uncovering the benefits and challenges of continuous integration practices. IEEE Trans. Software Eng. 2021;48:2570–2583.
- 13. Rajapakse R.N., Zahedi M., Babar M.A. An empirical analysis of practitioners' perspectives on security tool integration into DevOps. Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). 2021.
- 14. Plant O.H., Hillegersberg J.V., Aldea A. How DevOps capabilities leverage firm competitive advantage: a systematic review of empirical evidence. 2021 IEEE 23rd Conference on Business Informatics (CBI). 2021;01:141–150.
- 15. Plant O.H., Hillegersberg J.V., Aldea A. Rethinking IT governance: designing a framework for mitigating risk and fostering internal control in a DevOps environment. Int. J. Account. Inf. Syst. 2022;45.
- 16. Alonso J., Orue-Echevarria L., Huarte M. CloudOps: towards the operationalization of the cloud continuum: concepts, challenges, and a reference framework. Appl. Sci. 2022;12:4347. doi: 10.3390/app12094347.
- 17. Alam S.R., Gila M., Klein M., Martinasso M., Schulthess T.C. Versatile software-defined HPC and cloud clusters on Alps supercomputer for diverse workflows. Int. J. High Perform. Comput. Appl. 2023.
- 18. Farooq M.S., Ali U.M. Harnessing the potential of blockchain in DevOps: a framework for distributed integration and development. arXiv:2306.00462. 2023.
- 19. Surya L. AI and DevOps in information technology and its future in the United States. InfoSciRN: Artif. Intell. 2021.
- 20. Applying Azure to automate DevOps for small ML smart sensors. International Research Journal of Modernization in Engineering Technology and Science. 2022.
- 21. Abdullayeva F.J. Advanced Persistent Threat attack detection method in cloud computing based on autoencoder and softmax regression algorithm. Array. 2021;10.
- 22. Alduailij M.A., Khan Q.W., Tahir M., Sardaraz M., Alduailij M.A., Malik F. Machine-learning-based DDoS attack detection using mutual information and random forest feature importance method. Symmetry. 2022;14:1095.
- 23. SaiSindhuTheja R., Shyam G.K. A machine learning based attack detection and mitigation using a secure SaaS framework. J. King Saud Univ. Comput. Inf. Sci. 2020;34:4047–4061.
- 24. Awan M.J., Farooq U., Babar H.M., Yasin A., Nobanee H., Hussain M., Hakeem O., Zain A.M. Real-time DDoS attack detection system using big data approach. Sustainability. 2021.
- 25. Aslan Ö., Ozkan-Okay M., Gupta D. Intelligent behavior-based malware detection system on cloud computing environment. IEEE Access. 2021;9:83252–83271.
- 26. Karuppusamy L., Ravi J., Dabbu M., Lakshmanan S. Chronological salp swarm algorithm based deep belief network for intrusion detection in cloud using fuzzy entropy. International Journal of Numerical Modelling: Electronic Networks. 2021;35.
- 27. Aldallal A.S. Toward efficient intrusion detection system using hybrid deep learning approach. Symmetry. 2022;14:1916.
- 28. Qin X., Huang Y., Yang Z., Li X. A blockchain-based access control scheme with multiple attribute authorities for secure cloud data sharing. J. Syst. Archit. 2020;112.
- 29. Gao H., Ma Z., Luo S., Xu Y., Wu Z. BSSPD: a blockchain-based security sharing scheme for personal data with fine-grained access control. Wirel. Commun. Mob. Comput. 2021;2021:6658920:1–6658920:20.
- 30. Liu T., Wu J., Li J., Li J., Li Y. Efficient decentralized access control for secure data sharing in cloud computing. Concurrency Comput. Pract. Ex. 2021.
- 31. Qin X., Huang Y., Yang Z., Li X. A blockchain-based access control scheme with multiple attribute authorities for secure cloud data sharing. J. Syst. Archit. 2020;112.
- 32. Liu Y., Yang W., Wang Y., Liu Y. An access control model for data security sharing cross-domain in consortium blockchain. IET Blockchain. 2023.
- 33. Gupta R.K., Almuzaini K.K., Pateriya R.K., Shah K.A., Shukla P.K., Akwafo R. An improved secure key generation using enhanced identity-based encryption for cloud computing in large-scale 5G. Wireless Commun. Mobile Comput. 2022.
- 34. Velmurugadass P., Dhanasekaran S., Anand S.S., Vasudevan V.K. Enhancing Blockchain security in cloud computing with IoT environment using ECIES and cryptography hash algorithm. Mater. Today: Proc. 2020.
- 35. Mihailescu M.I., Nita S.L. A searchable encryption scheme with biometric authentication and authorization for cloud environments. Cryptogr. 2022;6:8.
- 36. Demertzis K., Tsiknas K.G., Takezis D., Skianis C., Iliadis L.S. Darknet traffic big-data analysis and network management to real-time automating the malicious intent detection process by a weight agnostic neural networks framework. arXiv:2102.08411. 2021.
- 37. Krishnaveni S., Sivamohan S., Sridhar S.S., Prabakaran S. Efficient feature selection and classification through ensemble method for network intrusion detection on cloud computing. Cluster Comput. 2021:1–19.
- 38. Sarma S.K. Hybrid optimised deep learning-deep belief network for attack detection in the internet of things. J. Exp. Theor. Artif. Intell. 2021;34:695–724.
- 39. López-Peña M.A., Díaz J., Pérez J.E., Humanes H. DevOps for IoT systems: fast and continuous monitoring feedback of system availability. IEEE Internet Things J. 2020;7:10695–10707.
- 40. Gao H., Luo S., Ma Z., Yan X., Xu Y. BFR-SE: a blockchain-based fair and reliable searchable encryption scheme for IoT with fine-grained access control in cloud environment. Wireless Commun. Mobile Comput. 2021.
- 41. Liu Y., Liu W., Ali O.M., Abbas I.A. Exponential stability of Markovian jumping Cohen–Grossberg neural networks with mixed mode-dependent time-delays. Neurocomputing. 2016;177:409–415. doi: 10.1016/j.neucom.2015.11.046.